
.pr_agent_auto_best_practices

qodo-merge-bot edited this page Mar 28, 2026 · 17 revisions

Pattern 1: When catching broad exceptions in code paths that may be interrupted/canceled, preserve the cancellation signal (restore thread interrupt status in Java, swallow/propagate OperationCanceledException appropriately in .NET) so shutdown and cooperative cancellation remain reliable.

Example code before:

try {
  doWork();
} catch (Exception e) {
  throw new RuntimeException(e);
}

Example code after:

try {
  doWork();
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
  throw new RuntimeException(e);
} catch (Exception e) {
  throw new RuntimeException(e);
}
Relevant past accepted suggestions:
Suggestion 1: [reliability] Interrupt flag lost
LocalNode.executeWithInterceptors catches all checked Exceptions and wraps them in RuntimeException without restoring the thread’s interrupted status, so an InterruptedException thrown by an interceptor can be silently consumed and cancellation/shutdown logic can misbehave. This only becomes possible with the new interceptor SPI, because NodeCommandInterceptor.intercept() is allowed to throw Exception.

Issue description

LocalNode.executeWithInterceptors wraps checked exceptions into RuntimeException but does not preserve the thread’s interrupt status. If an interceptor throws InterruptedException (allowed by the SPI), the interrupt flag can be cleared and never restored.

Issue Context

  • NodeCommandInterceptor.intercept(...) declares throws Exception, so InterruptedException is a valid failure mode.
  • The new code catches Exception and rethrows new RuntimeException(e).
  • In other parts of the codebase, InterruptedException is handled by calling Thread.currentThread().interrupt() before rethrowing.

Fix Focus Areas

  • java/src/org/openqa/selenium/grid/node/local/LocalNode.java[842-867]

Implementation notes

  • Add a dedicated catch (InterruptedException e) branch (both in the empty-interceptors fast path and the chained path):
  • call Thread.currentThread().interrupt();
  • then rethrow (e.g., throw new RuntimeException(e); or a more specific unchecked exception).
  • Keep existing behavior for RuntimeException rethrow.
  • (Optional but recommended) add a small unit test that runs executeWebDriverCommand on a thread, has an interceptor throw InterruptedException, and asserts the thread is interrupted after the call.
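
The implementation notes above can be sketched as a standalone helper. This is illustrative only; `InterruptSafeWrapper` and `ThrowingRunnable` are hypothetical names, not the actual LocalNode code:

```java
// Minimal sketch: wrap checked exceptions but restore the interrupt flag first.
public class InterruptSafeWrapper {

    interface ThrowingRunnable {
        void run() throws Exception;
    }

    static void runWrapped(ThrowingRunnable work) {
        try {
            work.run();
        } catch (InterruptedException e) {
            // Restore the flag so callers (e.g. executors during shutdown)
            // can still observe the cancellation request.
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        } catch (RuntimeException e) {
            throw e; // keep existing rethrow behavior
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        try {
            runWrapped(() -> { throw new InterruptedException("interceptor interrupted"); });
        } catch (RuntimeException expected) {
            // expected: the wrapped InterruptedException
        }
        // The interrupt status survives the wrapping.
        System.out.println("interrupted=" + Thread.currentThread().isInterrupted());
    }
}
```

Without the dedicated catch branch, the same run would print `interrupted=false`, which is exactly the lost-signal bug the suggestion describes.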

Suggestion 2:

Propagate cancellation token

Propagate the cancellationToken from UnsubscribeAsync to registration.DrainAsync to allow the draining operation to be cancelled.

dotnet/src/webdriver/BiDi/EventDispatcher.cs [78]

-await registration.DrainAsync().ConfigureAwait(false);
+await registration.DrainAsync(cancellationToken).ConfigureAwait(false);

Suggestion 3:

Make close handshake cancellation-safe

Use CancellationToken.None for the close handshake (and optionally guard it with try/catch) so a canceled receive token doesn’t prevent replying to the close frame and doesn’t obscure the real disconnect.

dotnet/src/webdriver/BiDi/WebSocketTransport.cs [70-76]

 if (result.MessageType == WebSocketMessageType.Close)
 {
-    await _webSocket.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, string.Empty, cancellationToken).ConfigureAwait(false);
+    try
+    {
+        await _webSocket.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, string.Empty, CancellationToken.None).ConfigureAwait(false);
+    }
+    catch
+    {
+        // Ignore close handshake failures; connection is already closing/closed.
+    }
 
     throw new WebSocketException(WebSocketError.ConnectionClosedPrematurely,
         $"The remote end closed the WebSocket connection. Status: {_webSocket.CloseStatus}, Description: {_webSocket.CloseStatusDescription}");
 }

Suggestion 4:

Don’t throw on cancellation

Swallow OperationCanceledException when awaiting _receivingMessageTask (or inside ReceiveMessagesAsync) so DisposeAsync remains idempotent and doesn't fail during normal cancellation-based shutdown.

dotnet/src/webdriver/BiDi/Broker.cs [96-256]

 public async ValueTask DisposeAsync()
 {
     _receiveMessagesCancellationTokenSource.Cancel();
     _receiveMessagesCancellationTokenSource.Dispose();
 
-    await _receivingMessageTask.ConfigureAwait(false);
+    try
+    {
+        await _receivingMessageTask.ConfigureAwait(false);
+    }
+    catch (OperationCanceledException)
+    {
+        // Expected during disposal.
+    }
 
     _transport.Dispose();
 
     GC.SuppressFinalize(this);
 }
-...
-private async Task ReceiveMessagesAsync(CancellationToken cancellationToken)
-{
-    try
-    {
-        while (!cancellationToken.IsCancellationRequested)
-        {
-            var data = await _transport.ReceiveAsync(cancellationToken).ConfigureAwait(false);
-            ...
-        }
-    }
-    catch (Exception ex) when (ex is not OperationCanceledException)
-    {
-        ...
-        throw;
-    }
-}

Suggestion 5:

Improve process cancellation for older frameworks

For frameworks older than .NET 8, refactor the process exit and cancellation logic to use a TaskCompletionSource with the process.Exited event. This avoids blocking a thread pool thread and provides a more efficient and robust implementation.

dotnet/src/webdriver/Manager/SeleniumManager.cs [287-324]

 #if NET8_0_OR_GREATER
             await process.WaitForExitAsync(cancellationToken).ConfigureAwait(false);
 #else
-        var processExitTask = Task.Run(() => process.WaitForExit(), CancellationToken.None);
-
+        var tcs = new TaskCompletionSource<object?>();
+        process.EnableRaisingEvents = true;
+        process.Exited += (sender, args) => tcs.TrySetResult(null);
         using (cancellationToken.Register(() =>
         {
             try
             {
-                process.Kill();
+                // The process might have already exited, so we need to be defensive.
+                if (!process.HasExited)
+                {
+                    process.Kill();
+                }
             }
-            catch
+            catch (InvalidOperationException)
             {
-                // Process may have already exited
+                // Process has already exited.
             }
+            tcs.TrySetCanceled();
         }))
         {
-            await processExitTask.ConfigureAwait(false);
+            await tcs.Task.ConfigureAwait(false);
         }
-
-        cancellationToken.ThrowIfCancellationRequested();
 #endif

Pattern 2: Validate configuration and external/protocol-derived inputs (null/empty strings, negative values, unexpected shapes) early and fail with clear, actionable exceptions instead of relying on implicit parsing/unboxing/casts.

Example code before:

int grace = config.getInt("stop-grace-period");
return Duration.ofSeconds(grace);

Example code after:

int grace = config.getInt("stop-grace-period");
if (grace < 0) {
  throw new IllegalArgumentException("stop-grace-period must be >= 0");
}
return Duration.ofSeconds(grace);
Relevant past accepted suggestions:
Suggestion 1: [reliability] Negative grace period accepted
DockerOptions.getStopGracePeriod() converts the configured integer directly into a Duration without rejecting negative values, so a config like `stop-grace-period = -1` becomes `Duration.ofSeconds(-1)`. That negative duration is propagated into container stop calls and is serialized into the Docker stop request as a negative `t` query parameter, which violates the intended semantics of a grace period and can lead to incorrect stop behavior.

Issue description

stop-grace-period is a newly user-configurable integer that is converted directly into Duration.ofSeconds(...) without validating that it is non-negative. Because the Docker stop request serializes the duration into a t query parameter, negative configuration values can produce a negative timeout in the Docker stop HTTP call.

Issue Context

  • Config.getInt accepts negative integers (it uses Integer.parseInt).
  • DockerOptions.getStopGracePeriod() currently performs no range validation.

Fix Focus Areas

  • java/src/org/openqa/selenium/grid/node/docker/DockerOptions.java[115-123]
  • java/src/org/openqa/selenium/grid/config/Config.java[38-44]
  • java/src/org/openqa/selenium/docker/client/StopContainer.java[38-51]

Suggested change

In DockerOptions.getStopGracePeriod(), read the int into a local variable and enforce >= 0 (and potentially a sane max). If invalid, throw ConfigException with a clear message (or clamp to 0 if you prefer permissive behavior). Add/adjust unit tests to cover the negative-value case if there is an existing config-parsing test harness for DockerOptions.
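
A minimal standalone sketch of the suggested validation. Names here are illustrative; the real fix would live in DockerOptions and could throw ConfigException instead of IllegalArgumentException:

```java
import java.time.Duration;

public class GracePeriodValidation {

    // Validate before converting: Duration.ofSeconds happily accepts negatives.
    static Duration stopGracePeriod(int configuredSeconds) {
        if (configuredSeconds < 0) {
            throw new IllegalArgumentException(
                "stop-grace-period must be >= 0, but was " + configuredSeconds);
        }
        return Duration.ofSeconds(configuredSeconds);
    }

    public static void main(String[] args) {
        System.out.println(stopGracePeriod(30)); // PT30S
        try {
            stopGracePeriod(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```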


Suggestion 2:

Validate VNC address string

In findVncEndpoint, add a check to ensure the vncLocalAddress string is not empty or just whitespace before attempting to create a URI from it.

java/src/org/openqa/selenium/grid/node/ProxyNodeWebsockets.java [225-229]

 String vncLocalAddress = (String) caps.getCapability("se:vncLocalAddress");
-if (vncLocalAddress == null) {
-  LOG.warning("No VNC endpoint address in capabilities");
+if (vncLocalAddress == null || vncLocalAddress.trim().isEmpty()) {
+  LOG.warning("No valid VNC endpoint address in capabilities");
   return Optional.empty();
 }

Suggestion 3:

Validate required input early

Add an explicit validation for options.BrowserName and pass the validated value to Selenium Manager instead of using !, so missing/invalid browser names produce a deterministic, actionable error.

dotnet/src/webdriver/DriverFinder.cs [67-74]

 private async ValueTask DiscoverBinaryPathsAsync()
 {
-    BrowserDiscoveryResult smResult = await SeleniumManager.DiscoverBrowserAsync(options.BrowserName!, new BrowserDiscoveryOptions
+    if (string.IsNullOrWhiteSpace(options.BrowserName))
+    {
+        throw new NoSuchDriverException("Browser name must be specified to find the driver using Selenium Manager.");
+    }
+
+    BrowserDiscoveryResult smResult = await SeleniumManager.DiscoverBrowserAsync(options.BrowserName, new BrowserDiscoveryOptions
     {
         BrowserVersion = options.BrowserVersion,
         BrowserPath = options.BinaryLocation,
         Proxy = options.Proxy?.SslProxy ?? options.Proxy?.HttpProxy
     }).ConfigureAwait(false);

Suggestion 4:

Validate truncation length is non-negative

In the WithTruncation method, add a check to throw an ArgumentOutOfRangeException if the provided length is negative to prevent downstream errors.

dotnet/src/webdriver/Internal/Logging/LogContext.cs [131-135]

 public ILogContext WithTruncation(int length)
 {
+    if (length < 0)
+    {
+        throw new ArgumentOutOfRangeException(nameof(length), "Truncation length must be non-negative.");
+    }
+
     _truncationLength = length;
     return this;
 }

Suggestion 5:

Prevent NullPointerException on parameter creation

Add a null check for the 'height' and 'width' values retrieved from the screenArea map to prevent a NullPointerException during auto-unboxing to primitive int types.

java/src/org/openqa/selenium/bidi/emulation/SetScreenSettingsOverrideParameters.java [15-17]

-this.height = screenArea.get("height");
-this.width = screenArea.get("width");
+Integer heightValue = screenArea.get("height");
+Integer widthValue = screenArea.get("width");
+
+if (heightValue == null || widthValue == null) {
+  throw new IllegalArgumentException("'height' and 'width' in screenArea must not be null");
+}
+
+this.height = heightValue;
+this.width = widthValue;
 map.put("screenArea", screenArea);

Suggestion 6:

Check empty browser name

Use string.IsNullOrEmpty to validate options.BrowserName to handle both null and empty strings.

dotnet/src/webdriver/DriverFinder.cs [113]

-if (options.BrowserName is null)
+if (string.IsNullOrEmpty(options.BrowserName))

Suggestion 7:

Safely handle missing payload

In fireSessionEvent, safely handle the payload from the request by checking its type before casting, and default to an empty map if it's missing or not a map to prevent a ClassCastException.

java/src/org/openqa/selenium/grid/node/local/LocalNode.java [1103]

-Map<String, Object> payload = (Map<String, Object>) incoming.get("payload");
+Object rawPayload = incoming.get("payload");
+Map<String, Object> payload =
+    rawPayload instanceof Map
+        ? (Map<String, Object>) rawPayload
+        : Collections.emptyMap();

Suggestion 8:

Fail fast on missing inputs

Validate that $DOTNET_DIR exists and that at least one .csproj was found before running, so mis-invocations fail fast with a clear message.

dotnet/private/dotnet_format.bzl [49-59]

 # Find the workspace root
 WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$RUNFILES_DIR/_main}}"
 DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
 
+if [[ ! -d "$DOTNET_DIR" ]]; then
+    echo "ERROR: Could not find dotnet directory at $DOTNET_DIR" >&2
+    exit 1
+fi
+
 cd "$DOTNET_DIR"
 
 echo "Running dotnet format on all projects..."
-find "$DOTNET_DIR/src" "$DOTNET_DIR/test" -name "*.csproj" 2>/dev/null | while read -r proj; do
+mapfile -t projects < <(find "$DOTNET_DIR/src" "$DOTNET_DIR/test" -name "*.csproj" 2>/dev/null)
+if [[ "${#projects[@]}" -eq 0 ]]; then
+    echo "ERROR: No .csproj files found under $DOTNET_DIR/src or $DOTNET_DIR/test" >&2
+    exit 1
+fi
+for proj in "${projects[@]}"; do
     echo "  Formatting $proj..."
     "$DOTNET" format "$proj" || exit 1
-done || exit 1
+done

Suggestion 9:

Add path existence validations

Validate that DOTNET_DIR exists and each project file is present before cd/formatting, and fail with a clear error message to avoid confusing runtime failures when env/runfiles assumptions don't hold.

dotnet/private/dotnet_format.bzl [50-59]

 WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$(cd "$(dirname "$0")/../.." && pwd)}}"
 DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
+
+if [[ ! -d "$DOTNET_DIR" ]]; then
+    echo "ERROR: Could not find dotnet directory at: $DOTNET_DIR" >&2
+    exit 1
+fi
 
 cd "$DOTNET_DIR"
 
 echo "Running dotnet format on src projects..."
 for proj in src/webdriver/Selenium.WebDriver.csproj src/support/Selenium.WebDriver.Support.csproj; do
+    if [[ ! -f "$DOTNET_DIR/$proj" ]]; then
+        echo "ERROR: Missing project file: $DOTNET_DIR/$proj" >&2
+        exit 1
+    fi
     echo "  Formatting $proj..."
     "$DOTNET" format "$DOTNET_DIR/$proj"
 done

Pattern 3: Make concurrent code race-safe by using correct synchronization/visibility and ordering (publish with volatile/atomic ops, update shared maps/queues safely, and only advance counters/state on successful writes).

Example code before:

_pending[id] = tcs;
token.Register(() => {
  tcs.TrySetCanceled();
  _pending.Remove(id);
});

Example code after:

_pending[id] = tcs;
token.Register(() => {
  if (_pending.Remove(id)) {
    tcs.TrySetCanceled();
  }
});
Relevant past accepted suggestions:
Suggestion 1: [correctness] Fallback receive drops bytes
WebSocketTransport.ReceiveAsync receives into a temporary array created by memory.ToArray() when MemoryMarshal.TryGetArray fails, then calls writer.Advance(result.Count) without copying the received bytes into the writer buffer, so the writer reports bytes written that were never written to it.

Issue description

WebSocketTransport.ReceiveAsync has a fallback for non-array-backed IBufferWriter<byte> buffers. In that fallback, it receives into a temporary byte[] created by memory.ToArray(), but it never copies the received bytes into the writer’s buffer before calling writer.Advance(result.Count). This can corrupt incoming messages if the fallback path is taken.

Issue Context

Broker.ReceiveMessagesLoopAsync reads from receiveBufferWriter.WrittenMemory immediately after ReceiveAsync completes, so ReceiveAsync must guarantee that the bytes it counts as written are actually written into the provided writer.

Fix Focus Areas

  • dotnet/src/webdriver/BiDi/WebSocketTransport.cs[59-77]

Suggested implementation direction

  • Keep var memory = writer.GetMemory(sizeHint).
  • If TryGetArray fails:
  • Allocate/rent a temp array segment of length memory.Length.
  • Receive into that temp segment.
  • Copy temp.AsSpan(0, result.Count) into memory.Span (or into writer.GetSpan(result.Count)).
  • Then call writer.Advance(result.Count).
  • (Optional) Consider using ArrayPool<byte>.Shared instead of memory.ToArray() to avoid a GC allocation in the fallback path.
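
The copy-before-advance direction can be sketched as follows, written in Java for illustration (the actual fix is in the C# WebSocketTransport; `receiveInto` is a stand-in for the real socket receive):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class FallbackReceiveCopy {

    // Simulated "receive": fills tmp with some bytes and returns the count.
    static int receiveInto(byte[] tmp) {
        byte[] payload = {1, 2, 3, 4};
        System.arraycopy(payload, 0, tmp, 0, payload.length);
        return payload.length;
    }

    public static void main(String[] args) {
        ByteBuffer writerBuffer = ByteBuffer.allocate(16); // destination the caller reads
        byte[] tmp = new byte[writerBuffer.remaining()];   // fallback temp segment
        int count = receiveInto(tmp);
        // The step the bug omitted: copy the received bytes into the real
        // destination before "advancing" the written count.
        writerBuffer.put(tmp, 0, count);
        writerBuffer.flip();
        byte[] written = new byte[writerBuffer.remaining()];
        writerBuffer.get(written);
        System.out.println("written=" + Arrays.toString(written));
    }
}
```

Skipping the `put` call would leave `writerBuffer` full of zeros while still claiming `count` bytes written, which mirrors the corruption described above.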

Suggestion 2: [reliability] GetHandlers lacks volatile read
`_handlers` is written under a lock but read without any memory-barrier/volatile semantics, so other threads may observe stale handler arrays and miss recent add/remove operations. This undermines the intended thread-safety of event handler registration and dispatch.

Issue description

EventRegistration._handlers is updated under lock but read via GetHandlers() without volatile/Volatile.Read/Interlocked semantics. This can lead to stale reads across threads and undermines the intended thread-safety improvements.

Issue Context

This PR introduces a copy-on-write array for event handlers and claims it is "safe to iterate". For correctness under the .NET memory model, publishing the new array reference should include appropriate visibility guarantees for lock-free readers.

Fix Focus Areas

  • dotnet/src/webdriver/BiDi/EventDispatcher.cs[132-147]
  • dotnet/src/webdriver/BiDi/EventDispatcher.cs[98-105]
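
The copy-on-write publish/read pattern this suggestion asks for, sketched in Java for illustration (Java's `volatile` provides the visibility guarantee that `Volatile.Read`/`Volatile.Write` or a lock would in the C# code):

```java
import java.util.Arrays;

public class CopyOnWriteHandlers {

    // volatile: lock-free readers always see the most recently published array.
    private volatile Runnable[] handlers = new Runnable[0];

    // Writers serialize on the lock and publish a fresh copy.
    synchronized void add(Runnable h) {
        Runnable[] next = Arrays.copyOf(handlers, handlers.length + 1);
        next[next.length - 1] = h;
        handlers = next; // volatile write publishes the new array
    }

    Runnable[] snapshot() {
        return handlers; // volatile read: safe to iterate without locking
    }

    public static void main(String[] args) {
        CopyOnWriteHandlers d = new CopyOnWriteHandlers();
        d.add(() -> {});
        d.add(() -> {});
        System.out.println("handlers=" + d.snapshot().length);
    }
}
```

Without the `volatile` qualifier, a reader thread could keep seeing the stale (even empty) array indefinitely, which is the missed add/remove behavior described in the issue.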

Suggestion 3:

Increment only on successful enqueue

In EnqueueEvent, only call registration.IncrementEnqueued() if _pendingEvents.Writer.TryWrite(...) returns true to prevent a potential deadlock in DrainAsync.

dotnet/src/webdriver/BiDi/EventDispatcher.cs [88-89]

-registration.IncrementEnqueued();
-_pendingEvents.Writer.TryWrite(new EventItem(jsonUtf8Bytes, bidi, registration));
+if (_pendingEvents.Writer.TryWrite(new EventItem(jsonUtf8Bytes, bidi, registration)))
+{
+    registration.IncrementEnqueued();
+}

Suggestion 4:

Fix race condition in pending commands

Fix a race condition when failing pending commands by atomically clearing the _pendingCommands dictionary and iterating over the removed items to prevent hangs.

dotnet/src/webdriver/BiDi/Broker.cs [272-280]

 // Fail all pending commands, as the connection is likely broken if we failed to receive messages.
-foreach (var pendingCommand in _pendingCommands.Values)
+var pendingCommands = _pendingCommands.Values.ToList();
+_pendingCommands.Clear();
+foreach (var pendingCommand in pendingCommands)
 {
     pendingCommand.TaskCompletionSource.TrySetException(ex);
 }
 
-_pendingCommands.Clear();
-
 throw;

Suggestion 5:

Fix race condition causing memory leak

Fix a race condition by adding the command to _pendingCommands before registering the cancellation callback to prevent a potential memory leak.

dotnet/src/webdriver/BiDi/Broker.cs [150-157]

-using var ctsRegistration = cts.Token.Register(() =>
-{
-    tcs.TrySetCanceled(cts.Token);
-    _pendingCommands.TryRemove(command.Id, out _);
-});
-var data = JsonSerializer.SerializeToUtf8Bytes(command, jsonCommandTypeInfo);
 var commandInfo = new CommandInfo(tcs, jsonResultTypeInfo);
 _pendingCommands[command.Id] = commandInfo;
 
+using var ctsRegistration = cts.Token.Register(() =>
+{
+    if (_pendingCommands.TryRemove(command.Id, out _))
+    {
+        tcs.TrySetCanceled(cts.Token);
+    }
+});
+var data = JsonSerializer.SerializeToUtf8Bytes(command, jsonCommandTypeInfo);
+

Suggestion 6:

Remove pending entry on send failure

Add a try/catch block around the _transport.SendAsync call to ensure the _pendingCommands entry is cleaned up if the send operation fails.

dotnet/src/webdriver/BiDi/Broker.cs [159]

-await _transport.SendAsync(data, cts.Token).ConfigureAwait(false);
+try
+{
+    await _transport.SendAsync(data, cts.Token).ConfigureAwait(false);
+}
+catch
+{
+    _pendingCommands.TryRemove(command.Id, out _);
+    throw;
+}

Suggestion 7:

Make event maps thread-safe

Protect _eventTypesMap and per-event handler lists with a lock (or use immutable/concurrent collections) because SubscribeAsync, UnsubscribeAsync, and event processing can run concurrently and List/Dictionary are not thread-safe.

dotnet/src/webdriver/BiDi/EventDispatcher.cs [35-65]

-private readonly ConcurrentDictionary<string, List<EventHandler>> _eventHandlers = new();
-private readonly Dictionary<string, JsonTypeInfo> _eventTypesMap = [];
+private readonly ConcurrentDictionary<string, ConcurrentBag<EventHandler>> _eventHandlers = new();
+private readonly ConcurrentDictionary<string, JsonTypeInfo> _eventTypesMap = new();
 ...
 _eventTypesMap[eventName] = jsonTypeInfo;
 
-var handlers = _eventHandlers.GetOrAdd(eventName, (a) => []);
+var handlers = _eventHandlers.GetOrAdd(eventName, _ => new ConcurrentBag<EventHandler>());
 ...
 handlers.Add(eventHandler);

Pattern 4: When introducing or changing observable behavior (default methods, duplicate handling, exception types), add/adjust unit tests and gate integration tests by feature/driver support to avoid cross-environment CI failures.

Example code before:

// Behavior changed: duplicates now ignored, but no test asserts it.
Map<String, Writer> m = fields.stream()
  .collect(toMap(Field::getName, this::writer, (a, b) -> a));

Example code after:

// Test locks in behavior and documents which field wins.
@Test
void duplicateFieldNameKeepsFirstWriter() {
  assertThat(coercer.fieldWriterFor("x")).isSameAs(first);
}
Relevant past accepted suggestions:
Suggestion 1: [reliability] Test hard-codes Chrome
The new common-suite test always instantiates `webdriver.Chrome()` directly, so it can run even when the suite is configured for a different `--driver` and fail due to missing Chrome/chromedriver. It also bypasses the suite’s `driver_executable` and `clean_driver` fixtures, ignoring the configured `--driver-binary` path.

Issue description

A new test under py/test/selenium/webdriver/common/ hard-codes webdriver.Chrome() and does not use the shared fixtures that honor --driver and --driver-binary. This can fail when running the suite for non-Chrome drivers and can ignore configured chromedriver paths.

Issue Context

The suite filters tests by directory name via pytest_ignore_collect, and common tests should be driver-agnostic (use the driver/clean_driver fixtures).

Fix Focus Areas

  • py/test/selenium/webdriver/common/test_service_logging.py[1-26]

Suggested approach

  • Prefer moving this test into the Chrome-specific directory (so it’s only collected when running Chrome):
  • Move to py/test/selenium/webdriver/chrome/chrome_service_tests.py and implement using clean_driver, clean_options, and driver_executable.
  • Or, if keeping it in common, skip unless the selected driver is Chrome and still use driver_executable for the service executable.

Suggestion 2: [reliability] Unknown pytest marker
The new test uses `@pytest.mark.chrome`, but this marker is not registered in the repo’s pytest configuration, so it will produce an unknown-mark warning (and could fail if strict marker enforcement is enabled).

Issue description

@pytest.mark.chrome is used but not registered in [tool.pytest].markers, leading to unknown-marker warnings.

Issue Context

This suite relies primarily on directory-based collection filtering (pytest_ignore_collect) and the registered markers listed in py/pyproject.toml.

Fix Focus Areas

  • py/test/selenium/webdriver/common/test_service_logging.py[1-8]

Suggested approach

  • Remove @pytest.mark.chrome and instead scope by moving the test under py/test/selenium/webdriver/chrome/.
  • If a marker is desired, add an appropriate registered marker (or register the new marker in py/pyproject.toml), but directory scoping is the existing pattern.

Suggestion 3: [reliability] `getFieldWriters` duplicates untested
The change alters runtime behavior for duplicate field names (now silently keeping the first writer) without adding a unit test to prevent regressions. This makes it easy for future refactors to reintroduce duplicate-key failures or change which field “wins” without detection.

Issue description

InstanceCoercer#getFieldWriters now retains the first writer when duplicate field names occur, but this behavior change is not covered by tests.

Issue Context

Duplicate field names can happen via field hiding (same-named fields across a class hierarchy). Historically this could throw a "Duplicate key" error during collection; now duplicates are ignored. This should be locked in with a unit test to prevent regressions and to ensure deterministic "winner" behavior.

Fix Focus Areas

  • java/src/org/openqa/selenium/json/InstanceCoercer.java[99-132]
  • java/test/org/openqa/selenium/json/JsonTest.java[113-145]
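
A self-contained sketch of the behavior such a test should lock in. It models the duplicate-name case with plain strings rather than the real Field/writer types:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.toMap;

public class DuplicateFieldNames {
    public static void main(String[] args) {
        // Two "fields" with the same name, as happens with field hiding.
        List<String[]> fields = List.of(
            new String[] {"x", "writerFromSubclass"},
            new String[] {"x", "writerFromSuperclass"});

        // The (first, second) -> first merge function keeps the first writer
        // instead of throwing; plain toMap would fail with "Duplicate key".
        Map<String, String> writers = fields.stream()
            .collect(toMap(f -> f[0], f -> f[1], (first, second) -> first,
                           LinkedHashMap::new));

        System.out.println("x -> " + writers.get("x"));
    }
}
```

A real JsonTest case would assert which writer wins, so any refactor that changes the merge function (or reverts to the throwing overload) fails visibly.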

Suggestion 4: [reliability] New `ExecuteMethod` defaults untested
The PR introduces new default methods (`executeOptional` and `execute(..., defaultValue)`) but does not add unit tests to validate their null-handling/casting behavior. This risks regressions in callers relying on the new compatibility helpers.

Issue description

New default methods were added to ExecuteMethod to provide backward-compatible, typed accessors, but there are no unit tests validating their behavior (null propagation, default fallback, and executeRequired null rejection).

Issue Context

ExecuteMethod is a core API surface used by multiple components/bindings; subtle regressions in these helper methods could silently affect many callers.

Fix Focus Areas

  • java/src/org/openqa/selenium/remote/ExecuteMethod.java[43-54]
  • java/test/org/openqa/selenium/remote/ExecuteMethodTest.java[1-200]
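
A hedged sketch of the kind of default helpers described above and the behaviors the missing tests should pin down (null propagation and default fallback). The method names mirror the description, but the exact Selenium signatures may differ:

```java
import java.util.Map;
import java.util.Optional;

public class ExecuteMethodDefaults {

    // Simplified stand-in for the ExecuteMethod interface with default helpers.
    interface ExecuteMethod {
        Object execute(String name, Map<String, ?> params);

        // A null result becomes an empty Optional instead of leaking null.
        default Optional<Object> executeOptional(String name, Map<String, ?> params) {
            return Optional.ofNullable(execute(name, params));
        }

        // A null result falls back to the supplied default value.
        default Object execute(String name, Map<String, ?> params, Object defaultValue) {
            Object result = execute(name, params);
            return result != null ? result : defaultValue;
        }
    }

    public static void main(String[] args) {
        ExecuteMethod returnsNull = (name, params) -> null;
        System.out.println("optional=" + returnsNull.executeOptional("cmd", Map.of()).isPresent());
        System.out.println("default=" + returnsNull.execute("cmd", Map.of(), "fallback"));
    }
}
```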

Suggestion 5: [reliability] Ungated new BiDi test
The new `SetScrollbarTypeOverride` tests run on all configured browsers without any feature-gating. If a given browser/driver doesn’t implement `emulation.setScrollbarTypeOverride` yet, these tests will fail and block CI even though the binding code is correct.

Issue description

The new scrollbar override tests run unconditionally. In environments where the underlying browser/driver doesn’t support emulation.setScrollbarTypeOverride yet, they can fail CI even though the client binding is correct.

Issue Context

This test suite already uses IgnoreBrowser to avoid running unsupported BiDi commands on specific browsers.

Fix Focus Areas

  • dotnet/test/common/BiDi/Emulation/EmulationTests.cs[134-152]
  • dotnet/test/common/BiDi/Emulation/EmulationTests.cs[86-110]
  • dotnet/test/common/CustomTestAttributes/IgnoreBrowserAttribute.cs[28-75]

Suggestion 6: [reliability] `KeysTest` expects `charAt(10)`
`Keys.charAt(int)` now throws `IndexOutOfBoundsException` for any index other than `0`, but the existing unit test still asserts that `charAt(10)` returns `0`. This functional change needs corresponding test updates/additions to avoid incorrect expectations and failing CI.

Issue description

Keys.charAt(int) now throws for index != 0, but KeysTest.charAtOtherPositionReturnsZero() still asserts a 0 return value, which no longer matches the new behavior.

Issue Context

The production change enforces the CharSequence contract by throwing IndexOutOfBoundsException for invalid indices. The unit test suite should be updated to reflect this and prevent CI failures.

Fix Focus Areas

  • java/test/org/openqa/selenium/KeysTest.java[32-41]
  • java/src/org/openqa/selenium/Keys.java[141-147]
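
A standalone sketch of the updated expectation, using a stand-in method rather than the real Keys enum:

```java
public class CharAtContract {

    // Stand-in for a Keys-like single-character CharSequence: per the
    // CharSequence contract, only index 0 is valid for length 1.
    static char charAt(char key, int index) {
        if (index != 0) {
            throw new IndexOutOfBoundsException("Index: " + index + ", length: 1");
        }
        return key;
    }

    public static void main(String[] args) {
        // '\uE007' is the WebDriver ENTER key code point (57351).
        System.out.println("charAt(0)=" + (int) charAt('\uE007', 0));
        try {
            charAt('\uE007', 10);
            System.out.println("no exception");
        } catch (IndexOutOfBoundsException e) {
            System.out.println("threw IndexOutOfBoundsException");
        }
    }
}
```

The updated KeysTest should assert the throw for out-of-range indices (as above) rather than the old `0` return value.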

Pattern 5: Harden build/CI/scripts and tooling integrations by making argument parsing strict, handling paths/filenames safely (null-delimited find/xargs), validating required env/inputs, and centralizing duplicated workflow logic to prevent drift.

Example code before:

find . -name '*.java' | xargs google-java-format --replace

Example code after:

find . -name '*.java' -print0 | xargs -0 -r google-java-format --replace
Relevant past accepted suggestions:
Suggestion 1:

Enforce mutually exclusive modes

Add a check to ensure that the --pre-commit and --pre-push flags are mutually exclusive, exiting with an error if both are provided.

scripts/format.sh [10-24]

 run_lint=false
 mode="default"
+seen_mode=""
 for arg in "$@"; do
     case "$arg" in
         --lint) run_lint=true ;;
 
-        --pre-commit) mode="pre-commit" ;;
-        --pre-push) mode="pre-push" ;;
+        --pre-commit|--pre-push)
+            if [[ -n "$seen_mode" && "$seen_mode" != "$arg" ]]; then
+                echo "ERROR: --pre-commit and --pre-push are mutually exclusive" >&2
+                exit 1
+            fi
+            seen_mode="$arg"
+            [[ "$arg" == "--pre-commit" ]] && mode="pre-commit" || mode="pre-push"
+            ;;
         *)
             echo "Unknown option: $arg" >&2
             echo "Usage: $0 [--pre-commit] [--pre-push] [--lint]" >&2
             exit 1
             ;;
     esac
 done

Suggestion 2:

Harden formatter file piping

Improve the robustness of the find | xargs pipeline by using find -print0 and xargs -0 -r to correctly handle filenames containing spaces or newlines.

scripts/format.sh [82-87]

 if changed_matches '^java/'; then
     section "Java"
     echo "    google-java-format" >&2
     GOOGLE_JAVA_FORMAT="$(bazel run --run_under=echo //scripts:google-java-format)"
-    find "${WORKSPACE_ROOT}/java" -type f -name '*.java' | xargs "$GOOGLE_JAVA_FORMAT" --replace
+    find "${WORKSPACE_ROOT}/java" -type f -name '*.java' -print0 | xargs -0 -r "$GOOGLE_JAVA_FORMAT" --replace
 fi

Suggestion 3:

Detect untracked formatting outputs

Replace git diff --quiet with git status --porcelain to detect both tracked and untracked file changes, ensuring the script correctly identifies all modifications made by formatters.

scripts/format.sh [114-122]

-# Check if formatting made changes
-if ! git diff --quiet; then
+# Check if formatting made changes (including untracked files)
+if [[ -n "$(git status --porcelain)" ]]; then
     echo "" >&2
     echo "Formatters modified files:" >&2
-    git diff --name-only >&2
+    git status --porcelain >&2
     exit 1
 fi
 
 echo "Format check passed." >&2

Suggestion 4:

Reject unknown CLI arguments

Treat any unrecognized argument as an error and print a short usage message so CI/manual runs don’t silently ignore typos or unsupported flags.

scripts/format.sh [10-15]

 run_lint=false
 for arg in "$@"; do
     case "$arg" in
         --lint) run_lint=true ;;
+        -h|--help)
+            echo "Usage: $0 [--lint]" >&2
+            exit 0
+            ;;
+        *)
+            echo "ERROR: unknown argument: $arg" >&2
+            echo "Usage: $0 [--lint]" >&2
+            exit 2
+            ;;
     esac
 done

Suggestion 5:

Refine change detection logic

Modify the change detection logic to differentiate between "no changed files" and "unable to determine changes". This prevents the script from inefficiently running all formatters when no files have actually been modified.

Examples:

scripts/format.sh [24-39]

if [[ -n "$trunk_ref" ]]; then
    base="$(git merge-base HEAD "$trunk_ref" 2>/dev/null || echo "")"
    if [[ -n "$base" ]]; then
        changed="$(git diff --name-only "$base" HEAD)"
    else
        changed=""
    fi
else
    # No trunk ref found, format everything
    changed=""

 ... (clipped 6 lines)

Solution Walkthrough:

Before:

# ... find trunk ref ...
if [[ -n "$trunk_ref" ]]; then
    # ... find merge-base ...
    changed="$(git diff --name-only "$base" HEAD)" # This is empty if no files changed
else
    changed="" # This is empty if trunk is not found
fi

changed_matches() {
    # This is TRUE if `changed` is empty, causing all formatters to run
    [[ -z "$changed" ]] || echo "$changed" | grep -qE "$1"
}

if changed_matches '^java/'; then
    # run java formatter
fi
# ... and so on for all other formatters

After:

# Use a flag to control behavior
run_all_formatters=false
if [[ -z "$trunk_ref" ]]; then
    run_all_formatters=true
    changed=""
else
    # ... find merge-base ...
    changed="$(git diff --name-only "$base" HEAD)"
fi

changed_matches() {
    # Run all if flagged, otherwise only if there are changes that match
    "$run_all_formatters" || ([[ -n "$changed" ]] && echo "$changed" | grep -qE "$1")
}

if changed_matches '^java/'; then
    # run java formatter
fi
# ... and so on for all other formatters
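The three cases the flag distinguishes can be checked with a runnable sketch of the "After" logic (variable values hardcoded here purely for illustration):

```shell
#!/usr/bin/env bash
changed_matches() {
    "$run_all_formatters" || { [[ -n "$changed" ]] && echo "$changed" | grep -qE "$1"; }
}

# Case 1: no trunk ref -> we cannot tell what changed, so run everything
run_all_formatters=true;  changed=""
changed_matches '^java/' && case1=run || case1=skip

# Case 2: trunk found, no files changed -> run nothing
run_all_formatters=false; changed=""
changed_matches '^java/' && case2=run || case2=skip

# Case 3: trunk found, a Java file changed -> run the Java formatter
run_all_formatters=false; changed="java/src/Foo.java"
changed_matches '^java/' && case3=run || case3=skip

echo "case1=$case1 case2=$case2 case3=$case3"   # case1=run case2=skip case3=run
```

Case 2 is the one the original code got wrong: an empty `changed` used to mean "run everything" regardless of why it was empty.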

Suggestion 6:

Handle filenames safely in xargs

Modify the find and xargs command to use null delimiters (-print0 and -0) to handle filenames containing spaces or special characters correctly.

scripts/format.sh [57]

-find "$PWD/java" -type f -name '*.java' | xargs "$GOOGLE_JAVA_FORMAT" --replace
+find "$PWD/java" -type f -name '*.java' -print0 | xargs -0 "$GOOGLE_JAVA_FORMAT" --replace

Suggestion 7:

Make formatter invocation robust

Refactor the java:format task to robustly handle file paths by using find -print0 | xargs -0 and a safer method to capture the formatter path.

rake_tasks/java.rake [396-399]

-formatter = `bazel run --run_under=echo //scripts:google-java-format 2>/dev/null`.strip
-raise 'Failed to get google-java-format path' if formatter.empty? || !$CHILD_STATUS.success?
+formatter = nil
+Bazel.execute('run', ['--run_under=echo'], '//scripts:google-java-format') do |output|
+  formatter = output.lines.last&.strip
+end
+raise 'Failed to get google-java-format path' if formatter.to_s.empty?
 
-sh "find #{Dir.pwd}/java -name '*.java' | xargs #{formatter} --replace"
+sh %(find "#{File.join(Dir.pwd, 'java')}" -name '*.java' -print0 | xargs -0 "#{formatter}" --replace)

Suggestion 8:

Avoid command line length overflow

Batch the list of Java files passed to Bazel.execute to avoid exceeding command-line length limits, which can cause the format task to fail.

rake_tasks/java.rake [396-397]

 files = Dir.glob("#{Dir.pwd}/java/**/*.java")
-Bazel.execute('run', ['--', '--replace'] + files, '//scripts:google-java-format')
+files.each_slice(200) do |batch|
+  Bazel.execute('run', ['--', '--replace'] + batch, '//scripts:google-java-format')
+end
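The same ARG_MAX concern applies to any shell pipeline that hands a large file list to one command. A sketch of the batching idea using `xargs -n` (the 450-file list and batch size of 200 are arbitrary):

```shell
#!/usr/bin/env bash
set -euo pipefail

# xargs -n caps arguments per invocation, so a huge list becomes several
# short command lines instead of one that can overflow the kernel limit.
# Count how many times the stand-in "formatter" (echo run) is invoked:
invocations="$(printf 'file%d.java\n' $(seq 1 450) | xargs -n 200 sh -c 'echo run' _ | grep -c run)"
echo "invocations=$invocations"   # invocations=3  (batches of 200, 200, 50)
```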

Suggestion 9:

Gate Slack notify on inputs

Only run Slack reporting when the webhook secret exists and the disk step produced a valid output; otherwise the step can fail the workflow or evaluate comparisons against an empty value.


.github/workflows/bazel.yml [277-287]

 - name: Report disk space
-  if: always()
+  if: always() && secrets.SLACK_WEBHOOK_URL != '' && steps.disk.outputs.avail != ''
   uses: rtCamp/action-slack-notify@v2
   env:
     SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
     SLACK_CHANNEL: ci-disk-alerts
     SLACK_USERNAME: Disk Monitor
     SLACK_TITLE: "${{ steps.disk.outputs.avail }}GB remaining"
     SLACK_MESSAGE: "${{ inputs.name }} on ${{ inputs.os }}"
     SLACK_COLOR: ${{ steps.disk.outputs.avail < 10 && 'danger' || (steps.disk.outputs.avail < 30 && 'warning' || 'good') }}
     SLACK_ICON_EMOJI: ":floppy_disk:"

Suggestion 10:

Centralize duplicated release tag parsing logic

The release tag parsing and validation logic is duplicated across pre-release.yml and release.yml. This logic should be extracted into a single reusable workflow to serve as a single source of truth, improving maintainability.

Examples:

.github/workflows/pre-release.yml [117-166]

        run: |
          TAG="${{ inputs.tag }}"
          TAG="${TAG//[[:space:]]/}"

          # Validate tag format: selenium-X.Y.Z or selenium-X.Y.Z-lang
          if [[ ! "$TAG" =~ ^selenium-[0-9]+\.[0-9]+\.[0-9]+(-[a-z]+)?$ ]]; then
            echo "::error::Invalid tag format: '$TAG'. Expected selenium-X.Y.Z or selenium-X.Y.Z-lang"
            exit 1
          fi


 ... (clipped 40 lines)

.github/workflows/release.yml [38-79]

        run: |
          if [ "$EVENT_NAME" == "workflow_dispatch" ]; then
            TAG="$INPUT_TAG"
          else
            # Extract tag from branch name: release-preparation-selenium-4.28.1-ruby -> selenium-4.28.1-ruby
            TAG=$(echo "$PR_HEAD_REF" | sed 's/^release-preparation-//')
          fi

          # Extract version
          VERSION=$(echo "$TAG" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')

 ... (clipped 32 lines)

Solution Walkthrough:

Before:

# In .github/workflows/pre-release.yml
jobs:
  parse-tag:
    steps:
      - name: Parse tag
        run: |
          TAG="${{ inputs.tag }}"
          # ... complex parsing and validation logic ...
          if [[ "$PATCH" -gt 0 && -z "$LANG_SUFFIX" ]]; then
            echo "::error::Patch releases must specify a language"
            exit 1
          fi
          # ... more validation ...

# In .github/workflows/release.yml
jobs:
  prepare:
    steps:
      - name: Extract and parse tag
        run: |
          # ... logic to get TAG from branch or input ...
          # ... complex parsing and validation logic (duplicated) ...
          if [[ "$PATCH" -gt 0 && -z "$LANG_SUFFIX" ]]; then
            echo "::error::Patch releases must specify a language"
            exit 1
          fi
          # ... more validation (duplicated) ...

After:

# In .github/workflows/parse-release-tag.yml (new file)
on:
  workflow_call:
    inputs:
      tag_string:
        type: string
    outputs:
      tag:
      version:
      language:
jobs:
  parse:
    steps:
      - name: Parse tag
        run: |
          # ... single source of truth for parsing and validation ...
          echo "tag=$TAG" >> "$GITHUB_OUTPUT"
          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
          echo "language=$LANGUAGE" >> "$GITHUB_OUTPUT"

# In .github/workflows/pre-release.yml and release.yml
jobs:
  parse-tag: # or 'prepare' job
    uses: ./.github/workflows/parse-release-tag.yml
    with:
      tag_string: ${{ inputs.tag }} # or ${{ github.event.pull_request.head.ref }}

Suggestion 11:

Centralize tag parsing logic

Move tag parsing/validation into a single shared script (or reusable workflow/composite action) and call it from all workflows to prevent future drift and inconsistent rules.

.github/workflows/pre-release.yml [114-166]

 - name: Parse tag
   id: parse
   shell: bash
   run: |
-    TAG="${{ inputs.tag }}"
-    TAG="${TAG//[[:space:]]/}"
+    ./scripts/parse-release-tag.sh "${{ inputs.tag }}" >> "$GITHUB_OUTPUT"
 
-    # Validate tag format: selenium-X.Y.Z or selenium-X.Y.Z-lang
-    if [[ ! "$TAG" =~ ^selenium-[0-9]+\.[0-9]+\.[0-9]+(-[a-z]+)?$ ]]; then
-      echo "::error::Invalid tag format: '$TAG'. Expected selenium-X.Y.Z or selenium-X.Y.Z-lang"
-      exit 1
-    fi
-
-    # Extract version (strip 'selenium-' prefix and optional language suffix)
-    VERSION=$(echo "$TAG" | sed -E 's/^selenium-([0-9]+\.[0-9]+\.[0-9]+)(-[a-z]+)?$/\1/')
-    PATCH=$(echo "$VERSION" | cut -d. -f3)
-
-    # Extract language suffix (default to 'all' if no suffix)
-    if [[ "$TAG" =~ -([a-z]+)$ ]]; then
-      LANG_SUFFIX="${BASH_REMATCH[1]}"
-    else
-      LANG_SUFFIX=""
-    fi
-
-    # Patch releases must have a language suffix
-    if [[ "$PATCH" -gt 0 && -z "$LANG_SUFFIX" ]]; then
-      echo "::error::Patch releases must specify a language (e.g., selenium-${VERSION}-ruby)"
-      exit 1
-    fi
-
-    # Full releases (X.Y.0) must not have a language suffix
-    if [[ "$PATCH" -eq 0 && -n "$LANG_SUFFIX" ]]; then
-      echo "::error::Full releases (X.Y.0) cannot have a language suffix"
-      exit 1
-    fi
-
-    # Validate language suffix (rake namespace aliases allow full names)
-    case "$LANG_SUFFIX" in
-      ruby|python|javascript|java|dotnet)
-        LANGUAGE="$LANG_SUFFIX"
-        ;;
-      "")
-        LANGUAGE="all"
-        ;;
-      *)
-        echo "::error::Invalid language suffix: '$LANG_SUFFIX'. Expected ruby, python, javascript, java, or dotnet"
-        exit 1
-        ;;
-    esac
-
-    echo "tag=$TAG" >> "$GITHUB_OUTPUT"
-    echo "version=$VERSION" >> "$GITHUB_OUTPUT"
-    echo "language=$LANGUAGE" >> "$GITHUB_OUTPUT"
-
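One possible shape for that shared script — a hedged sketch only: the script path comes from the suggestion, and the function below re-implements the validation rules quoted in the diff so they can be unit-tested outside CI:

```shell
#!/usr/bin/env bash
parse_release_tag() {
    local tag="${1//[[:space:]]/}"
    if [[ ! "$tag" =~ ^selenium-[0-9]+\.[0-9]+\.[0-9]+(-[a-z]+)?$ ]]; then
        echo "Invalid tag format: '$tag'" >&2; return 1
    fi
    local version patch lang_suffix=""
    version="$(sed -E 's/^selenium-([0-9]+\.[0-9]+\.[0-9]+)(-[a-z]+)?$/\1/' <<<"$tag")"
    patch="${version##*.}"
    [[ "$tag" =~ -([a-z]+)$ ]] && lang_suffix="${BASH_REMATCH[1]}"
    if (( patch > 0 )) && [[ -z "$lang_suffix" ]]; then
        echo "Patch releases must specify a language" >&2; return 1
    fi
    if (( patch == 0 )) && [[ -n "$lang_suffix" ]]; then
        echo "Full releases (X.Y.0) cannot have a language suffix" >&2; return 1
    fi
    case "$lang_suffix" in
        ruby|python|javascript|java|dotnet|"") ;;
        *) echo "Invalid language suffix: '$lang_suffix'" >&2; return 1 ;;
    esac
    echo "tag=$tag"
    echo "version=$version"
    echo "language=${lang_suffix:-all}"
}

parse_release_tag "selenium-4.28.1-ruby"
# tag=selenium-4.28.1-ruby
# version=4.28.1
# language=ruby
```

The calling workflow would then redirect this output into `$GITHUB_OUTPUT`, as the diff above shows, keeping all workflows on identical rules.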

Suggestion 12:

Fix workspace root resolution

Replace the brittle workspace root discovery logic with a more robust method using the Bazel runfiles tree to prevent unexpected failures.

dotnet/private/dotnet_format.bzl [49-51]

 # Find the workspace root
-WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$(cd "$(dirname "$0")/../.." && pwd)}}"
+WORKSPACE_ROOT="${BUILD_WORKSPACE_DIRECTORY:-$RUNFILES_DIR/{workspace}}"
 DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
 
+if [[ ! -d "$DOTNET_DIR" ]]; then
+    echo "ERROR: Could not find dotnet/ directory at $DOTNET_DIR" >&2
+    exit 1
+fi
+

Suggestion 13:

Resolve repo root via runfiles

Replace the brittle workspace root discovery logic with a more robust method using the Bazel runfiles tree to prevent unexpected failures.

dotnet/private/paket_deps.bzl [50-52]

 # Find the workspace root (where dotnet/.config/dotnet-tools.json lives)
-WORKSPACE_ROOT="${{BUILD_WORKSPACE_DIRECTORY:-$(cd "$(dirname "$0")/../.." && pwd)}}"
+WORKSPACE_ROOT="${BUILD_WORKSPACE_DIRECTORY:-$RUNFILES_DIR/{workspace}}"
 DOTNET_DIR="$WORKSPACE_ROOT/dotnet"
 
+if [[ ! -d "$DOTNET_DIR" ]]; then
+    echo "ERROR: Could not find dotnet/ directory at $DOTNET_DIR" >&2
+    exit 1
+fi
+

[Auto-generated best practices - 2026-03-28]
