Add input layer to profiling plots (fixes #404) #1439

Open
siddiquifaras wants to merge 1 commit into fastmachinelearning:main from siddiquifaras:fix-input-layer-profiling-404

Conversation

@siddiquifaras

Description

Fixes #404 by including input layer distribution and precision overlay in profiling plots.

Problem: Profiling plots showed all layers except the input. The activations_keras, activations_torch, and activations_hlsmodel functions all skipped input data, and activation_types_hlsmodel had no input tick label to anchor the type overlay to.

Solution: Added two private helper functions (_normalize_input_data and _add_input_distributions) and updated four existing functions to prepend input distributions before processing layer activations. The type overlay now explicitly includes input variable precisions from model.get_input_variables().

Dependencies: None. Uses existing numpy operations and follows the same vectorized pattern as the existing layer processing.

Type of change

  • Bug fix (non-breaking change that fixes an issue)

Implementation Details

New helpers

def _normalize_input_data(X, input_names):
    """Return list of (name, ndarray) pairs from X, respecting input_names order."""
    if isinstance(X, dict):
        return [(n, np.asarray(X[n])) for n in input_names if n in X]
    if isinstance(X, (list, tuple)):
        return [(n, np.asarray(a)) for n, a in zip(input_names, X)]
    return [(input_names[0], np.asarray(X))]
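The helper above accepts all three input formats the profiling entry points see. A minimal, self-contained demo (the helper body is restated here so the snippet runs on its own; the input names are made up for illustration):

```python
import numpy as np

def _normalize_input_data(X, input_names):
    """Return list of (name, ndarray) pairs from X, respecting input_names order."""
    if isinstance(X, dict):
        return [(n, np.asarray(X[n])) for n in input_names if n in X]
    if isinstance(X, (list, tuple)):
        return [(n, np.asarray(a)) for n, a in zip(input_names, X)]
    return [(input_names[0], np.asarray(X))]

# Single ndarray: attributed to the first (and only) input name
single = _normalize_input_data(np.zeros((4, 3)), ['input_1'])

# List of arrays: zipped positionally against the input names
multi = _normalize_input_data([np.zeros(2), np.ones(2)], ['a', 'b'])

# Dict: reordered to follow input_names; unknown keys are ignored
mixed = _normalize_input_data({'b': np.ones(2), 'a': np.zeros(2)}, ['a', 'b'])

print([n for n, _ in single])  # ['input_1']
print([n for n, _ in multi])   # ['a', 'b']
print([n for n, _ in mixed])   # ['a', 'b']
```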


def _add_input_distributions(data, X, input_names, fmt='longform', plot='boxplot'):
    """Prepend input-data distribution entries to *data* (modified in place)."""
    for name, arr in _normalize_input_data(X, input_names):
        print(f'   {name}')
        y = arr.flatten()
        y = np.abs(y[y != 0])  # Same zero-drop/magnitude pattern as existing layer processing
        if len(y) == 0:
            print(f'Input data for {name} contains only zeros, ignoring.')
            continue
        if fmt == 'longform':
            data['x'].extend(y.tolist())
            data['weight'].extend([name] * len(y))
        elif fmt == 'summary':
            # array_to_summary is the existing profiling helper used for layer entries
            data.append(array_to_summary(y, fmt=plot))
            data[-1]['weight'] = name
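The zero-drop/magnitude step mirrors what the layer-activation path already does before plotting on a log axis. A tiny sketch of just that filtering (array values invented for illustration):

```python
import numpy as np

# Zeros cannot be shown on a log axis, so they are dropped;
# magnitudes are taken so negative values plot alongside positive ones.
x = np.array([[0.0, -0.5], [2.0, 0.0]])
y = x.flatten()
y = np.abs(y[y != 0])

print(sorted(y.tolist()))  # [0.5, 2.0]
```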

Updated functions

activations_keras: Prepends input distribution by collecting InputLayer names

input_names = [layer.name for layer in model.layers if isinstance(layer, keras.layers.InputLayer)]
if input_names:
    _add_input_distributions(data, X, input_names, fmt=fmt, plot=plot)

activations_torch: Prepends input distribution before converting X to tensor

activations_hlsmodel: Prepends input distribution and handles multi-input X formats (ndarray/list/dict) for trace

activation_types_hlsmodel: Explicitly adds input variable precisions with O(1) duplicate checking

input_var_names = set()
for input_var in model.get_input_variables():
    T = input_var.type.precision
    W, I, F, S = ap_fixed_WIFS(T)
    data['layer'].append(input_var.name)
    data['low'].append(-F)
    data['high'].append(I - 1 if S else I)
    input_var_names.add(input_var.name)

for layer in model.get_layers():
    if layer.name in input_var_names:
        continue  # Skip duplicates
    # ... rest unchanged
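The low/high bounds in the snippet above come from the fixed-point format. Assuming `ap_fixed_WIFS` returns (total width W, integer bits I, fractional bits F, signedness flag S) with F = W - I, the bound computation can be sketched as a standalone function (the function name here is hypothetical, chosen only for this example):

```python
def precision_bounds(W, I, S):
    """Bit-exponent bounds used by the profiling overlay.

    Assumes ap_fixed-style precision: W total bits, I integer bits
    (including the sign bit when S is True), F = W - I fractional bits.
    """
    F = W - I
    low = -F                  # exponent of the least significant bit
    high = I - 1 if S else I  # the sign bit consumes one integer bit
    return low, high

print(precision_bounds(16, 6, True))   # ap_fixed<16,6>  -> (-10, 5)
print(precision_bounds(8, 4, False))   # ap_ufixed<8,4>  -> (-4, 4)
```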

Why the ordering works

boxplot(fmt='summary') calls data.reverse() before plotting. Input entries are prepended first, so after reversal they appear at the top (highest y-position). Type overlay matches by layer name, so both the distribution tick and precision rectangle align automatically.
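The ordering argument can be seen in a two-line sketch (entry names invented for illustration): the input entry sits at index 0, and after the reversal that boxplot performs it becomes the last entry, which is drawn at the highest y-position.

```python
# Input entries are prepended, so they start at the front of the list.
data = [{'weight': 'input_1'}, {'weight': 'dense'}, {'weight': 'dense_1'}]

data.reverse()  # as done by boxplot(fmt='summary') before plotting

print([d['weight'] for d in data])  # ['dense_1', 'dense', 'input_1']
```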

Tests

New tests (test/pytest/test_profiling_input_layer.py):

  • Unit tests for _normalize_input_data helper (single input, multi-input list, multi-input dict)
  • Integration tests for activations_keras (summary and longform formats)
  • Integration tests for activation_types_hlsmodel (presence and valid precision bounds)

Test execution:

# New tests (8 tests)
pytest test/pytest/test_profiling_input_layer.py -v
# Result: 8 passed in 2.38s

# Existing profiling tests (32 tests)
pytest test/pytest/test_keras_v3_profiling.py test/pytest/test_pytorch_profiler.py -v
# Result: 32 passed, 1 skipped (pre-existing), 0 regressions

Test Configuration:

  • Python 3.12.12
  • pytest 9.0.2
  • keras 3.10+
  • torch 2.10.0
  • tensorflow 2.19.1
  • All tests run with -p no:randomly to avoid unrelated qonnx seed issues

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

Additional Notes

  • No breaking changes: All public function signatures unchanged
  • No new dependencies: Uses existing numpy, pandas, matplotlib
  • Performance: O(n) vectorized operations, O(1) duplicate checking with set
  • Compatibility: Works for Keras, PyTorch, and HLS ModelGraph; handles single-input and multi-input models
  • Pre-commit: All 17 hooks passed (ruff, ruff-format, flake8, pyupgrade, trailing whitespace, etc.)
