
Audit: TensorGeneratorFactory dispatch arms with no tensorflow.xml class binding never fire #511

@khatchad

Summary

Investigating #510 surfaced that tf.sparse.from_dense's SparseFromDense generator never fired because tensorflow.xml had no <class name="sparse_from_dense"> block bound to it (fixed in ponder-lab#270). Auditing the rest of the dispatch table for the same pattern turned up five more dispatch arms with no matching XML class binding:

Dispatched arm in TensorGeneratorFactory | TypeReference target | XML status
--- | --- | ---
DIVIDE | Ltensorflow/math/divide | Missing
SUBTRACT | Ltensorflow/math/subtract | Missing
FLATTEN | Ltensorflow/functions/flatten | Missing
MAX_POOL | Ltensorflow/functions/max_pool | Missing
SOFTMAX_CROSS_ENTROPY_WITH_LOGITS | Ltensorflow/functions/softmax_cross_entropy_with_logits | Missing

The SparseFromDense precedent (ponder-lab#270) shows what each of these means: the Java generator and dispatch arm both exist, but without the XML binding the dispatch arm never fires — calls to these ops produce ⊤ output instead of the precision the dedicated generator would compute. None of these is currently exercised by a fixture that asserts non-⊤ shape/dtype, which is why CI hasn't surfaced the silent precision loss.
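For concreteness, the missing artifact for DIVIDE would be something like the hedged sketch below, mirroring the sparse_from_dense entry from ponder-lab#270. The element names (<class>, <new>, <putfield>) come from this issue's audit; every attribute value here (allocatable, descriptor, numArgs, paramNames, and the putfield alias wiring) is an assumption and should be copied from an existing tensorflow.xml entry rather than from this sketch.

```xml
<!-- Hypothetical sketch only; copy attribute shapes from an existing entry
     such as sparse_from_dense rather than trusting the values here. -->
<class name="divide" allocatable="true">
  <method name="do" descriptor="()LRoot;" numArgs="3" paramNames="self x y">
    <!-- Allocate the marker object that makes call sites resolve to
         Ltensorflow/math/divide, so the DIVIDE dispatch arm can fire. -->
    <new def="result" class="Ltensorflow/math/divide" />
    <return value="result" />
  </method>
</class>

<!-- Alias so both tf.divide and tf.math.divide reach the class above
     (attribute wiring is an assumption, mirroring the sparse_add pattern). -->
<putfield class="LRoot" field="divide" fieldType="LRoot" ref="math" value="divide" />
```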

Audit Method

Cross-referenced (a) the isType(calledFunction, X.getDeclaringClass()) arms in TensorGeneratorFactory.getGeneratorBody, (b) the corresponding TypeName.string2TypeName("L...") strings in the matching TensorFlowTypes constants, and (c) the <new def="..." class="L..."> declarations in tensorflow.xml plus numpy.xml. Anything dispatched but not declared is in the table above.
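The (b)-to-(c) leg of this cross-reference is mechanical enough to script; the (a)-to-(b) leg follows from the dispatch arms and constants sharing the same TensorFlowTypes fields. A rough sketch (file paths come from the command line, and the regexes are assumptions derived from the snippets quoted above, so treat its output as candidates to verify by hand, not ground truth):

```python
#!/usr/bin/env python3
"""Sketch of the audit above: report TypeNames that are dispatched in Java
but never declared in the XML summaries."""
import re
import sys
from pathlib import Path

# Usage: audit.py TensorFlowTypes.java tensorflow.xml numpy.xml
types_java, *xml_files = sys.argv[1:]

# (b) TypeReference targets built via TypeName.string2TypeName("L...")
dispatched = set(re.findall(r'string2TypeName\("(L[\w/]+)"\)', Path(types_java).read_text()))

# (c) every class="L..." the XML summaries instantiate via <new ... />
declared = set()
for xml in xml_files:
    declared |= set(re.findall(r'class="(L[\w/]+)"', Path(xml).read_text()))

for missing in sorted(dispatched - declared):
    print(f"dispatched but not declared: {missing}")
```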

Suggested Fix

For each of the 5 ops:

  1. Verify the ground-truth API (some may have been renamed or moved upstream; e.g., flatten now lives at tf.keras.layers.Flatten, while tf.divide and tf.subtract are genuine top-level aliases of tf.math.divide and tf.math.subtract).
  2. Add the missing <putfield> aliases (mirroring the sparse_add / sparse_from_dense pattern) and the <class> block; the hedged sketch above shows the intended shape.
  3. Add a fixture exercising the op with a kwarg form so the dedicated generator's Parameters#getName() is also covered (a fixture sketch appears below).

Mirror the diff shape from ponder-lab#270 (3 file changes: tensorflow.xml aliases + class block, tf2_test_<op>.py fixture, test<Op> JUnit method).
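As a sketch of the fixture leg for DIVIDE (file name per the tf2_test_<op>.py convention above; the entry-point shape is an assumption, so mirror the real tf2_test_sparse_from_dense fixture for harness conventions):

```python
# tf2_test_divide.py -- hypothetical fixture sketch; mirror the real
# tf2_test_sparse_from_dense fixture for the harness's entry-point conventions.
import tensorflow as tf


def f(a, b):
    # Kwarg form on purpose, so the dedicated generator's Parameters#getName()
    # path is exercised in addition to positional dispatch.
    return tf.math.divide(x=a, y=b)


c = f(tf.ones([2, 2]), tf.fill([2, 2], 2.0))
```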

Why This Matters

Each silent gap is precision loss on real TF ops. For example, tf.divide(x, y) should dispatch to a divide-aware generator (likely with ElementWiseOperation semantics), but instead produces ⊤ because the call site never resolves to Ltensorflow/math/divide. Real-world code using these ops will not get the typing the Ariadne engine is theoretically capable of.
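For reference, here is the shape/dtype information at stake, in plain TensorFlow (no analysis involved):

```python
import tensorflow as tf

x = tf.ones([3, 4])
y = tf.fill([3, 4], 2.0)

# Element-wise division: the result's shape and dtype follow the (broadcast)
# operands. This is the precision a divide-aware generator could recover
# instead of falling back to ⊤.
z = tf.divide(x, y)
print(z.shape, z.dtype)  # -> (3, 4) <dtype: 'float32'>
```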

Cross-Refs

  - #510: the investigation that surfaced the tf.sparse.from_dense gap
  - ponder-lab#270: the SparseFromDense fix whose diff shape this issue mirrors
