Summary
Investigating #510 surfaced that `tf.sparse.from_dense`'s `SparseFromDense` generator never fired because `tensorflow.xml` had no `<class name="sparse_from_dense">` block bound for it (closed in ponder-lab#270). Auditing the rest of the dispatch table for the same pattern turned up 5 more dispatched arms with no matching XML class binding:
| Dispatched arm in `TensorGeneratorFactory` | `TypeReference` target | XML status |
| --- | --- | --- |
| `DIVIDE` | `Ltensorflow/math/divide` | Missing |
| `SUBTRACT` | `Ltensorflow/math/subtract` | Missing |
| `FLATTEN` | `Ltensorflow/functions/flatten` | Missing |
| `MAX_POOL` | `Ltensorflow/functions/max_pool` | Missing |
| `SOFTMAX_CROSS_ENTROPY_WITH_LOGITS` | `Ltensorflow/functions/softmax_cross_entropy_with_logits` | Missing |
The `SparseFromDense` precedent (ponder-lab#270) shows what each of these means: the Java generator and dispatch arm both exist, but without the XML binding the dispatch arm never fires — calls to these ops produce ⊤ output instead of the precision the dedicated generator would compute. None of these ops is currently exercised by a fixture that asserts a non-⊤ shape/dtype, which is why CI hasn't surfaced the silent precision loss.
Audit Method
Cross-referenced (a) the `isType(calledFunction, X.getDeclaringClass())` arms in `TensorGeneratorFactory.getGeneratorBody`, (b) the corresponding `TypeName.string2TypeName("L...")` strings in the matching `TensorFlowTypes` constants, and (c) the `<new def="..." class="L...">` declarations in `tensorflow.xml` plus `numpy.xml`. Anything dispatched but not declared is in the table above.
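The cross-reference is mechanical enough to script. Below is a minimal sketch, assuming the arm-to-`TypeName` mapping is hand-transcribed from `TensorGeneratorFactory`/`TensorFlowTypes` and that `tensorflow.xml` and `numpy.xml` sit in the working directory (both are assumptions, not repo-layout facts):

```python
# Audit sketch: flag dispatched arms whose TypeReference target has no
# <new ... class="L..."> declaration in the XML summaries.
import xml.etree.ElementTree as ET

# Hand-transcribed from the TensorFlowTypes constants referenced above.
DISPATCHED = {
    "DIVIDE": "Ltensorflow/math/divide",
    "SUBTRACT": "Ltensorflow/math/subtract",
    "FLATTEN": "Ltensorflow/functions/flatten",
    "MAX_POOL": "Ltensorflow/functions/max_pool",
    "SOFTMAX_CROSS_ENTROPY_WITH_LOGITS":
        "Ltensorflow/functions/softmax_cross_entropy_with_logits",
}

# Collect every class name declared via <new def="..." class="L...">.
declared = set()
for path in ("tensorflow.xml", "numpy.xml"):
    for new in ET.parse(path).getroot().iter("new"):
        declared.add(new.get("class"))

for arm, type_name in DISPATCHED.items():
    if type_name not in declared:
        print(f"{arm} -> {type_name}: Missing")
```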
Suggested Fix
For each of the 5 ops:
- Verify the ground-truth API (some may have been renamed / moved upstream — e.g., `tf.flatten` is `tf.keras.layers.Flatten` now; `tf.divide`/`tf.subtract` are real top-level aliases under `tf.math`).
- Add the missing `<putfield>` aliases (mirroring the `sparse_add` / `sparse_from_dense` pattern) and the `<class>` block.
- Add a fixture exercising the op with a kwarg form so the dedicated generator's `Parameters#getName()` is also covered (see the sketch after this list).

Mirror the diff shape from ponder-lab#270 (3 file changes: `tensorflow.xml` aliases + class block, a `tf2_test_<op>.py` fixture, and a `test<Op>` JUnit method).
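As an illustration of the fixture bullet above, here is a minimal sketch for the `DIVIDE` case, assuming `tf2_test_<op>.py` resolves to `tf2_test_divide.py` and assuming the common pattern of passing the op's result to a function whose parameter the JUnit side asserts on (`f` and the tensor shapes are hypothetical):

```python
# Hypothetical tf2_test_divide.py fixture; f and the shapes are illustrative.
# The kwarg spelling (x=..., y=...) exercises the generator's
# Parameters#getName() path, per the suggestion above.
import tensorflow as tf


def f(a):
    pass


f(tf.divide(x=tf.ones([2, 2]), y=tf.ones([2, 2])))
```

A positional-only fixture would keep passing even if keyword handling in `Parameters#getName()` regressed, so the kwarg form is the part worth keeping.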
Why This Matters
Each silent gap is precision loss on real TF ops — e.g., `tf.divide(x, y)` should dispatch to a divide-aware generator (likely `ElementWiseOperation` semantics), but instead produces ⊤ because the call site doesn't resolve to `Ltensorflow/math/divide`. Real-world code using these ops won't get the typing the Ariadne engine is theoretically capable of.
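For concreteness, here is what the element-wise semantics give at runtime, i.e. the precision a divide-aware generator could recover (the shapes are illustrative):

```python
# Runtime behavior at a call site that the analysis currently types as ⊤.
import tensorflow as tf

x = tf.ones([3, 4])       # shape (3, 4), dtype float32
y = tf.fill([3, 4], 2.0)  # shape (3, 4), dtype float32
z = tf.divide(x, y)       # element-wise: shape (3, 4), dtype float32
print(z.shape, z.dtype)
```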
Cross-Refs
- ponder-lab/ML#270 — "`tf.sparse.from_dense` through dedicated `SparseFromDense` generator (wala/ML#510)": the `SparseFromDense` precedent.
- #510 — "`Cast`/`ReadDataSets`/`SparseFromDense` `Parameters#getName()` are dispatch-bypassed vs. missing fixtures": the investigation that surfaced the audit.