We designed a new method for shape program inference based on self-training that
works with black-box program executors. We demonstrate that it converges
faster and achieves better reconstruction quality than policy-gradient reinforcement learning.
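To illustrate the self-training loop with a black-box executor, here is a minimal, hand-constructed sketch: a toy "model" proposes candidate programs, an executor we can only run (not differentiate through) renders them, the best-scoring reconstructions become pseudo-labels, and the model is re-fit on them. All names (`execute`, `self_train`, the op-frequency "model", the IoU score) are hypothetical stand-ins, not the paper's actual architecture.

```python
import random

def execute(program):
    # Black-box executor: we only observe its output (a rendered "shape",
    # modeled here as a set of occupied cells). Hypothetical stand-in.
    cells, x = set(), 0
    for op in program:
        x += op
        cells.add(x)
    return cells

def iou(a, b):
    # Reconstruction quality: intersection-over-union of occupied cells.
    return len(a & b) / max(len(a | b), 1)

def sample_program(weights, length=4):
    # Toy "model": sample ops according to learned frequencies.
    ops = [1, 2, 3]
    return [random.choices(ops, weights=[weights[o] for o in ops])[0]
            for _ in range(length)]

def self_train(target, rounds=30, samples=20, seed=0):
    random.seed(seed)
    weights = {1: 1.0, 2: 1.0, 3: 1.0}
    best_prog, best_score = None, -1.0
    for _ in range(rounds):
        # 1. Sample candidate programs from the current model.
        cands = [sample_program(weights) for _ in range(samples)]
        # 2. Execute each through the black box and rank by reconstruction.
        cands.sort(key=lambda p: iou(execute(p), target), reverse=True)
        score = iou(execute(cands[0]), target)
        if score > best_score:
            best_prog, best_score = cands[0], score
        # 3. Treat the top candidates as pseudo-labels and "retrain"
        #    (here: re-estimate op frequencies from the best programs).
        for p in cands[:3]:
            for op in p:
                weights[op] += 0.5
    return best_prog, best_score
```

Because step 2 only calls the executor, nothing requires it to be differentiable, which is the property that lets self-training replace policy-gradient updates in this setting.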
We developed an approach for synthesizing LTL formulas from demonstrations using
deep networks outfitted with a custom neural operator. We show that our method
learns LTL formulas that capture extended sequences of actions, scaling better
than SAT-based approaches.
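As a hand-constructed illustration of the task (not the authors' neural method), the sketch below evaluates LTL formulas under finite-trace semantics and synthesizes, by brute-force enumeration, the smallest formula consistent with positive and negative demonstration traces. The tuple encoding, function names, and depth-bounded enumerator are all assumptions made for this example; exhaustive enumeration like this is exactly what scales poorly compared with a learned approach.

```python
from itertools import product

def holds(f, trace, t=0):
    """Evaluate an LTL formula on a finite trace.
    trace: list of sets of atomic propositions true at each step."""
    op = f[0]
    if op == "ap":
        return t < len(trace) and f[1] in trace[t]
    if op == "not":
        return not holds(f[1], trace, t)
    if op == "X":   # next
        return t + 1 < len(trace) and holds(f[1], trace, t + 1)
    if op == "F":   # eventually
        return any(holds(f[1], trace, k) for k in range(t, len(trace)))
    if op == "G":   # globally
        return all(holds(f[1], trace, k) for k in range(t, len(trace)))
    if op == "U":   # f[1] holds until f[2] holds
        return any(holds(f[2], trace, k) and
                   all(holds(f[1], trace, j) for j in range(t, k))
                   for k in range(t, len(trace)))
    raise ValueError(op)

def enumerate_formulas(aps, depth):
    # Formulas of exactly the given operator depth (a simplified grammar).
    if depth == 0:
        for a in aps:
            yield ("ap", a)
        return
    for g in enumerate_formulas(aps, depth - 1):
        for op in ("X", "F", "G", "not"):
            yield (op, g)
    for g, h in product(enumerate_formulas(aps, depth - 1), repeat=2):
        yield ("U", g, h)

def synthesize(pos, neg, aps, max_depth=2):
    # Smallest formula consistent with the demonstrations:
    # satisfied by every positive trace and by no negative trace.
    for d in range(max_depth + 1):
        for f in enumerate_formulas(aps, d):
            if all(holds(f, tr) for tr in pos) and \
               not any(holds(f, tr) for tr in neg):
                return f
    return None
```

For example, given positive traces where `a` eventually becomes true and a negative trace where it never does, `synthesize` recovers `F a`. The candidate space grows exponentially with depth, which motivates learning-based synthesis for formulas over extended action sequences.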