Abstract: Ultra Large-scale Linear Programming refers to the class of problems in which the number of linear inequality constraints grows exponentially with the number of variables. "Continuum Computing" is a generalization of powerful interior-point algorithms and involves computing with transcendental functions, often in several complex variables.
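For orientation, a classical example of the potential functions that drive interior-point methods (given here purely as background; the specific function used in this work is defined in the references below) is Karmarkar's potential for a standard-form LP with optimal value zero:

    \phi(x) = n \ln(c^{\top} x) - \sum_{j=1}^{n} \ln x_j,

which rewards progress in the objective while penalizing proximity to the boundary of the feasible region.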
There are many potential applications of this work to practical problems, e.g., explainable AI, topology optimization for 3D printing, and optimal mask design for 7 nm lithography. However, our focus in this seminar will be on the core mathematical technique and its parallelization on contemporary parallel machines.
We show that the "potential function" and its first two derivatives, as used in the algorithm for a certain class of ultra large-scale LPs, can be computed in polynomial time in the continuum computing paradigm, despite the fact that the feasible region is given by an exponential number of constraints. Actual numerical cross-simulation on the standard computing model and machines involves approximate numerical computation of the inverse Laplace transform. We also apply this approach to establishing the unsatisfiability of Boolean formulas, a task known to require exponentially long resolution-based proofs in the standard Turing machine model; it is also believed that this task cannot be carried out efficiently in the quantum computing model. Earlier expositions of this work from FOCM can be found in arXiv:1412.3335 and LNCS 6457.
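As a minimal illustration of the kind of numerical Laplace inversion the cross-simulation requires (this is a generic sketch, not the authors' actual transform or procedure; the toy transform F and the choice of mpmath's Talbot routine are assumptions for the example):

    # Illustrative numerical inverse Laplace transform via mpmath's
    # Talbot contour method. F(p) = 1/(p+1) is a toy transform whose
    # exact inverse is f(t) = exp(-t); it stands in for whatever
    # Laplace-domain function the algorithm actually needs to invert.
    from mpmath import mp, invertlaplace, exp

    mp.dps = 30                    # working precision (decimal digits)
    F = lambda p: 1 / (p + 1)      # toy Laplace-domain function
    t = 1.0
    approx = invertlaplace(F, t, method='talbot')
    print(approx)                  # ~0.3678794..., matches exp(-1)
    print(exp(-t))                 # exact value for comparison

The Talbot method deforms the Bromwich contour so that the integrand decays rapidly, giving fast convergence for smooth transforms; accuracy can be increased by raising the working precision mp.dps.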