Optimization Services

OS Related Research

  • The Optimization Services Project

Optimization Services is a young research area that has the potential to bring many benefits to operations research and the optimization community. Motivated by a vision of the next generation of optimization software and standards, Optimization Services deals with a wide variety of issues that have accumulated over the past few decades in computing and optimization. This work addresses both design and implementation issues by providing a general, unified framework for such tasks as standardizing problem representation, automating problem analysis and solver choice, working with new Web service standards, scheduling computational resources, benchmarking solvers, and verifying results, all in the context of the special requirements of large-scale computational optimization. The quality required of Optimization Services must therefore be very high. Improving the quality of OS-related standards, tools, and systems should be a constant effort, and adapting to the new needs of researchers (scalability) while best serving the ultimate users (ease of use) should always be the goal of OS. Collaborations have been established in many of the following areas.

  • Standardization

Optimization Services involves a large set of standard protocols that need to be adopted quickly and universally. The standardization process starts from working group notes, proceeds through stages such as working drafts and candidate recommendations, and ends with formal recommendation as a standard. Such a process requires not only further research, such as extensions for new optimization problem types, but also organizational work, including the formal establishment of collaborations under the OS framework.

  • Problem Repository Building

With the standardization of various problem representations naturally comes the task of building repositories of optimization problem instances using the OS standards. Problem repositories no longer need to be categorized by the format in which the problems are written; instead, they are classified only by the optimization types supported in the OS standards.

  • Library Building

The OS library and the OS server are provided to facilitate the adoption and use of the OS standards. Besides the original OS designers, other researchers are free to develop their own OS-compatible libraries, such as readers and writers of standard instances, and communication agents to transmit these instances.
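
As an illustration of what an OS-compatible reader might look like, the following Python sketch parses the variables out of a simplified OSiL-like instance. The element names, the class name, and the omission of XML namespaces are simplifications made for this example; it does not reproduce the actual OS library API or the full OSiL schema.

    # Minimal, hypothetical sketch of an OSiL "reader" for illustration only;
    # element names are simplified and XML namespaces are omitted.
    import xml.etree.ElementTree as ET

    class SimpleOSiLReader:
        """Parse a simplified OSiL-like instance into a Python dictionary."""

        def read(self, osil_text):
            root = ET.fromstring(osil_text)
            variables = [
                {"name": v.get("name"), "lb": float(v.get("lb", "0"))}
                for v in root.findall("./instanceData/variables/var")
            ]
            return {"variables": variables}

    # Usage with a toy instance string:
    osil = ('<osil><instanceData><variables>'
            '<var name="x0" lb="0"/><var name="x1" lb="10"/>'
            '</variables></instanceData></osil>')
    print(SimpleOSiLReader().read(osil))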

  • Derived Research in Distributed and Decentralized Systems

A distributed system leaves open many questions in coordination, job scheduling, and congestion control. One distinct issue, for example, is how optimization “jobs” should best be assigned to run on available registered services once their optimization types are determined. The usual centralized scheme of an optimization server maintains one queue for each solver/format combination, along with a list of the workstations on which each solver can run. In a decentralized environment, we may still want to maintain this scheduling control while making the scheduling decisions more distributed, i.e. transferring some control to the solver service side. Further study is needed to better understand how the categorization of optimization problem instances, together with statistics from previous runs, can be used to improve scheduling decisions. As just one example, an intelligent scheduler should not assign two large jobs to a single-processor machine, since they will only become bogged down contending for resources; but a machine assigned one large job could also take care of a series of very small jobs without noticeable degradation in performance on either kind of job. Both the kind and the size of optimization instances must be assessed to decide which should be considered “large” or “small” for purposes of this scheduling approach.
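
A minimal sketch of that scheduling heuristic is given below, under the assumptions that job sizes have already been classified as “large” or “small” and that each machine reports its processor count; the class and function names are hypothetical and not part of any OS standard.

    # Hypothetical scheduling heuristic: never place two "large" jobs on a
    # single-processor machine, but let small jobs share it freely.
    from dataclasses import dataclass, field

    @dataclass
    class Machine:
        name: str
        processors: int
        jobs: list = field(default_factory=list)

        def can_accept(self, size):
            large_running = sum(1 for s in self.jobs if s == "large")
            if size == "large":
                # at most one large job per processor
                return large_running < self.processors
            return True  # small jobs cause no noticeable degradation

    def assign_job(size, machines):
        # prefer the least-loaded machine that can accept the job
        candidates = [m for m in machines if m.can_accept(size)]
        if not candidates:
            return None        # no capacity: leave the job queued
        best = min(candidates, key=lambda m: len(m.jobs))
        best.jobs.append(size)
        return best

    machines = [Machine("ws1", 1), Machine("ws2", 2)]
    for size in ["large", "large", "small"]:
        chosen = assign_job(size, machines)
        print(size, "->", chosen.name if chosen else "queued")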

The central issue in a decentralized architecture is the design of a registration and discovery mechanism for acting on service requests. For example, the optimization registry could assign requests based on some overall model of solver performance and resource availability. Requests can be scheduled after they are matched to services, or scheduling could be made an integral part of the assignment process. Pricing could involve agent “rents” as well as charges determined by various measures of resource use. Besides keeping and maintaining information on optimization solvers and other services, one critical and more complex role of an optimization registry in a decentralized environment is a “more confident” determination of appropriate solvers. A relatively easy and straightforward scheme can rely on a database that matches solvers with the problem types they can handle. Characteristics of a problem instance, determined by the analyzers, can be used to automatically generate a query on the database that returns a list of appropriate solver services. But how should solver recommendations deal with problem types (e.g. bound-constrained optimization) that are subsets of other problem types (e.g. nonlinear optimization)? And how can recommendations be extended to solver options? For these purposes, a straightforward database approach for a server or registry may not be adequate; developers will need to consider more sophisticated ways of determining recommendations, such as business rules systems. A more advanced scheme may consider extensions that generate lists ranked by degree of appropriateness.
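
A rough sketch of the database-matching idea, including the subset relationship between problem types, is shown below; the solver names, the type taxonomy, and the parent-type table are illustrative assumptions, not part of the OS registry design.

    # Hypothetical registry lookup: an instance's problem type matches any
    # solver that handles that type or one of its supersets (e.g. a
    # bound-constrained instance also matches nonlinear solvers).
    SOLVER_TYPES = {
        "solverA": {"linear"},
        "solverB": {"nonlinear"},
        "solverC": {"bound-constrained"},
    }
    PARENT_TYPE = {"bound-constrained": "nonlinear"}  # subset -> superset

    def matching_solvers(problem_type):
        # walk up the type hierarchy, collecting solvers along the way
        matches, t = [], problem_type
        while t is not None:
            matches += [s for s, types in SOLVER_TYPES.items() if t in types]
            t = PARENT_TYPE.get(t)
        return matches

    print(matching_solvers("bound-constrained"))  # ['solverC', 'solverB']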

  • Derived Research in Local Systems

The Optimization Services framework standardizes all communication between any two Optimization Services components on an OS distributed system; it does not standardize local interfacing. Related projects such as COIN, and research derived from Optimization Services such as the OS Instance, OS Option, and OS Result interfaces, are intended to do this job. The COIN project includes the OSI (Open Solver Interface) library, an API for linear programming solvers, and NLPAPI, a subroutine library with routines for building nonlinear programming problems. Another proposed nonlinear interface, by Halldórsson, Thorsteinsson, and Kristjánsson, is MOI (Modeler-Optimizer Interface), which specifies the format for a callable library. This library is based on representing the nonlinear part of each constraint and the objective function in postfix (reverse Polish) notation, then assigning integers to operators, characters to operands, and integer indices to variables, and finally defining the corresponding set of arrays. The MOI data structure thus corresponds to the implementation of a stack machine. A similar interface is described in the LINDO API manual. The OS framework is complementary to all of these local interface standardization efforts.
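
To make the postfix idea concrete, the sketch below stores the nonlinear expression x1*x2 + sin(x1) as a postfix token list and evaluates it with a small stack machine. Readable string tokens are used in place of MOI's actual integer and character encodings, so this illustrates only the principle, not the MOI format itself.

    # Postfix (reverse Polish) representation of x1*x2 + sin(x1) and a
    # small stack machine to evaluate it.  String tokens are used here
    # instead of MOI's integer/character codes.
    import math

    postfix = ["x1", "x2", "*", "x1", "sin", "+"]

    def evaluate(tokens, x):
        stack = []
        for tok in tokens:
            if tok in x:                       # variable: push its value
                stack.append(x[tok])
            elif tok == "*":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif tok == "+":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif tok == "sin":                 # unary operator
                stack.append(math.sin(stack.pop()))
        return stack.pop()

    print(evaluate(postfix, {"x1": 2.0, "x2": 3.0}))  # 6 + sin(2)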

  • Derived Research in Optimization Servers

Optimization Services is motivated by the issues currently faced by many optimization servers. More specifically, Optimization Services is intended to provide the next generation of NEOS. The effects of Optimization Services on NEOS are multifaceted:

The NEOS server and its connected solvers will communicate using the Optimization Services framework, e.g. using standard representation for data communication.

External optimization submissions can still be kept as flexible as possible and may become even more flexible. At least one more networking mechanism will be provided, namely communication based on the Optimization Services Protocol (OSP). That means NEOS will add an interface so that it can be invoked exactly as specified by the Optimization Services hook-up Language (OShL). It will also accept OSiL as a standard input and may gradually deprecate the other formats.

The entire OS system can be viewed as a new, decentralized NEOS. In effect, the old NEOS becomes another OS-compatible solver in the new system. The “NEOS solver” can then handle more types of optimization problems by delegating jobs to the different solvers behind it. We therefore regard the old NEOS as a “meta-solver” registered on the new OS system.
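
The sketch below shows, with assumed names and a deliberately simplified dispatch rule, what such a meta-solver might look like: a single OS-style solve entry point that hands the instance to one of the solvers registered behind it. The signature and the error handling are illustrative assumptions rather than the actual NEOS or OShL implementation; in practice the problem type would be determined by analyzing the OSiL instance itself.

    # Hypothetical "meta-solver": one OS-compatible solve() entry point that
    # delegates the job to whichever back-end solver handles the problem type.
    class MetaSolver:
        def __init__(self, backends):
            # backends: {problem_type: callable taking (osil, osol) -> osrl}
            self.backends = backends

        def solve(self, osil, osol, problem_type):
            backend = self.backends.get(problem_type)
            if backend is None:
                raise ValueError("no registered solver for " + problem_type)
            return backend(osil, osol)

    # Toy back-ends standing in for the solvers registered behind NEOS.
    neos_like = MetaSolver({
        "linear": lambda osil, osol: "<osrl><!-- from an LP solver --></osrl>",
        "nonlinear": lambda osil, osol: "<osrl><!-- from an NLP solver --></osrl>",
    })
    print(neos_like.solve("<osil/>", "<osol/>", "linear"))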

  • Derived Research in Computational Software

With the advent of Optimization Services and its standard OSP protocol, related software developers may need to think about how to best adapt to the OS framework and become “OS-compatible.” Two projects immediately related to the Optimization Services framework already exist. One is the Optimization Services modeling Language described in Section 3; the other is the IMPACT solver project under development by Professor Sanjay Mehrotra’s group in the Industrial Engineering and Management Sciences department at Northwestern University. The two projects represent the two sides of Optimization Services: client and service. Both are built natively for the Optimization Services framework and strictly follow the Optimization Services Protocol. Existing modeling languages and solvers are also being adapted, or will be adapted, to the Optimization Services framework by writing wrapper classes; examples include the AMPL modeling language, the LINDO API, open COIN software, and solvers from the NEOS system.
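
For instance, adapting an existing solver could amount to a small wrapper class along the following lines. The toy legacy routine, the simplified OSiL elements, and the OSrL-like output are placeholders; the real AMPL, LINDO, and COIN interfaces differ and are not reproduced here.

    # Hypothetical wrapper class adapting a "legacy" solver routine to an
    # OS-style interface: OSiL string in, OSrL-like string out.
    import xml.etree.ElementTree as ET

    def toy_legacy_solver(num_vars):
        """Stand-in for a native solver call; returns a trivial 'solution'."""
        return [0.0] * num_vars

    class LegacySolverAdapter:
        def solve(self, osil, osol=""):
            root = ET.fromstring(osil)
            num_vars = len(root.findall("./instanceData/variables/var"))
            solution = toy_legacy_solver(num_vars)      # call the native code
            values = " ".join(str(v) for v in solution)
            return ("<osrl><solverMessage>toy solution: "
                    + values + "</solverMessage></osrl>")

    osil = "<osil><instanceData><variables><var/><var/></variables></instanceData></osil>"
    print(LegacySolverAdapter().solve(osil))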

  • Derived Research in Computational Algorithms
    • The design of effective and efficient computational algorithms that fit the Optimization Services framework is important. Optimization Services immediately opens up the question of how to best utilize the available services on the OS network. The following are some of the potential research areas in computational algorithms related to Optimization Services:
    • Parallel computing, where many services can simultaneously solve similar optimization problems (see the sketch following this list).
    • Optimization via simulation where simulation services are located remotely from OS solvers.
    • Optimization job scheduling at the registry side and queuing at the service side.
    • Analyzing optimization instances according to the needs of the OS registry.
    • Modeling and compilation that generates OSiL instances quickly and accurately.
    • Efficient OSxL instance parsing and preprocessing algorithms.
    • Effective Optimization Services process orchestration.
    • As the OS standards allow representation of various optimization types, algorithm development that has lagged for lack of good problem representations (e.g. in stochastic programming) may receive a boost.
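
As noted in the parallel-computing item above, here is a minimal sketch of dispatching the same instance to several solver services at once and keeping whichever answer arrives first. The services are simulated locally with a random delay; in a real deployment each call would be an OSP request over the network.

    # Minimal sketch of parallel use of several (simulated) solver services:
    # submit the same instance to each and take the first result returned.
    import concurrent.futures, random, time

    def solver_service(name, osil):
        time.sleep(random.uniform(0.1, 0.5))   # pretend network + solve time
        return "<osrl><solverMessage>solved by " + name + "</solverMessage></osrl>"

    def solve_in_parallel(osil, services):
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [pool.submit(solver_service, s, osil) for s in services]
            done, not_done = concurrent.futures.wait(
                futures, return_when=concurrent.futures.FIRST_COMPLETED)
            for f in not_done:                  # best effort: drop the laggards
                f.cancel()
            return next(iter(done)).result()

    print(solve_in_parallel("<osil/>", ["serviceA", "serviceB", "serviceC"]))
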
  • Commercialization and Derived Business Models

Optimization Services, though itself an open framework, does not prevent registered services and related businesses from being commercialized. The following are some of the related business models:

    • Modeling language developers can leverage Optimization Services to provide their customers with access to more and better solvers, and thereby become more competitive.
    • Solver developers can concentrate on developing better algorithms to increase their competitiveness, without worrying about representation, communication, and interfacing, which are taken care of by the OS standards.
    • Developers can commercialize their OS libraries, e.g. readers and writers of standard instances.
    • Registry/server developers can provide auxiliary services such as storage services, business-process flow orchestration services, advertisement services, and consulting services.
    • Auxiliary services such as analyzers and benchmarkers may charge fees to the parties involved.
    • Solver owners may adopt a “computing on demand” model by charging the user for using their services.
    • A solver service owner may also adopt a “result on demand” model by reporting the objective values that its solver service has found while hiding the solutions, which are revealed only when the user agrees to pay. For example, in the Optimization Services result Language (OSrL) output that the solver service returns, it may write out only the <objectiveValue> value, and in the <solverMessage> element it may provide payment instructions for obtaining the <variableSolution> values, as the sketch following this list illustrates.
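
A sketch of that scheme: starting from a full result, the service strips the <variableSolution> values and leaves payment instructions in <solverMessage>. The element layout is simplified and is not meant to match the exact OSrL schema.

    # Hypothetical "result on demand": return the objective value but withhold
    # the variable solution until payment.  Element layout is simplified.
    import xml.etree.ElementTree as ET

    def withhold_solution(full_osrl, payment_note):
        root = ET.fromstring(full_osrl)
        for parent in root.iter():
            for child in list(parent):
                if child.tag == "variableSolution":
                    parent.remove(child)         # hide the actual solution
        message = root.find("solverMessage")
        if message is None:
            message = ET.SubElement(root, "solverMessage")
        message.text = payment_note
        return ET.tostring(root, encoding="unicode")

    full = ("<osrl><objectiveValue>42.0</objectiveValue>"
            "<variableSolution>1 0 3</variableSolution></osrl>")
    print(withhold_solution(full, "solution available after payment"))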