Machine learning aims to cope with uncertainty and to make predictions about future scenarios in an unknown environment. The core idea of handling such uncertainty, that is, making decisions without knowing the future input, closely resembles another well-studied area of theoretical algorithm design: online algorithms with competitive analysis. Specifically, an online algorithm receives an input sequence of items, one at a time, and must process each item immediately, generating an output or making a decision without any knowledge of the remaining input. In this sense, online algorithms resemble a machine learning setting in which new training data arrives gradually over time, so that the model is continuously retrained and its strength extended; this is also close to the so-called incremental learning model. Another related direction is the online learning model, a variant of traditional machine learning. An online learning algorithm learns from a stream of events, makes a prediction for each element of the stream one by one, receives immediate feedback on each prediction, and uses this feedback to improve the accuracy of subsequent predictions.
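To make the online learning protocol above concrete, the following minimal Python sketch shows a learner that commits to a prediction for each element of a stream, receives the true value as feedback, and updates its model before the next element arrives. The toy linear learner and the synthetic stream are purely illustrative assumptions, not part of the cited work.

# Minimal sketch of the online learning protocol: predict, receive
# feedback, update. The learner and the synthetic data are illustrative.
import random


class OnlineLinearLearner:
    """One-dimensional online gradient descent for squared loss."""

    def __init__(self, learning_rate=0.05):
        self.w = 0.0
        self.lr = learning_rate

    def predict(self, x):
        return self.w * x

    def update(self, x, y_true):
        # Gradient of (w*x - y)^2 with respect to w is 2*(w*x - y)*x.
        grad = 2.0 * (self.predict(x) - y_true) * x
        self.w -= self.lr * grad


learner = OnlineLinearLearner()
for t in range(100):
    x = random.uniform(-1.0, 1.0)
    y_hat = learner.predict(x)                 # commit to a prediction first
    y_true = 3.0 * x + random.gauss(0, 0.1)    # feedback arrives afterwards
    learner.update(x, y_true)                  # use the feedback to improve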
For online algorithms, our recent study investigates the power-down mechanism, which decides when a machine should transition between states so that the total energy consumption, characterized by execution cost, idle cost, and switching cost, is minimized. In contrast to most previous studies, which consider the offline model, we focus on the online model, in which jobs arrive in an online fashion, each with a release time, an execution time, and a deadline. More precisely, we exploit a different switching-on-and-off strategy in the dual-machine model introduced by Chen et al. [STACS 2014: 226-238], and present an upper bound of 3 as well as a lower bound of 2.1, both of which improve upon the currently best-known results [1].
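To make the cost model concrete, the sketch below implements the classical ski-rental-style threshold rule for a single machine: stay idle until the accumulated idle cost equals the switching cost, then power down. This well-known rule is shown only as a hedged illustration of the idle-versus-switching trade-off; it is not the dual-machine strategy analyzed in [1], and the costs used in the example are arbitrary.

# Illustration of the idle/switching cost trade-off via the textbook
# single-machine threshold rule; not the strategy proposed in [1].

def simulate_threshold_powerdown(gap_lengths, idle_cost_per_unit, switch_cost):
    """Return the total cost incurred over the idle gaps between jobs.

    gap_lengths: lengths of the idle intervals between consecutive jobs.
    idle_cost_per_unit: energy burned per time unit while staying idle.
    switch_cost: energy needed to power down and wake the machine up again.
    """
    threshold = switch_cost / idle_cost_per_unit
    total = 0.0
    for gap in gap_lengths:
        if gap <= threshold:
            # Short gap: staying idle is cheaper than powering down.
            total += gap * idle_cost_per_unit
        else:
            # Long gap: pay idle cost up to the threshold, then the
            # switching cost to power down and later wake up.
            total += threshold * idle_cost_per_unit + switch_cost
    return total


# Example: three idle gaps, idle cost 1 per time unit, switching cost 4.
print(simulate_threshold_powerdown([2.0, 10.0, 5.0], 1.0, 4.0))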
Moreover, we explore a recent exciting area, learning-augmented online algorithms, which incorporate predictions in a black-box manner so as to outperform existing algorithms when the predictions are accurate, while maintaining theoretical guarantees even when the predictions are extremely erroneous. This offers a popular framework for overcoming pessimistic worst-case competitive analysis. In particular, we investigate the classical online traveling salesman problem (OLTSP), in which future requests are augmented with predictions. Unlike the prediction models in previous studies, each actual request in the OLTSP, associated with an arrival time and a position, may not coincide with any predicted request, which makes the problem considerably more challenging. Our main contribution is to study several prediction models and to design algorithms that improve upon the best-known results in each setting. Moreover, we generalize the proposed results to the online dial-a-ride problem [2].
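The following sketch illustrates, at a high level, one generic way such a black-box combination can be realized: follow the prediction-based strategy while it looks trustworthy, and fall back to a robust baseline once trusting the predictions has become too costly. The Strategy interface, the per-request toy costs, and the trust factor are illustrative assumptions; they do not describe the algorithms of [2].

# Generic sketch of a black-box consistency/robustness combiner.
# All interfaces and costs below are hypothetical placeholders.

class Strategy:
    """Minimal interface: serve a request and report the accumulated cost."""

    def __init__(self, cost_per_request):
        self._total = 0.0
        self._cost_per_request = cost_per_request

    def step(self, request):
        # A real strategy would plan a route toward the request; here we
        # simply charge a toy per-request cost.
        self._total += self._cost_per_request(request)
        return request

    def cost_so_far(self):
        return self._total


def combine(pred_strategy, robust_strategy, requests, trust_factor=2.0):
    """Serve the requests online, abandoning the predictions if needed."""
    trust_predictions = True
    decisions = []
    for request in requests:
        # Both strategies are simulated so that their costs stay comparable.
        pred_decision = pred_strategy.step(request)
        robust_decision = robust_strategy.step(request)
        if (trust_predictions and
                pred_strategy.cost_so_far() >
                trust_factor * robust_strategy.cost_so_far()):
            trust_predictions = False  # predictions turned out to be erroneous
        decisions.append(pred_decision if trust_predictions else robust_decision)
    return decisions


# Toy usage: the prediction-based strategy is cheaper while predictions hold.
requests = list(range(10))
combine(Strategy(lambda r: 0.5), Strategy(lambda r: 1.0), requests)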
For incremental learning, we conduct the MOST AI project on smart manufacturing and collaborate with high-tech manufacturing companies in Taiwan. As a short-term goal, we consider the lithography process in the semiconductor industry to develop artificial intelligence optimization applications. In contrast to most advanced process control systems, which rely on statistically measured data, we further pursue genuinely data-driven approaches. More precisely, we extract correlated clustering features from big data and perform automatic compensation through incremental learning schemes, which iteratively adjust the settings of the learning model to fit different customers’ demands.
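As a hedged illustration of such an incremental learning scheme, the sketch below updates a regression model batch by batch with scikit-learn's partial_fit as new process data arrives, instead of retraining from scratch. The synthetic features and the compensation target are placeholder assumptions and do not reflect the actual production pipeline.

# Incremental (batch-by-batch) model updates; all data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(learning_rate="constant", eta0=0.01)

for batch in range(20):                        # new measurement data arrives over time
    X = rng.normal(size=(50, 4))               # e.g. clustered process features
    y = X @ np.array([0.5, -1.0, 0.2, 0.8]) + rng.normal(scale=0.05, size=50)
    model.partial_fit(X, y)                    # incremental update, no full retraining

    # The current model can immediately suggest a compensation value for the
    # next lot, and keeps adapting as process conditions and demands shift.
    suggestion = model.predict(X[:1])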
1. Ya-Chun Liang, Kazuo Iwama, Chung-Shou Liao: Improving the Bounds of the Online Dynamic Power Management Problem. CoRR abs/2209.12021 (2022), accepted by ISAAC 2022.
2. Hsiao-Yu Hu, Hao-Ting Wei, Meng-Hsi Li, Kai-Min Chung, Chung-Shou Liao: Online TSP with Predictions. CoRR abs/2206.15364 (2022).