
Keynote Speakers


Speaker: Dr. Kazuo Iwama (Kyoto University, Japan)

Email: iwama@kuis.kyoto-u.ac.jp

Title: Narrowing (Already) Small Complexity Gaps

Date: 2017-12-16

Abstract: Suppose that some problem has an algorithm which runs in time \(O(n)\). This algorithm is optimal in the usual sense, and there is little room for further improvement. One may be interested in the constant factor hidden by the big-O notation, but that raises nasty issues about machine models. On the other hand, there are several other complexity measures that are more accurate and cleaner than the number of computation steps. One of them is the number of comparisons for evaluating sorting algorithms. Many textbooks simply state that fast algorithms need \(O(n\log n)\) comparisons (\(\log\) denotes the base-2 logarithm here), which is optimal. It turns out, however, that the information-theoretic lower bound is \(\log(n!)\), which is approximately equal to \(n\log n - 1.4426n + O(\log n)\). For instance, it is easy to prove that the simple binary insertion sort needs at most \(n\log n\) comparisons, but it is much harder to bound the linear term, specifically its constant factor. The current best upper bound for that constant factor is \(-1.32\), due to Ford and Johnson, published in 1959! Another example of such a measure is the number of oracle calls for the reconstruction of strings, which is a model of sequencing by hybridization. Here the current upper bound is \(n+2\log n\), and it was a long-standing open question whether the \(\log n\) term can be removed. This talk is about our two recent results on improved upper bounds for these complexity measures. Such small improvements may be useless in practice, but they require a lot of new algorithmic techniques that may be useful on other occasions. Even more importantly, they are nice examples demonstrating the power of randomization and/or the advantage of average-case complexity over worst-case complexity.
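
To make these quantities concrete, here is a minimal Python sketch (our illustration, not from the talk) that counts the comparisons used by binary insertion sort on a random input and prints the count next to \(\log(n!)\) and \(n\log n\); the function and variable names are our own.

import math
import random

def binary_insertion_sort(items):
    # Sort `items` while counting comparisons.  Binary search locates each
    # insertion point, so the i-th insertion costs about log2(i) comparisons,
    # for roughly n*log2(n) in total.
    comparisons = 0
    result = []
    for x in items:
        lo, hi = 0, len(result)
        while lo < hi:  # binary search for the insertion slot of x
            mid = (lo + hi) // 2
            comparisons += 1
            if result[mid] < x:
                lo = mid + 1
            else:
                hi = mid
        result.insert(lo, x)
    return result, comparisons

n = 1000
_, used = binary_insertion_sort(random.sample(range(n), n))

# Information-theoretic lower bound (worst case): log2(n!), which by
# Stirling's formula is n*log2(n) - 1.4426*n + O(log n); 1.4426 is log2(e).
print("binary insertion sort:", used, "comparisons")
print("log2(n!) lower bound: ", round(math.log2(math.factorial(n))))
print("n*log2(n):            ", round(n * math.log2(n)))

The printed numbers make the linear-term gap visible: the comparison count sits well below \(n\log n\), and narrowing what remains between it and \(\log(n!)\) is exactly the kind of gap the talk addresses.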

Biography: Professor Iwama received B.E., M.E., and Ph.D. degrees from the Department of Electrical Engineering, Kyoto University, in 1973, 1975, and 1980, respectively. He is a project professor at Kyoto University, and his research interests are mainly in algorithms and complexity theory. He is a member of Academia Europaea, the editor-in-chief of the Bulletin of the European Association for Theoretical Computer Science (EATCS), and the founder and president of the Asian Association for Algorithms and Computation (AAAC). He was also the organizing committee chair of ICALP/LICS 2015 and of the 23rd ACM-SIAM Symposium on Discrete Algorithms (SODA 2012). Prof. Iwama has published more than 200 papers. He also holds an honorary doctorate from the University of Latvia.


Speaker: Dr. Ding-Zhu Du (University of Texas at Dallas, USA)

Email: dzdu@utdallas.edu

Title: Maximization of Multi-Factor Influence

Date: 2017-12-17

Abstract: In viral marketing, a customer is usually influenced not by a single factor but by multiple factors. For example, both the quality and the price of a product are very important for a customer deciding whether or not to buy. In this talk, we consider a model with the following properties: (a) A product is described by \(k\) (\(k \geq 2\)) factors, and a customer buys the product if and only if he gets influenced on every factor. (b) All factors are independent, so the influence diffusion of each factor follows an independent cascade model in the network, and different factors may follow different independent cascade models; that is, the accepting probabilities for different factors on each edge may differ. Of course, maximizing the influence spread is NP-hard. We show that the influence spread in this model is still a submodular function, and hence its maximization admits a \((1-1/e)\)-approximation.
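
As a rough illustration of this model, the following self-contained Python sketch runs one independent cascade per factor, counts a node as a buyer only if it is influenced on every factor, and selects seeds with the standard Monte Carlo greedy to which the \((1-1/e)\) guarantee for submodular objectives applies. The toy graph, the probabilities, and all names here are our own assumptions, not the authors' implementation.

import random

def ic_spread(neighbors, probs, seeds):
    # One independent-cascade simulation for a single factor.
    # neighbors: dict node -> list of out-neighbors
    # probs: dict (u, v) -> accepting probability of this factor on edge (u, v)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in neighbors.get(u, []):
                if v not in active and random.random() < probs[(u, v)]:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def buyers(neighbors, factor_probs, seeds):
    # A node buys iff it is influenced on *every* one of the k factors,
    # each factor diffusing through its own independent cascade.
    spreads = [ic_spread(neighbors, probs, seeds) for probs in factor_probs]
    return set.intersection(*spreads)

def greedy(neighbors, factor_probs, nodes, budget, trials=200):
    # Monte Carlo greedy seed selection; submodularity of the expected
    # spread is what yields the (1 - 1/e) approximation guarantee.
    seeds = set()
    for _ in range(budget):
        def avg_gain(v):
            return sum(len(buyers(neighbors, factor_probs, seeds | {v}))
                       for _ in range(trials)) / trials
        seeds.add(max((v for v in nodes if v not in seeds), key=avg_gain))
    return seeds

# Toy instance: k = 2 factors (say, quality and price) on a 4-node network.
nbrs = {0: [1, 2], 1: [3], 2: [3], 3: []}
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
quality = {e: 0.6 for e in edges}   # accepting probabilities, factor 1
price = {e: 0.4 for e in edges}     # accepting probabilities, factor 2
print(greedy(nbrs, [quality, price], nodes=[0, 1, 2, 3], budget=1))

Estimating the spread by plain simulation keeps the sketch short; at realistic scale one would apply the usual speedups (e.g., lazy CELF-style evaluation) to the same submodular objective.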

Biography: Dingzhu Du received the Ph.D. degree from the University of California, Santa Barbara, in 1985. He was a postdoctoral fellow at the institute of mathematical sciences at the University of California, Berkeley, during 1985–1986, and then a visiting scholar in the Department of Mathematics at MIT from 1986. In 1987 he left MIT to take a position as a professor at the Institute of Mathematics of the Chinese Academy of Sciences. He was an associate professor and then a full professor in the Department of Computer Science at the University of Minnesota from 1991 to 1995. Currently he is a professor in the Department of Computer Science at the University of Texas at Dallas, and he has served as a program director for theory of computing at the National Science Foundation (NSF). Dr. Du has published more than 200 papers in top computer science conferences and journals, as well as more than 10 authored books. He is the editor-in-chief of Network Theory and Applications and an editorial board member of more than 15 journals. He received the CSTS Prize from INFORMS in 1998, a second-class prize from the Chinese NSF in 1993, and a first-class prize in natural sciences from the Chinese Academy of Sciences.


Speaker: Dr. Simai He (Shanghai University of Finance and Economics, China)

Email: simaihe@mail.shufe.edu.cn

Title: A Non-Asymptotic Approach to Analyze Kidney Exchange Graphs

Date: 2017-12-17

Abstract: We propose a novel methodology to study kidney exchange. Taking the random graph model of kidney exchange introduced in Ashlagi, Gamarnik, Rees, and Roth's "The need for (long) chains in kidney exchange" (2012), we propose a non-asymptotic approach to quantifying the effectiveness of transplant chains in reducing the number of unmatched highly sensitized patients. Our approach is based on a two-phase random walk procedure in which random walks are used to allocate chains, followed by allocation via matching in cycles. The benefit of random walks is that they preserve the probabilistic structure of residual graphs, greatly facilitating analysis. The approach allows us to analytically show the benefit of chains, as compared to transplantation in two-way cycles only, in non-asymptotic (medium-sized) graphs. We also derive useful analytical bounds that illustrate the performance of our proposed allocation procedure and of more general kidney allocation procedures. Our results complement previous findings on the benefits of chains, which include analytical results in large (limit) graphs and empirical results based on data from fielded kidney exchanges.
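
To make the two-phase idea concrete, here is a toy, self-contained Python sketch; the random compatibility graph, the probabilities, and all names are our own illustrative assumptions, not the paper's actual procedure or analysis. Phase 1 grows a chain from an altruistic donor by a random walk over still-unmatched pairs (the random choice is what keeps the residual graph's probabilistic structure intact); phase 2 matches the residual pairs in two-way cycles.

import random

def random_walk_chain(compat, start, matched):
    # Phase 1: extend a chain from an altruistic donor by a random walk.
    # compat[u] lists the pairs that u's donor can donate to; at each step
    # we jump to a uniformly random compatible, still-unmatched pair.
    chain, cur = [], start
    while True:
        options = [v for v in compat.get(cur, []) if v not in matched]
        if not options:
            return chain
        cur = random.choice(options)
        matched.add(cur)
        chain.append(cur)

def two_cycles(compat, matched):
    # Phase 2: match the remaining pairs via two-way exchange cycles
    # (mutually compatible pairs swap donors).
    cycles = []
    for u in list(compat):
        if u in matched:
            continue
        for v in compat[u]:
            if v not in matched and u in compat.get(v, []):
                matched.update({u, v})
                cycles.append((u, v))
                break
    return cycles

# Toy instance: 50 patient-donor pairs, each directed compatibility arc
# present with probability 0.05, plus one altruistic donor 'alt'.
pairs = range(50)
compat = {u: [v for v in pairs if v != u and random.random() < 0.05]
          for u in pairs}
compat['alt'] = random.sample(list(pairs), 5)

matched = set()
chain = random_walk_chain(compat, 'alt', matched)
cycles = two_cycles(compat, matched)
print("chain of length", len(chain), "plus", len(cycles), "two-way cycles")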

Biography: Simai He is a professor in the Department of Management Science, Shanghai University of Finance and Economics. His research focuses on optimization, game theory, and supply chain management. He has published papers in OR and CS journals and conferences such as Operations Research, Mathematics of Operations Research, Mathematical Programming, SIAM Journal on Optimization, FOCS, SODA, and EC. He won a gold medal at the 33rd International Mathematical Olympiad, and received the Award for Young Scientists from the Operations Research Society of China in 2014. He has consulted widely for companies such as JD, Didi, and NetEase, and is the Chief Scientist of Cardinal Operations.