Welcome to my homepage!!

Naoto Ohsaka (Ph.D.)

Curriculum Vitae
Email: naoto.ohsaka (at) gmail.com
Google Scholar author ID: Qgkc9DgAAAAJ
ORCID: 0000-0001-9584-4764
DBLP: 81/10779

Research Interests

graph algorithms; computational complexity; network diffusion

Publications

  1. Approximation Algorithm for Submodular Maximization under Submodular Cover.
    Naoto Ohsaka, Tatsuya Matsuoka.
    37th Conference on Uncertainty in Artificial Intelligence (UAI 2021).
    Abstract Paper
    We study a new optimization problem called submodular maximization under submodular cover (SMSC), which requires finding a fixed-size set such that one monotone submodular function $f$ is maximized subject to the constraint that another monotone submodular function $g$ is approximately maximized. SMSC is preferable to plain submodular function maximization when one wants to maximize two objective functions simultaneously. We propose an optimization framework for SMSC that guarantees a constant-factor approximation. The key idea of our algorithm is to construct a new instance of submodular function maximization from a given instance of SMSC, which can be approximated efficiently. Moreover, if we are given an approximation oracle for submodular function maximization, our algorithm provably produces nearly optimal solutions. We experimentally evaluate the proposed algorithm on sensor placement and movie recommendation using real-world data.
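    For concreteness, one natural way to formalize SMSC consistent with the description above (the exact constraint in the paper may be stated differently) is: given monotone submodular $f, g\colon 2^V \to \mathbb{R}_{\geq 0}$, a size bound $k$, and a tolerance $\alpha \in (0,1]$, find
    $$\operatorname*{argmax}_{S \subseteq V,\ |S| \le k} f(S) \quad \text{subject to} \quad g(S) \ge \alpha \cdot \max_{T \subseteq V,\ |T| \le k} g(T),$$
    i.e., maximize $f$ over fixed-size sets on which $g$ is also approximately maximized.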
  2. A Fully Polynomial Parameterized Algorithm for Counting the Number of Reachable Vertices in a Digraph.
    Naoto Ohsaka.
    Information Processing Letters.
    Abstract Paper ArXiv
    We consider the problem of counting the number of vertices reachable from each vertex in a digraph $G$, which is equivalent to computing all the out-degrees of the transitive closure of $G$. The current (theoretically) fastest algorithms run in quadratic time; however, Borassi has shown that this problem is not solvable in truly subquadratic time unless the Strong Exponential Time Hypothesis fails [Inf. Process. Lett., 116(10):628--630, 2016]. In this paper, we present an $\mathcal{O}(f^3n)$-time exact algorithm, where $n$ is the number of vertices in $G$ and $f$ is the feedback edge number of $G$. Our algorithm thus runs in truly subquadratic time for digraphs with $f=\mathcal{O}(n^{\frac{1}{3}-\epsilon})$ for any $\epsilon > 0$, i.e., digraphs whose number of edges is $n$ plus $\mathcal{O}(n^{\frac{1}{3}-\epsilon})$, and is fully polynomial fixed-parameter tractable, a notion first introduced by Fomin, Lokshtanov, Pilipczuk, Saurabh, and Wrochna [ACM Trans. Algorithms, 14(3):34:1--34:45, 2018]. We also show that the same result holds for vertex-weighted digraphs, where the task is to compute the total weight of the vertices reachable from each vertex.
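    For reference, the quadratic-time baseline that the paper improves upon is simply one graph search per vertex; a minimal Python sketch (illustrative only, not the $\mathcal{O}(f^3n)$-time algorithm of the paper):
      from collections import deque

      def count_reachable(n, adj):
          """adj[v]: list of out-neighbors of v; returns r with r[v] = number of
          vertices reachable from v (including v itself)."""
          r = [0] * n
          for s in range(n):
              seen = [False] * n
              seen[s] = True
              queue = deque([s])
              while queue:
                  v = queue.popleft()
                  for w in adj[v]:
                      if not seen[w]:
                          seen[w] = True
                          queue.append(w)
              r[s] = sum(seen)
          return r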
  3. Spanning Tree Constrained Determinantal Point Processes are Hard to (Approximately) Evaluate.
    Tatsuya Matsuoka, Naoto Ohsaka.
    Operations Research Letters.
    Abstract Paper ArXiv
    We consider determinantal point processes (DPPs) constrained by spanning trees. Given a graph $G=(V,E)$ and a positive semi-definite matrix $\mathbf{A}$ indexed by $E$, a spanning-tree DPP defines a distribution over edge subsets in which $S\subseteq E$ is drawn with probability proportional to $\det(\mathbf{A}_S)$ if $S$ induces a spanning tree (and with probability zero otherwise). We prove $\sharp\textsf{P}$-hardness of computing the normalizing constant for spanning-tree DPPs and provide an approximation-preserving reduction from the mixed discriminant, for which no FPRAS is known. We show similar results for DPPs constrained by forests.
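    The quantity shown to be hard to compute is the normalizing constant $\sum_{S \text{ spanning tree}} \det(\mathbf{A}_S)$. A brute-force sketch that makes the definition concrete (exponential time, for illustration only):
      import itertools
      import numpy as np

      def spanning_tree_dpp_normalizer(n, edges, A):
          """edges: list of (u, v) pairs on vertices 0..n-1; A: |E| x |E| PSD matrix
          indexed by edges. Returns the sum of det(A_S) over spanning trees S."""
          def is_spanning_tree(S):
              parent = list(range(n))
              def find(x):
                  while parent[x] != x:
                      parent[x] = parent[parent[x]]
                      x = parent[x]
                  return x
              for i in S:
                  ru, rv = find(edges[i][0]), find(edges[i][1])
                  if ru == rv:              # creates a cycle
                      return False
                  parent[ru] = rv
              return True                   # n-1 acyclic edges => spanning tree
          total = 0.0
          for S in itertools.combinations(range(len(edges)), n - 1):
              if is_spanning_tree(S):
                  total += np.linalg.det(A[np.ix_(S, S)])
          return total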
  4. Unconstrained MAP Inference, Exponentiated Determinantal Point Processes, and Exponential Inapproximability.
    Naoto Ohsaka.
    24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021).
    Abstract Paper
    We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., finding a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter $p$. We prove the following complexity-theoretic hardness results that explain the difficulty in approximating unconstrained MAP inference and the normalizing constant for E-DPPs.
    • Unconstrained MAP inference for an $n \times n$ matrix is $\textsf{NP}$-hard to approximate within a $2^{\beta n}$-factor, where $\beta = 10^{-10^{13}}$. This result improves upon a $(\frac{9}{8}-\epsilon)$-factor inapproximability given by Kulesza and Taskar (2012).
    • The normalizing constant for E-DPPs of any (fixed) constant exponent $p \geq \beta^{-1} = 10^{10^{13}}$ is $\textsf{NP}$-hard to approximate within a $2^{\beta pn}$-factor. This gives a(nother) negative answer to open questions posed by Kulesza and Taskar (2012) and Ohsaka and Matsuoka (2020).
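    MAP inference is easy to state as a brute-force search over all $2^n$ principal submatrices; a toy sketch for small $n$ (the hardness result above says that, unless P $=$ NP, no polynomial-time algorithm can approximate its value within a $2^{\beta n}$-factor):
      import itertools
      import numpy as np

      def map_inference_bruteforce(A):
          """Unconstrained MAP inference: a subset S maximizing det(A_S)."""
          n = A.shape[0]
          best_S, best_val = (), 1.0        # det of the empty submatrix is 1
          for r in range(1, n + 1):
              for S in itertools.combinations(range(n), r):
                  val = np.linalg.det(A[np.ix_(S, S)])
                  if val > best_val:
                      best_S, best_val = S, val
          return best_S, best_val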
  5. Tracking Regret Bounds for Online Submodular Optimization.
    Tatsuya Matsuoka, Shinji Ito, Naoto Ohsaka.
    24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021).
    Abstract Paper
    In this paper, we propose algorithms for online submodular optimization with tracking regret bounds. Online submodular optimization is a generic framework for sequential decision making used to select subsets. Existing algorithms for online submodular optimization have been shown to achieve small (static) regret, which means that the algorithm's performance is comparable to the performance of a single fixed optimal action. Such algorithms, however, may perform poorly in an environment that changes over time. To overcome this problem, we apply a tracking-regret-analysis framework to online submodular optimization, in which the output is assessed through comparison with time-varying optimal subsets. We propose algorithms for submodular minimization, monotone submodular maximization under a size constraint, and unconstrained submodular maximization, and we show tracking regret bounds for each. In addition, we show that our tracking regret bound for submodular minimization is nearly tight.
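    For context, static regret compares the cumulative value of the chosen subsets $S_1, \dots, S_T$ with that of a single best fixed subset, whereas tracking regret compares it with a time-varying comparator sequence $S_1^*, \dots, S_T^*$ (notation here is illustrative and may differ from the paper): for maximization it is $\sum_{t=1}^{T} f_t(S_t^*) - \sum_{t=1}^{T} f_t(S_t)$, and for minimization $\sum_{t=1}^{T} f_t(S_t) - \sum_{t=1}^{T} f_t(S_t^*)$, with bounds that typically depend on how often the comparator sequence changes.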
  6. Predictive Optimization with Zero-Shot Domain Adaptation.
    Tomoya Sakai, Naoto Ohsaka.
    2021 SIAM International Conference on Data Mining (SDM 2021).
    Abstract Paper ArXiv
    Prediction in a new domain without any training samples, called zero-shot domain adaptation (ZSDA), is an important task in domain adaptation. While prediction in a new domain has gained much attention in recent years, in this paper we investigate another potential use of ZSDA. Specifically, instead of predicting responses in a new domain, we find a description of a new domain given a prediction. The task can be regarded as predictive optimization, but existing predictive optimization methods have not been extended to handle multiple domains. We propose a simple framework for predictive optimization with ZSDA and analyze the conditions under which the optimization problem becomes convex. We also discuss how to handle interactions among the characteristics of a domain in predictive optimization. Through numerical experiments, we demonstrate the potential usefulness of our proposed framework.
  7. On the (In)tractability of Computing Normalizing Constants for the Product of Determinantal Point Processes.
    Naoto Ohsaka, Tatsuya Matsuoka.
    37th International Conference on Machine Learning (ICML 2020).
    Abstract Paper
    We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices, as a natural and promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task unless the input matrices are forced to have favorable structures. In particular, we prove the following:
    • Computing $\sum_{S} \det(\mathbf{A}_{S,S})^p$ exactly for every (fixed) positive even integer $p$ is $\textsf{UP}$-hard and $\textsf{Mod}_3\textsf{P}$-hard, which gives a negative answer to an open question posed by Kulesza and Taskar (2012).
    • $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S}) \det(\mathbf{C}_{S,S})$ is $\textsf{NP}$-hard to approximate within a factor of $2^{\mathcal{O}(|\mathcal{I}|^{1-\epsilon})}$ for any $\epsilon > 0$, where $|\mathcal{I}|$ is the input size. This result is stronger than the $\sharp\textsf{P}$-hardness for the case of two matrices due to Gillenwater (2014).
    • There exists a $k^{\mathcal{O}(k)} |\mathcal{I}|^{\mathcal{O}(1)}$-time algorithm for computing $\sum_{S} \det(\mathbf{A}_{S,S}) \det(\mathbf{B}_{S,S})$, where $k$ is ``the maximum rank of $\mathbf{A}$ and $\mathbf{B}$'' or ``the treewidth of the graph induced by the nonzero entries of $\mathbf{A}$ and $\mathbf{B}$.'' Such parameterized algorithms are said to be fixed-parameter tractable.
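    The normalizing constants in the first two bullets can be written down directly by brute force (exponential time, for illustration only); the first bullet corresponds to passing the same matrix $p$ times:
      import itertools
      import numpy as np

      def product_dpp_normalizer(mats):
          """Sum over all subsets S of the product of det(M_S) over M in mats,
          e.g., product_dpp_normalizer([A, B, C]) for three matrices."""
          n = mats[0].shape[0]
          total = 1.0                       # empty-set term (each det is 1)
          for r in range(1, n + 1):
              for S in itertools.combinations(range(n), r):
                  prod = 1.0
                  for M in mats:
                      prod *= np.linalg.det(M[np.ix_(S, S)])
                  total += prod
          return total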
  8. The Solution Distribution of Influence Maximization: A High-level Experimental Study on Three Algorithmic Approaches.
    Naoto Ohsaka.
    ACM SIGMOD International Conference on Management of Data 2020 (SIGMOD 2020).
    Abstract Paper ArXiv
    Influence maximization is among the most fundamental algorithmic problems in social influence analysis. Over the last decade, a great deal of effort has been devoted to developing efficient algorithms for influence maximization, so that identifying the ``best'' algorithm has become a demanding task. In SIGMOD'17, Arora, Galhotra, and Ranu reported benchmark results on eleven existing algorithms and demonstrated that there is no single state-of-the-art offering the best trade-off between computational efficiency and solution quality.
    In this paper, we report a high-level experimental study on three well-established algorithmic approaches for influence maximization, referred to as Oneshot, Snapshot, and Reverse Influence Sampling (RIS). In contrast to Arora et al., our experimental methodology is designed so that we examine the distribution of random solutions, characterize the relation between the number of samples and the actual solution quality, and avoid implementation dependencies. Our main findings are as follows: 1. For a sufficiently large number of samples, we obtain a unique solution regardless of the algorithm. 2. The average solution quality of Oneshot, Snapshot, and RIS improves at the same rate up to a scaling of the number of samples. 3. Oneshot requires more samples than Snapshot, and Snapshot requires fewer but larger samples than RIS. We discuss the time efficiency when Oneshot, Snapshot, and RIS are tuned to identical accuracy. Our conclusion is that Oneshot is suitable only if the size of available memory is limited, RIS is more efficient than Snapshot for large networks, and Snapshot is preferable for small, low-probability networks.
  9. A Predictive Optimization Framework for Hierarchical Demand Matching.
    Naoto Ohsaka, Tomoya Sakai, Akihiro Yabe.
    2020 SIAM International Conference on Data Mining (SDM 2020).
    Abstract Paper
    Predictive optimization is a framework for designing an entire data-analysis pipeline that comprises both prediction and optimization, so as to maximize overall throughput performance. In practical demand analysis, knowledge of hierarchies, which might be geographical or categorical, is recognized as useful, though such additional knowledge has not been taken into account in existing predictive optimization. In this paper, we propose a novel hierarchical predictive optimization pipeline that can deal with a wide range of applications, including inventory management. Based on an existing hierarchical demand prediction model, we present a stochastic matching framework that can manage prediction uncertainty in decision making. We further provide a greedy approximation algorithm for solving demand matching on hierarchical structures. In experimental evaluations on both artificial and real-world data, we demonstrate the effectiveness of our proposed hierarchical predictive optimization pipeline.
  10. Boosting PageRank Scores by Optimizing Internal Link Structure.
    Naoto Ohsaka, Tomohiro Sonobe, Naonori Kakimura, Takuro Fukunaga, Sumio Fujita, Ken-ichi Kawarabayashi.
    29th International Conference on Database and Expert Systems Applications (DEXA 2018).
    Abstract Paper
    We consider and formulate problems of boosting PageRank scores, motivated by applications such as effective web advertising. More precisely, given a graph and target vertices, one is required to find a fixed-size set of missing edges that maximizes the minimum PageRank score among the targets. We provide theoretical analyses showing that all of the formulated problems are NP-hard. To overcome this hardness, we develop heuristic-based algorithms for them. We finally perform experiments on several real-world networks to verify the effectiveness of the proposed algorithms compared to baselines. Specifically, our algorithm achieves a 100-fold improvement of the minimum PageRank score among 100 selected vertices by adding only dozens of edges.
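    A naive greedy baseline for this problem (not one of the heuristics proposed in the paper) can be sketched with networkx; it recomputes PageRank after each tentative edge insertion, which is far too slow for large graphs but makes the objective concrete:
      import networkx as nx

      def greedy_boost(G, targets, candidate_edges, k, alpha=0.85):
          """Greedily add k edges from candidate_edges (missing edges) so as to
          maximize the minimum PageRank score among the target vertices."""
          G = G.copy()
          added = []
          for _ in range(k):
              best_edge, best_score = None, -1.0
              for u, v in candidate_edges:
                  if G.has_edge(u, v):
                      continue
                  G.add_edge(u, v)
                  pr = nx.pagerank(G, alpha=alpha)
                  score = min(pr[t] for t in targets)
                  G.remove_edge(u, v)
                  if score > best_score:
                      best_edge, best_score = (u, v), score
              if best_edge is None:
                  break
              G.add_edge(*best_edge)
              added.append(best_edge)
          return added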
  11. NoSingles: a Space-Efficient Algorithm for Influence Maximization.
    Diana Popova, Naoto Ohsaka, Ken-ichi Kawarabayashi, Alex Thomo.
    30th International Conference on Scientific and Statistical Database Management (SSDBM 2018).
    Abstract Paper
    Algorithmic problems of influence estimation and influence maximization have been actively researched for decades. We developed a novel algorithm, NoSingles, based on the Reverse Influence Sampling method proposed by Borgs et al. in 2013. NoSingles solves the problem of influence maximization in large graphs using much smaller space than the existing state-of-the-art algorithms, while preserving the theoretical approximation guarantee of $(1-1/e-\epsilon)$ of the optimum for any $\epsilon > 0$. The NoSingles data structure is saved on the hard drive of the machine and can be used repeatedly for playing out ``what if'' scenarios (e.g., trying different combinations of seeds and calculating the influence spread). We also introduce a variation of the NoSingles algorithm that further decreases the running time while preserving the approximation guarantee. We support our claims with extensive experiments on large real-world graphs. The savings in required space allow NoSingles to run successfully on a consumer-grade laptop for graphs with tens of millions of vertices and hundreds of millions of edges.
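    For readers unfamiliar with Reverse Influence Sampling, here is a bare-bones sketch under the independent cascade model (a generic textbook version of the Borgs et al. approach, not NoSingles itself and without its space-saving data structure):
      import random
      from collections import deque

      def random_rr_set(n, in_neighbors):
          """One reverse-reachable (RR) set: reverse BFS from a uniformly random
          root; each incoming edge (u, p) of a visited vertex is kept w.p. p."""
          root = random.randrange(n)
          rr, queue = {root}, deque([root])
          while queue:
              v = queue.popleft()
              for u, p in in_neighbors[v]:
                  if u not in rr and random.random() < p:
                      rr.add(u)
                      queue.append(u)
          return rr

      def ris_seed_selection(n, in_neighbors, k, num_samples):
          """Greedy maximum coverage over num_samples RR sets; returns k seeds."""
          rr_sets = [random_rr_set(n, in_neighbors) for _ in range(num_samples)]
          seeds, covered = [], set()
          for _ in range(k):
              counts = [0] * n
              for i, rr in enumerate(rr_sets):
                  if i not in covered:
                      for v in rr:
                          counts[v] += 1
              best = max((v for v in range(n) if v not in seeds),
                         key=lambda v: counts[v])
              seeds.append(best)
              covered.update(i for i, rr in enumerate(rr_sets) if best in rr)
          return seeds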
  12. On the Power of Tree-Depth for Fully Polynomial FPT Algorithms.
    Yoichi Iwata, Tomoaki Ogasawara, Naoto Ohsaka.
    35th International Symposium on Theoretical Aspects of Computer Science (STACS 2018).
    Abstract Paper
    There are many classical problems in P whose time complexities have not been improved over the past decades. Recent studies of ``Hardness in P'' have revealed that, for several such problems, the current fastest algorithm is the best possible under some complexity assumptions. To bypass this difficulty, Fomin et al. (SODA 2017) introduced the concept of fully polynomial FPT algorithms. For a problem with the current best time complexity $O(n^c)$, the goal is to design an algorithm running in $k^{O(1)}n^{c'}$ time for a parameter $k$ and a constant $c' < c$.
    In this paper, we investigate the complexity of graph problems in P parameterized by tree-depth, a graph parameter related to tree-width. We show that a simple divide-and-conquer method can solve many graph problems, including Weighted Matching, Negative Cycle Detection, Minimum Weight Cycle, Replacement Paths, and 2-hop Cover, in $O(\mathrm{td}\cdot m)$ time or $O(\mathrm{td}\cdot (m+n\log n))$ time, where $\mathrm{td}$ is the tree-depth of the input graph. Because any graph of tree-width $\mathrm{tw}$ has tree-depth at most $(\mathrm{tw}+1)\log_2 n$, our algorithms also run in $O(\mathrm{tw}\cdot m\log n)$ time or $O(\mathrm{tw}\cdot (m+n\log n)\log n)$ time. These results match or improve the previous best algorithms parameterized by tree-width. In particular, we resolve an open problem posed by Fomin et al. by giving a fully polynomial FPT algorithm for Weighted Matching parameterized by tree-width.
  13. Coarsening Massive Influence Networks for Scalable Diffusion Analysis.
    Naoto Ohsaka, Tomohiro Sonobe, Sumio Fujita, and Ken-ichi Kawarabayashi.
    ACM SIGMOD International Conference on Management of Data 2017 (SIGMOD 2017).
    Abstract Paper Slide Code
    Fueled by the increasing popularity of online social networks, social influence analysis has attracted a great deal of research attention in the past decade. The diffusion process is often modeled using influence graphs, and there has been a line of research on algorithmic problems in influence graphs. However, the vast size of today's real-world networks raises a serious issue with regard to computational efficiency.
    In this paper, we propose a new algorithm for reducing influence graphs. Given an input influence graph, the proposed algorithm produces a vertex-weighted influence graph that is compact and approximates the diffusion properties of the input graph. The central strategy of influence graph reduction is coarsening, which has the potential to greatly reduce the number of edges by merging a vertex set into a single weighted vertex. We provide two implementations: a speed-oriented implementation, which runs in linear time with linear space, and a scalability-oriented implementation, which runs in practically linear time with sublinear space. Further, we present general frameworks using our compact graphs that accelerate existing algorithms for the influence maximization and influence estimation problems, which are motivated by practical applications such as viral marketing. Using these frameworks, we can quickly obtain solutions that have accuracy guarantees under a reasonable assumption. Experiments with real-world networks demonstrate that the proposed algorithm can scale to billion-edge graphs and reduce the graph size to as little as 4% of the original. In addition, our influence maximization framework achieves a four-fold speed-up over the state-of-the-art D-SSA algorithm, and our influence estimation framework cuts the computation time of a simulation-based method down to 3.5%.
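    The simulation-based method mentioned above is, in its simplest form, plain Monte-Carlo estimation of the influence spread under the independent cascade model; a minimal sketch (generic, not the coarsening framework of the paper):
      import random
      from collections import deque

      def simulate_ic(seeds, out_neighbors):
          """One cascade under the independent cascade model; out_neighbors[v] is
          a list of (w, p) pairs, each edge activating independently w.p. p."""
          active, queue = set(seeds), deque(seeds)
          while queue:
              v = queue.popleft()
              for w, p in out_neighbors[v]:
                  if w not in active and random.random() < p:
                      active.add(w)
                      queue.append(w)
          return len(active)

      def estimate_influence(seeds, out_neighbors, num_trials=10000):
          """Monte-Carlo estimate of the expected cascade size from seeds."""
          return sum(simulate_ic(seeds, out_neighbors)
                     for _ in range(num_trials)) / num_trials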
  14. Portfolio Optimization for Influence Spread.
    Naoto Ohsaka, and Yuichi Yoshida.
    International Conference on World Wide Web (WWW 2017).
    Abstract Paper Slide
    Motivated by viral marketing, stochastic diffusion processes that model influence spread on a network have been studied intensively. The primary interest in such models has been to find a seed set of a fixed size that maximizes the expected size of the cascade from it. In practice, however, it is not desirable to risk ending up with a small cascade, even if the expected size of the cascade is large. To address this issue, we adopt conditional value at risk (CVaR) as a risk measure and propose an algorithm that computes a portfolio over seed sets with a provable guarantee on its CVaR. Using real-world social networks, we demonstrate that the portfolio computed by our algorithm has a significantly better CVaR than seed sets computed by other baseline methods.
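    For reference, with $X$ denoting the (random) cascade size obtained by sampling a seed set from the portfolio, the lower-tail CVaR at level $\alpha \in (0,1)$ can be written in the standard variational form
    $$\mathrm{CVaR}_{\alpha}(X) = \sup_{\tau \in \mathbb{R}} \Big\{ \tau - \tfrac{1}{\alpha}\, \mathbb{E}\big[(\tau - X)^{+}\big] \Big\},$$
    which, for continuous distributions, equals the expected cascade size conditioned on the worst $\alpha$-fraction of outcomes (the paper's exact convention may differ in minor details).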
  15. Maximizing Time-Decaying Influence in Social Networks.
    Naoto Ohsaka, Yutaro Yamaguchi, Naonori Kakimura, and Ken-ichi Kawarabayashi.
    European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2016).
    Paper Slide
  16. Dynamic Influence Analysis in Evolving Networks.
    Naoto Ohsaka, Takuya Akiba, Yuichi Yoshida, and Ken-ichi Kawarabayashi.
    Proceedings of the VLDB Endowment (PVLDB 16).
    Paper Slide Code
  17. Monotone $k$-Submodular Function Maximization with Size Constraints.
    Naoto Ohsaka, and Yuichi Yoshida.
    Annual Conference on Neural Information Processing Systems (NIPS 2015) (Poster presentation).
    Paper Poster
  18. Efficient PageRank Tracking in Evolving Networks.
    Naoto Ohsaka, Takanori Maehara, and Ken-ichi Kawarabayashi.
    ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD 2015) (Research track paper).
    Paper Slide
  19. Fast and Accurate Influence Maximization on Large Networks with Pruned Monte-Carlo Simulations.
    Naoto Ohsaka, Takuya Akiba, Yuichi Yoshida, and Ken-ichi Kawarabayashi.
    AAAI Conference on Artificial Intelligence (AAAI 2014) (Main technical track paper).
    Paper Slide Code
  20. A Reinforcement Learning Method to Improve the Sweeping Efficiency for an Agent.
    Naoto Ohsaka, Daisuke Kitakoshi, and Masato Suzuki.
    IEEE International Conference on Granular Computing (GrC 2011).
    Paper

Preprints

  1. New!! Some Inapproximability Results of MAP Inference and Exponentiated Determinantal Point Processes.
    Naoto Ohsaka.
    Abstract ArXiv
    We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., finding a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter $p$. We prove the following complexity-theoretic hardness results that explain the difficulty in approximating MAP inference and the normalizing constant for E-DPPs.
    • Unconstrained MAP inference for an $n \times n$ matrix is $\textsf{NP}$-hard to approximate within a factor of $2^{\beta n}$, where $\beta = 10^{-10^{13}}$. This result improves upon a $(\frac{9}{8}-\epsilon)$-factor inapproximability given by Kulesza and Taskar (2012).
    • Log-determinant maximization is $\textsf{NP}$-hard to approximate within a factor of $\frac{5}{4}$ for the unconstrained case and within a factor of $1+10^{-10^{13}}$ for the size-constrained monotone case.
    • The normalizing constant for E-DPPs of any (fixed) constant exponent $p \geq \beta^{-1} = 10^{10^{13}}$ is $\textsf{NP}$-hard to approximate within a factor of $2^{\beta pn}$. This gives a(nother) negative answer to open questions posed by Kulesza and Taskar (2012) and Ohsaka and Matsuoka (2020).
  2. New!! On Reconfigurability of Target Sets.
    Naoto Ohsaka.
    Abstract ArXiv
    We study the problem of deciding the reconfigurability of target sets of a graph. Given a graph $G$ with vertex thresholds $\tau$, consider a dynamic process in which a vertex $v$ becomes activated once at least $\tau(v)$ of its neighbors are activated. A vertex set $S$ is called a target set if all vertices of $G$ would be activated when the vertices of $S$ are initially activated. In the Target Set Reconfiguration problem, given two target sets $X$ and $Y$ of the same size, we are required to determine whether $X$ can be transformed into $Y$ by repeatedly swapping one vertex in the current set with another vertex not in the current set, while preserving every intermediate set as a target set. In this paper, we investigate the complexity of Target Set Reconfiguration in restricted cases. On the hardness side, we prove that Target Set Reconfiguration is $\textsf{PSPACE}$-complete on bipartite planar graphs of degree $3$ or $4$ and threshold $2$, on bipartite $3$-regular graphs of threshold $1$ or $2$, and on split graphs, which is in contrast to the fact that a special case called Vertex Cover Reconfiguration is in $\textsf{P}$ for the last graph class. On the positive side, we present a polynomial-time algorithm for Target Set Reconfiguration on graphs of maximum degree $2$ and on trees. The latter result can be thought of as a generalization of the corresponding result for Vertex Cover Reconfiguration.
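    Checking whether a given vertex set is a target set is easy by simulating the threshold process; a small sketch (the reconfiguration question, i.e., whether a valid swap sequence between two target sets exists, is what the paper shows to be hard in general):
      from collections import deque

      def is_target_set(S, neighbors, tau):
          """neighbors[v]: list of neighbors of v; tau[v]: threshold of v (>= 1).
          Returns True iff initially activating S eventually activates all vertices."""
          n = len(neighbors)
          active = set(S)
          count = [0] * n                   # activated neighbors seen so far
          queue = deque(S)
          while queue:
              v = queue.popleft()
              for w in neighbors[v]:
                  if w in active:
                      continue
                  count[w] += 1
                  if count[w] >= tau[w]:
                      active.add(w)
                      queue.append(w)
          return len(active) == n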

Talks


Programming Contests


Fox photos taken by me