We study the matroid semi-bandits problem, where at each round the learner plays a subset of $K$ arms from a feasible set, and the goal is to maximize the expected cumulative linear rewards. Existing algorithms have per-round time complexity at least $\Omega(K)$, which becomes expensive when $K$ is large. To address this computational issue, we propose FasterCUCB, whose sampling rule takes time sublinear in $K$ for common classes of matroids: $\mathcal{O}(D\,\mathrm{polylog}(K)\,\mathrm{polylog}(T))$ for uniform matroids, partition matroids, and graphical matroids, and $\mathcal{O}(D\sqrt{K}\,\mathrm{polylog}(T))$ for transversal matroids. Here, $D$ is the maximum number of elements in any feasible subset of arms, and $T$ is the horizon. Our technique is based on dynamic maintenance of an approximate maximum-weight basis over inner-product weights. Although the introduction of an approximate maximum-weight basis presents a challenge in the regret analysis, we can still guarantee a regret upper bound as tight as that of CUCB, in the sense that it asymptotically matches the gap-dependent lower bound of Kveton et al. (2014a).
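For concreteness, here is a minimal Python sketch of the baseline CUCB sampling rule specialized to a rank-$D$ uniform matroid (play the $D$ arms with the largest UCB indices); FasterCUCB's sublinear-time data structures are not reproduced here, and the function names and exploration constant are illustrative.

```python
import math
import numpy as np

def cucb_uniform_matroid(mean_est, pull_count, t, D):
    """One round of CUCB on a rank-D uniform matroid: play the D arms with
    the largest UCB indices, i.e., a maximum-weight basis w.r.t. the indices."""
    K = len(mean_est)
    idx = np.empty(K)
    for a in range(K):
        if pull_count[a] == 0:
            idx[a] = np.inf  # force exploration of arms never pulled
        else:
            idx[a] = mean_est[a] + math.sqrt(1.5 * math.log(max(t, 2)) / pull_count[a])
    return np.argsort(-idx)[:D]  # full O(K log K) scan; FasterCUCB avoids touching all K arms
```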
We present a reconfiguration analogue of alphabet reduction à la Dinur (J. ACM, 2007) and its applications. Given a binary constraint graph $G$ and its two satisfying assignments $\psi^\mathsf{ini}$ and $\psi^\mathsf{tar}$, the $\textsf{Maxmin 2-CSP Reconfiguration}$ problem asks us to transform $\psi^\mathsf{ini}$ into $\psi^\mathsf{tar}$ by repeatedly changing the value of a single vertex so that the minimum fraction of satisfied edges is maximized. We demonstrate a polynomial-time reduction from $\textsf{Maxmin 2-CSP Reconfiguration}$ with arbitrarily large alphabet size $W \in \mathbb{N}$ to itself with universal alphabet size $W_0 \in \mathbb{N}$ such that
In the Minmax Set Cover Reconfiguration problem, given a set system $\mathcal{F}$ over a universe and its two covers $\mathcal{C}^\mathsf{start}$ and $\mathcal{C}^\mathsf{goal}$ of size $k$, we wish to transform $\mathcal{C}^\mathsf{start}$ into $\mathcal{C}^\mathsf{goal}$ by repeatedly adding or removing a single set of $\mathcal{F}$ while covering the universe in any intermediate state. The objective is to minimize the maximum size of any intermediate cover during the transformation. We prove that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are $\mathsf{PSPACE}$-hard to approximate within a factor of $2-\frac{1}{\operatorname{polyloglog} N}$, where $N$ is the size of the universe and the number of vertices in a graph, respectively, improving upon Ohsaka (SODA 2024) and Karthik C. S. and Manurangsi (2023). This is the first result that exhibits a sharp threshold for the approximation factor of any reconfiguration problem, because both problems admit a $2$-factor approximation algorithm as per Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011). Our proof is based on a reconfiguration analogue of the FGLSS reduction from Probabilistically Checkable Reconfiguration Proofs of Hirahara and Ohsaka (2024). We also prove that for any constant $\varepsilon \in (0,1)$, Minmax Hypergraph Vertex Cover Reconfiguration on $\operatorname{poly}(\varepsilon^{-1})$-uniform hypergraphs is $\mathsf{PSPACE}$-hard to approximate within a factor of $2-\varepsilon$.
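The $2$-factor algorithm mentioned above is presumably the straightforward transformation through the union of the two covers; the sketch below follows that idea, with sets represented as frozensets (names are illustrative).

```python
def two_factor_reconfiguration(universe, start, goal):
    """Naive transformation for Minmax Set Cover Reconfiguration: first add every
    set of `goal`, then remove the sets unique to `start`. Every intermediate
    family still covers `universe`, and its size never exceeds |start| + |goal|,
    which is within a factor of 2 of the optimum (the optimum is at least k)."""
    def covers(family):
        return set().union(*family) >= set(universe) if family else not set(universe)

    current, sequence = list(start), [list(start)]
    for s in goal:
        if s not in current:
            current.append(s)
            sequence.append(list(current))
    for s in start:
        if s not in goal:
            current.remove(s)
            sequence.append(list(current))
    assert all(covers(f) for f in sequence)
    return sequence
```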
We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices, as a natural, promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task unless the input matrices are forced to have favorable structures. In particular, we prove the following:
Motivated by the inapproximability of reconfiguration problems, we present a new PCP-type characterization of $\mathsf{PSPACE}$, which we call a probabilistically checkable reconfiguration proof (PCRP): Any $\mathsf{PSPACE}$ computation can be encoded into an exponentially long sequence of polynomially long proofs such that every adjacent pair of the proofs differs in at most one bit, and every proof can be probabilistically checked by reading a constant number of bits.
Using the new characterization, we prove $\mathsf{PSPACE}$-completeness of approximate versions of many reconfiguration problems, such as the Maxmin 3-SAT Reconfiguration problem. This resolves the open problem posed by Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (ISAAC 2008; Theor. Comput. Sci. 2011) as well as the Reconfiguration Inapproximability Hypothesis by Ohsaka (STACS 2023) affirmatively. We also present $\mathsf{PSPACE}$-completeness of approximating the Maxmin Clique Reconfiguration problem to within a factor of $n^\varepsilon$ for some constant $\varepsilon > 0$.
Excellent tail performance is crucial for modern machine learning tasks, such as algorithmic fairness, class imbalance, and risk-sensitive decision making, as it ensures the effective handling of challenging samples within a dataset. Tail performance is also a vital determinant of success for personalised recommender systems, as it reduces the risk of losing users with low satisfaction. This study introduces a "safe" collaborative filtering method that prioritises recommendation quality for less-satisfied users rather than focusing on the average performance. Our approach minimises the conditional value at risk (CVaR), which represents the average risk over the tail of the users' loss distribution. To overcome computational challenges for web-scale recommender systems, we develop a robust yet practical algorithm that extends the most scalable method, implicit alternating least squares (iALS). Empirical evaluation on real-world datasets demonstrates the excellent tail performance of our approach while maintaining competitive computational efficiency.
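As a point of reference, here is a small sketch of the empirical CVaR objective over per-user losses; the paper's iALS-style optimisation is not reproduced, and `alpha` and the estimator form are illustrative.

```python
import numpy as np

def empirical_cvar(user_losses, alpha):
    """Empirical CVaR at level alpha: the average loss over the worst
    (1 - alpha) fraction of users, e.g., alpha = 0.7 averages the top 30%."""
    losses = np.sort(np.asarray(user_losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil((1.0 - alpha) * len(losses))))
    return losses[:k].mean()
```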
In the Determinant Maximization problem, given an $n \times n$ positive semi-definite matrix $\mathbf{A}$ in $\mathbb{Q}^{n \times n}$ and an integer $k$, we are required to find a $k \times k$ principal submatrix of $\mathbf{A}$ having the maximum determinant. This problem is known to be NP-hard and further proven to be W[1]-hard with respect to $k$ by Koutis; i.e., an $f(k)n^{\mathcal{O}(1)}$-time algorithm is unlikely to exist for any computable function $f$. However, there is still room to explore its parameterized complexity in the restricted case, in the hope of overcoming the general-case parameterized intractability. In this study, we rule out the fixed-parameter tractability of Determinant Maximization even if an input matrix is extremely sparse or low rank, or an approximate solution is acceptable. We first prove that Determinant Maximization is NP-hard and W[1]-hard even if an input matrix is an arrowhead matrix; i.e., the underlying graph formed by nonzero entries is a star, implying that the structural sparsity is not helpful. By contrast, we show that Determinant Maximization is solvable in polynomial time on tridiagonal matrices. Thereafter, we demonstrate the W[1]-hardness with respect to the rank $r$ of an input matrix. Our result is stronger than Koutis' result in the sense that any $k \times k$ principal submatrix is singular whenever $k > r$. We finally give evidence that it is W[1]-hard to approximate Determinant Maximization parameterized by $k$ within a factor of $2^{-c\sqrt{k}}$ for some universal constant $c > 0$. Our hardness result is conditional on the Parameterized Inapproximability Hypothesis posed by Lokshtanov, Ramanujan, Saurabh, and Zehavi, which asserts that a gap version of the Binary Constraint Satisfaction Problem is W[1]-hard. To complement this result, we develop an $\varepsilon$-additive approximation algorithm that runs in $\varepsilon^{-r^2} \cdot r^{\mathcal{O}(r^3)} \cdot n^{\mathcal{O}(1)}$ time for the rank $r$ of an input matrix, provided that the diagonal entries are bounded.
Combinatorial reconfiguration is an emerging field of theoretical computer science that studies the reachability between a pair of feasible solutions for a particular combinatorial problem. We study the hardness of accomplishing “approximate” reconfigurability, which allows us to relax the feasibility of solutions. For example, in $\textsf{Minmax Set Cover Reconfiguration}$, given a pair of covers $\mathcal{C}_\mathsf{s}$ and $\mathcal{C}_\mathsf{t}$ for a set system $\mathcal{F}$, we aim to transform $\mathcal{C}_\mathsf{s}$ into $\mathcal{C}_\mathsf{t}$ by repeatedly adding or removing a single set of $\mathcal{F}$ so as to minimize the maximum size of covers during transformation. The recent study by Ohsaka (STACS 2023) gives evidence that a host of reconfiguration problems are PSPACE-hard to approximate assuming the Reconfiguration Inapproximability Hypothesis (RIH), which postulates that a gap version of $\textsf{Maxmin CSP Reconfiguration}$ is PSPACE-hard. One limitation of this approach is that inapproximability factors are not shown explicitly, so that even a $1.00 \cdots 001$-approximation algorithm for $\textsf{Minmax Set Cover Reconfiguration}$ may not be ruled out, whereas the problem admits a $2$-factor approximation as per Ito, Demaine, Harvey, Papadimitriou, Sideri, Uehara, and Uno (Theor. Comput. Sci., 2011).
In this paper, we demonstrate gap amplification for reconfiguration problems. In particular, we prove an explicit factor of PSPACE-hardness of approximation for three popular reconfiguration problems assuming only RIH. Our main result is that under RIH, $\textsf{Maxmin Binary CSP Reconfiguration}$ is PSPACE-hard to approximate within a factor of $0.9942$. Moreover, the same result holds even if the constraint graph is restricted to a $(d,\lambda)$-expander for arbitrarily small $\frac{\lambda}{d}$. The crux of its proof is an alteration of the gap amplification technique due to Dinur (J. ACM, 2007), which amplifies the $1$ vs. $1-\varepsilon$ gap for arbitrarily small $\varepsilon > 0$ up to the $1$ vs. $1-0.0058$ gap. As an application of the main result, we demonstrate that $\textsf{Minmax Set Cover Reconfiguration}$ and $\textsf{Minmax Dominating Set Reconfiguration}$ are PSPACE-hard to approximate within a factor of $1.0029$ under RIH. Our proof is based on a gap-preserving reduction from Label Cover to Set Cover due to Lund and Yannakakis (J. ACM, 1994). Unlike the Lund--Yannakakis reduction, however, our reduction makes essential use of the expander mixing lemma. We highlight that all results hold unconditionally as long as “PSPACE-hard” is replaced by “NP-hard,” and are the first explicit inapproximability results for reconfiguration problems without resorting to the parallel repetition theorem. We finally complement the main result by showing that it is NP-hard to approximate $\textsf{Maxmin Binary CSP Reconfiguration}$ within a factor better than $\frac{3}{4}$.
In matching markets such as job posting and online dating platforms, the recommender system plays a critical role in the success of the platform. Unlike standard recommender systems that suggest items to users, reciprocal recommender systems (RRSs), which suggest other users, must take into account the mutual interests of users. In addition, ensuring that recommendation opportunities do not disproportionately favor popular users is essential for the total number of matches and for fairness among users. Existing recommendation methods in matching markets, however, face computational challenges on large-scale platforms and depend on specific examination functions in the position-based model (PBM). In this paper, we introduce a reciprocal recommendation method based on the matching with transferable utility (TU matching) model in the context of ranking recommendations in matching markets, and propose a fast, examination-model-free algorithm. Furthermore, we evaluate our approach in experiments with synthetic data and real-world data from an online dating platform in Japan. Our method performs better than or as well as existing methods in terms of the total number of matches and works well even on a large-scale dataset that one existing method cannot handle.
Diversification of recommendation results is a promising approach for coping with the uncertainty associated with users' information needs. Of particular importance in diversified recommendation is to define and optimize an appropriate diversity objective. In this study, we revisit the most popular diversity objective called intra-list distance (ILD), defined as the average pairwise distance between selected items, and a similar but lesser-known objective called dispersion, which is the minimum pairwise distance. Owing to their simplicity and flexibility, ILD and dispersion have been used in a plethora of studies on diversified recommendation. Nevertheless, little is known about what kinds of items they actually prefer. We present a critical reexamination of ILD and dispersion from theoretical and experimental perspectives. Our theoretical results reveal that these objectives have potential drawbacks: ILD may select duplicate items that are very close to each other, whereas dispersion may overlook distant item pairs. As a competitor to ILD and dispersion, we design a diversity objective called Gaussian ILD, which can interpolate between ILD and dispersion by tuning the bandwidth parameter. We verify our theoretical results through experiments on real-world data and confirm the extreme behavior of ILD and dispersion in practice.
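The three objectives can be written down in a few lines; the Gaussian ILD below is one plausible form, assuming the pairwise Gaussian-kernel distance $1-\exp(-d^2/2h^2)$ with bandwidth $h$, and may differ in details from the paper's exact definition.

```python
import numpy as np

def pairwise_distances(X):
    """All pairwise Euclidean distances among the rows of X (item embeddings)."""
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(X), k=1)]

def ild(X):
    """Intra-list distance: the average pairwise distance."""
    return pairwise_distances(X).mean()

def dispersion(X):
    """Dispersion: the minimum pairwise distance."""
    return pairwise_distances(X).min()

def gaussian_ild(X, h):
    """Assumed form of Gaussian ILD: average kernel distance 1 - exp(-d^2/(2 h^2)).
    For large h this behaves like a (squared-distance) ILD up to scaling, while
    for small h the closest pairs dominate, mimicking dispersion."""
    d = pairwise_distances(X)
    return (1.0 - np.exp(-d ** 2 / (2.0 * h ** 2))).mean()
```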
Beyond accuracy, there are a variety of aspects to the quality of recommender systems, such as diversity, fairness, and robustness. We argue that many of the prevalent problems in recommender systems are partly due to the low dimensionality of user and item embeddings, particularly when dot-product models, such as matrix factorization, are used. In this study, we showcase empirical evidence suggesting the necessity of sufficient dimensionality for user/item embeddings to achieve diverse, fair, and robust recommendation. We then present theoretical analyses of the expressive power of dot-product models. Our theoretical results demonstrate that the number of possible rankings expressible under dot-product models is exponentially bounded by the dimension of the item factors. We empirically find that low dimensionality contributes to popularity bias, widening the gap between the rank positions of popular and long-tail items, and we give a theoretical justification for this phenomenon.
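The exponential bound can be probed empirically. The quick sketch below (not from the paper) fixes low-dimensional item factors, samples random user vectors, and counts how many distinct rankings the dot-product model actually realizes.

```python
import numpy as np

def count_reachable_rankings(item_factors, num_users=100000, seed=0):
    """Estimate how many distinct item rankings a dot-product model with the
    given item factors can realize, by sampling random user vectors.
    With n items but low dimension d, far fewer than n! rankings show up."""
    rng = np.random.default_rng(seed)
    n, d = item_factors.shape
    users = rng.standard_normal((num_users, d))
    scores = users @ item_factors.T                  # (num_users, n)
    rankings = {tuple(np.argsort(-row)) for row in scores}
    return len(rankings)

# e.g., 6 items in 2 dimensions: only a handful of rankings arise, far below 6! = 720.
# print(count_reachable_rankings(np.random.default_rng(1).standard_normal((6, 2))))
```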
Combinatorial reconfiguration is a growing research field studying problems on the transformability between a pair of solutions of a search problem. For example, in SAT Reconfiguration, for a Boolean formula $\varphi$ and two satisfying truth assignments $\sigma_\mathsf{s}$ and $\sigma_\mathsf{t}$ for $\varphi$, we are asked to determine whether there is a sequence of satisfying truth assignments for $\varphi$ starting from $\sigma_\mathsf{s}$ and ending with $\sigma_\mathsf{t}$, each resulting from the previous one by flipping a single variable assignment. We consider the approximability of optimization variants of reconfiguration problems; e.g., Maxmin SAT Reconfiguration requires us to maximize the minimum fraction of satisfied clauses of $\varphi$ during transformation from $\sigma_\mathsf{s}$ to $\sigma_\mathsf{t}$. By solving such optimization variants approximately, we may obtain a reasonable reconfiguration sequence comprising almost-satisfying truth assignments.
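To make the objective concrete, here is a small sketch that evaluates the Maxmin SAT Reconfiguration value of a candidate flip sequence; the clause and assignment encodings are illustrative.

```python
def satisfied_fraction(clauses, assignment):
    """Fraction of clauses satisfied; a clause is a list of signed literals,
    e.g., [1, -3] means (x1 OR NOT x3), and `assignment` maps variable -> bool."""
    sat = sum(
        any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses
    )
    return sat / len(clauses)

def maxmin_value(clauses, flip_sequence, start):
    """Objective value of one candidate solution to Maxmin SAT Reconfiguration:
    the minimum satisfied fraction over all assignments visited while flipping
    one variable at a time, starting from `start`."""
    assignment = dict(start)
    values = [satisfied_fraction(clauses, assignment)]
    for var in flip_sequence:
        assignment[var] = not assignment[var]
        values.append(satisfied_fraction(clauses, assignment))
    return min(values)
```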
In this study, we prove a series of gap-preserving reductions to give evidence that a host of reconfiguration problems are PSPACE-hard to approximate, under some plausible assumption. Our starting point is a new working hypothesis called the Reconfiguration Inapproximability Hypothesis (RIH), which asserts that a gap version of Maxmin CSP Reconfiguration is PSPACE-hard. This hypothesis may be thought of as a reconfiguration analogue of the PCP theorem. Our main result is PSPACE-hardness of approximating Maxmin $3$-SAT Reconfiguration of bounded occurrence under RIH. The crux of its proof is a gap-preserving reduction from Maxmin Binary CSP Reconfiguration to itself of bounded degree. Because a simple application of the degree reduction technique using expander graphs due to Papadimitriou and Yannakakis does not preserve the perfect completeness, we modify the alphabet as if each vertex could take a pair of values simultaneously. To accomplish the soundness requirement, we further apply an explicit family of near-Ramanujan graphs and the expander mixing lemma. As an application of the main result, we demonstrate that under RIH, optimization variants of popular reconfiguration problems are PSPACE-hard to approximate, including Nondeterministic Constraint Logic due to Hearn and Demaine, Independent Set Reconfiguration, Clique Reconfiguration, and Vertex Cover Reconfiguration.
We study the problem of deciding reconfigurability of target sets of a graph. Given a graph $G$ with vertex thresholds $\tau$, consider a dynamic process in which vertex $v$ becomes activated once at least $\tau(v)$ of its neighbors are activated. A vertex set $S$ is called a target set if all vertices of $G$ would be activated when initially activating vertices of $S$. In the Target Set Reconfiguration problem, given two target sets $X$ and $Y$ of the same size, we are required to determine whether $X$ can be transformed into $Y$ by repeatedly swapping one vertex in the current set with another vertex not in the current set, while preserving every intermediate set as a target set. In this paper, we investigate the complexity of Target Set Reconfiguration in restricted cases. On the hardness side, we prove that Target Set Reconfiguration is PSPACE-complete on bipartite planar graphs of degree $3$ and $4$ and of threshold $2$, on bipartite $3$-regular graphs and planar $3$-regular graphs of threshold $1$ and $2$, and on split graphs, which is in contrast to the fact that a special case called Vertex Cover Reconfiguration is in P for the last graph class. On the positive side, we present a polynomial-time algorithm for Target Set Reconfiguration on graphs of maximum degree $2$ and on trees. The latter result can be thought of as a generalization of that for Vertex Cover Reconfiguration.
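Below is a minimal sketch of the activation process that defines target sets, used here only to test feasibility of an intermediate set; the reconfiguration algorithms themselves are not reproduced.

```python
def is_target_set(adj, tau, S):
    """Check whether S is a target set: repeatedly activate any vertex v with
    at least tau[v] activated neighbors, and test if everything gets activated.
    `adj` maps each vertex to its list of neighbors."""
    active = set(S)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and sum(u in active for u in adj[v]) >= tau[v]:
                active.add(v)
                changed = True
    return len(active) == len(adj)
```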
We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., to find a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter $p$. We present several complexity-theoretic hardness results that explain the difficulty in approximating MAP inference and the normalizing constant for E-DPPs. We first prove that unconstrained MAP inference for an $n \times n$ matrix is NP-hard to approximate within a factor of $2^{\beta n}$, where $\beta = 10^{-10^{13}}$. This result improves upon the best-known inapproximability factor of $(\frac{9}{8}-\epsilon)$, and rules out the existence of any polynomial-factor approximation algorithm assuming P ≠ NP. We then show that log-determinant maximization is NP-hard to approximate within a factor of $\frac{5}{4}$ for the unconstrained case and within a factor of $1+10^{-10^{13}}$ for the size-constrained monotone case. In particular, log-determinant maximization does not admit a polynomial-time approximation scheme unless P = NP. As a corollary of the first result, we demonstrate that the normalizing constant for E-DPPs of any (fixed) constant exponent $p \geq \beta^{-1} = 10^{10^{13}}$ is NP-hard to approximate within a factor of $2^{\beta pn}$, which is in contrast to the case of $p \leq 1$ admitting a fully polynomial-time randomized approximation scheme.
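For tiny instances, the quantity at stake can be evaluated directly. The brute-force sketch below computes the E-DPP normalizing constant in exponential time, purely to illustrate what the hardness results above rule out approximating efficiently.

```python
import itertools
import numpy as np

def edpp_normalizing_constant(A, p):
    """Brute-force normalizing constant of an E-DPP with exponent p:
    the sum of det(A_S)^p over all subsets S of [n] (the empty set contributes 1).
    Exponential time; for p = 1 it collapses to det(A + I)."""
    n = A.shape[0]
    total = 1.0                          # empty subset
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            total += np.linalg.det(A[np.ix_(S, S)]) ** p
    return total
```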
Reconfiguration problems require finding a step-by-step transformation between a pair of feasible solutions for a particular problem.
The primary concern in Theoretical Computer Science has been to reveal their computational complexity for classical problems.
This paper presents an initial study on reconfiguration problems derived from a submodular function, which has more of a flavor of Data Mining.
Our submodular reconfiguration problems ask us to find a solution sequence connecting two input solutions such that each solution has an objective value above a threshold under a submodular function $f: 2^{[n]} \to \mathbb{R}_+$ and is obtained from the previous one by applying a simple transformation rule.
We formulate three reconfiguration problems:
Monotone Submodular Reconfiguration (MSReco), which applies to influence maximization, and two versions of Unconstrained Submodular Reconfiguration (USReco), which apply to determinantal point processes.
Our contributions are summarized as follows:
Determinantal point processes (DPPs) are attractive probabilistic models for expressing item quality and set diversity simultaneously. Although DPPs are widely applicable to many subset selection tasks, there exist simple small-size probability distributions that no DPP can express. To overcome this drawback while keeping the good properties of DPPs, in this paper we investigate the expressive power of convex combinations of DPPs. We provide upper and lower bounds on the number of DPPs required for exactly expressing any probability distribution. For the approximation error, we give an upper bound of $n-\lfloor \log t\rfloor +\epsilon$ for any $\epsilon >0$ on the Kullback--Leibler divergence of the approximate distribution from a given joint probability distribution, where $t$ is the number of DPPs. Our numerical simulation on an online retail dataset empirically verifies that a convex combination of only two DPPs can outperform a nonsymmetric DPP in terms of the Kullback--Leibler divergence. By combining a polynomial number of DPPs, we can express probability distributions induced by bounded-degree pseudo-Boolean functions, which include weighted coverage functions of bounded occurrence.
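The object under study is simply a weighted sum of DPP probabilities; a tiny sketch (with a subset given as a list of indices) is shown below.

```python
import numpy as np

def dpp_prob(L, S):
    """Probability of subset S under the L-ensemble DPP with kernel L."""
    num = np.linalg.det(L[np.ix_(S, S)]) if S else 1.0
    return num / np.linalg.det(L + np.eye(L.shape[0]))

def mixture_prob(kernels, weights, S):
    """Probability of S under a convex combination of DPPs: a weighted sum of
    the component DPP probabilities, which is the quantity the bounds concern."""
    return sum(w * dpp_prob(L, S) for L, w in zip(kernels, weights))
```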
The concept of $k$-submodularity is an extension of submodularity whose maximization has various applications, such as influence maximization and sensor placement. To model complicated real-world problems in such situations, we often want to handle multiple factors simultaneously, such as a more detailed parameter representing a property of a given function or a constraint imposed on the function. Moreover, it is preferable that algorithms for the resulting problems be simple. In this paper, for both monotone $k$-submodular function maximization with bounded curvature and monotone weakly $k$-submodular function maximization, we analyze the approximation ratios of greedy-type algorithms under a matroid constraint and under an individual size constraint. Furthermore, we analyze the approximation ratio for another relaxation of $k$-submodular functions, namely approximately $k$-submodular functions, under a matroid constraint.
We study a new optimization problem called submodular maximization under submodular cover (SMSC), which requires us to find a fixed-size set that maximizes one monotone submodular function $f$ subject to the constraint that another monotone submodular function $g$ is approximately maximized. SMSC is preferable to submodular function maximization when one wants to maximize two objective functions simultaneously. We propose an optimization framework for SMSC, which guarantees a constant-factor approximation. The key idea of our algorithm is to construct a new instance of submodular function maximization from a given instance of SMSC, which can be approximated efficiently. Moreover, if we are given an approximation oracle for submodular function maximization, our algorithm provably produces nearly optimal solutions. We experimentally evaluate the proposed algorithm in terms of sensor placement and movie recommendation using real-world data.
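Frameworks of this kind typically rest on a standard routine for monotone submodular maximization; the sketch below shows the classical greedy that such an approximation oracle could be, while the construction of the combined instance from $f$ and $g$ is not shown.

```python
def greedy_max(f, ground_set, k):
    """Classical greedy for maximizing a monotone submodular set function f
    (given as f(frozenset) -> float) under a cardinality constraint k.
    Achieves a (1 - 1/e)-approximation and can serve as the oracle that
    SMSC-style frameworks call on a suitably combined instance."""
    S = frozenset()
    for _ in range(min(k, len(ground_set))):
        best = max(
            (v for v in ground_set if v not in S),
            key=lambda v: f(S | {v}) - f(S),   # marginal gain of adding v
        )
        S = S | {best}
    return S
```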
We consider the problem of counting the number of vertices reachable from each vertex in a digraph $G$, which is equal to computing all the out-degrees of the transitive closure of $G$. The current (theoretically) fastest algorithms run in quadratic time; however, Borassi has shown that this problem is not solvable in truly subquadratic time unless the Strong Exponential Time Hypothesis fails [Inf. Process. Lett., 116(10):628--630, 2016]. In this paper, we present an $\mathcal{O}(f^3n)$-time exact algorithm, where $n$ is the number of vertices in $G$ and $f$ is the feedback edge number of $G$. Our algorithm thus runs in truly subquadratic time for digraphs with $f=\mathcal{O}(n^{\frac{1}{3}-\epsilon})$ for any $\epsilon > 0$, i.e., when the number of edges is $n$ plus $\mathcal{O}(n^{\frac{1}{3}-\epsilon})$, and it is fully polynomial fixed-parameter tractable, a notion first introduced by Fomin, Lokshtanov, Pilipczuk, Saurabh, and Wrochna [ACM Trans. Algorithms, 14(3):34:1--34:45, 2018]. We also show that the same result holds for vertex-weighted digraphs, where the task is to compute the total weight of the vertices reachable from each vertex.
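For reference, the quadratic-time baseline that the paper improves upon in the small-$f$ regime is just one BFS per vertex.

```python
from collections import deque

def descendant_counts(adj):
    """Quadratic-time baseline: for each vertex, count the vertices reachable
    from it (its out-degree in the transitive closure) by a separate BFS.
    `adj` maps each vertex to its list of out-neighbors."""
    counts = {}
    for s in adj:
        seen = {s}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        counts[s] = len(seen)   # includes s itself; subtract 1 if self-reachability is excluded
    return counts
```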
We consider determinantal point processes (DPPs) constrained by spanning trees. Given a graph $G=(V,E)$ and a positive semi-definite matrix $\mathbf{A}$ indexed by $E$, a spanning-tree DPP defines a distribution in which we draw $S\subseteq E$ with probability proportional to $\det(\mathbf{A}_S)$ only if $S$ induces a spanning tree. We prove $\sharp\textsf{P}$-hardness of computing the normalizing constant for spanning-tree DPPs and provide an approximation-preserving reduction from the mixed discriminant, for which no FPRAS is known. We show similar results for DPPs constrained by forests.
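Here is a short sketch of the unnormalized probability mass of a spanning-tree DPP; summing it over all edge subsets yields the normalizing constant shown $\sharp\textsf{P}$-hard above. The input encoding (an edge list plus a matrix indexed by edges) is illustrative.

```python
import numpy as np

def spanning_tree_dpp_mass(edges, n, A, S):
    """Unnormalized probability of an edge subset S (a list of edge indices):
    det(A_S) if S induces a spanning tree of the n-vertex graph, else 0."""
    if len(S) != n - 1:
        return 0.0
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e in S:                       # union-find check that S is acyclic, hence a spanning tree
        u, v = edges[e]
        ru, rv = find(u), find(v)
        if ru == rv:
            return 0.0
        parent[ru] = rv
    return float(np.linalg.det(A[np.ix_(S, S)]))
```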
We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., to find a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter $p$. We prove the following complexity-theoretic hardness results that explain the difficulty in approximating unconstrained MAP inference and the normalizing constant for E-DPPs.
In this paper, we propose algorithms for online submodular optimization with tracking regret bounds. Online submodular optimization is a generic framework for sequential decision making used to select subsets. Existing algorithms for online submodular optimization have been shown to achieve small (static) regret, which means that the algorithm's performance is comparable to the performance of a fixed optimal action. Such algorithms, however, may perform poorly in an environment that changes over time. To overcome this problem, we apply a tracking-regret-analysis framework to online submodular optimization, in which the output is assessed through comparison with time-varying optimal subsets. We propose algorithms for submodular minimization, monotone submodular maximization under a size constraint, and unconstrained submodular maximization, and we show tracking regret bounds. In addition, we show that our tracking regret bound for submodular minimization is nearly tight.
Prediction in a new domain without any training sample, called zero-shot domain adaptation (ZSDA), is an important task in domain adaptation. While prediction in a new domain has gained much attention in recent years, in this paper we investigate another potential use of ZSDA. Specifically, instead of predicting responses in a new domain, we find a description of a new domain given a prediction. The task can be regarded as predictive optimization, but existing predictive optimization methods have not been extended to handle multiple domains. We propose a simple framework for predictive optimization with ZSDA and analyze the condition under which the optimization problem becomes a convex optimization problem. We also discuss how to handle the interaction of the characteristics of a domain in predictive optimization. Through numerical experiments, we demonstrate the potential usefulness of our proposed framework.
We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices as a natural, promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task, unless input matrices are forced to have favorable structures. In particular, we prove the following:
Influence maximization is among the most fundamental algorithmic problems in social influence analysis. Over the last decade, a great effort has been devoted to developing efficient algorithms for influence maximization, so that identifying the ``best'' algorithm has become a demanding task. In SIGMOD'17, Arora, Galhotra, and Ranu reported benchmark results on eleven existing algorithms and demonstrated that there is no single state-of-the-art offering the best trade-off between computational efficiency and solution quality.
In this paper, we report a high-level experimental study of three well-established algorithmic approaches for influence maximization, referred to as Oneshot, Snapshot, and Reverse Influence Sampling (RIS). Unlike Arora et al., our experimental methodology is designed so that we examine the distribution of random solutions, characterize the relation between the number of samples and the actual solution quality, and avoid implementation dependencies. Our main findings are as follows: 1. For a sufficiently large number of samples, we obtain a unique solution regardless of the algorithm. 2. The average solution quality of Oneshot, Snapshot, and RIS improves at the same rate up to a scaling of the number of samples. 3. Oneshot requires more samples than Snapshot, and Snapshot requires fewer but larger samples than RIS. We discuss time efficiency when Oneshot, Snapshot, and RIS are tuned to achieve identical accuracy. Our conclusion is that Oneshot is suitable only if the size of available memory is limited, and RIS is more efficient than Snapshot for large networks; Snapshot is preferable for small, low-probability networks.
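As a rough illustration of the RIS approach compared in the study, the sketch below samples reverse-reachable sets under the independent cascade model with a uniform edge probability `p` (an assumption made for brevity) and greedily picks seeds by max coverage; production implementations choose the number of samples adaptively to certify their approximation guarantees.

```python
import random

def random_rr_set(radj, p):
    """One reverse-reachable (RR) set under the independent cascade model:
    pick a uniformly random root and walk incoming edges, keeping each with
    probability p. `radj` maps a vertex to its list of in-neighbors."""
    root = random.choice(list(radj))
    rr, stack = {root}, [root]
    while stack:
        v = stack.pop()
        for u in radj[v]:
            if u not in rr and random.random() < p:
                rr.add(u)
                stack.append(u)
    return rr

def ris_seed_selection(radj, p, k, num_samples):
    """Bare-bones RIS: sample RR sets, then greedily pick k seeds covering the
    most not-yet-covered RR sets (max coverage)."""
    rr_sets = [random_rr_set(radj, p) for _ in range(num_samples)]
    seeds = set()
    for _ in range(k):
        best = max(radj, key=lambda v: sum(v in rr for rr in rr_sets if not rr & seeds))
        seeds.add(best)
    return seeds
```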
Predictive optimization is a framework for designing an entire data-analysis pipeline that comprises both prediction and optimization, so as to maximize overall throughput performance. In practical demand analysis, knowledge of hierarchies, which may be geographical or categorical, is recognized as useful, yet such additional knowledge has not been taken into account in existing predictive optimization. In this paper, we propose a novel hierarchical predictive optimization pipeline that can deal with a wide range of applications, including inventory management. Based on an existing hierarchical demand prediction model, we present a stochastic matching framework that can manage prediction uncertainty in decision making. We further provide a greedy approximation algorithm for solving demand matching on hierarchical structures. In experimental evaluations on both artificial and real-world data, we demonstrate the effectiveness of our proposed hierarchical predictive optimization pipeline.
We consider and formulate problems of PageRank score boosting, motivated by applications such as effective web advertising. More precisely, given a graph and target vertices, one is required to find a fixed-size set of missing edges that maximizes the minimum PageRank score among the targets. We provide theoretical analyses showing that all of the formulated problems are NP-hard. To overcome the hardness, we develop heuristic-based algorithms for them. We finally perform experiments on several real-world networks to verify the effectiveness of the proposed algorithms compared to baselines. Specifically, our algorithm achieves a 100-fold improvement in the minimum PageRank score among 100 selected vertices by adding only dozens of edges.
Algorithmic problems of influence estimation and influence maximization have been actively researched for decades. We developed a novel algorithm, NoSingles, based on the Reverse Influence Sampling method proposed by Borgs et al. in 2013. NoSingles solves the problem of influence maximization in large graphs using much smaller space than the existing state-of-the-art algorithms while preserving the theoretical $(1-1/e-\epsilon)$-approximation guarantee for any $\epsilon > 0$. The NoSingles data structure is saved on the hard drive of the machine and can be used repeatedly for playing out ``what if'' scenarios (e.g., trying different combinations of seeds and calculating the influence spread). We also introduce a variation of the NoSingles algorithm, which further decreases the running time while preserving the approximation guarantee. We support our claims with extensive experiments on large real-world graphs. The savings in required space allow NoSingles to run successfully on a consumer-grade laptop for graphs with tens of millions of vertices and hundreds of millions of edges.
There are many classical problems in P whose time complexities have not been improved over the past decades.
Recent studies of ``Hardness in P'' have revealed that, for several such problems, the current fastest algorithm is the best possible under some complexity assumptions.
To bypass this difficulty, Fomin et al. (SODA 2017) introduced the concept of fully polynomial FPT algorithms.
For a problem with the current best time complexity $O(n^c)$, the goal is to design an algorithm running in $k^{O(1)}n^{c'}$ time for a parameter $k$ and a constant $c' < c$.
In this paper, we investigate the complexity of graph problems in P parameterized by tree-depth, a graph parameter related to tree-width.
We show that a simple divide-and-conquer method can solve many graph problems, including
Weighted Matching, Negative Cycle Detection, Minimum Weight Cycle, Replacement Paths, and 2-hop Cover,
in $O(\mathrm{td}\cdot m)$ time or $O(\mathrm{td}\cdot (m+n\log n))$ time, where $\mathrm{td}$ is the tree-depth of the input graph.
Because any graph of tree-width $\mathrm{tw}$ has tree-depth at most $(\mathrm{tw}+1)\log_2 n$, our algorithms also run in $O(\mathrm{tw}\cdot m\log n)$ time or $O(\mathrm{tw}\cdot (m+n\log n)\log n)$ time.
These results match or improve upon the previous best algorithms parameterized by tree-width.
In particular, we resolve an open problem posed by Fomin et al. by giving a fully polynomial FPT algorithm for Weighted Matching parameterized by tree-width.
Fueled by the increasing popularity of online social networks, social influence analysis has attracted a great deal of research attention in the past decade. The diffusion process is often modeled using influence graphs, and there has been a line of research that involves algorithmic problems in influence graphs. However, the vast size of today's real-world networks raises a serious issue with regard to computational efficiency.
In this paper, we propose a new algorithm for reducing influence graphs. Given an input influence graph, the proposed algorithm produces a vertex-weighted influence graph that is compact and approximates the diffusion properties of the input graph. The central strategy of influence graph reduction is coarsening, which has the potential to greatly reduce the number of edges by merging a vertex set into a single weighted vertex. We provide two implementations: a speed-oriented implementation, which runs in linear time with linear space, and a scalability-oriented implementation, which runs in practically linear time with sublinear space. Further, we present general frameworks using our compact graphs that accelerate existing algorithms for influence maximization and influence estimation, which are motivated by practical applications such as viral marketing. Using these frameworks, we can quickly obtain solutions that have accuracy guarantees under a reasonable assumption. Experiments with real-world networks demonstrate that the proposed algorithm scales to billion-edge graphs and reduces the graph size to as little as 4%. In addition, our influence maximization framework achieves a four-fold speed-up over the state-of-the-art D-SSA algorithm, and our influence estimation framework cuts the computation time of a simulation-based method down to 3.5%.
Motivated by viral marketing, stochastic diffusion processes that model influence spread on a network have been studied intensively. The primary interest in such models has been to find a seed set of a fixed size that maximizes the expected size of the cascade from it. In practice, however, it is not desirable to risk ending up with a small cascade, even if the expected cascade size is large. To address this issue, we adopt conditional value at risk (CVaR) as a risk measure and propose an algorithm that computes a portfolio over seed sets with a provable guarantee on its CVaR. Using real-world social networks, we demonstrate that the portfolio computed by our algorithm has a significantly better CVaR than seed sets computed by other baseline methods.
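Below is a Monte Carlo sketch of the CVaR being targeted, evaluated for a single seed set under the independent cascade model with a uniform edge probability `p` (an assumption); the paper's portfolio optimization over seed sets is not reproduced.

```python
import random

def cascade_size(adj, p, seeds):
    """One independent-cascade simulation with uniform edge probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        v = frontier.pop()
        for u in adj[v]:
            if u not in active and random.random() < p:
                active.add(u)
                frontier.append(u)
    return len(active)

def cvar_of_seed_set(adj, p, seeds, alpha, num_sims=1000):
    """Monte Carlo CVaR at level alpha of the cascade size: the average over the
    worst alpha fraction of simulated cascades (small cascades are the risk)."""
    sizes = sorted(cascade_size(adj, p, seeds) for _ in range(num_sims))
    k = max(1, int(alpha * num_sims))
    return sum(sizes[:k]) / k
```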
We study the hardness of approximating two reconfiguration problems. One problem is Maxmin $k$-Cut Reconfiguration, which is a reconfiguration analogue of Max $k$-Cut. Given a graph $G$ and its two $k$-colorings $f_\mathsf{start},f_\mathsf{end}$, we are required to transform $f_\mathsf{start}$ into $f_\mathsf{end}$ by repeatedly recoloring a single vertex so as to maximize the minimum fraction of bichromatic edges of $G$ at any intermediate state. The other is Maxmin E$k$-SAT Reconfiguration, which is a reconfiguration analogue of Max E$k$-SAT. For a $k$-CNF formula $\varphi$ where every clause contains exactly $k$ literals and its two satisfying assignments $\alpha_\mathsf{start},\alpha_\mathsf{end}$, we are asked to transform $\alpha_\mathsf{start}$ into $\alpha_\mathsf{end}$ by flipping a single variable assignment at a time so that the minimum fraction of satisfied clauses of $\varphi$ is maximized. The Probabilistically Checkable Reconfiguration Proof theorem due to Hirahara and Ohsaka (STOC 2024) and Karthik C. S. and Manurangsi (2023) implies that Maxmin 4-Cut Reconfiguration and Maxmin E3-SAT Reconfiguration are $\textsf{PSPACE}$-hard to approximate within a constant factor. However, the asymptotic behavior of approximability for these problems with respect to $k$ is not well understood.
In this paper, we present the following hardness-of-approximation results and approximation algorithms for Maxmin $k$-Cut Reconfiguration and Maxmin E$k$-SAT Reconfiguration:
Given a graph $G$ with a vertex threshold function $\tau$, consider a dynamic process in which any inactive vertex $v$ becomes activated whenever at least $\tau(v)$ of its neighbors are activated. A vertex set $S$ is called a target set if all vertices of $G$ would be activated when initially activating vertices of $S$. In the $\textsf{Minmax Target Set Reconfiguration}$ problem, for a graph $G$ and its two target sets $X$ and $Y$, we wish to transform $X$ into $Y$ by repeatedly adding or removing a single vertex, using only target sets of $G$, so as to minimize the maximum size of any intermediate target set. We prove that it is NP-hard to approximate $\textsf{Minmax Target Set Reconfiguration}$ within a factor of $2-o\left(\frac{1}{\operatorname{polylog} n}\right)$, where $n$ is the number of vertices. Our result establishes a tight lower bound on approximability of $\textsf{Minmax Target Set Reconfiguration}$, which admits a $2$-factor approximation algorithm. The proof is based on a gap-preserving reduction from $\textsf{Target Set Selection}$ to $\textsf{Minmax Target Set Reconfiguration}$, where NP-hardness of approximation for the former problem is proven by Chen (SIAM J. Discrete Math., 2009) and Charikar, Naamad, and Wirth (APPROX/RANDOM 2016).
Given a two-prover game $G$ and its two satisfying labelings $\psi_\mathsf{s}$ and $\psi_\mathsf{t}$, the Label Cover Reconfiguration problem asks whether $\psi_\mathsf{s}$ can be transformed into $\psi_\mathsf{t}$ by repeatedly changing the value of a vertex while preserving any intermediate labeling satisfying $G$. We consider an optimization variant of Label Cover Reconfiguration by relaxing the feasibility of labelings, referred to as Maxmin Label Cover Reconfiguration: we are allowed to transform by passing through any non-satisfying labelings, but required to maximize the minimum fraction of satisfied edges during transformation from $\psi_\mathsf{s}$ to $\psi_\mathsf{t}$. Since the parallel repetition theorem of Raz (SIAM J. Comput., 1998), which implies NP-hardness of Label Cover within any constant factor, produces strong inapproximability results for many NP-hard problems, one may think of using Maxmin Label Cover Reconfiguration to derive inapproximability results for reconfiguration problems. We prove the following results on Maxmin Label Cover Reconfiguration, which display different trends from those of Label Cover and the parallel repetition theorem: