Improved Approximation Ratio for Strategyproof Facility Location on a Cycle

Krzysztof Rogowski, Marcin Dziubiński
University of Warsaw, Institute of Informatics

Motivating Examples

Selecting:

  • time of the day
  • political candidate
  • location of a facility

Model

Cycle graph $G$. Set of agents $N$, $|N| = n$ odd.

Model

Mechanism $f$ maps the agents' votes to a lottery over the vertices of $G$.

Model

Agents have preferences over the vertices of $G$, induced by their ideal vertices (their types).

Model

The cost incurred by an agent from selecting a vertex is its distance from the agent's ideal vertex.

Model

Agents observe the mechanism first, then submit their votes (any vertices of $G$).

Model

The mechanism selects a resulting lottery based (only) on the agents' votes.

Model

Agents and the social planner perceive the mechanism based on the resulting lottery.

Model (costs)

Let $p$ be a lottery (probability distribution) over the vertices of $G$, and let $x_i$ denote the ideal vertex of agent $i$.

Cost of agent $i$ from $p$: $c_i(p) = \sum_{v \in V} p(v)\, d(x_i, v)$.

Social cost of $p$: $SC(p) = \sum_{i \in N} c_i(p)$.

Model

The result of the selection is sampled from the lottery returned by the mechanism.

Model

The random sampling step itself is not of interest here.

Model

Agents and the social planner perceive the mechanism based on the resulting lottery.

Opt Mechanism

Consider the OPT mechanism, which returns some lottery over socially optimal vertices.

Agent costs

Deviations

Any agent can misreport their vote to try to minimize their own cost.

Deviations

Deviations

Strategyproofness (in expectation)

Intuition: No one can benefit from misreporting their vote.

Definition: A mechanism $f$ is strategyproof if for any state of the world $x = (x_1, \ldots, x_n)$, any agent $i$ and any alternative report $x_i'$ of agent $i$ it holds that:

$c_i(f(x)) \le c_i(f(x_i', x_{-i}))$.

Assumption

From now on we restrict ourselves to strategyproof mechanisms.

Implications: We can assume that agents report their votes truthfully.

Note: A mechanism returning the optimal vertex may not be strategyproof on the cycle (depending on the number of vertices).

Question: What is the best strategyproof mechanism?

Approximation Ratio

Intuition: Measure how good a mechanism is by comparing the social cost of its outcome to the optimal social cost in the worst case.

Definition: The approximation ratio of a mechanism $f$ is the minimal constant $\phi$ such that for every state of the world $x$ it holds that:

$SC(f(x)) \le \phi \cdot OPT(x)$, where $OPT(x)$ is the optimal social cost for state $x$.

RD Mechanism

Working: Selects each agent's vote with the same probability (uniform lottery).

Properties: strategyproof, neutral, anonymous

Approximation ratio bound: $2 - 2/n$.

PCD Mechanism (Meir 2019)

Working: Selects each agent's vote with probability proportional to the length of the opposing arc.

Properties: strategyproof, neutral, anonymous

Approximation ratio bound: $2$.

New stuff from now on

Mixed Mechanism RD+PCD

Definition: Consider the RD+PCD mechanism, which for each state of the world returns the average of the lotteries returned by RD and PCD: $f_{RD+PCD}(x) = \tfrac{1}{2} f_{RD}(x) + \tfrac{1}{2} f_{PCD}(x)$.

Preserves: strategyproofness, neutrality, anonymity (if both input mechanisms adhere to these properties)

Approximation ratio of RD+PCD

Observation: For every state, the approximation ratio of RD+PCD is the average of the approximation ratios of RD and PCD.

Let $s$ be a state of the world for which RD performs poorly (worse than PCD).
Consider the possible performance of the different mechanisms for $s$.

Performance (apx)   RD     PCD    RD+PCD
-----------------   ----   ----   ------
Pessimistic         ≈ 2    ≈ 2    ≈ 2
Optimistic          ≈ 2    ≈ 1    ≈ 1.5

This reasoning applies to every state of the world, and also with the roles of the two mechanisms switched.

Approximation ratio of RD+PCD

Bounds for approximation ratio:

  • trivial: $2$,
  • optimistic: $1.5$,
  • experimental: $1.5$ (!),
  • proven: $1.75$.

Results

Theorem: For any set of agents $N$ with odd cardinality and any cycle graph $G$, there exists a strategyproof mechanism for single facility location whose approximation ratio with respect to social cost is bounded from above by $1.75$.

Results

State of the Art (RD mechanism)   $2 - 2/n$
Our Result                        $1.75$
Our Experiments                   $1.5$
-------------------------------   ---------
Lower Bound (Meir 2019)

Further Work

  • Examination of the experimental bound of $1.5$ for the approximation ratio of RD+PCD.
  • Generalization to other classes of graphs.
  • Researching mixes of other mechanisms.

Proof technique (thanks)

  • Fixing the vertex generating the optimal social cost (due to neutrality and anonymity).
  • Cutting the cycle (estimating the distance on the cycle by the distance on a line segment).
  • Identifying variables in which the bound is monotonic.
  • Direct estimation.

Hello, my name is Krzysztof Rogowski. I will present the paper "Improved Approximation Ratio for Strategyproof Facility Location on a Cycle", which is joint work with Marcin Dziubiński.

Let's start with the motivation for studying facility location from a mechanism design perspective. This framework can model various real-world scenarios, but we will stick with the highlighted example. Imagine a group of international friends who want to agree on a time to watch a movie together. Each friend has their own preferred time, and they want to find a way to select a time that minimizes overall dissatisfaction. The question is how to choose the time based on the agents' reported preferences.

Let me now present how the selection process can be modeled. We have a cycle graph $G$ and a set of agents $N$.

Observing the graph and the agents, the social planner proposes a mechanism. A mechanism is a function that maps agents' votes to a lottery over the vertices of $G$, where a lottery means a probability distribution.

Each agent has their ideal vertex, which is also their type (depicted by a dot). The agents' types collectively constitute the state of the world.

Each agent has preferences over the vertices of $G$, determined by their cost; the cost incurred by an agent from a vertex is its distance from their ideal vertex.

Agents observe the mechanism and then submit their votes. For the sake of example, assume that they vote truthfully.

The mechanism selects a resulting lottery based solely on the agents' votes.

We depict the resulting lottery with clock hands—the longer the hand, the higher the probability of that vertex being selected.

Agents and the social planner evaluate the mechanism by looking at the resulting lottery.

As a side note, the final result of the election is sampled from the lottery returned by the mechanism and depicted as a yellow circle.

We are not interested in the random sampling aspect here.

Back to the main example and scoring.

Let's be nice to the agents and select the OPT mechanism, which for every situation returns a socially optimal outcome (according to the agents' votes).

Agents' costs are extended to lotteries with expected values. Social cost is the sum of individual costs. When designing mechanisms, our goal is to minimize social cost. Let's look at the costs incurred by individual agents in our example. We can observe that the red agent has a cost of 3, which is not ideal for them.
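
To make this concrete, here is a minimal sketch of how costs could be computed; the function names and the example numbers are illustrative (my own), not code from the paper. Vertices are numbered $0, \ldots, m-1$ around the cycle and a lottery is a dict mapping a vertex to its probability.

```python
# Illustrative sketch: agent costs and social cost on an m-vertex cycle.

def cycle_dist(u: int, v: int, m: int) -> int:
    """Shortest-path distance between vertices u and v on an m-cycle."""
    d = abs(u - v)
    return min(d, m - d)

def agent_cost(lottery: dict[int, float], ideal: int, m: int) -> float:
    """Expected distance from the agent's ideal vertex under the lottery."""
    return sum(p * cycle_dist(v, ideal, m) for v, p in lottery.items())

def social_cost(lottery: dict[int, float], ideals: list[int], m: int) -> float:
    """Sum of the agents' expected costs."""
    return sum(agent_cost(lottery, x, m) for x in ideals)

# Example: 8-vertex cycle, three agents, uniform lottery over their (distinct) votes.
m, ideals = 8, [0, 2, 5]
uniform = {x: 1 / len(ideals) for x in ideals}
print(social_cost(uniform, ideals, m))
```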

The red agent can misreport their vote (make it not point to their true ideal location).

Here we see the outcome the mechanism would return for a message profile that includes a misreported vote by the red agent.

After recalculating costs, we observe that the red agent can benefit from misreporting. This is an undesirable situation.

We call a mechanism strategyproof if no situation like the one presented before occurs—that is, regardless of others' votes, an agent cannot benefit from misreporting their vote. The term "in expectation" refers to how agents evaluate the mechanism in our model—using expected values.

From now on, we restrict ourselves to SP mechanisms only. However, this restriction comes with a cost: as shown in our example, a mechanism that always returns the socially optimal result is not SP in some settings. This leads to an interesting question, which we tackle for the cycle graph: what is the best possible strategyproof mechanism in such a setting?

To answer that question, we need a way to compare the quality of mechanisms. We use the well-established notion of approximation ratio. It is defined as the minimal constant $\phi$ such that, for all states of the world, the social cost of the mechanism's result is at most $\phi$ times worse than the optimal social cost. Equipped with this tool to compare mechanisms, we proceed to introduce some facility location mechanisms known from the literature.

Let's start with the random dictator mechanism. It returns a lottery where the probability of selecting a vertex is proportional to the number of votes for that vertex. Below are some examples depicting how RD works. You can notice that in the second example, a vertex chosen by two agents is selected twice as often (visually, it has a clock hand twice as long). This mechanism is strategyproof (SP), neutral, and anonymous, where neutrality means the mechanism doesn't distinguish between similar graph vertices, and anonymity means it doesn't distinguish between agents. Quite interestingly, despite its simplicity, for many settings, especially the cycle, the RD mechanism is the best performing mechanism known to date, with an approximation ratio bound of $2-2/n$.
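
As a quick illustration (a sketch under my own naming, not code from the paper), the RD lottery can be computed directly from the vote counts:

```python
# Sketch of the Random Dictator (RD) lottery: each vote is selected with equal
# probability, so a vertex's probability is proportional to its number of votes.
from collections import Counter

def rd_lottery(votes: list[int]) -> dict[int, float]:
    counts = Counter(votes)
    n = len(votes)
    return {v: c / n for v, c in counts.items()}

# Example: two of three agents vote for vertex 4, one for vertex 1.
print(rd_lottery([4, 4, 1]))   # {4: 2/3, 1: 1/3}
```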

Let's now introduce the PCD mechanism, proposed by Meir in 2019. It works for cycles and an odd number of agents. It selects each agent's vote with probability proportional to the length of the opposing arc, which for three agents corresponds to the arc between the other two agents. You can see an example of the PCD mechanism in action in the figure below. We won't delve further into its workings, as those details are not needed for our presentation. Nevertheless, to advertise this mechanism a bit more, I will mention that it can be seen as a generalization of the median mechanism for graphs with cycles. The PCD mechanism, like RD, is strategyproof, anonymous, and neutral. Moreover, it can be shown that it has an approximation ratio bound of 2.
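
For intuition only, here is a rough sketch of the three-agent case described above. The function name and structure are mine, it assumes three pairwise distinct votes on an $m$-vertex cycle, and it is not meant as the general definition from Meir (2019).

```python
# Sketch of PCD for three distinct votes on an m-cycle: each vote is chosen with
# probability proportional to the length of the arc between the other two votes
# that does not contain the voter.

def pcd3_lottery(votes: list[int], m: int) -> dict[int, float]:
    assert len(votes) == 3, "this sketch only covers three pairwise distinct votes"
    a, b, c = sorted(votes)
    arcs = {a: c - b,            # arc opposite the smallest vote
            b: m - (c - a),      # arc opposite the middle vote (wraps around)
            c: b - a}            # arc opposite the largest vote
    total = sum(arcs.values())   # equals m when the votes are distinct
    return {v: arcs[v] / total for v in votes}

# Example: 12-vertex cycle, votes at 0, 3 and 7.
print(pcd3_lottery([0, 3, 7], 12))   # {0: 4/12, 3: 5/12, 7: 3/12}
```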

So far, we've discussed known mechanisms. Now we move to new territory introduced in our paper. We start with a simple idea. Let's take the mechanisms we already have and mix them. More precisely, consider the RD+PCD mechanism, which for a given state of the world returns the average lottery of RD and PCD. Because costs for agents are defined via expected value, it is easy to show that this construction preserves strategyproofness. Moreover, it does not refer specifically to any vertices or agents, so it also preserves neutrality and anonymity.
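
A minimal sketch of the mixing step (illustrative names, assuming lotteries are represented as dicts as in the earlier sketches):

```python
# Mix two lotteries with equal weights (the RD+PCD construction). Since agents
# evaluate lotteries by expected cost, the cost under the mix is the average of
# the costs under the two component lotteries.

def mix(p: dict[int, float], q: dict[int, float]) -> dict[int, float]:
    support = set(p) | set(q)
    return {v: 0.5 * p.get(v, 0.0) + 0.5 * q.get(v, 0.0) for v in support}

# Example (12-vertex cycle, votes 0, 3, 7), reusing the sketches above:
# mix(rd_lottery([0, 3, 7]), pcd3_lottery([0, 3, 7], 12))
```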

Most importantly, mixing interacts well with the approximation ratio. Let me briefly explain why. First, let's establish a key observation: for every state of the world, the performance of the RD+PCD mechanism is the average of the performances of RD and PCD. Because of this, we can expect the approximation ratio bound for RD+PCD to be somewhere between 1.5 and 2, depending on whether the bad states of the world for RD and PCD overlap. Try to see it from the perspective of a single state. Consider a state $s$ for which RD performs badly, that is, almost twice as poorly as the optimal solution. Such a state exists due to the approximation ratio bound of 2 for RD. Depending on how PCD performs for $s$, the performance of RD+PCD is somewhere between $2$ and $1.5$. If PCD also performs poorly for $s$, then RD+PCD has an approximation ratio around $2$ for $s$ (upper row). But if $s$ is a good state for PCD, then RD+PCD has an approximation ratio around $1.5$ for $s$ (bottom row).
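
In symbols, using the notation introduced earlier and writing $OPT(x)$ for the optimal social cost in state $x$, the key observation follows from linearity of expectation:
\[
SC\bigl(f_{RD+PCD}(x)\bigr)
  = SC\Bigl(\tfrac{1}{2} f_{RD}(x) + \tfrac{1}{2} f_{PCD}(x)\Bigr)
  = \tfrac{1}{2}\, SC\bigl(f_{RD}(x)\bigr) + \tfrac{1}{2}\, SC\bigl(f_{PCD}(x)\bigr),
\]
and dividing by $OPT(x)$ gives
\[
\frac{SC\bigl(f_{RD+PCD}(x)\bigr)}{OPT(x)}
  = \frac{1}{2}\,\frac{SC\bigl(f_{RD}(x)\bigr)}{OPT(x)}
  + \frac{1}{2}\,\frac{SC\bigl(f_{PCD}(x)\bigr)}{OPT(x)}.
\]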

We ran experiments to see how the RD+PCD mechanism performs. They involved calculating the approximation ratio of the RD+PCD mechanism for various states of the world across different sets of agents and cycle graphs with varying numbers of vertices. In the presented plot, each color corresponds to a different cardinality of the agent set. The x-axis ranges over the number of vertices in the cycle graph. The approximation ratio of the RD+PCD mechanism for each set of states can be read from the y-axis. Note that some dots on the plot correspond to sets with over one million states of the world. The first immediate observation is that we have not found any state of the world where RD+PCD performs more than 1.5 times worse than the optimal solution. Moreover, a quick look at the value trends across the different colors and the x-axis suggests that the approximation ratio of RD+PCD is indeed bounded by 1.5. This is quite a finding, because, as previously argued, it requires the RD and PCD mechanisms to perfectly complement each other (have their bad states disjoint). The main technical contribution of our paper is proving the bound of 1.75 for the approximation ratio of the RD+PCD mechanism, which is halfway between the experimental bound of 1.5 and the trivial, worst-case bound of 2. This result positively answers the open question of whether there is any mechanism that beats the RD mechanism and the bound of 2.
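
The experiments can be reproduced in spirit by a brute-force search. The sketch below is my own illustration (not the paper's experiment code) and assumes the helper functions from the earlier sketches (cycle_dist, social_cost, rd_lottery, ...) are in scope.

```python
# Brute-force estimate of a mechanism's worst-case approximation ratio for
# n agents on an m-vertex cycle (assumes the earlier helper sketches are defined).
from itertools import combinations_with_replacement

def worst_case_ratio(mechanism, n: int, m: int) -> float:
    worst = 1.0
    # By anonymity it suffices to enumerate multisets of agent positions;
    # by neutrality one position could additionally be fixed to 0.
    for ideals in combinations_with_replacement(range(m), n):
        votes = list(ideals)                                   # truthful reports
        opt = min(social_cost({v: 1.0}, votes, m) for v in range(m))
        if opt == 0:                                           # all agents on one vertex
            continue
        worst = max(worst, social_cost(mechanism(votes, m), votes, m) / opt)
    return worst

# Example (RD only): worst_case_ratio(lambda votes, m: rd_lottery(votes), 3, 12)
```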

Here goes the main theorem. It concerns strategyproof mechanisms for facility location on a cycle. For any odd number of agents $n$, there exists a strategyproof mechanism with approximation ratio at most $\phi=1.75$.

Here is a table comparing our result with bounds for the approximation ratio of optimal strategyproof mechanisms from the literature. You can see that our result narrows the gap between the known upper and lower bounds by about 25%. Furthermore, our experimental results reduce the gap even further, achieving an approximation ratio bound of $1.5$.

Obvious directions for further research are: closing the gap between the experimental and proven bounds for the approximation ratio of RD+PCD, experimenting with different types of mixing, and generalizing the results to other graphs beyond cycles.

As a parting slide, I leave you with an overview of the proof for the 1.75 bound for RD+PCD. I won't delve into any details, and for further discussion, I invite you to join me in the poster session. Thank you for your attention!