Efficient approximately fair item allocation
When allocating objects among people with different preferences, two major goals are Pareto-efficiency and fairness. Since the objects are indivisible, a fair allocation may not exist. For example, when there is a single house and two people, every allocation of the house will be unfair to one person. Therefore, several common approximations have been studied, such as [[maximin-share]] fairness (MMS), [[envy-free-item-allocation|envy-freeness]] up to one item (EF1), [[proportional-division|proportionality]] up to one item (PROP1), and equitability up to one item (EQ1). The problem of efficient approximately fair item allocation is to find an allocation that is both Pareto-efficient (PE) and satisfies one of these fairness notions. The problem was first presented in 2016.
Finding a PE+EF1 allocation
The round-robin procedure returns an EF1 allocation for agents with additive valuations, and the envy-graph procedure returns an EF1 allocation for agents with arbitrary preference relations. Both are guaranteed to return an allocation with no envy-cycles. However, the allocation is not guaranteed to be Pareto-efficient. The Approximate-CEEI mechanism returns a partial EF1 allocation for arbitrary preference relations. It is PE w.r.t. the allocated objects, but not PE w.r.t. all objects (since some objects may remain unallocated). This raised the question of finding an allocation that is both PE and EF1.
Maximum Nash Welfare rule
Caragiannis, Kurokawa, Moulin, Procaccia, Shah and Wang were the first to prove the existence of a PE+EF1 allocation. They proved that, when all agents have positive additive utilities, every allocation that maximizes the product of utilities (also known as the Nash welfare) is EF1. Since the maximizing allocation is obviously PE, the existence of a PE+EF1 allocation follows.
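As a toy illustration of this result, the following sketch brute-forces a max-product (Nash-welfare-maximizing) allocation on a small hypothetical instance and verifies that it is EF1. The function names and the valuation matrix are illustrative, not taken from the original work:

```python
from itertools import product

def mnw_allocation(valuations):
    """Brute-force an allocation maximizing the product of utilities,
    assuming positive additive valuations. valuations[i][o] is the
    value of agent i for object o; returns assign[o] = owner of o."""
    n, m = len(valuations), len(valuations[0])
    best, best_prod = None, -1.0
    for assign in product(range(n), repeat=m):
        utils = [sum(valuations[i][o] for o in range(m) if assign[o] == i)
                 for i in range(n)]
        p = 1.0
        for u in utils:
            p *= u
        if p > best_prod:
            best_prod, best = p, assign
    return best

def is_ef1(valuations, assign):
    """Check envy-freeness up to one item (EF1)."""
    n, m = len(valuations), len(valuations[0])
    bundles = [[o for o in range(m) if assign[o] == i] for i in range(n)]
    for i in range(n):
        own = sum(valuations[i][o] for o in bundles[i])
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            # removing the item of j that i values most must remove envy
            vij = sum(valuations[i][o] for o in bundles[j])
            if own < vij - max(valuations[i][o] for o in bundles[j]):
                return False
    return True

vals = [[3, 1, 2, 5], [1, 4, 4, 1]]   # hypothetical additive valuations
alloc = mnw_allocation(vals)
print(is_ef1(vals, alloc))            # True: every MNW allocation is EF1
```

Brute force is only feasible for tiny instances (it enumerates all n^m allocations), which is consistent with the hardness results below.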
While the max-product allocation has desirable properties, it cannot be computed in polynomial time: finding a max-product allocation is NP-hard and even APX-hard. This led to various works attempting to approximate the maximum product, with improving approximation factors:
- Cole and Gkatzelis presented a 2.89-factor approximation.
- Anari, Gharan, Saberi and Singh presented an e-factor approximation.
- Cole, Devanur, Gkatzelis, Jain, Mai, Vazirani and Yazdanbod presented a 2-factor approximation.
However, these approximations do not guarantee EF1. Some more recent algorithms guarantee both approximate max-product and fairness:
- Barman, Krishnamurthy and Vaish present an algorithm that guarantees PE, PROP1, and a 2-approximation to the max product, in strongly polynomial time.
- Caragiannis, Gravin and Huang present an algorithm that guarantees EFX, PROP1, and a 2.9-approximation to the max product, by discarding some goods (they also show existence of a 2-approximate allocation).
The max-product solution is particularly appealing when the valuations are binary (the value of each item is either 0 or 1):
- Amanatidis, Birmpas, Filos-Ratsikas, Hollender and Voudouris prove that with binary valuations the max-product solution is not only EF1 but also EFX (envy-free up to any good). This holds whenever the value of each item can take one of two values - not necessarily 0 or 1. With general additive valuations, max-product does not imply EFX but implies a natural generalization of it.
- Halpern, Procaccia, Psomas and Shah prove that with binary valuations the max-product rule with lexicographic tie-breaking can be computed in polynomial time, and it is also group-strategyproof.
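The binary-valuation claim above can be checked by brute force on a small instance. This sketch uses a hypothetical binary valuation matrix, and checks EFX in the common form where only positively-valued items of the envied bundle are removed:

```python
from itertools import product

def best_nash(valuations):
    """Exhaustively find an allocation maximizing the product of utilities."""
    n, m = len(valuations), len(valuations[0])
    best, best_prod = None, -1
    for assign in product(range(n), repeat=m):
        utils = [sum(valuations[i][o] for o in range(m) if assign[o] == i)
                 for i in range(n)]
        p = 1
        for u in utils:
            p *= u
        if p > best_prod:
            best_prod, best = p, assign
    return best

def is_efx(valuations, assign):
    """EFX: envy vanishes after removing ANY positively-valued item of X_j."""
    n, m = len(valuations), len(valuations[0])
    bundles = [[o for o in range(m) if assign[o] == i] for i in range(n)]
    for i in range(n):
        own = sum(valuations[i][o] for o in bundles[i])
        for j in range(n):
            if i == j:
                continue
            for y in bundles[j]:
                if valuations[i][y] > 0:
                    rest = sum(valuations[i][o] for o in bundles[j] if o != y)
                    if own < rest:
                        return False
    return True

vals = [[1, 1, 0, 1], [0, 1, 1, 1]]   # binary valuations
print(is_efx(vals, best_nash(vals)))  # True, as the theorem predicts
```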
Non-additive valuations
If the agents' utilities are not additive, the max-product solution is not necessarily EF1; but if the agents' utilities are at least submodular, the max-product solution satisfies a weaker property called Marginal-Envy-Freeness except-1-item (MEF1): each agent i values his bundle at least as much as the marginal utility of adding the bundle of j, with its best item removed, to his own bundle. Formally: <math>\forall i,j: ~~~ \exists Y\subseteq X_j: ~~~|Y|\leq 1, ~~~V_i(X_i) \geq V_i(X_i \cup (X_j\setminus Y)) - V_i(X_i)</math>
Similar approximations have been found for more general utility functions:
- Bei, Garg, Hoefer and Mehlhorn and Anari, Mai, Gharan and Vazirani study markets with multiple units of each item-kind, where the valuations are separable piecewise-linear concave (SPLC). This means that the utility of a bundle is the sum of utilities for each single item-kind (this is the meaning of "separable"), but for each item-kind, the valuation has decreasing marginal utilities (this is the meaning of "piecewise-linear concave"). They give a 2-approximation to the max-product.
- Ortega studies a multi-unit market with binary valuations. He proves that the egalitarian-rule is Lorenz dominant (a property stronger than leximin-optimality), unique in utilities, and group-strategyproof.
- Garg, Hoefer and Mehlhorn study budget-additive valuations - a subclass of submodular utilities. They give a (2.404 + ε)-approximation to the max-product in time polynomial in the input size and 1/ε.
- Benabbou, Chakraborty, Igarashi and Zick study submodular utilities with binary marginal gains (i.e., each item adds either 0 or 1 to the value of each bundle). They prove that, with such valuations, both the max-product and the leximin allocations are EF1 and maximize the utilitarian welfare (sum of utilities).
- Babaioff, Ezra and Feige also study submodular utilities with binary ("dichotomous") marginal gains. They present a deterministic truthful mechanism that outputs a Lorenz dominant allocation, which is hence EFX and max-product.
Mixed valuations
Aleksandrov and Walsh show that, with "mixed manna" (additive valuations that may be both positive and negative), maximizing the product of utilities (or minimizing the product of minus the utilities) does not ensure EF1. They also prove that an EFX3 allocation may not exist even with identical utilities. However, with tertiary utilities, EFX and PO allocations, or EFX3 and PO allocations, always exist; and with identical utilities, EFX and PO allocations always exist. For these cases there are polynomial-time algorithms.
Increasing price algorithm
Barman, Krishnamurthy and Vaish presented a pseudo-polynomial time algorithm for finding PE+EF1 allocations for positive additive valuations. They proved the following results.
- The algorithm finds a PE+EF1 allocation in time <math>O(\mathrm{poly}(m, n, v_{\max}))</math>, where m is the number of objects, n is the number of agents, and <math>v_{\max}</math> is the largest value of an item (all valuations are integers).
- The same algorithm provides a 1.45 approximation to the maximum Nash welfare.
- The analysis of the algorithm also proves the existence of an allocation that is both EF1 and fractionally Pareto-optimal.
Basic concepts
Their algorithm is based on the notion of competitive equilibrium in a Fisher market. It uses the following concepts.
- **Approximate EF1 allocation**: Given a constant *e* > 0, an allocation is *e*-EF1 if it satisfies the EF1 condition up to a multiplicative constant of (1+*e*). Formally:
- <math>\forall i,j: ~~~ \exists Y\subseteq X_j: ~~~|Y|\leq 1, ~~~(1+e)\cdot V_i(X_i) \geq V_i(X_j\setminus Y)</math>
- **Price-vector**: a vector assigning a price (a real number) to each item.
- **Bang-for-buck ratio**: for an agent *i* and an object *o*, the ratio of the agent's value for the object to the object's price: <math>v_{i,o}/p_o</math>.
- **Maximum bang-for-buck (MBB) set**: for an agent *i*, the set of objects maximizing his bang-for-buck ratio (given a price-vector *p*).
- **MBB allocation**: an allocation in which each agent receives only objects from his MBB set. In an MBB allocation, each agent maximizes his utility given his budget (but MBB allocation is a stronger condition). The first welfare theorem implies that an MBB allocation is fractionally Pareto optimal.
- **Price-envy-free (pEF) allocation**: an allocation X in which, for every two agents *i*, *j*, the price of *Xi* (called the "income" of *i*) is at least as large as the price of *Xj*. This implies that all incomes are identical. Moreover, an allocation that is both MBB and pEF is envy-free, since each agent maximizes his utility given his income, and all other agents have the same income.
- **Price-envy-free-except-one (pEF1) allocation**: an allocation in which, for every two agents *i*, *j*, the price p(*Xi*) is at least as large as the price of *Xj* without its most expensive item. This does *not* imply that the incomes are identical. However, an allocation that is both MBB and pEF1 is also EF1.
- The initial allocation is an MBB allocation (for a suitable initial price-vector), and all the algorithm's operations maintain this condition. Therefore, the returned allocation is MBB, so it is also fPO.
- By the termination conditions, whenever the algorithm terminates, the returned allocation is 3e-pEF1, so it is also 3e-EF1.
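The price-based notions above can be made concrete with a small sketch. The valuation matrix and price-vector below are hypothetical, and the function names are illustrative:

```python
def mbb_sets(valuations, prices):
    """Maximum bang-for-buck set of each agent: the objects that
    maximize the ratio value/price under the given price-vector."""
    n, m = len(valuations), len(prices)
    sets = []
    for i in range(n):
        ratios = [valuations[i][o] / prices[o] for o in range(m)]
        best = max(ratios)
        sets.append({o for o in range(m) if abs(ratios[o] - best) < 1e-9})
    return sets

def is_pef1(bundles, prices):
    """pEF1: p(X_i) >= p(X_j) minus the most expensive item of X_j,
    for every pair of agents i, j."""
    incomes = [sum(prices[o] for o in b) for b in bundles]
    for i, inc_i in enumerate(incomes):
        for j, b in enumerate(bundles):
            if i == j or not b:
                continue
            if inc_i < incomes[j] - max(prices[o] for o in b):
                return False
    return True

# hypothetical market: 2 agents, 3 objects
vals = [[4, 2, 3], [1, 4, 2]]
prices = [4.0, 4.0, 3.0]
print(mbb_sets(vals, prices))   # agent 0: objects 0 and 2; agent 1: object 1
# the allocation X0={0,2}, X1={1} is MBB, and it is also pEF1:
print(is_pef1([[0, 2], [1]], prices))
```

By the properties quoted above, an allocation that passes both checks (MBB and pEF1) is EF1 and fractionally Pareto-optimal.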
Now, assume that we have an instance with general valuations. We run the above algorithm on a rounded instance, where each valuation is rounded upward to the nearest power of (1+e). Note that for each agent i and object o, the rounded value is bounded between Vi(o) and (1+e)·Vi(o).
- By the previous paragraph, the resulting allocation is fPO and 3e-EF1 with respect to the rounded instance.
- For every sufficiently small e (in particular, <math>e < 1/(6 m^3 v_{\max}^4)</math>), an allocation that is fPO for the rounded instance is PO for the original instance.
Efficient approximately proportional allocation
The PROP1 condition was first studied by Conitzer, Freeman and Shah in the context of fair public decision making. They proved that, in this case, a PE+PROP1 allocation always exists.
Since every EF1 allocation is PROP1, a PE+PROP1 allocation exists in indivisible item allocation too; the question is whether such allocations can be found by faster algorithms than the ones for PE+EF1.
- **Barman and Krishnamurthy** presented an algorithm finding a PE+PROP1 allocation for *goods* (objects with positive utility).
- **Branzei and Sandomirskiy** extended the condition of PROP1 to *chores* (objects with negative utility). Formally, for all *i*:
- <math>\exists Y\subseteq X_i: ~~~|Y|\leq 1, ~~~V_i(X_i \setminus Y) \geq V_i(M)/n</math>.
They presented an algorithm finding a PE+PROP1 allocation of chores. The algorithm is strongly polynomial-time if either the number of objects or the number of agents (or both) are fixed.
- **Aziz, Caragiannis, Igarashi and Walsh** extended the condition of PROP1 to *mixed valuations* (objects can have both positive and negative utilities). In this setting, an allocation is called PROP1 if, for each agent *i*, if we remove one negative item from i's bundle, or add one positive item to i's bundle, then i's utility is at least 1/*n* of the total. Their Generalized Adjusted Winner algorithm finds a PE+EF1 allocation for two agents; such an allocation is also PROP1.
- **Aziz, Moulin and Sandomirskiy** presented a strongly polynomial-time algorithm for finding an allocation that is fractionally PE (stronger than PE) and PROP1, with general mixed valuations, even if the number of agents or objects is not fixed, and even if the agents have different entitlements.
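The PROP1 condition for chores defined above can be checked directly: removing the worst chore in an agent's own bundle is the best possible single removal. A sketch with hypothetical valuations (all values non-positive):

```python
def is_prop1_chores(valuations, bundles):
    """PROP1 for chores: for each agent i there is a set Y of at most
    one chore in i's own bundle with V_i(X_i \\ Y) >= V_i(M)/n.
    valuations[i][o] <= 0; bundles[i] lists the chores of agent i."""
    n = len(valuations)
    totals = [sum(valuations[i]) for i in range(n)]   # V_i(M)
    for i, b in enumerate(bundles):
        v_own = sum(valuations[i][o] for o in b)
        # removing the chore i dislikes most raises v_own the most;
        # default=0 covers an empty bundle (nothing to remove)
        best = v_own - min((valuations[i][o] for o in b), default=0)
        if best < totals[i] / n:
            return False
    return True

# hypothetical instance: 2 agents, 3 chores
vals = [[-4, -1, -1], [-2, -3, -1]]
print(is_prop1_chores(vals, [[0], [1, 2]]))
```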
Efficient approximately equitable allocation
An allocation of objects is called equitable (EQ) if the subjective value of all agents is the same. The motivation for studying this notion comes from experiments showing that human subjects prefer equitable allocations to envy-free ones. An allocation is called equitable up to one item (EQ1) if, for every two agents i and j, it is possible to remove at most one item from the bundle of j such that the subjective value of i is at least that of j. Formally, for all i, j:
<math>\exists Y\subseteq X_j: ~~~|Y|\leq 1, ~~~V_i(X_i) \geq V_j(X_j\setminus Y)</math>.
A stronger notion is equitable up to any item (EQx): for every two agents i and j, if any single item is removed from the bundle of j, then the subjective value of i is at least that of j: <math>\forall y\in X_j: ~~~V_i(X_i) \geq V_j(X_j\setminus \{y\})</math>.
EQx allocations were first studied by Gourves, Monnot and Tlilane, who used a different term: "near jealousy-free". They proved that a partial EQx allocation always exists, even with the additional requirement that the union of all allocated goods is a basis of a given matroid. They used an algorithm similar to the envy-graph procedure. Suksompong proved that an EQ1 allocation exists even with the additional requirement that all bundles must be contiguous subsets of a line.
- **Freeman, Sikdar, Vaish and Xia** proved the following stronger results:
- When all valuations are strictly positive, a PE+EQx allocation always exists, and there is a pseudopolynomial time algorithm that finds a PE+EQ1 allocation.
- When some valuations may be zero, even a PE+EQ1 allocation may not exist, and deciding whether a PE+EQ/EQ1/EQx exists is strongly NP-hard.
- A PE+EQ1+EF1 allocation may not exist. Deciding whether it exists is strongly NP-hard in general, but polynomial-time solvable with binary (0 or 1) valuations.
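The EQ1 and EQx conditions can be checked directly from their definitions: EQ1 needs only the best single removal from the envied bundle, while EQx must survive the worst one. A sketch with hypothetical valuations and function names:

```python
def is_eq1(valuations, bundles):
    """EQ1: for every pair i, j, agent i's value for his own bundle is
    at least V_j(X_j) minus j's most valuable item in X_j."""
    n = len(bundles)
    own = [sum(valuations[i][o] for o in bundles[i]) for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            if own[i] < own[j] - max(valuations[j][o] for o in bundles[j]):
                return False
    return True

def is_eqx(valuations, bundles):
    """EQx: the same must hold after removing ANY single item of X_j,
    i.e., even j's least valuable item."""
    n = len(bundles)
    own = [sum(valuations[i][o] for o in bundles[i]) for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            if own[i] < own[j] - min(valuations[j][o] for o in bundles[j]):
                return False
    return True

vals = [[5, 1, 1], [2, 3, 3]]       # hypothetical valuations
print(is_eq1(vals, [[0], [1, 2]]))  # subjective values 5 vs 6: EQ1 holds
```

Note that, unlike EF1, the comparison uses each agent's value for his *own* bundle (j's valuation of X_j, not i's).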
Algorithms for a small number of agents
Bredereck, Kaczmarczyk, Knop and Niedermeier study a setting where there are few agents (small n) and few item-types (small m), the utility per item-type is upper-bounded (by <math>u_{\max}</math>), but there can be many items of each type. For this setting, they prove the following meta-theorem (Theorem 2): given an efficiency criterion E and a fairness criterion F, if <math>n+u_{\max}+m</math> is fixed, then it is possible to decide in polynomial time whether there exists an allocation that is both E-efficient and F-fair, as long as E and F satisfy the following properties:
- Fairness: The question of whether an F-fair allocation exists can be modeled by an integer linear program with a fixed dimension.
- Efficiency: The question of whether there exists an allocation that E-dominates another given allocation can be modeled by an integer program whose dual tree-depth, and the absolute value of the largest coefficient, are upper-bounded by some function <math>f(n,u_{\max})</math>.
Then, they prove that several common fairness and efficiency criteria satisfy these properties, including:
- Fairness notions: envy-freeness, EF1, EFx, graph-EF (where agents envy only their neighbors on a fixed graph), egalitarian, maximin-share, and average-group-envy-freeness (where each group of agents values its share as the arithmetic mean of the members' utilities).
- Efficiency notions: Pareto-efficiency, graph Pareto-efficiency (where Pareto-domination considers only exchanges between neighbors on a fixed graph), and group-Pareto-efficiency. An allocation X is called k-group-Pareto-efficient (GPEk) if there is no other allocation Y that is at least as good (by arithmetic mean of utilities) for all groups of size k, and strictly better for at least one group of size k. Note that GPE1 is equivalent to Pareto-efficiency. GPEn is equivalent to a utilitarian-maximal allocation, since for the grand group of size n, the arithmetic-mean utility is equivalent to the sum of all agents' utilities. For all k, GPEk+1 implies GPEk.
The runtime of their algorithm is polynomial in the input size (in bits) times <math>d^{2.5 d}</math>, where *d* is the number of variables in the resulting ILP: <math>d = m(n+1) + m(n+1)\cdot(4\cdot (4n\cdot u_{\max}+1)^n)^{m(n+1)}</math>.