forked from Deuxfleurs/garage
New version of the algorithm that calculates the layout.
It takes as parameters the replication factor and the zone redundancy, computes the largest partition size reachable under these constraints, and, among the possible assignations with this partition size, computes the one that moves the smallest number of partitions compared to the previous assignation. This computation uses graph algorithms defined in graph_algo.rs
This commit is contained in: parent c4adbeed51, commit 7f3249a237
9 changed files with 918 additions and 690 deletions
@ -100,13 +100,12 @@ Again, we will represent an assignment $\alpha$ as a flow in a specific graph $G

Given some candidate size value $s$, we describe the oriented weighted graph $G=(V,E)$ with vertex set $V$ and arc set $E$.

The set of vertices $V$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices
$\mathbf{p^+, p^-}$ for every partition $p$, vertices $\mathbf{x}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{n}$ for every node $n$.

The set of arcs $E$ contains:
\begin{itemize}
	\item ($\mathbf{s}$,$\mathbf{p}^+$, $\rho_\mathbf{Z}$) for every partition $p$;
	\item ($\mathbf{s}$,$\mathbf{p}^-$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$;
	\item ($\mathbf{p}^+$,$\mathbf{x}_{p,z}$, 1) for every partition $p$ and zone $z$;
	\item ($\mathbf{p}^-$,$\mathbf{x}_{p,z}$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$ and zone $z$;
	\item ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) for every partition $p$, zone $z$ and node $n\in z$;
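For instance, with $\rho_\mathbf{N}=3$ and $\rho_\mathbf{Z}=2$: every partition $p$ can route $2$ units of flow through $\mathbf{p^+}$, which must reach two distinct zones since the arcs ($\mathbf{p}^+$,$\mathbf{x}_{p,z}$) have capacity 1, while the remaining $\rho_\mathbf{N}-\rho_\mathbf{Z}=1$ unit goes through $\mathbf{p^-}$ to any zone, possibly one already used.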
@ -119,7 +118,7 @@ In the following complexity calculations, we will use the number of vertices and

An assignment $\alpha$ is realizable with partition size $s$ and the redundancy constraints $(\rho_\mathbf{N},\rho_\mathbf{Z})$ if and only if there exists a maximal flow function $f$ in $G$ with total flow $\rho_\mathbf{N}P$, such that the arcs ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) used are exactly those for which $p$ is associated to $n$ in $\alpha$.
\end{proposition}
\begin{proof}
Given such a flow $f$, we can reconstruct a candidate $\alpha$. In $f$, the flow passing through $\mathbf{p^+}$ and $\mathbf{p^-}$ is $\rho_\mathbf{N}$, and since the outgoing capacity of every $\mathbf{x}_{p,z}$ is 1, every partition is associated to $\rho_\mathbf{N}$ distinct nodes. The fraction $\rho_\mathbf{Z}$ of the flow passing through every $\mathbf{p^+}$ must be spread over $\rho_\mathbf{Z}$ distinct zones, since every arc outgoing from $\mathbf{p^+}$ has capacity 1. So the reconstructed $\alpha$ verifies the redundancy constraints. For every node $n$, the flow between $\mathbf{n}$ and $\mathbf{t}$ corresponds to the number of partitions associated to $n$. By construction of $f$, this does not exceed $\lfloor c_n/s \rfloor$. We assumed that the partition size is $s$, hence this association does not exceed the storage capacity of the nodes.

In the other direction, given an assignment $\alpha$, one can similarly check that the fact that $\alpha$ respects the redundancy constraints and the storage capacities of the nodes is a necessary condition to construct a maximal flow function $f$.
\end{proof}
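To make the check concrete (illustrative figures): with $P=256$ partitions and $\rho_\mathbf{N}=3$, a candidate size $s$ is realizable exactly when the maximal flow in $G$ reaches $\rho_\mathbf{N}P = 768$. The largest realizable $s$ can then be found by a dichotomy on $s$, which is what \texttt{compute\_optimal\_partition\_size} does in \texttt{layout.rs} below.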
@ -272,16 +271,16 @@ The distance $d(f,f')$ is bounded by the maximal number of differences in the as

The detection of negative cycles is done with the Bellman-Ford algorithm, whose complexity is normally $O(\#E\#V)$. In our case, it amounts to $O(P^2ZN)$. Multiplied by the complexity of the outer loop, this amounts to $O(P^3ZN)$, which is a lot when the number of partitions and nodes starts to be large. To avoid that, we adapt the Bellman-Ford algorithm.
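As a quick check of this bound, derived from the graph construction above: $\#V = 2 + 2P + PZ + N = O(PZ)$, and since every node belongs to exactly one zone, $\#E = P(2 + 2Z + N) + N = O(PN)$ (assuming $N \ge Z$); the product gives the $O(\#E\#V) = O(P^2ZN)$ bound used here.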
The Bellman-Ford algorithm runs $\#V$ iterations of an outer loop, and an inner loop over $E$. The idea is to compute the shortest paths from a source vertex $v$ to all other vertices. After $k$ iterations of the outer loop, the algorithm has computed all shortest paths of length at most $k$. All simple paths have length at most $\#V-1$, so if there is an update in the last iteration of the loop, it means that there is a negative cycle in the graph. The observation that will enable us to improve the complexity is the following:

\begin{proposition}
In the graph $G_f$ (and $G$), all simple paths have a length at most $4N$.
\end{proposition}
\begin{proof}
Since $f$ is a maximal flow, there is no outgoing edge from $\mathbf{s}$ in $G_f$. One can thus check that any simple path of length 4 must contain at least two nodes of type $\mathbf{n}$. Hence on a path, at most 4 arcs separate two successive nodes of type $\mathbf{n}$.
\end{proof}

Thus, in the absence of negative cycles, shortest paths in $G_f$ have length at most $4N$. So we can run only $4N+1$ iterations of the outer loop of the Bellman-Ford algorithm. This makes the complexity of the detection of one set of cycles $O(N\#E) = O(N^2 P)$.

With this improvement, the complexity of the whole algorithm is, in the worst case, $O(N^2P^2)$. However, since we detect several cycles at once and we start with a flow that might be close to the previous one, the number of iterations of the outer loop might be smaller in practice.
src/rpc/graph_algo.rs (new file, 440 lines)
@ -0,0 +1,440 @@
//! This module deals with graph algorithms.
//! It is used in layout.rs to build the partition to node assignation.

use rand::prelude::SliceRandom;
use std::cmp::{max, min};
use std::collections::VecDeque;
use std::collections::HashMap;

//Vertex data structures used in all the graphs used in layout.rs.
//usize parameters correspond to node/zone/partition ids.
//To understand the vertex roles below, please refer to the formal description
//of the layout computation algorithm.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub enum Vertex {
	Source,
	Pup(usize),       //The vertex p+ of partition p
	Pdown(usize),     //The vertex p- of partition p
	PZ(usize, usize), //The vertex corresponding to x_(partition p, zone z)
	N(usize),         //The vertex corresponding to node n
	Sink,
}

//Edge data structure for the flow algorithm.
//The graph is stored as an adjacency list.
#[derive(Clone, Copy, Debug)]
pub struct FlowEdge {
	cap: u32,    //flow maximal capacity of the edge
	flow: i32,   //flow value on the edge
	dest: usize, //destination vertex id
	rev: usize,  //index of the reversed edge (v, self) in the edge list of vertex v
}

//Edge data structure for the detection of negative cycles.
//The graph is stored as a list of edges (u,v).
#[derive(Clone, Copy, Debug)]
pub struct WeightedEdge {
	w: i32, //weight of the edge
	dest: usize,
}

pub trait Edge: Clone + Copy {}
impl Edge for FlowEdge {}
impl Edge for WeightedEdge {}

//Struct for the graph structure. We do encapsulation here to be able to both
//provide a user-friendly Vertex enum to address vertices, and to use usize indices
//and Vec instead of HashMap in the graph algorithms to optimize execution speed.
pub struct Graph<E: Edge> {
	vertextoid: HashMap<Vertex, usize>,
	idtovertex: Vec<Vertex>,

	graph: Vec<Vec<E>>,
}

pub type CostFunction = HashMap<(Vertex, Vertex), i32>;

impl<E: Edge> Graph<E> {
	pub fn new(vertices: &[Vertex]) -> Self {
		let mut map = HashMap::<Vertex, usize>::new();
		for (i, vert) in vertices.iter().enumerate() {
			map.insert(*vert, i);
		}
		Graph::<E> {
			vertextoid: map,
			idtovertex: vertices.to_vec(),
			graph: vec![Vec::<E>::new(); vertices.len()],
		}
	}
}

impl Graph<FlowEdge> {
	//This function adds a directed edge to the graph with capacity c, and the
	//corresponding reversed edge with capacity 0.
	pub fn add_edge(&mut self, u: Vertex, v: Vertex, c: u32) -> Result<(), String> {
		if !self.vertextoid.contains_key(&u) || !self.vertextoid.contains_key(&v) {
			return Err("The graph does not contain the provided vertex.".to_string());
		}
		let idu = self.vertextoid[&u];
		let idv = self.vertextoid[&v];
		let rev_u = self.graph[idu].len();
		let rev_v = self.graph[idv].len();
		self.graph[idu].push(FlowEdge { cap: c, dest: idv, flow: 0, rev: rev_v });
		self.graph[idv].push(FlowEdge { cap: 0, dest: idu, flow: 0, rev: rev_u });
		Ok(())
	}

	//This function returns the list of vertices that receive a positive flow from
	//vertex v.
	pub fn get_positive_flow_from(&self, v: Vertex) -> Result<Vec<Vertex>, String> {
		if !self.vertextoid.contains_key(&v) {
			return Err("The graph does not contain the provided vertex.".to_string());
		}
		let idv = self.vertextoid[&v];
		let mut result = Vec::<Vertex>::new();
		for edge in self.graph[idv].iter() {
			if edge.flow > 0 {
				result.push(self.idtovertex[edge.dest]);
			}
		}
		return Ok(result);
	}

	//This function returns the value of the flow incoming to v.
	pub fn get_inflow(&self, v: Vertex) -> Result<i32, String> {
		if !self.vertextoid.contains_key(&v) {
			return Err("The graph does not contain the provided vertex.".to_string());
		}
		let idv = self.vertextoid[&v];
		let mut result = 0;
		for edge in self.graph[idv].iter() {
			result += max(0, self.graph[edge.dest][edge.rev].flow);
		}
		return Ok(result);
	}

	//This function returns the value of the flow outgoing from v.
	pub fn get_outflow(&self, v: Vertex) -> Result<i32, String> {
		if !self.vertextoid.contains_key(&v) {
			return Err("The graph does not contain the provided vertex.".to_string());
		}
		let idv = self.vertextoid[&v];
		let mut result = 0;
		for edge in self.graph[idv].iter() {
			result += max(0, edge.flow);
		}
		return Ok(result);
	}

	//This function computes the flow total value by computing the outgoing flow
	//from the source.
	pub fn get_flow_value(&mut self) -> Result<i32, String> {
		return self.get_outflow(Vertex::Source);
	}

	//This function shuffles the order of the edge lists. It keeps the ids of the
	//reversed edges consistent.
	fn shuffle_edges(&mut self) {
		let mut rng = rand::thread_rng();
		for i in 0..self.graph.len() {
			self.graph[i].shuffle(&mut rng);
			//We need to update the ids of the reverse edges.
			for j in 0..self.graph[i].len() {
				let target_v = self.graph[i][j].dest;
				let target_rev = self.graph[i][j].rev;
				self.graph[target_v][target_rev].rev = j;
			}
		}
	}

	//Computes an upper bound of the flow in the graph.
	pub fn flow_upper_bound(&self) -> u32 {
		let idsource = self.vertextoid[&Vertex::Source];
		let mut flow_upper_bound = 0;
		for edge in self.graph[idsource].iter() {
			flow_upper_bound += edge.cap;
		}
		return flow_upper_bound;
	}

	//This function computes the maximal flow using Dinic's algorithm. It starts with
	//the flow values already present in the graph. So it is possible to add some edges to
	//the graph, compute a flow, add other edges, and update the flow.
	pub fn compute_maximal_flow(&mut self) -> Result<(), String> {
		if !self.vertextoid.contains_key(&Vertex::Source) {
			return Err("The graph does not contain a source.".to_string());
		}
		if !self.vertextoid.contains_key(&Vertex::Sink) {
			return Err("The graph does not contain a sink.".to_string());
		}

		let idsource = self.vertextoid[&Vertex::Source];
		let idsink = self.vertextoid[&Vertex::Sink];

		let nb_vertices = self.graph.len();

		let flow_upper_bound = self.flow_upper_bound();

		//To ensure the dispersion of the associations generated by the
		//assignation, we shuffle the neighbours of the nodes. Hence,
		//the vertices do not consider their neighbours in the same order.
		self.shuffle_edges();

		//We run Dinic's max flow algorithm
		loop {
			//We build the level array from Dinic's algorithm.
			let mut level = vec![None; nb_vertices];

			let mut fifo = VecDeque::new();
			fifo.push_back((idsource, 0));
			while !fifo.is_empty() {
				if let Some((id, lvl)) = fifo.pop_front() {
					if level[id] == None {
						//it means id has not yet been reached
						level[id] = Some(lvl);
						for edge in self.graph[id].iter() {
							if edge.cap as i32 - edge.flow > 0 {
								fifo.push_back((edge.dest, lvl + 1));
							}
						}
					}
				}
			}
			if level[idsink] == None {
				//There is no residual flow
				break;
			}

			//Now we run DFS respecting the level array
			let mut next_nbd = vec![0; nb_vertices];
			let mut lifo = VecDeque::new();

			lifo.push_back((idsource, flow_upper_bound));

			while let Some((id_tmp, f_tmp)) = lifo.back() {
				let id = *id_tmp;
				let f = *f_tmp;
				if id == idsink {
					//The DFS reached the sink, we can add a
					//residual flow.
					lifo.pop_back();
					while !lifo.is_empty() {
						if let Some((id, _)) = lifo.pop_back() {
							let nbd = next_nbd[id];
							self.graph[id][nbd].flow += f as i32;
							let id_rev = self.graph[id][nbd].dest;
							let nbd_rev = self.graph[id][nbd].rev;
							self.graph[id_rev][nbd_rev].flow -= f as i32;
						}
					}
					lifo.push_back((idsource, flow_upper_bound));
					continue;
				}
				//else we did not reach the sink
				let nbd = next_nbd[id];
				if nbd >= self.graph[id].len() {
					//There is nothing to explore from id anymore
					lifo.pop_back();
					if let Some((parent, _)) = lifo.back() {
						next_nbd[*parent] += 1;
					}
					continue;
				}
				//else we can try to send flow from id to its nbd
				//(residual capacity is computed in i32 to avoid u32 underflow
				//when the edge carries a negative flow)
				let new_flow = min(f, (self.graph[id][nbd].cap as i32 - self.graph[id][nbd].flow) as u32);
				if let (Some(lvldest), Some(lvlid)) =
					(level[self.graph[id][nbd].dest], level[id])
				{
					if lvldest <= lvlid || new_flow == 0 {
						//We cannot send flow to nbd.
						next_nbd[id] += 1;
						continue;
					}
				}
				//otherwise, we send flow to nbd.
				lifo.push_back((self.graph[id][nbd].dest, new_flow));
			}
		}
		Ok(())
	}

	//This function takes a flow, and a cost function on the edges, and tries to find an
	//equivalent flow with a better cost, by finding improving overflow cycles. It uses
	//as a subroutine the Bellman-Ford algorithm run up to path_length.
	//We assume that the cost of edge (u,v) is the opposite of the cost of (v,u), and
	//only one needs to be present in the cost function.
	pub fn optimize_flow_with_cost(&mut self, cost: &CostFunction, path_length: usize)
		-> Result<(), String>
	{
		//We build the weighted graph g where we will look for negative cycles
		let mut gf = self.build_cost_graph(cost)?;
		let mut cycles = gf.list_negative_cycles(path_length);
		while cycles.len() > 0 {
			//we enumerate negative cycles
			for c in cycles.iter() {
				for i in 0..c.len() {
					//We add one flow unit to the edge (u,v) of cycle c
					let idu = self.vertextoid[&c[i]];
					let idv = self.vertextoid[&c[(i + 1) % c.len()]];
					for j in 0..self.graph[idu].len() {
						//since idu appears at most once in the cycles, we enumerate every
						//edge at most once.
						let edge = self.graph[idu][j];
						if edge.dest == idv {
							self.graph[idu][j].flow += 1;
							self.graph[idv][edge.rev].flow -= 1;
							break;
						}
					}
				}
			}

			gf = self.build_cost_graph(cost)?;
			cycles = gf.list_negative_cycles(path_length);
		}
		return Ok(());
	}

	//Construct the weighted graph G_f from the flow and the cost function
	fn build_cost_graph(&self, cost: &CostFunction) -> Result<Graph<WeightedEdge>, String> {
		let mut g = Graph::<WeightedEdge>::new(&self.idtovertex);
		let nb_vertices = self.idtovertex.len();
		for i in 0..nb_vertices {
			for edge in self.graph[i].iter() {
				if edge.cap as i32 - edge.flow > 0 {
					//It is possible to send overflow through this edge
					let u = self.idtovertex[i];
					let v = self.idtovertex[edge.dest];
					if cost.contains_key(&(u, v)) {
						g.add_edge(u, v, cost[&(u, v)])?;
					} else if cost.contains_key(&(v, u)) {
						g.add_edge(u, v, -cost[&(v, u)])?;
					} else {
						g.add_edge(u, v, 0)?;
					}
				}
			}
		}
		return Ok(g);
	}
}

impl Graph<WeightedEdge> {
	//This function adds a single directed weighted edge to the graph.
	pub fn add_edge(&mut self, u: Vertex, v: Vertex, w: i32) -> Result<(), String> {
		if !self.vertextoid.contains_key(&u) || !self.vertextoid.contains_key(&v) {
			return Err("The graph does not contain the provided vertex.".to_string());
		}
		let idu = self.vertextoid[&u];
		let idv = self.vertextoid[&v];
		self.graph[idu].push(WeightedEdge { w: w, dest: idv });
		Ok(())
	}

	//This function lists the negative cycles it manages to find after path_length
	//iterations of the main loop of the Bellman-Ford algorithm. For the classical
	//algorithm, path_length needs to be equal to the number of vertices. However,
	//for particular graph structures like in our case, the algorithm is still correct
	//when path_length is the length of the longest possible simple path.
	//See the formal description of the algorithm for more details.
	fn list_negative_cycles(&self, path_length: usize) -> Vec<Vec<Vertex>> {
		let nb_vertices = self.graph.len();

		//We start with every vertex at distance 0 of some imaginary extra -1 vertex.
		let mut distance = vec![0; nb_vertices];
		//The prev vector collects for every vertex from where the shortest path comes.
		let mut prev = vec![None; nb_vertices];

		for _ in 0..path_length + 1 {
			for id in 0..nb_vertices {
				for e in self.graph[id].iter() {
					if distance[id] + e.w < distance[e.dest] {
						distance[e.dest] = distance[id] + e.w;
						prev[e.dest] = Some(id);
					}
				}
			}
		}

		//If self.graph contains a negative cycle, then at this point the graph described
		//by prev (which is a directed 1-forest/functional graph)
		//must contain a cycle. We list the cycles of prev.
		let cycles_prev = cycles_of_1_forest(&prev);

		//Remark that the cycle in prev is in the reverse order compared to the cycle
		//in the graph. Thus the .rev().
		return cycles_prev.iter().map(|cycle| cycle.iter().rev().map(
			|id| self.idtovertex[*id]
		).collect()).collect();
	}
}

//This function returns the list of cycles of a directed 1-forest. It does not
//check for the consistency of the input.
fn cycles_of_1_forest(forest: &[Option<usize>]) -> Vec<Vec<usize>> {
	let mut cycles = Vec::<Vec::<usize>>::new();
	let mut time_of_discovery = vec![None; forest.len()];

	for t in 0..forest.len() {
		let mut id = t;
		//while we are on a valid undiscovered node
		while time_of_discovery[id] == None {
			time_of_discovery[id] = Some(t);
			if let Some(i) = forest[id] {
				id = i;
			} else {
				break;
			}
		}
		if forest[id] != None && time_of_discovery[id] == Some(t) {
			//We discovered an id that we explored at this iteration t.
			//It means we are on a cycle: walk it until we come back to id.
			let mut cy = vec![id; 1];
			let mut id2 = id;
			while let Some(id2_next) = forest[id2] {
				id2 = id2_next;
				if id2 != id {
					cy.push(id2);
				} else {
					break;
				}
			}
			cycles.push(cy);
		}
	}
	return cycles;
}

//====================================================================================
//====================================================================================
//====================================================================================
//====================================================================================
//====================================================================================
//====================================================================================

#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn test_flow() {
		//Check the flow algorithm on a single chain of capacity 1:
		//Source -> p+ -> x_{0,0} -> n_0 -> Sink.
		let vertices = vec![Vertex::Source, Vertex::Pup(0), Vertex::PZ(0, 0), Vertex::N(0), Vertex::Sink];
		let mut g = Graph::<FlowEdge>::new(&vertices);
		g.add_edge(Vertex::Source, Vertex::Pup(0), 1).unwrap();
		g.add_edge(Vertex::Pup(0), Vertex::PZ(0, 0), 1).unwrap();
		g.add_edge(Vertex::PZ(0, 0), Vertex::N(0), 1).unwrap();
		g.add_edge(Vertex::N(0), Vertex::Sink, 1).unwrap();
		g.compute_maximal_flow().unwrap();
		assert_eq!(g.get_flow_value().unwrap(), 1);
	}

	//maybe add tests relative to the matching optimization?
}
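As an illustration of how this API is meant to be driven (a minimal sketch, not part of the commit: the demo function, vertex ids and capacities are made up, mirroring the shape of the graph built in layout.rs for one partition, two zones, replication factor 3 and zone redundancy 2):

fn demo() -> Result<(), String> {
	//Nodes 0 and 1 are in zone 0, node 2 in zone 1.
	let vertices = vec![
		Vertex::Source, Vertex::Sink,
		Vertex::Pup(0), Vertex::Pdown(0),
		Vertex::PZ(0, 0), Vertex::PZ(0, 1),
		Vertex::N(0), Vertex::N(1), Vertex::N(2),
	];
	let mut g = Graph::<FlowEdge>::new(&vertices);
	//2 copies must go through p+ (distinct zones), 1 through p-.
	g.add_edge(Vertex::Source, Vertex::Pup(0), 2)?;
	g.add_edge(Vertex::Source, Vertex::Pdown(0), 1)?;
	for z in 0..2 {
		g.add_edge(Vertex::Pup(0), Vertex::PZ(0, z), 1)?;
		g.add_edge(Vertex::Pdown(0), Vertex::PZ(0, z), 3)?;
	}
	g.add_edge(Vertex::PZ(0, 0), Vertex::N(0), 1)?;
	g.add_edge(Vertex::PZ(0, 0), Vertex::N(1), 1)?;
	g.add_edge(Vertex::PZ(0, 1), Vertex::N(2), 1)?;
	for n in 0..3 {
		//Each node can store one partition of the candidate size.
		g.add_edge(Vertex::N(n), Vertex::Sink, 1)?;
	}
	g.compute_maximal_flow()?;
	//A full assignation of the partition is a flow of rho_N * P = 3 * 1.
	assert_eq!(g.get_flow_value()?, 3);
	//The nodes of zone 0 that received a copy:
	let _zone0_nodes = g.get_positive_flow_from(Vertex::PZ(0, 0))?;
	Ok(())
}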
src/rpc/layout.rs
@ -1,17 +1,23 @@
use std::cmp::Ordering;
use std::collections::HashMap;
use std::collections::HashSet;

use hex::ToHex;

use serde::{Deserialize, Serialize};

use garage_util::crdt::{AutoCrdt, Crdt, LwwMap};
use garage_util::data::*;

use crate::graph_algo::*;

use crate::ring::*;

use std::convert::TryInto;

//The Message type will be used to collect information on the algorithm.
type Message = Vec<String>;

/// The layout of the cluster, i.e. the list of roles
/// which are assigned to each cluster node
#[derive(Clone, Debug, Serialize, Deserialize)]
@ -19,12 +25,21 @@ pub struct ClusterLayout {
	pub version: u64,

	pub replication_factor: usize,
	#[serde(default="default_one")]
	pub zone_redundancy: usize,

	//This attribute is only used to retain the previously computed partition size,
	//to know to what extent it changes with the layout update.
	#[serde(default="default_zero")]
	pub partition_size: u32,

	pub roles: LwwMap<Uuid, NodeRoleV>,

	/// node_id_vec: a vector of node IDs with a role assigned
	/// in the system (this includes gateway nodes).
	/// The order here is different than the vec stored by `roles`, because:
	/// 1. non-gateway nodes are first so that they have lower numbers that fit
	///    in a u8 (the number of non-gateway nodes is at most 256).
	/// 2. nodes that don't have a role are excluded (but they need to
	///    stay in the CRDT as tombstones)
	pub node_id_vec: Vec<Uuid>,
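As a worked example of what partition_size tracks (illustrative numbers, not from the commit): with 2^PARTITION_BITS = 256 partitions, replication_factor = 3 and partition_size = 100, the layout accounts for 256 * 3 * 100 = 76,800 capacity units of used storage — the same used_cap quantity later reported by output_stat.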
@ -38,6 +53,15 @@ pub struct ClusterLayout {
	pub staging_hash: Hash,
}

fn default_one() -> usize{
	return 1;
}
fn default_zero() -> u32{
	return 0;
}

const NB_PARTITIONS : usize = 1usize << PARTITION_BITS;

#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Debug, Serialize, Deserialize)]
pub struct NodeRoleV(pub Option<NodeRole>);
@ -66,16 +90,31 @@ impl NodeRole {
			None => "gateway".to_string(),
		}
	}

	pub fn tags_string(&self) -> String {
		let mut tags = String::new();
		if self.tags.len() == 0 {
			return tags
		}
		tags.push_str(&self.tags[0].clone());
		for t in 1..self.tags.len(){
			tags.push_str(",");
			tags.push_str(&self.tags[t].clone());
		}
		return tags;
	}
}

impl ClusterLayout {
	pub fn new(replication_factor: usize, zone_redundancy: usize) -> Self {
		let empty_lwwmap = LwwMap::new();
		let empty_lwwmap_hash = blake2sum(&rmp_to_vec_all_named(&empty_lwwmap).unwrap()[..]);

		ClusterLayout {
			version: 0,
			replication_factor,
			zone_redundancy,
			partition_size: 0,
			roles: LwwMap::new(),
			node_id_vec: Vec::new(),
			ring_assignation_data: Vec::new(),
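As a quick worked example of the tags_string method added above (values invented): a role whose tags are ["site1", "ssd"] yields "site1,ssd", while an empty tag list yields "".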
@ -122,6 +161,44 @@ impl ClusterLayout {
		}
	}

	///Returns the uuids of the non_gateway nodes in self.node_id_vec.
	pub fn useful_nodes(&self) -> Vec<Uuid> {
		let mut result = Vec::<Uuid>::new();
		for uuid in self.node_id_vec.iter() {
			match self.node_role(uuid) {
				Some(role) if role.capacity != None => result.push(*uuid),
				_ => ()
			}
		}
		return result;
	}

	///Given a node uuid, this function returns the label of its zone
	pub fn get_node_zone(&self, uuid : &Uuid) -> Result<String,String> {
		match self.node_role(uuid) {
			Some(role) => return Ok(role.zone.clone()),
			_ => return Err("The Uuid does not correspond to a node present in the cluster.".to_string())
		}
	}

	///Given a node uuid, this function returns its capacity or fails if it does not have any
	pub fn get_node_capacity(&self, uuid : &Uuid) -> Result<u32,String> {
		match self.node_role(uuid) {
			Some(NodeRole{capacity : Some(cap), zone: _, tags: _}) => return Ok(*cap),
			_ => return Err("The Uuid does not correspond to a node present in the cluster or this node does not have a positive capacity.".to_string())
		}
	}

	///Returns the sum of capacities of non gateway nodes in the cluster
	pub fn get_total_capacity(&self) -> Result<u32,String> {
		let mut total_capacity = 0;
		for uuid in self.useful_nodes().iter() {
			total_capacity += self.get_node_capacity(uuid)?;
		}
		return Ok(total_capacity);
	}

	/// Check a cluster layout for internal consistency
	/// returns true if consistent, false if error
	pub fn check(&self) -> bool {
@ -168,342 +245,412 @@ impl ClusterLayout {
		true
	}
}

impl ClusterLayout {
	/// This function calculates a new partition-to-node assignation.
	/// The computed assignation respects the node replication factor
	/// and the zone redundancy parameter. It maximizes the capacity of a
	/// partition (assuming all partitions have the same size).
	/// Among such optimal assignations, it minimizes the distance to
	/// the former assignation (if any) to minimize the amount of
	/// data to be moved.
	pub fn calculate_partition_assignation(&mut self, replication:usize, redundancy:usize) -> Result<Message,String> {
		//The nodes might have been updated, some might have been deleted.
		//So we need to first update the list of nodes and retrieve the
		//assignation.

		//We update the node ids, since the node list might have changed with the staged
		//changes in the layout. We retrieve the old_assignation reframed with the new ids
		let old_assignation_opt = self.update_node_id_vec()?;
		self.replication_factor = replication;
		self.zone_redundancy = redundancy;

		let mut msg = Message::new();
		msg.push(format!("Computation of a new cluster layout where partitions are \
			replicated {} times on at least {} distinct zones.", replication, redundancy));

		//We generate for once numerical ids for the zones, to use them as indices in the
		//flow graphs.
		let (id_to_zone , zone_to_id) = self.generate_zone_ids()?;

		msg.push(format!("The cluster contains {} nodes spread over {} zones.",
			self.useful_nodes().len(), id_to_zone.len()));

		//We compute the optimal partition size
		let partition_size = self.compute_optimal_partition_size(&zone_to_id)?;
		if old_assignation_opt != None {
			msg.push(format!("Given the replication and redundancy constraints, the \
				optimal size of a partition is {}. In the previous layout, it used to \
				be {}.", partition_size, self.partition_size));
		}
		else {
			msg.push(format!("Given the replication and redundancy constraints, the \
				optimal size of a partition is {}.", partition_size));
		}
		self.partition_size = partition_size;

		//We compute a first flow/assignment that is heuristically close to the previous
		//assignment
		let mut gflow = self.compute_candidate_assignment(&zone_to_id, &old_assignation_opt)?;
		if let Some(assoc) = &old_assignation_opt {
			//We minimize the distance to the previous assignment.
			self.minimize_rebalance_load(&mut gflow, &zone_to_id, &assoc)?;
		}

		msg.append(&mut self.output_stat(&gflow, &old_assignation_opt, &zone_to_id, &id_to_zone)?);

		//We update the layout structure
		self.update_ring_from_flow(id_to_zone.len(), &gflow)?;
		return Ok(msg);
	}
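	// A hypothetical driving sequence (a sketch, not code from this commit):
	//   let mut layout = ClusterLayout::new(3, 2);
	//   /* stage node roles in layout.roles, then: */
	//   let msg: Message = layout.calculate_partition_assignation(3, 2)?;
	//   for line in msg { println!("{}", line); }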
	/// The LwwMap of node roles might have changed. This function updates the node_id_vec
	/// and returns the assignation given by ring, with the new indices of the nodes, and
	/// None if the node is not present anymore.
	/// We work with the assumption that only this function and calculate_new_assignation
	/// do modify assignation_ring and node_id_vec.
	fn update_node_id_vec(&mut self) -> Result< Option< Vec<Vec<usize> > > ,String> {
		// (1) We compute the new node list
		//Non gateway nodes should be coded on 8 bits, hence they must be first in the list
		//We build the new node ids
		let mut new_non_gateway_nodes: Vec<Uuid> = self.roles.items().iter()
			.filter(|(_, _, v)|
				match &v.0 {Some(r) if r.capacity != None => true, _=> false })
			.map(|(k, _, _)| *k).collect();

		if new_non_gateway_nodes.len() > MAX_NODE_NUMBER {
			return Err(format!("There are more than {} non-gateway nodes in the new layout. This is not allowed.", MAX_NODE_NUMBER).to_string());
		}

		let mut new_gateway_nodes: Vec<Uuid> = self.roles.items().iter()
			.filter(|(_, _, v)|
				match v {NodeRoleV(Some(r)) if r.capacity == None => true, _=> false })
			.map(|(k, _, _)| *k).collect();

		let nb_useful_nodes = new_non_gateway_nodes.len();
		let mut new_node_id_vec = Vec::<Uuid>::new();
		new_node_id_vec.append(&mut new_non_gateway_nodes);
		new_node_id_vec.append(&mut new_gateway_nodes);

		// (2) We retrieve the old association
		//We rewrite the old association with the new indices. We only consider partition
		//to node assignations where the node is still in use.
		let nb_partitions = 1usize << PARTITION_BITS;
		let mut old_assignation = vec![ Vec::<usize>::new() ; nb_partitions];

		if self.ring_assignation_data.len() == 0 {
			//This is a new association
			return Ok(None);
		}
		if self.ring_assignation_data.len() != nb_partitions * self.replication_factor {
			return Err("The old assignation does not have a size corresponding to the old replication factor or the number of partitions.".to_string());
		}

		//We build a translation table between the uuid and new ids
		let mut uuid_to_new_id = HashMap::<Uuid, usize>::new();

		//We add the indices of only the new non-gateway nodes that can be used in the
		//association ring
		for i in 0..nb_useful_nodes {
			uuid_to_new_id.insert(new_node_id_vec[i], i);
		}

		let rf = self.replication_factor;
		for p in 0..nb_partitions {
			for old_id in &self.ring_assignation_data[p*rf..(p+1)*rf] {
				let uuid = self.node_id_vec[*old_id as usize];
				if uuid_to_new_id.contains_key(&uuid) {
					old_assignation[p].push(uuid_to_new_id[&uuid]);
				}
			}
		}

		//We write the results
		self.node_id_vec = new_node_id_vec;
		self.ring_assignation_data = Vec::<CompactNodeType>::new();

		return Ok(Some(old_assignation));
	}
	///This function generates ids for the zones of the nodes appearing in
	///self.node_id_vec.
	fn generate_zone_ids(&self) -> Result<(Vec<String>, HashMap<String, usize>),String>{
		let mut id_to_zone = Vec::<String>::new();
		let mut zone_to_id = HashMap::<String,usize>::new();

		for uuid in self.node_id_vec.iter() {
			if self.roles.get(uuid) == None {
				return Err("The uuid was not found in the node roles (this should not happen, it might be a critical error).".to_string());
			}
			match self.node_role(&uuid) {
				Some(r) => if !zone_to_id.contains_key(&r.zone) && r.capacity != None {
					zone_to_id.insert(r.zone.clone(), id_to_zone.len());
					id_to_zone.push(r.zone.clone());
				}
				_ => ()
			}
		}
		return Ok((id_to_zone, zone_to_id));
	}
	///This function computes by dichotomy the largest realizable partition size, given
	///the layout.
	fn compute_optimal_partition_size(&self, zone_to_id: &HashMap<String, usize>) -> Result<u32,String>{
		let nb_partitions = 1usize << PARTITION_BITS;
		let empty_set = HashSet::<(usize,usize)>::new();
		let mut g = self.generate_flow_graph(1, zone_to_id, &empty_set)?;
		g.compute_maximal_flow()?;
		if g.get_flow_value()? < (nb_partitions*self.replication_factor).try_into().unwrap() {
			return Err("The storage capacity of the cluster is too small. It is impossible to store partitions of size 1.".to_string());
		}

		let mut s_down = 1;
		let mut s_up = self.get_total_capacity()?;
		while s_down + 1 < s_up {
			g = self.generate_flow_graph((s_down+s_up)/2, zone_to_id, &empty_set)?;
			g.compute_maximal_flow()?;
			if g.get_flow_value()? < (nb_partitions*self.replication_factor).try_into().unwrap() {
				s_up = (s_down+s_up)/2;
			}
			else {
				s_down = (s_down+s_up)/2;
			}
		}

		return Ok(s_down);
	}
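	// Worked example of the dichotomy above (illustrative figures): s_down stays
	// realizable and s_up shrinks whenever the flow test fails, so the loop runs
	// about log2(total capacity) times, i.e. roughly 20 max-flow computations for
	// a total capacity of 10^6.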
	fn generate_graph_vertices(nb_zones : usize, nb_nodes : usize) -> Vec<Vertex> {
		let mut vertices = vec![Vertex::Source, Vertex::Sink];
		for p in 0..NB_PARTITIONS {
			vertices.push(Vertex::Pup(p));
			vertices.push(Vertex::Pdown(p));
			for z in 0..nb_zones {
				vertices.push(Vertex::PZ(p, z));
			}
		}
		for n in 0..nb_nodes {
			vertices.push(Vertex::N(n));
		}
		return vertices;
	}

	fn generate_flow_graph(&self, size: u32, zone_to_id: &HashMap<String, usize>, exclude_assoc : &HashSet<(usize,usize)>) -> Result<Graph<FlowEdge>, String> {
		let vertices = ClusterLayout::generate_graph_vertices(zone_to_id.len(),
			self.useful_nodes().len());
		let mut g = Graph::<FlowEdge>::new(&vertices);
		let nb_zones = zone_to_id.len();
		for p in 0..NB_PARTITIONS {
			g.add_edge(Vertex::Source, Vertex::Pup(p), self.zone_redundancy as u32)?;
			g.add_edge(Vertex::Source, Vertex::Pdown(p), (self.replication_factor - self.zone_redundancy) as u32)?;
			for z in 0..nb_zones {
				g.add_edge(Vertex::Pup(p), Vertex::PZ(p,z), 1)?;
				g.add_edge(Vertex::Pdown(p), Vertex::PZ(p,z),
					self.replication_factor as u32)?;
			}
		}
		for n in 0..self.useful_nodes().len() {
			let node_capacity = self.get_node_capacity(&self.node_id_vec[n])?;
			let node_zone = zone_to_id[&self.get_node_zone(&self.node_id_vec[n])?];
			g.add_edge(Vertex::N(n), Vertex::Sink, node_capacity/size)?;
			for p in 0..NB_PARTITIONS {
				if !exclude_assoc.contains(&(p,n)) {
					g.add_edge(Vertex::PZ(p, node_zone), Vertex::N(n), 1)?;
				}
			}
		}
		return Ok(g);
	}

	fn compute_candidate_assignment(&self, zone_to_id: &HashMap<String, usize>,
		old_assoc_opt : &Option<Vec< Vec<usize> >>) -> Result<Graph<FlowEdge>, String > {

		//We list the edges that are not used in the old association
		let mut exclude_edge = HashSet::<(usize,usize)>::new();
		if let Some(old_assoc) = old_assoc_opt {
			let nb_nodes = self.useful_nodes().len();
			for p in 0..NB_PARTITIONS {
				for n in 0..nb_nodes {
					exclude_edge.insert((p,n));
				}
				for n in old_assoc[p].iter() {
					exclude_edge.remove(&(p,*n));
				}
			}
		}

		//We compute the best flow using only the edges used in the old assoc
		let mut g = self.generate_flow_graph(self.partition_size, zone_to_id, &exclude_edge)?;
		g.compute_maximal_flow()?;
		for (p,n) in exclude_edge.iter() {
			let node_zone = zone_to_id[&self.get_node_zone(&self.node_id_vec[*n])?];
			g.add_edge(Vertex::PZ(*p,node_zone), Vertex::N(*n), 1)?;
		}
		g.compute_maximal_flow()?;
		return Ok(g);
	}

	fn minimize_rebalance_load(&self, gflow: &mut Graph<FlowEdge>, zone_to_id: &HashMap<String, usize>, old_assoc : &Vec< Vec<usize> >) -> Result<(), String > {
		let mut cost = CostFunction::new();
		for p in 0..NB_PARTITIONS {
			for n in old_assoc[p].iter() {
				let node_zone = zone_to_id[&self.get_node_zone(&self.node_id_vec[*n])?];
				cost.insert((Vertex::PZ(p,node_zone), Vertex::N(*n)), -1);
			}
		}
		let nb_nodes = self.useful_nodes().len();
		let path_length = 4*nb_nodes;
		gflow.optimize_flow_with_cost(&cost, path_length)?;

		return Ok(());
	}
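	// Note: path_length = 4*nb_nodes matches the 4N bound on the length of
	// simple paths in G_f established in the formal description above.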
	fn update_ring_from_flow(&mut self, nb_zones : usize, gflow: &Graph<FlowEdge> ) -> Result<(), String>{
		self.ring_assignation_data = Vec::<CompactNodeType>::new();
		for p in 0..NB_PARTITIONS {
			for z in 0..nb_zones {
				let assoc_vertex = gflow.get_positive_flow_from(Vertex::PZ(p,z))?;
				for vertex in assoc_vertex.iter() {
					match vertex{
						Vertex::N(n) => self.ring_assignation_data.push((*n).try_into().unwrap()),
						_ => ()
					}
				}
			}
		}

		if self.ring_assignation_data.len() != NB_PARTITIONS*self.replication_factor {
			return Err("Critical Error : the association ring we produced does not have the right size.".to_string());
		}
		return Ok(());
	}
+	//This function returns a message summing up the partition distribution of the new
+	//layout.
+	fn output_stat(
+		&self,
+		gflow: &Graph<FlowEdge>,
+		old_assoc_opt: &Option<Vec<Vec<usize>>>,
+		zone_to_id: &HashMap<String, usize>,
+		id_to_zone: &Vec<String>,
+	) -> Result<Message, String> {
+		let mut msg = Message::new();
+
+		let nb_partitions = 1usize << PARTITION_BITS;
+		let used_cap = self.partition_size * nb_partitions as u32 * self.replication_factor as u32;
+		let total_cap = self.get_total_capacity()?;
+		let percent_cap = 100.0 * (used_cap as f32) / (total_cap as f32);
+		msg.push(format!(
+			"Available capacity / Total cluster capacity: {} / {} ({:.1} %)",
+			used_cap, total_cap, percent_cap
+		));
+		msg.push(format!(
+			"If the percentage is too low, it might be that the replication/redundancy constraints force the use of nodes/zones with small storage capacities. You might want to rebalance the storage capacities or relax the constraints. See the detailed statistics below and look for saturated nodes/zones."
+		));
+		msg.push(format!(
+			"Recall that because of the replication, the actual available storage capacity is {} / {} = {}.",
+			used_cap,
+			self.replication_factor,
+			used_cap / self.replication_factor as u32
+		));
+
+		//We define and fill in the following tables
+		let storing_nodes = self.useful_nodes();
+		let mut new_partitions = vec![0; storing_nodes.len()];
+		let mut stored_partitions = vec![0; storing_nodes.len()];
+
+		let mut new_partitions_zone = vec![0; id_to_zone.len()];
+		let mut stored_partitions_zone = vec![0; id_to_zone.len()];
+
+		for p in 0..nb_partitions {
+			for z in 0..id_to_zone.len() {
+				let pz_nodes = gflow.get_positive_flow_from(Vertex::PZ(p, z))?;
+				if pz_nodes.len() > 0 {
+					stored_partitions_zone[z] += 1;
+				}
+				for vert in pz_nodes.iter() {
+					if let Vertex::N(n) = *vert {
+						stored_partitions[n] += 1;
+						if let Some(old_assoc) = old_assoc_opt {
+							if !old_assoc[p].contains(&n) {
+								new_partitions[n] += 1;
+							}
+						}
+					}
+				}
+				if let Some(old_assoc) = old_assoc_opt {
+					let mut old_zones_of_p = Vec::<usize>::new();
+					for n in old_assoc[p].iter() {
+						old_zones_of_p
+							.push(zone_to_id[&self.get_node_zone(&self.node_id_vec[*n])?]);
+					}
+					if !old_zones_of_p.contains(&z) {
+						new_partitions_zone[z] += 1;
+					}
+				}
+			}
+		}
+
+		//We display the statistics
+
+		if *old_assoc_opt != None {
+			let total_new_partitions: usize = new_partitions.iter().sum();
+			msg.push(format!(
+				"A total of {} new copies of partitions need to be transferred.",
+				total_new_partitions
+			));
+		}
+		msg.push(format!(""));
+		msg.push(format!("Detailed statistics by zones and nodes."));
+
+		for z in 0..id_to_zone.len() {
+			let mut nodes_of_z = Vec::<usize>::new();
+			for n in 0..storing_nodes.len() {
+				if self.get_node_zone(&self.node_id_vec[n])? == id_to_zone[z] {
+					nodes_of_z.push(n);
+				}
+			}
+			let replicated_partitions: usize =
+				nodes_of_z.iter().map(|n| stored_partitions[*n]).sum();
+			msg.push(format!(""));
+
+			if *old_assoc_opt != None {
+				msg.push(format!(
+					"Zone {}: {} distinct partitions stored ({} new, {} partition copies) ",
+					id_to_zone[z],
+					stored_partitions_zone[z],
+					new_partitions_zone[z],
+					replicated_partitions
+				));
+			} else {
+				msg.push(format!(
+					"Zone {}: {} distinct partitions stored ({} partition copies) ",
+					id_to_zone[z], stored_partitions_zone[z], replicated_partitions
+				));
+			}
+
+			let available_cap_z: u32 = self.partition_size * replicated_partitions as u32;
+			let mut total_cap_z = 0;
+			for n in nodes_of_z.iter() {
+				total_cap_z += self.get_node_capacity(&self.node_id_vec[*n])?;
+			}
+			let percent_cap_z = 100.0 * (available_cap_z as f32) / (total_cap_z as f32);
+			msg.push(format!(
+				"  Available capacity / Total capacity: {} / {} ({:.1}%).",
+				available_cap_z, total_cap_z, percent_cap_z
+			));
+			msg.push(format!(""));
+
+			for n in nodes_of_z.iter() {
+				let available_cap_n = stored_partitions[*n] as u32 * self.partition_size;
+				let total_cap_n = self.get_node_capacity(&self.node_id_vec[*n])?;
+				let tags_n = (self.node_role(&self.node_id_vec[*n]).ok_or("Node not found."))?
+					.tags_string();
+				msg.push(format!(
+					"  Node {}: {} partitions ({} new) ; available/total capacity: {} / {} ({:.1}%) ; tags:{}",
+					&self.node_id_vec[*n].to_vec().encode_hex::<String>(),
+					stored_partitions[*n],
+					new_partitions[*n],
+					available_cap_n,
+					total_cap_n,
+					(available_cap_n as f32) / (total_cap_n as f32) * 100.0,
+					tags_n
+				));
+			}
+		}
+
+		return Ok(msg);
+	}
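As a quick sanity check on the arithmetic printed above (a self-contained sketch with made-up numbers, not Garage code): the used capacity is partition_size × nb_partitions × replication_factor, and what a user can effectively store is that figure divided back by the replication factor.

fn main() {
	// Toy numbers, for illustration only.
	let partition_bits = 8u32;
	let nb_partitions = 1u32 << partition_bits; // 256 partitions
	let partition_size = 100u32; // capacity reserved per partition copy
	let replication_factor = 3u32;

	// Capacity consumed across the cluster by all partition copies:
	let used_cap = partition_size * nb_partitions * replication_factor;
	// What can effectively be stored, since every byte is replicated:
	let effective = used_cap / replication_factor;

	assert_eq!(effective, partition_size * nb_partitions);
	println!("used: {}, effective: {}", used_cap, effective);
}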
-		let nb_partitions = 1usize << PARTITION_BITS;
-		let mut node_assignation = vec![vec![None; self.replication_factor]; nb_partitions];
-		let rf = self.replication_factor;
-		let ring = &self.ring_assignation_data;
-
-		let new_node_id_vec: Vec<Uuid> = self.roles.items().iter().map(|(k, _, _)| *k).collect();
-
-		if ring.len() == rf * nb_partitions {
-			for i in 0..nb_partitions {
-				for j in 0..self.replication_factor {
-					node_assignation[i][j] = new_node_id_vec
-						.iter()
-						.position(|id| *id == self.node_id_vec[ring[i * rf + j] as usize]);
-				}
-			}
-		}
-
-		self.node_id_vec = new_node_id_vec;
-		self.ring_assignation_data = vec![];
-		node_assignation
-	}
-
-	///This function computes the number of partitions to assign to
-	///every node and zone, so that every partition is replicated
-	///self.replication_factor times and the capacity of a partition
-	///is maximized.
-	fn optimal_proportions(&mut self) -> Option<(Vec<usize>, HashMap<String, usize>)> {
-		let mut zone_capacity: HashMap<String, u32> = HashMap::new();
-
-		let (node_zone, node_capacity) = self.get_node_zone_capacity();
-		let nb_nodes = self.node_id_vec.len();
-
-		for i in 0..nb_nodes {
-			if zone_capacity.contains_key(&node_zone[i]) {
-				zone_capacity.insert(
-					node_zone[i].clone(),
-					zone_capacity[&node_zone[i]] + node_capacity[i],
-				);
-			} else {
-				zone_capacity.insert(node_zone[i].clone(), node_capacity[i]);
-			}
-		}
-
-		//Compute the optimal number of partitions per zone
-		let sum_capacities: u32 = zone_capacity.values().sum();
-
-		if sum_capacities == 0 {
-			println!("No storage capacity in the network.");
-			return None;
-		}
-
-		let nb_partitions = 1 << PARTITION_BITS;
-
-		//Initially we would like to use zones proportionally to
-		//their capacity.
-		//However, a large zone can be associated to at most
-		//nb_partitions to ensure replication of the data.
-		//So we take the min with nb_partitions:
-		let mut part_per_zone: HashMap<String, usize> = zone_capacity
-			.iter()
-			.map(|(k, v)| {
-				(
-					k.clone(),
-					min(
-						nb_partitions,
-						(self.replication_factor * nb_partitions * *v as usize)
-							/ sum_capacities as usize,
-					),
-				)
-			})
-			.collect();
-
-		//The replication_factor-1 upper bounds the number of
-		//part_per_zones that are greater than nb_partitions
-		for _ in 1..self.replication_factor {
-			//The number of partitions that are not assigned to
-			//a zone that takes nb_partitions.
-			let sum_capleft: u32 = zone_capacity
-				.keys()
-				.filter(|k| part_per_zone[*k] < nb_partitions)
-				.map(|k| zone_capacity[k])
-				.sum();
-
-			//The number of replications of the data that we need
-			//to ensure.
-			let repl_left = self.replication_factor
-				- part_per_zone
-					.values()
-					.filter(|x| **x == nb_partitions)
-					.count();
-			if repl_left == 0 {
-				break;
-			}
-
-			for k in zone_capacity.keys() {
-				if part_per_zone[k] != nb_partitions {
-					part_per_zone.insert(
-						k.to_string(),
-						min(
-							nb_partitions,
-							(nb_partitions * zone_capacity[k] as usize * repl_left)
-								/ sum_capleft as usize,
-						),
-					);
-				}
-			}
-		}
-
-		//Now we divide the zone's partition share proportionally
-		//between their nodes.
-		let mut part_per_nod: Vec<usize> = (0..nb_nodes)
-			.map(|i| {
-				(part_per_zone[&node_zone[i]] * node_capacity[i] as usize)
-					/ zone_capacity[&node_zone[i]] as usize
-			})
-			.collect();
-
-		//We must update the part_per_zone to make it correspond to
-		//part_per_nod (because of integer rounding)
-		part_per_zone = part_per_zone.iter().map(|(k, _)| (k.clone(), 0)).collect();
-		for i in 0..nb_nodes {
-			part_per_zone.insert(
-				node_zone[i].clone(),
-				part_per_zone[&node_zone[i]] + part_per_nod[i],
-			);
-		}
-
-		//Because of integer rounding, the total sum of part_per_nod
-		//might not be replication_factor*nb_partitions.
-		// We need at most to add 1 to every non maximal value of
-		// part_per_nod. The capacity of a partition will be bounded
-		// by the minimal value of
-		// node_capacity_vec[i]/part_per_nod[i]
-		// so we try to maximize this minimal value, keeping the
-		// part_per_zone capped
-		let discrepancy: usize =
-			nb_partitions * self.replication_factor - part_per_nod.iter().sum::<usize>();
-
-		//We use a stupid O(N^2) algorithm. If the number of nodes
-		//is actually expected to be high, one should optimize this.
-		for _ in 0..discrepancy {
-			if let Some(idmax) = (0..nb_nodes)
-				.filter(|i| part_per_zone[&node_zone[*i]] < nb_partitions)
-				.max_by(|i, j| {
-					(node_capacity[*i] * (part_per_nod[*j] + 1) as u32)
-						.cmp(&(node_capacity[*j] * (part_per_nod[*i] + 1) as u32))
-				}) {
-				part_per_nod[idmax] += 1;
-				part_per_zone.insert(
-					node_zone[idmax].clone(),
-					part_per_zone[&node_zone[idmax]] + 1,
-				);
-			}
-		}
-
-		//We check the algorithm consistency
-		let discrepancy: usize =
-			nb_partitions * self.replication_factor - part_per_nod.iter().sum::<usize>();
-		assert!(discrepancy == 0);
-		assert!(if let Some(v) = part_per_zone.values().max() {
-			*v <= nb_partitions
-		} else {
-			false
-		});
-
-		Some((part_per_nod, part_per_zone))
-	}
-
-	//Returns vectors of zone and capacity; indexed by the same (temporary)
-	//indices as node_id_vec.
-	fn get_node_zone_capacity(&self) -> (Vec<String>, Vec<u32>) {
-		let node_zone = self
-			.node_id_vec
-			.iter()
-			.map(|id_nod| match self.node_role(id_nod) {
-				Some(NodeRole {
-					zone,
-					capacity: _,
-					tags: _,
-				}) => zone.clone(),
-				_ => "".to_string(),
-			})
-			.collect();
-
-		let node_capacity = self
-			.node_id_vec
-			.iter()
-			.map(|id_nod| match self.node_role(id_nod) {
-				Some(NodeRole {
-					zone: _,
-					capacity: Some(c),
-					tags: _,
-				}) => *c,
-				_ => 0,
-			})
-			.collect();
-
-		(node_zone, node_capacity)
-	}
}
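The removed optimal_proportions above implemented a capped proportional split: each zone gets a share of the replication_factor × nb_partitions slots proportional to its capacity, but never more than nb_partitions (a partition cannot be stored twice in the same zone), and the excess is redistributed among the uncapped zones. A minimal self-contained sketch of that idea (illustrative names and numbers, not the removed code itself):

use std::cmp::min;

// First pass of the capped proportional split described above.
fn capped_proportional(capacities: &[u32], nb_partitions: usize, repl: usize) -> Vec<usize> {
	let total: u32 = capacities.iter().sum();
	capacities
		.iter()
		.map(|c| min(nb_partitions, repl * nb_partitions * *c as usize / total as usize))
		.collect()
}

fn main() {
	// One huge zone and two small ones, 256 partitions, 3-way replication:
	let shares = capped_proportional(&[1000, 100, 100], 256, 3);
	// The huge zone is capped at 256; the remaining slots must then be
	// redistributed among the small zones, which is what the loop in the
	// removed code did.
	println!("{:?}", shares); // [256, 64, 64]
}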
//====================================================================================

#[cfg(test)]
mod tests {
	use super::*;
@@ -8,9 +8,11 @@ mod consul;
mod kubernetes;

pub mod layout;
+pub mod graph_algo;
pub mod ring;
pub mod system;

mod metrics;
pub mod rpc_helper;
@@ -40,6 +40,7 @@ pub struct Ring {
// Type to store compactly the id of a node in the system
// Change this to u16 the day we want to have more than 256 nodes in a cluster
pub type CompactNodeType = u8;
+pub const MAX_NODE_NUMBER: usize = 256;

// The maximum number of times an object might get replicated
// This must be at least 3 because Garage supports 3-way replication
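A small illustration of why the new constant is 256 (a standalone sketch, not Garage code): CompactNodeType is a u8, so the assignment ring can reference at most 256 distinct nodes.

type CompactNodeType = u8;
const MAX_NODE_NUMBER: usize = 256;

fn main() {
	// A u8 can take exactly 256 values, hence the bound.
	assert_eq!(MAX_NODE_NUMBER, usize::from(u8::MAX) + 1);
	// The ring stores one CompactNodeType per (partition, replica) slot,
	// which keeps the assignment table compact in memory.
	let ring: Vec<CompactNodeType> = vec![0; 256 * 3];
	assert_eq!(ring.len(), 768);
}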
@@ -97,6 +97,7 @@ pub struct System {
	kubernetes_discovery: Option<KubernetesDiscoveryParam>,

	replication_factor: usize,
+	zone_redundancy: usize,

	/// The ring
	pub ring: watch::Receiver<Arc<Ring>>,
@@ -192,6 +193,7 @@ impl System {
		network_key: NetworkKey,
		background: Arc<BackgroundRunner>,
		replication_factor: usize,
+		zone_redundancy: usize,
		config: &Config,
	) -> Arc<Self> {
		let node_key =
@@ -211,7 +213,7 @@ impl System {
					"No valid previous cluster layout stored ({}), starting fresh.",
					e
				);
-				ClusterLayout::new(replication_factor)
+				ClusterLayout::new(replication_factor, zone_redundancy)
			}
		};
@@ -285,6 +287,7 @@ impl System {
			rpc: RpcHelper::new(netapp.id.into(), fullmesh, background.clone(), ring.clone()),
			system_endpoint,
			replication_factor,
+			zone_redundancy,
			rpc_listen_addr: config.rpc_bind_addr,
			rpc_public_addr,
			bootstrap_peers: config.bootstrap_peers.clone(),
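Taken together, these hunks thread the new zone_redundancy setting from System::new down to ClusterLayout::new. A minimal sketch of the pattern, with illustrative types that are not Garage's actual ones:

// Illustrative only: threading a second constraint through a constructor
// chain, mirroring what the hunks above do for ClusterLayout.
struct Layout {
	replication_factor: usize,
	zone_redundancy: usize,
}

impl Layout {
	fn new(replication_factor: usize, zone_redundancy: usize) -> Self {
		Self { replication_factor, zone_redundancy }
	}
}

fn main() {
	// Previously only the replication factor reached the layout
	// computation; now both constraints do.
	let layout = Layout::new(3, 2);
	assert!(layout.zone_redundancy <= layout.replication_factor);
}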
@@ -1,363 +0,0 @@
-/*
- * This module deals with graph algorithms in complete bipartite
- * graphs. It is used in layout.rs to build the partition to node
- * assignation.
- * */
-
-use rand::prelude::SliceRandom;
-use std::cmp::{max, min};
-use std::collections::VecDeque;
-
-//Graph data structure for the flow algorithm.
-#[derive(Clone, Copy, Debug)]
-struct EdgeFlow {
-	c: i32,
-	flow: i32,
-	v: usize,
-	rev: usize,
-}
-
-//Graph data structure for the detection of positive cycles.
-#[derive(Clone, Copy, Debug)]
-struct WeightedEdge {
-	w: i32,
-	u: usize,
-	v: usize,
-}
-/* This function takes two matchings (old_match and new_match) in a
- * complete bipartite graph. It returns a matching that has the
- * same degree as new_match at every vertex, and that is as close
- * as possible to old_match.
- * */
-pub fn optimize_matching(
-	old_match: &[Vec<usize>],
-	new_match: &[Vec<usize>],
-	nb_right: usize,
-) -> Vec<Vec<usize>> {
-	let nb_left = old_match.len();
-	let ed = WeightedEdge { w: -1, u: 0, v: 0 };
-	let mut edge_vec = vec![ed; nb_left * nb_right];
-
-	//We build the complete bipartite graph structure, represented
-	//by the list of all edges.
-	for i in 0..nb_left {
-		for j in 0..nb_right {
-			edge_vec[i * nb_right + j].u = i;
-			edge_vec[i * nb_right + j].v = nb_left + j;
-		}
-	}
-
-	for i in 0..edge_vec.len() {
-		//We add the old matchings
-		if old_match[edge_vec[i].u].contains(&(edge_vec[i].v - nb_left)) {
-			edge_vec[i].w *= -1;
-		}
-		//We add the new matchings
-		if new_match[edge_vec[i].u].contains(&(edge_vec[i].v - nb_left)) {
-			(edge_vec[i].u, edge_vec[i].v) = (edge_vec[i].v, edge_vec[i].u);
-			edge_vec[i].w *= -1;
-		}
-	}
-	//Now edge_vec is a graph where edges are oriented LR if we
-	//can add them to new_match, and RL otherwise. If
-	//adding/removing them makes the matching closer to old_match
-	//they have weight 1; and -1 otherwise.
-
-	//We shuffle the edge list so that there is no bias depending on
-	//the partition/zone labels in the triplet dispersion
-	let mut rng = rand::thread_rng();
-	edge_vec.shuffle(&mut rng);
-
-	//Discovering and flipping a cycle with positive weight in this
-	//graph will make the matching closer to old_match.
-	//We use the Bellman-Ford algorithm to discover positive cycles
-	while let Some(cycle) = positive_cycle(&edge_vec, nb_left, nb_right) {
-		for i in cycle {
-			//We flip the edges of the cycle.
-			(edge_vec[i].u, edge_vec[i].v) = (edge_vec[i].v, edge_vec[i].u);
-			edge_vec[i].w *= -1;
-		}
-	}
-
-	//The optimal matching is built from the graph structure.
-	let mut matching = vec![Vec::<usize>::new(); nb_left];
-	for e in edge_vec {
-		if e.u > e.v {
-			matching[e.v].push(e.u - nb_left);
-		}
-	}
-	matching
-}
|
|
||||||
fn positive_cycle(
|
|
||||||
edge_vec: &[WeightedEdge],
|
|
||||||
nb_left: usize,
|
|
||||||
nb_right: usize,
|
|
||||||
) -> Option<Vec<usize>> {
|
|
||||||
let nb_side_min = min(nb_left, nb_right);
|
|
||||||
let nb_vertices = nb_left + nb_right;
|
|
||||||
let weight_lowerbound = -((nb_left + nb_right) as i32) - 1;
|
|
||||||
let mut accessed = vec![false; nb_left];
|
|
||||||
|
|
||||||
//We try to find a positive cycle accessible from the left
|
|
||||||
//vertex i.
|
|
||||||
for i in 0..nb_left {
|
|
||||||
if accessed[i] {
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
let mut weight = vec![weight_lowerbound; nb_vertices];
|
|
||||||
let mut prev = vec![edge_vec.len(); nb_vertices];
|
|
||||||
weight[i] = 0;
|
|
||||||
//We compute largest weighted paths from i.
|
|
||||||
//Since the graph is bipartite, any simple cycle has length
|
|
||||||
//at most 2*nb_side_min. In the general Bellman-Ford
|
|
||||||
//algorithm, the bound here is the number of vertices. Since
|
|
||||||
//the number of partitions can be much larger than the
|
|
||||||
//number of nodes, we optimize that.
|
|
||||||
for _ in 0..(2 * nb_side_min) {
|
|
||||||
for (j, e) in edge_vec.iter().enumerate() {
|
|
||||||
if weight[e.v] < weight[e.u] + e.w {
|
|
||||||
weight[e.v] = weight[e.u] + e.w;
|
|
||||||
prev[e.v] = j;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
//We update the accessed table
|
|
||||||
for i in 0..nb_left {
|
|
||||||
if weight[i] > weight_lowerbound {
|
|
||||||
accessed[i] = true;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
//We detect positive cycle
|
|
||||||
for e in edge_vec {
|
|
||||||
if weight[e.v] < weight[e.u] + e.w {
|
|
||||||
//it means e is on a path branching from a positive cycle
|
|
||||||
let mut was_seen = vec![false; nb_vertices];
|
|
||||||
let mut curr = e.u;
|
|
||||||
//We track back with prev until we reach the cycle.
|
|
||||||
while !was_seen[curr] {
|
|
||||||
was_seen[curr] = true;
|
|
||||||
curr = edge_vec[prev[curr]].u;
|
|
||||||
}
|
|
||||||
//Now curr is on the cycle. We collect the edges ids.
|
|
||||||
let mut cycle = vec![prev[curr]];
|
|
||||||
let mut cycle_vert = edge_vec[prev[curr]].u;
|
|
||||||
while cycle_vert != curr {
|
|
||||||
cycle.push(prev[cycle_vert]);
|
|
||||||
cycle_vert = edge_vec[prev[cycle_vert]].u;
|
|
||||||
}
|
|
||||||
|
|
||||||
return Some(cycle);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
None
|
|
||||||
}
|
|
||||||
|
|
||||||
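For intuition, here is a toy instance on which the function above should report a cycle (a sketch assuming the private definitions above are in scope; the weights are made up). With nb_left = nb_right = 2, vertices 0 and 1 are left, 2 and 3 are right, and the 4-cycle 0 → 2 → 1 → 3 → 0 has total weight 1 + 1 + 1 - 1 = 2 > 0:

fn example_cycle() {
	let edges = vec![
		WeightedEdge { w: 1, u: 0, v: 2 },
		WeightedEdge { w: 1, u: 2, v: 1 },
		WeightedEdge { w: 1, u: 1, v: 3 },
		WeightedEdge { w: -1, u: 3, v: 0 },
	];
	// The relaxation keeps improving weights along this cycle, so the
	// final check in positive_cycle fires and returns Some(..).
	assert!(positive_cycle(&edges, 2, 2).is_some());
}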
-// This function takes two arrays of capacity and computes the
-// maximal matching in the complete bipartite graph such that the
-// left vertex i is matched to left_cap_vec[i] right vertices, and
-// the right vertex j is matched to right_cap_vec[j] left vertices.
-// To do so, we use Dinic's maximum flow algorithm.
-pub fn dinic_compute_matching(left_cap_vec: Vec<u32>, right_cap_vec: Vec<u32>) -> Vec<Vec<usize>> {
-	let mut graph = Vec::<Vec<EdgeFlow>>::new();
-	let ed = EdgeFlow {
-		c: 0,
-		flow: 0,
-		v: 0,
-		rev: 0,
-	};
-
-	// 0 will be the source
-	graph.push(vec![ed; left_cap_vec.len()]);
-	for (i, c) in left_cap_vec.iter().enumerate() {
-		graph[0][i].c = *c as i32;
-		graph[0][i].v = i + 2;
-		graph[0][i].rev = 0;
-	}
-
-	// 1 will be the sink
-	graph.push(vec![ed; right_cap_vec.len()]);
-	for (i, c) in right_cap_vec.iter().enumerate() {
-		graph[1][i].c = *c as i32;
-		graph[1][i].v = i + 2 + left_cap_vec.len();
-		graph[1][i].rev = 0;
-	}
-
-	//we add left vertices
-	for i in 0..left_cap_vec.len() {
-		graph.push(vec![ed; 1 + right_cap_vec.len()]);
-		graph[i + 2][0].c = 0; //directed
-		graph[i + 2][0].v = 0;
-		graph[i + 2][0].rev = i;
-
-		for j in 0..right_cap_vec.len() {
-			graph[i + 2][j + 1].c = 1;
-			graph[i + 2][j + 1].v = 2 + left_cap_vec.len() + j;
-			graph[i + 2][j + 1].rev = i + 1;
-		}
-	}
-
-	//we add right vertices
-	for i in 0..right_cap_vec.len() {
-		let lft_ln = left_cap_vec.len();
-		graph.push(vec![ed; 1 + lft_ln]);
-		graph[i + lft_ln + 2][0].c = graph[1][i].c;
-		graph[i + lft_ln + 2][0].v = 1;
-		graph[i + lft_ln + 2][0].rev = i;
-
-		for j in 0..left_cap_vec.len() {
-			graph[i + 2 + lft_ln][j + 1].c = 0; //directed
-			graph[i + 2 + lft_ln][j + 1].v = j + 2;
-			graph[i + 2 + lft_ln][j + 1].rev = i + 1;
-		}
-	}
-
-	//To ensure the dispersion of the triplets generated by the
-	//assignation, we shuffle the neighbours of the nodes. Hence,
-	//left vertices do not consider the right ones in the same order.
-	let mut rng = rand::thread_rng();
-	for i in 0..graph.len() {
-		graph[i].shuffle(&mut rng);
-		//We need to update the ids of the reverse edges.
-		for j in 0..graph[i].len() {
-			let target_v = graph[i][j].v;
-			let target_rev = graph[i][j].rev;
-			graph[target_v][target_rev].rev = j;
-		}
-	}
-
-	let nb_vertices = graph.len();
-
-	//We run Dinic's max flow algorithm
-	loop {
-		//We build the level array from Dinic's algorithm.
-		let mut level = vec![-1; nb_vertices];
-
-		let mut fifo = VecDeque::new();
-		fifo.push_back((0, 0));
-		while !fifo.is_empty() {
-			if let Some((id, lvl)) = fifo.pop_front() {
-				if level[id] == -1 {
-					level[id] = lvl;
-					for e in graph[id].iter() {
-						if e.c - e.flow > 0 {
-							fifo.push_back((e.v, lvl + 1));
-						}
-					}
-				}
-			}
-		}
-		if level[1] == -1 {
-			//There is no residual flow
-			break;
-		}
-
-		//Now we run DFS respecting the level array
-		let mut next_nbd = vec![0; nb_vertices];
-		let mut lifo = VecDeque::new();
-
-		let flow_upper_bound = if let Some(x) = left_cap_vec.iter().max() {
-			*x as i32
-		} else {
-			panic!();
-		};
-
-		lifo.push_back((0, flow_upper_bound));
-
-		while let Some((id_tmp, f_tmp)) = lifo.back() {
-			let id = *id_tmp;
-			let f = *f_tmp;
-			if id == 1 {
-				//The DFS reached the sink, we can add a
-				//residual flow.
-				lifo.pop_back();
-				while !lifo.is_empty() {
-					if let Some((id, _)) = lifo.pop_back() {
-						let nbd = next_nbd[id];
-						graph[id][nbd].flow += f;
-						let id_v = graph[id][nbd].v;
-						let nbd_v = graph[id][nbd].rev;
-						graph[id_v][nbd_v].flow -= f;
-					}
-				}
-				lifo.push_back((0, flow_upper_bound));
-				continue;
-			}
-			//else we did not reach the sink
-			let nbd = next_nbd[id];
-			if nbd >= graph[id].len() {
-				//There is nothing to explore from id anymore
-				lifo.pop_back();
-				if let Some((parent, _)) = lifo.back() {
-					next_nbd[*parent] += 1;
-				}
-				continue;
-			}
-			//else we can try to send flow from id to its nbd
-			let new_flow = min(f, graph[id][nbd].c - graph[id][nbd].flow);
-			if level[graph[id][nbd].v] <= level[id] || new_flow == 0 {
-				//We cannot send flow to nbd.
-				next_nbd[id] += 1;
-				continue;
-			}
-			//otherwise, we send flow to nbd.
-			lifo.push_back((graph[id][nbd].v, new_flow));
-		}
-	}
-
-	//We return the association
-	let assoc_table = (0..left_cap_vec.len())
-		.map(|id| {
-			graph[id + 2]
-				.iter()
-				.filter(|e| e.flow > 0)
-				.map(|e| e.v - 2 - left_cap_vec.len())
-				.collect()
-		})
-		.collect();
-
-	//consistency check
-
-	//it is a flow
-	for i in 3..graph.len() {
-		assert!(graph[i].iter().map(|e| e.flow).sum::<i32>() == 0);
-		for e in graph[i].iter() {
-			assert!(e.flow + graph[e.v][e.rev].flow == 0);
-		}
-	}
-
-	//it solves the matching problem
-	for i in 0..left_cap_vec.len() {
-		assert!(left_cap_vec[i] as i32 == graph[i + 2].iter().map(|e| max(0, e.flow)).sum::<i32>());
-	}
-	for i in 0..right_cap_vec.len() {
-		assert!(
-			right_cap_vec[i] as i32
-				== graph[i + 2 + left_cap_vec.len()]
-					.iter()
-					.map(|e| max(0, e.flow))
-					.sum::<i32>()
-		);
-	}
-
-	assoc_table
-}
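A usage sketch (assuming the deleted module above is compiled in scope; capacities are made up): the returned table lists, for each left vertex, the right vertices it was matched to, so row i has length left_cap_vec[i] whenever the instance is feasible.

fn example_matching() {
	// Each of the 4 left vertices asks for 2 partners; the right
	// capacities sum to the same total (8), so a degree-constrained
	// matching exists and the function's internal asserts pass.
	let left = vec![2; 4];
	let right = vec![1, 3, 2, 2];
	let assoc = dinic_compute_matching(left.clone(), right);
	for (i, partners) in assoc.iter().enumerate() {
		assert_eq!(partners.len(), left[i] as usize);
	}
}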
-#[cfg(test)]
-mod tests {
-	use super::*;
-
-	#[test]
-	fn test_flow() {
-		let left_vec = vec![3; 8];
-		let right_vec = vec![0, 4, 8, 4, 8];
-		//There are asserts in the function that computes the flow
-		let _ = dinic_compute_matching(left_vec, right_vec);
-	}
-
-	//maybe add tests relative to the matching optimization?
-}
@@ -4,7 +4,6 @@
extern crate tracing;

pub mod background;
-pub mod bipartite;
pub mod config;
pub mod crdt;
pub mod data;