TimelyDataflow/timely-dataflow

Abomonation Not Working for non-POD for Communication across Processes

zzxx-husky opened this issue · 2 comments

I ran into a problem when running timely (v0.10.0) in a distributed setting, even with only two machines. The problem is easy to reproduce (at least on my machines): a segmentation fault, which took me quite some time with rust-gdb to track down. In the end, I found that the program fails when it tries to read the RefOrMut messages from the input of the operator (e.g., line 54 in pagerank.rs).

I swear I did not use any unsafe code in the program.

The RefOrMut message being read is actually RefOrMut<Vec<MyStructure>>. The problem occurs when reading a specific member of MyStructure, and that member is a Vec<u32>. It looks like the Vec<u32> is broken.

I further found that when the size of this Vec<u32> is 0, things are fine, but if the RefOrMut is Ref and the size is not 0, the segmentation fault happens. After some guessing and trials, I suspected a serialization problem. Since timely uses Abomonation as its serialization module, I tried to check whether there is any problem with communication across processes, and I believe there is.

Here is the test for Abomonation:

extern crate abomonation;
use abomonation::{Abomonation, encode, decode};
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

#[derive(Debug)]
struct S {
  val: usize,
  n: Vec<u32>
}

impl Abomonation for S {}

#[allow(dead_code)]
pub fn abom() {
  let args: Vec<_> = std::env::args().collect();
  let role = &args[1];
  let vector: Vec<_> = (0..256u32).map(|i| S{val: i as usize, n: vec!(i)}).collect();

  if role == "server" {
    println!("{:?}", vector);
    // encode vector into a Vec<u8>
    let mut bytes = Vec::new();
    unsafe { encode(&vector, &mut bytes); }
    println!("bytes: {}", bytes.len());

    let listener = TcpListener::bind("0.0.0.0:9123").unwrap();
    println!("Server starts");
    for stream in listener.incoming() {
      println!("Incoming transmission");
      let mut s = stream.unwrap();
      s.write_all(&bytes[..]).unwrap();
      println!("Sent");
    }
  }

  if role == "client" {
    match TcpStream::connect("127.0.0.1:9123") {
      Ok(mut stream) => {
        let mut data: Vec<u8> = Vec::with_capacity(10000);
        data.resize(10000, 0);
        println!("Connected");
        match stream.read(&mut data[..]) {
          Ok(size) => { // if the data is empty, size will be 0, even though the capacity is 10000
            assert!(size < 10000);
            data.resize(size, 0);
            // unsafely decode a Vec<S> from the binary data
            if let Some((result, remaining)) = unsafe { decode::<Vec<S>>(&mut data) } {
              assert!(result.len() == vector.len());
              assert!(remaining.len() == 0);
              println!("{:?}", result);
            }
          },
          Err(e) => panic!("{}", e)
        }
      },
      Err(e) => panic!("{}", e)
    };
  }
}

I run this code as two processes (one server + one client) on the same machine. The server sends 8216 bytes and the client receives exactly 8216 bytes, but the client gets a segmentation fault when it tries to print the result. It looks like Abomonation does not work for non-POD structures?
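For completeness, a minimal main like this is enough to drive the reproduction (start one process with server as the first argument and another with client):

fn main() {
  // Run the same binary twice: once as `... server`, then as `... client`.
  // abom() reads the role from the first command-line argument.
  abom();
}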

I hope the above information helps. Is there a way to solve this problem? Thanks in advance.

Hello!

#[derive(Debug)]
struct S {
  val: usize,
  n: Vec<u32>
}

impl Abomonation for S {}

is an incorrect implementation of Abomonation. The best thing to do is to use the abomonation_derive crate, which allows you to write

#[derive(Abomonation, Debug)]
struct S {
  val: usize,
  n: Vec<u32>
}

or you could use the unsafe_abomonate! macro described here. The problem is that while abomonation can handle non-POD data, your implementation does not. Should be a quick fix, though!
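For reference, the derive (and the unsafe_abomonate! macro) essentially forwards each field's serialization, so the heap allocation behind the Vec<u32> gets written out and patched back in on decode. A hand-written version would look roughly like this (a sketch assuming the entomb/exhume/extent methods of abomonation 0.7):

extern crate abomonation;
use abomonation::Abomonation;
use std::io::{Result as IOResult, Write};

#[derive(Debug)]
struct S {
  val: usize,
  n: Vec<u32>
}

// Hand-written equivalent of #[derive(Abomonation)]: forward to each field.
// Sketch only; prefer the derive or unsafe_abomonate! in real code.
impl Abomonation for S {
  unsafe fn entomb<W: Write>(&self, write: &mut W) -> IOResult<()> {
    self.val.entomb(write)?;
    self.n.entomb(write)
  }
  unsafe fn exhume<'a, 'b>(&'a mut self, bytes: &'b mut [u8]) -> Option<&'b mut [u8]> {
    let bytes = self.val.exhume(bytes)?;
    self.n.exhume(bytes)
  }
  fn extent(&self) -> usize {
    self.val.extent() + self.n.extent()
  }
}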

Wow! It works after I added abomonation_derive and changed S to

#[derive(Abomonation, Debug)]
struct S {
  ...
}

Thank you very much!