From rusty.gates at icloud.com Sun Jun 1 00:02:26 2014 From: rusty.gates at icloud.com (Tommi) Date: Sun, 01 Jun 2014 10:02:26 +0300 Subject: [rust-dev] Patterns that'll never match Message-ID: Would it be possible to get a compile-time error for a `match` branch that can never be reached due to a previous branch encompassing it. For example, for the middle branch here: let n = 0; match n { x if x < 2 => (), x if x < 1 => (), _ => () } If this is a too complicated a problem in the general case, then perhaps there could be warnings for some (implementation defined) simple cases. From christophe.pedretti at gmail.com Sun Jun 1 01:34:30 2014 From: christophe.pedretti at gmail.com (Christophe Pedretti) Date: Sun, 1 Jun 2014 10:34:30 +0200 Subject: [rust-dev] Passing arguments bu reference Message-ID: <-7036542088947732759@unknownmsgid> Hello all, I've read this : http://words.steveklabnik.com/pointers-in-rust-a-guide I am coming from Java where everything is passed and returned by reference (except for primitive data types), no choice. I know that with C, you have to use pointers to avoid passing and returning by value. When i read the mentionned guide, things are not so evident with Rust. So, to be precise, imagine i need to write ? fonction which takes ? big Vec (In my case it?s an SQL BLOB) as argument and returns a big Vec Should i use Fn my_func(src : &Vec) -> &Vec Fn my_func(src : &Vec) -> ~Vec Fn my_func(src : &Vec) ->Vec Fn my_func(src : Vec) -> Vec Fn my_func(src : ~Vec) -> ~Vec Any other combination ? Thanks PS : i know that i have to use lifetimes and that ~ are now Box, i've omitted them to simplify my demand PS2 : genrally with a language you can accomplish the same thing with different methods, but there are also common "usages", even if Rust is young, what is the usage for passing and returning large data values From corey at octayn.net Sun Jun 1 01:40:35 2014 From: corey at octayn.net (Corey Richardson) Date: Sun, 1 Jun 2014 01:40:35 -0700 Subject: [rust-dev] Patterns that'll never match In-Reply-To: References: Message-ID: We already *do* do this, but not for guards because that's not possible. On Sun, Jun 1, 2014 at 12:02 AM, Tommi wrote: > Would it be possible to get a compile-time error for a `match` branch that can never be reached due to a previous branch encompassing it. For example, for the middle branch here: > > let n = 0; > match n { > x if x < 2 => (), > x if x < 1 => (), > _ => () > } > > If this is a too complicated a problem in the general case, then perhaps there could be warnings for some (implementation defined) simple cases. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- http://octayn.net/ From danielmicay at gmail.com Sun Jun 1 01:41:21 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sun, 01 Jun 2014 04:41:21 -0400 Subject: [rust-dev] Passing arguments bu reference In-Reply-To: <-7036542088947732759@unknownmsgid> References: <-7036542088947732759@unknownmsgid> Message-ID: <538AE731.6020905@gmail.com> On 01/06/14 04:34 AM, Christophe Pedretti wrote: > Hello all, > > I've read this : > http://words.steveklabnik.com/pointers-in-rust-a-guide > > I am coming from Java where everything is passed and returned by > reference (except for primitive data types), no choice. > > I know that with C, you have to use pointers to avoid passing and > returning by value. > > When i read the mentionned guide, things are not so evident with Rust. 
> > So, to be precise, imagine i need to write ? fonction which takes ? > big Vec (In my case it?s an SQL BLOB) as argument and returns a > big Vec > > Should i use > Fn my_func(src : &Vec) -> &Vec > Fn my_func(src : &Vec) -> ~Vec > Fn my_func(src : &Vec) ->Vec > Fn my_func(src : Vec) -> Vec > Fn my_func(src : ~Vec) -> ~Vec > Any other combination ? > > Thanks > > PS : i know that i have to use lifetimes and that ~ are now Box, i've > omitted them to simplify my demand > PS2 : genrally with a language you can accomplish the same thing with > different methods, but there are also common "usages", even if Rust is > young, what is the usage for passing and returning large data values Vec is always { ptr, len, cap }, it's never larger than 3 words. Rust *always* passes, assigns and returns exactly as C would. It's a shallow copy and never runs any magical operations as it can in C++. You should pass it by-value if the function needs to own a copy of the vector, and otherwise pass `&[T]` or `&mut [T]`. Using `&Vec` is an anti-pattern because it offers nothing over `&[T]` and is just less general. It does make sense to use `&mut Vec` if you want to alter the length in the function without taking ownership. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From dpx.infinity at gmail.com Sun Jun 1 01:43:44 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Sun, 1 Jun 2014 12:43:44 +0400 Subject: [rust-dev] Passing arguments bu reference In-Reply-To: <-7036542088947732759@unknownmsgid> References: <-7036542088947732759@unknownmsgid> Message-ID: Aw, Gmail makes it so easy to press "Reply" instead of "Reply to all". See below :) > Hi, Christophe, > > Because `Vec` looks like this: > > struct Vec { > len: uint, > cap: uint, > data: *mut T > } > > its actual size is just three words, so you can freely pass it around > regardless of number of items in it. So, the correct signature is this > one: > > fn my_func(src: Vec) -> Vec > > However, it actually depends on your actual use case. Passing a vector > by value means that the function you pass it to will consume it, and > you won't be able to use it again in the calling code: > > let v = vec!(1, 2, 3); > my_func(v); > v.push(4); // error: use of moved value `v` > > If you don't need this (and it happens very rarely, in fact), you > should use references. Whether they are mutable or not depends on what > you want to do with the vector. > > fn modify_vector_somehow(v: &mut Vec) { ... } > > let v = vec!(1, 2, 3); > modify_vector_somehow(&mut v); > v.push(4); // all is fine > > 2014-06-01 12:34 GMT+04:00 Christophe Pedretti : >> Hello all, >> >> I've read this : >> http://words.steveklabnik.com/pointers-in-rust-a-guide >> >> I am coming from Java where everything is passed and returned by >> reference (except for primitive data types), no choice. >> >> I know that with C, you have to use pointers to avoid passing and >> returning by value. >> >> When i read the mentionned guide, things are not so evident with Rust. >> >> So, to be precise, imagine i need to write ? fonction which takes ? >> big Vec (In my case it?s an SQL BLOB) as argument and returns a >> big Vec >> >> Should i use >> Fn my_func(src : &Vec) -> &Vec >> Fn my_func(src : &Vec) -> ~Vec >> Fn my_func(src : &Vec) ->Vec >> Fn my_func(src : Vec) -> Vec >> Fn my_func(src : ~Vec) -> ~Vec >> Any other combination ? 
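
A minimal sketch of the approach recommended in the replies above: borrow a slice on the way in, return an owned Vec on the way out. The function name and body are illustrative only, not from the thread.

    // Takes a borrowed slice, builds and returns a new Vec by value.
    // Returning the Vec is cheap: only its { ptr, len, cap } header moves.
    fn transform_blob(src: &[u8]) -> Vec<u8> {
        let mut out = Vec::with_capacity(src.len());
        for byte in src.iter() {
            out.push(*byte); // transform each byte here as needed
        }
        out
    }

The caller keeps ownership of its vector and passes `my_vec.as_slice()` (2014 syntax); no element data is copied when the result is returned.
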
>> >> Thanks >> >> PS : i know that i have to use lifetimes and that ~ are now Box, i've >> omitted them to simplify my demand >> PS2 : genrally with a language you can accomplish the same thing with >> different methods, but there are also common "usages", even if Rust is >> young, what is the usage for passing and returning large data values >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev 2014-06-01 12:34 GMT+04:00 Christophe Pedretti : > Hello all, > > I've read this : > http://words.steveklabnik.com/pointers-in-rust-a-guide > > I am coming from Java where everything is passed and returned by > reference (except for primitive data types), no choice. > > I know that with C, you have to use pointers to avoid passing and > returning by value. > > When i read the mentionned guide, things are not so evident with Rust. > > So, to be precise, imagine i need to write ? fonction which takes ? > big Vec (In my case it?s an SQL BLOB) as argument and returns a > big Vec > > Should i use > Fn my_func(src : &Vec) -> &Vec > Fn my_func(src : &Vec) -> ~Vec > Fn my_func(src : &Vec) ->Vec > Fn my_func(src : Vec) -> Vec > Fn my_func(src : ~Vec) -> ~Vec > Any other combination ? > > Thanks > > PS : i know that i have to use lifetimes and that ~ are now Box, i've > omitted them to simplify my demand > PS2 : genrally with a language you can accomplish the same thing with > different methods, but there are also common "usages", even if Rust is > young, what is the usage for passing and returning large data values > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From steve at steveklabnik.com Sun Jun 1 02:18:08 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sun, 1 Jun 2014 02:18:08 -0700 Subject: [rust-dev] Passing arguments bu reference In-Reply-To: References: <-7036542088947732759@unknownmsgid> Message-ID: one of the recent changes with box is that it does placement new. So generally, this is bad: fn foo(x: int) -> Box { box (x + 1) } let y = foo(5); Because it forces your caller to use a Box. Instead... fn foo(x: int) -> int { x + 1 } Because then your caller can choose: let y = foo(5); for a copy, and let y = box foo(5); for a boxed value. Or, in the future.... let y = box(GC) foo(5); At least, that's my current understanding. From glaebhoerl at gmail.com Sun Jun 1 03:48:52 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Sun, 1 Jun 2014 12:48:52 +0200 Subject: [rust-dev] Patterns that'll never match In-Reply-To: References: Message-ID: Well, not possible in the general case, to be more precise. It would be possible in theory to teach the compiler about e.g. the comparison operators on built-in integral types, which don't involve any user code. It would only be appropriate as a warning rather than an error due to the inherent incompleteness of the analysis and the arbitrariness of what things to include in it. No opinion about whether it would be worth doing. On Sun, Jun 1, 2014 at 10:40 AM, Corey Richardson wrote: > We already *do* do this, but not for guards because that's not possible. > > On Sun, Jun 1, 2014 at 12:02 AM, Tommi wrote: > > Would it be possible to get a compile-time error for a `match` branch > that can never be reached due to a previous branch encompassing it. 
For > example, for the middle branch here: > > > > let n = 0; > > match n { > > x if x < 2 => (), > > x if x < 1 => (), > > _ => () > > } > > > > If this is a too complicated a problem in the general case, then perhaps > there could be warnings for some (implementation defined) simple cases. > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > -- > http://octayn.net/ > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > On Sun, Jun 1, 2014 at 10:40 AM, Corey Richardson wrote: > We already *do* do this, but not for guards because that's not possible. > > On Sun, Jun 1, 2014 at 12:02 AM, Tommi wrote: > > Would it be possible to get a compile-time error for a `match` branch > that can never be reached due to a previous branch encompassing it. For > example, for the middle branch here: > > > > let n = 0; > > match n { > > x if x < 2 => (), > > x if x < 1 => (), > > _ => () > > } > > > > If this is a too complicated a problem in the general case, then perhaps > there could be warnings for some (implementation defined) simple cases. > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > -- > http://octayn.net/ > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Sun Jun 1 04:04:40 2014 From: rusty.gates at icloud.com (Tommi) Date: Sun, 01 Jun 2014 14:04:40 +0300 Subject: [rust-dev] Patterns that'll never match In-Reply-To: References: Message-ID: <31703434-2E80-4A2C-83A2-98CF60AE9405@icloud.com> On 2014-06-01, at 13:48, G?bor Lehel wrote: > It would be possible in theory to teach the compiler about e.g. the comparison operators on built-in integral types, which don't involve any user code. It would only be appropriate as a warning rather than an error due to the inherent incompleteness of the analysis and the arbitrariness of what things to include in it. No opinion about whether it would be worth doing. Perhaps this kind of thing would be better suited for a separate tool that could (contrary to a compiler) run this and other kinds of heuristics without having to worry about blowing up compilation times. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.monrocq at gmail.com Sun Jun 1 04:29:03 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Sun, 1 Jun 2014 13:29:03 +0200 Subject: [rust-dev] Patterns that'll never match In-Reply-To: <31703434-2E80-4A2C-83A2-98CF60AE9405@icloud.com> References: <31703434-2E80-4A2C-83A2-98CF60AE9405@icloud.com> Message-ID: On Sun, Jun 1, 2014 at 1:04 PM, Tommi wrote: > On 2014-06-01, at 13:48, G?bor Lehel wrote: > > It would be possible in theory to teach the compiler about e.g. the > comparison operators on built-in integral types, which don't involve any > user code. It would only be appropriate as a warning rather than an error > due to the inherent incompleteness of the analysis and the arbitrariness of > what things to include in it. No opinion about whether it would be worth > doing. 
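
As a concrete illustration of the limitation being discussed (example not from the thread; it uses the era's `int` type): the existing reachability check catches overlapping literal patterns, but arms with guards are accepted silently because guard expressions are treated as opaque.

    fn describe(n: int) -> &'static str {
        match n {
            // Both guarded arms compile without complaint, even though the
            // second can never match: the check does not reason about guards.
            x if x < 2 => "small",
            x if x < 1 => "smaller, but never reached",
            // A guard-free overlap, e.g. two plain `0 => ...` arms in a row,
            // is already reported as an unreachable pattern.
            _ => "other",
        }
    }
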
> > > Perhaps this kind of thing would be better suited for a separate tool that > could (contrary to a compiler) run this and other kinds of heuristics > without having to worry about blowing up compilation times. > > This is typically the domain of either static analysis or runtime instrumentation (branch coverage tools) in the arbitrary case, indeed. -- Matthieu > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.monrocq at gmail.com Sun Jun 1 04:36:05 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Sun, 1 Jun 2014 13:36:05 +0200 Subject: [rust-dev] A better type system In-Reply-To: <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> References: <7BB08DE7-7315-4F38-85AF-78BC633141CE@icloud.com> <538A14E5.7070009@mozilla.com> <5CFCDE64-BDEB-483E-BA92-424CA15F4529@icloud.com> <538A518E.90706@mozilla.com> <7D76B3E2-06C9-4382-8E68-F4F2E9003E26@icloud.com> <3acefe6c-bfa2-491e-bf36-40d921efa332@email.android.com> <770D9752-D9A0-4A02-B2F6-AFA0A853DDF7@mozilla.com> <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> Message-ID: FYI: I did a RFC for separating mut and "only" some times ago: https://github.com/rust-lang/rfcs/pull/78# I invite the interested readers to check it out and read the comments (notably those by thestinger, aka Daniel Micay on this list). For now, my understanding was that proposals on the topic were suspended until the dev team manages to clear its plate of several big projects (such as DST), especially as thestinger had a proposal to change the way lambda captures are modeled so it no longer requires a "&uniq" (only accessible to the compiler). -- Matthieu On Sun, Jun 1, 2014 at 2:32 AM, Patrick Walton wrote: > Yes, you could eliminate (c) by prohibiting taking references to the > inside of sum types (really, any existential type). This is what Cyclone > did. For (e) I'm thinking of sum types in which the two variants have > different sizes (although maybe that doesn't work). > > We'd basically have to bring back the old &mut as a separate type of > pointer to make it work. Note that Niko was considering a system like this > in older blog posts pre-INHTWAMA. (Search for "restrict pointers" on his > blog.) > > Patrick > > On May 31, 2014 5:26:39 PM PDT, Cameron Zwarich > wrote: >> >> FWIW, I think you could eliminate (c) by prohibiting mutation of sum >> types. What case are you thinking of for (e)? >> >> For (d), this would probably have to be distinguished from the current >> &mut somehow, to allow for truly unique access paths to sum types or shared >> data, so you could preserve any aliasing optimizations for the current >> &mut. Of course, more functions might take the less restrictive version, >> eliminating the optimization that way. >> >> Not that I think that this is a great idea; I?m just wondering whether >> there are any caveats that have escaped my mental model of the borrow >> checker. >> >> Cameron >> >> On May 31, 2014, at 5:01 PM, Patrick Walton wrote: >> >> I assume what you're trying to say is that we should allow multiple >> mutable references to pointer-free data. (Note that, as Huon pointed out, >> this is not the same thing as the Copy bound.) 
>> >> That is potentially plausible, but (a) it adds more complexity to the >> borrow checker; (b) it's a fairly narrow use case, since it'd only be safe >> for pointer-free data; (c) it admits casts like 3u8 -> bool, casts to >> out-of-range enum values, denormal floats, and the like, all of which would >> have various annoying consequences; (d) it complicates or defeats >> optimizations based on pointer aliasing of &mut; (e) it allows >> uninitialized data to be read, introducing undefined behavior into the >> language. I don't think it's worth it. >> >> Patrick >> >> On May 31, 2014 4:42:10 PM PDT, Tommi wrote: >>> >>> On 2014-06-01, at 1:02, Patrick Walton wrote: >>> >>> fn my_transmute(value: T, other: U) -> U { >>> let mut x = Left(other); >>> let y = match x { >>> Left(ref mut y) => y, >>> Right(_) => fail!() >>> }; >>> *x = Right(value); >>> (*y).clone() >>> } >>> >>> >>> If `U` implements `Copy`, then I don't see a (memory-safety) issue here. >>> And if `U` doesn't implement `Copy`, then it's same situation as it was in >>> the earlier example given by Matthieu, where there was an assignment to an >>> `Option>` variable while a different reference pointing to that >>> variable existed. The compiler shouldn't allow that assignment just as in >>> your example the compiler shouldn't allow the assignment `x = >>> Right(value);` (after a separate reference pointing to the contents of `x` >>> has been created) if `U` is not a `Copy` type. >>> >>> But, like I said in an earlier post, even though I don't see this >>> (transmuting a `Copy` type in safe code) as a memory-safety issue, it is a >>> code correctness issue. So it's a compromise between preventing logic bugs >>> (in safe code) and the convenience of more liberal mutation. >>> >>> >> -- >> Sent from my Android phone with K-9 Mail. Please excuse my brevity. >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> > -- > Sent from my Android phone with K-9 Mail. Please excuse my brevity. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.pedretti at gmail.com Sun Jun 1 11:20:49 2014 From: christophe.pedretti at gmail.com (Christophe Pedretti) Date: Sun, 1 Jun 2014 20:20:49 +0200 Subject: [rust-dev] Using String and StrSlice Message-ID: Hi all, suppose i want to replace the i th character c (this character is ascii, so represented by exactly one byte) in a String named buf with character 'a' i can do this buf = buf.as_slice().slice_to(i).to_string().append("a").append(buf.as_slice().slice_from(i+1)) if c is any UTF8 character, i can use buf = buf.as_slice().slice_to(i).to_string().append("a").append(buf.as_slice().slice_from(i+c.len_utf8_bytes())) It's quite complex, no better way to do ? Any future additional methods for String are planned ? Thanks -- Christophe -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholasbishop at gmail.com Sun Jun 1 12:48:35 2014 From: nicholasbishop at gmail.com (Nicholas Bishop) Date: Sun, 1 Jun 2014 15:48:35 -0400 Subject: [rust-dev] Calling a method while iterating over a field of the object Message-ID: I'm looking for a little borrow-checker advice. 
Here's a reasonably minimal program that demonstrates the problem: extern crate collections; use collections::HashMap; struct G { verts: HashMap, edges: Vec<(int, int)>, next_vert_id: int } impl G { fn new() -> G { G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} } fn add_vert(&mut self, s: &str) -> int { let id = self.next_vert_id; self.next_vert_id += 1; self.verts.insert(id, String::from_str(s)); id } fn add_edge(&mut self, v0: int, v1: int) { self.edges.push((v0, v1)) } } fn main() { let mut g = G::new(); { let v0 = g.add_vert("vert 0"); let v1 = g.add_vert("vert 1"); g.add_edge(v0, v1); } for &(v0, v1) in g.edges.iter() { g.add_vert("edge vert"); } } This fails to compile: $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) host: x86_64-unknown-linux-gnu $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because `g.edges` is also borrowed as immutable graph.rs:39 g.add_vert("edge vert"); ^ graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; the immutable borrow prevents subsequent moves or mutable borrows of `g.edges` until the borrow ends graph.rs:38 for &(v0, v1) in g.edges.iter() { ^~~~~~~ graph.rs:41:2: 41:2 note: previous borrow ends here graph.rs:38 for &(v0, v1) in g.edges.iter() { graph.rs:39 g.add_vert("edge vert"); graph.rs:40 } graph.rs:41 } ^ error: aborting due to previous error My understanding of the error is: G::add_vert is being given a mutable reference to "g", which means it could do something naughty like clear g.edges, which would screw up the loop iteration that is happening in main(). That seems like a pretty reasonable thing to prevent, but it's not clear to me how I should restructure the program to work around the error. In this minimal example I could copy the code out of G::add_vert and stick it directly inside the loop, but that's clearly not the general solution. Thanks, -Nicholas From zwarich at mozilla.com Sun Jun 1 13:03:06 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sun, 1 Jun 2014 13:03:06 -0700 Subject: [rust-dev] Calling a method while iterating over a field of the object In-Reply-To: References: Message-ID: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> The simplest thing to do is probably to build an intermediate vector of vertices to insert and then push them all after you are done iterating over the edges. Cameron > On Jun 1, 2014, at 12:48 PM, Nicholas Bishop wrote: > > I'm looking for a little borrow-checker advice. 
Here's a reasonably > minimal program that demonstrates the problem: > > extern crate collections; > > use collections::HashMap; > > struct G { > verts: HashMap, > edges: Vec<(int, int)>, > > next_vert_id: int > } > > impl G { > fn new() -> G { > G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} > } > > fn add_vert(&mut self, s: &str) -> int { > let id = self.next_vert_id; > self.next_vert_id += 1; > self.verts.insert(id, String::from_str(s)); > id > } > > fn add_edge(&mut self, v0: int, v1: int) { > self.edges.push((v0, v1)) > } > } > > fn main() { > let mut g = G::new(); > > { > let v0 = g.add_vert("vert 0"); > let v1 = g.add_vert("vert 1"); > g.add_edge(v0, v1); > } > > for &(v0, v1) in g.edges.iter() { > g.add_vert("edge vert"); > } > } > > This fails to compile: > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v > rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) > host: x86_64-unknown-linux-gnu > > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs > graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because > `g.edges` is also borrowed as immutable > graph.rs:39 g.add_vert("edge vert"); > ^ > graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; > the immutable borrow prevents subsequent moves or mutable borrows of > `g.edges` until the borrow ends > graph.rs:38 for &(v0, v1) in g.edges.iter() { > ^~~~~~~ > graph.rs:41:2: 41:2 note: previous borrow ends here > graph.rs:38 for &(v0, v1) in g.edges.iter() { > graph.rs:39 g.add_vert("edge vert"); > graph.rs:40 } > graph.rs:41 } > ^ > error: aborting due to previous error > > My understanding of the error is: G::add_vert is being given a mutable > reference to "g", which means it could do something naughty like clear > g.edges, which would screw up the loop iteration that is happening in > main(). > > That seems like a pretty reasonable thing to prevent, but it's not > clear to me how I should restructure the program to work around the > error. In this minimal example I could copy the code out of > G::add_vert and stick it directly inside the loop, but that's clearly > not the general solution. > > Thanks, > -Nicholas > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From christophe.pedretti at gmail.com Sun Jun 1 13:39:55 2014 From: christophe.pedretti at gmail.com (Christophe Pedretti) Date: Sun, 1 Jun 2014 22:39:55 +0200 Subject: [rust-dev] Calling a method while iterating over a field of the object In-Reply-To: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> References: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> Message-ID: and using mut_iter() instead of iter() is not enough ? 2014-06-01 22:03 GMT+02:00 Cameron Zwarich : > The simplest thing to do is probably to build an intermediate vector of > vertices to insert and then push them all after you are done iterating over > the edges. > > Cameron > > > On Jun 1, 2014, at 12:48 PM, Nicholas Bishop > wrote: > > > > I'm looking for a little borrow-checker advice. 
Here's a reasonably > > minimal program that demonstrates the problem: > > > > extern crate collections; > > > > use collections::HashMap; > > > > struct G { > > verts: HashMap, > > edges: Vec<(int, int)>, > > > > next_vert_id: int > > } > > > > impl G { > > fn new() -> G { > > G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} > > } > > > > fn add_vert(&mut self, s: &str) -> int { > > let id = self.next_vert_id; > > self.next_vert_id += 1; > > self.verts.insert(id, String::from_str(s)); > > id > > } > > > > fn add_edge(&mut self, v0: int, v1: int) { > > self.edges.push((v0, v1)) > > } > > } > > > > fn main() { > > let mut g = G::new(); > > > > { > > let v0 = g.add_vert("vert 0"); > > let v1 = g.add_vert("vert 1"); > > g.add_edge(v0, v1); > > } > > > > for &(v0, v1) in g.edges.iter() { > > g.add_vert("edge vert"); > > } > > } > > > > This fails to compile: > > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v > > rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) > > host: x86_64-unknown-linux-gnu > > > > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs > > graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because > > `g.edges` is also borrowed as immutable > > graph.rs:39 g.add_vert("edge vert"); > > ^ > > graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; > > the immutable borrow prevents subsequent moves or mutable borrows of > > `g.edges` until the borrow ends > > graph.rs:38 for &(v0, v1) in g.edges.iter() { > > ^~~~~~~ > > graph.rs:41:2: 41:2 note: previous borrow ends here > > graph.rs:38 for &(v0, v1) in g.edges.iter() { > > graph.rs:39 g.add_vert("edge vert"); > > graph.rs:40 } > > graph.rs:41 } > > ^ > > error: aborting due to previous error > > > > My understanding of the error is: G::add_vert is being given a mutable > > reference to "g", which means it could do something naughty like clear > > g.edges, which would screw up the loop iteration that is happening in > > main(). > > > > That seems like a pretty reasonable thing to prevent, but it's not > > clear to me how I should restructure the program to work around the > > error. In this minimal example I could copy the code out of > > G::add_vert and stick it directly inside the loop, but that's clearly > > not the general solution. > > > > Thanks, > > -Nicholas > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Sun Jun 1 13:48:03 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sun, 1 Jun 2014 13:48:03 -0700 Subject: [rust-dev] Calling a method while iterating over a field of the object In-Reply-To: References: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> Message-ID: <4F6E6CFD-896C-4E9B-A6CD-075219745C9C@mozilla.com> `mut_iter` only gives you mutable references to the elements of the container; it doesn?t allow you to reborrow the container itself mutably inside of the loop. Cameron On Jun 1, 2014, at 1:39 PM, Christophe Pedretti wrote: > and using mut_iter() instead of iter() is not enough ? > > > 2014-06-01 22:03 GMT+02:00 Cameron Zwarich : > The simplest thing to do is probably to build an intermediate vector of vertices to insert and then push them all after you are done iterating over the edges. 
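
A rough sketch of that two-pass approach, reusing the `G` type from Nicholas's program (the helper name and the generated labels are made up; 2014-era syntax):

    fn add_edge_verts(g: &mut G) {
        // Pass 1: only an immutable borrow of g.edges is needed to decide
        // what to insert.
        let mut pending = Vec::new();
        for &(v0, v1) in g.edges.iter() {
            pending.push(format!("edge vert {}-{}", v0, v1));
        }
        // The immutable borrow has ended, so g may now be borrowed mutably.
        for name in pending.iter() {
            g.add_vert(name.as_slice());
        }
    }

The cost is one temporary vector of the items to insert, which is usually far smaller than copying the whole graph.
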
> > Cameron > > > On Jun 1, 2014, at 12:48 PM, Nicholas Bishop wrote: > > > > I'm looking for a little borrow-checker advice. Here's a reasonably > > minimal program that demonstrates the problem: > > > > extern crate collections; > > > > use collections::HashMap; > > > > struct G { > > verts: HashMap, > > edges: Vec<(int, int)>, > > > > next_vert_id: int > > } > > > > impl G { > > fn new() -> G { > > G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} > > } > > > > fn add_vert(&mut self, s: &str) -> int { > > let id = self.next_vert_id; > > self.next_vert_id += 1; > > self.verts.insert(id, String::from_str(s)); > > id > > } > > > > fn add_edge(&mut self, v0: int, v1: int) { > > self.edges.push((v0, v1)) > > } > > } > > > > fn main() { > > let mut g = G::new(); > > > > { > > let v0 = g.add_vert("vert 0"); > > let v1 = g.add_vert("vert 1"); > > g.add_edge(v0, v1); > > } > > > > for &(v0, v1) in g.edges.iter() { > > g.add_vert("edge vert"); > > } > > } > > > > This fails to compile: > > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v > > rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) > > host: x86_64-unknown-linux-gnu > > > > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs > > graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because > > `g.edges` is also borrowed as immutable > > graph.rs:39 g.add_vert("edge vert"); > > ^ > > graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; > > the immutable borrow prevents subsequent moves or mutable borrows of > > `g.edges` until the borrow ends > > graph.rs:38 for &(v0, v1) in g.edges.iter() { > > ^~~~~~~ > > graph.rs:41:2: 41:2 note: previous borrow ends here > > graph.rs:38 for &(v0, v1) in g.edges.iter() { > > graph.rs:39 g.add_vert("edge vert"); > > graph.rs:40 } > > graph.rs:41 } > > ^ > > error: aborting due to previous error > > > > My understanding of the error is: G::add_vert is being given a mutable > > reference to "g", which means it could do something naughty like clear > > g.edges, which would screw up the loop iteration that is happening in > > main(). > > > > That seems like a pretty reasonable thing to prevent, but it's not > > clear to me how I should restructure the program to work around the > > error. In this minimal example I could copy the code out of > > G::add_vert and stick it directly inside the loop, but that's clearly > > not the general solution. > > > > Thanks, > > -Nicholas > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholasbishop at gmail.com Sun Jun 1 14:15:43 2014 From: nicholasbishop at gmail.com (Nicholas Bishop) Date: Sun, 1 Jun 2014 17:15:43 -0400 Subject: [rust-dev] Calling a method while iterating over a field of the object In-Reply-To: <4F6E6CFD-896C-4E9B-A6CD-075219745C9C@mozilla.com> References: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> <4F6E6CFD-896C-4E9B-A6CD-075219745C9C@mozilla.com> Message-ID: Building an intermediate would work, but it implies extra overhead. 
If this was a large graph instead of just one edge then it could be expensive to copy from the intermediate back into the original object. Are there any alternatives to consider? On Sun, Jun 1, 2014 at 4:48 PM, Cameron Zwarich wrote: > `mut_iter` only gives you mutable references to the elements of the > container; it doesn?t allow you to reborrow the container itself mutably > inside of the loop. > > Cameron > > On Jun 1, 2014, at 1:39 PM, Christophe Pedretti > wrote: > > and using mut_iter() instead of iter() is not enough ? > > > 2014-06-01 22:03 GMT+02:00 Cameron Zwarich : >> >> The simplest thing to do is probably to build an intermediate vector of >> vertices to insert and then push them all after you are done iterating over >> the edges. >> >> Cameron >> >> > On Jun 1, 2014, at 12:48 PM, Nicholas Bishop >> > wrote: >> > >> > I'm looking for a little borrow-checker advice. Here's a reasonably >> > minimal program that demonstrates the problem: >> > >> > extern crate collections; >> > >> > use collections::HashMap; >> > >> > struct G { >> > verts: HashMap, >> > edges: Vec<(int, int)>, >> > >> > next_vert_id: int >> > } >> > >> > impl G { >> > fn new() -> G { >> > G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} >> > } >> > >> > fn add_vert(&mut self, s: &str) -> int { >> > let id = self.next_vert_id; >> > self.next_vert_id += 1; >> > self.verts.insert(id, String::from_str(s)); >> > id >> > } >> > >> > fn add_edge(&mut self, v0: int, v1: int) { >> > self.edges.push((v0, v1)) >> > } >> > } >> > >> > fn main() { >> > let mut g = G::new(); >> > >> > { >> > let v0 = g.add_vert("vert 0"); >> > let v1 = g.add_vert("vert 1"); >> > g.add_edge(v0, v1); >> > } >> > >> > for &(v0, v1) in g.edges.iter() { >> > g.add_vert("edge vert"); >> > } >> > } >> > >> > This fails to compile: >> > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v >> > rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) >> > host: x86_64-unknown-linux-gnu >> > >> > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs >> > graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because >> > `g.edges` is also borrowed as immutable >> > graph.rs:39 g.add_vert("edge vert"); >> > ^ >> > graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; >> > the immutable borrow prevents subsequent moves or mutable borrows of >> > `g.edges` until the borrow ends >> > graph.rs:38 for &(v0, v1) in g.edges.iter() { >> > ^~~~~~~ >> > graph.rs:41:2: 41:2 note: previous borrow ends here >> > graph.rs:38 for &(v0, v1) in g.edges.iter() { >> > graph.rs:39 g.add_vert("edge vert"); >> > graph.rs:40 } >> > graph.rs:41 } >> > ^ >> > error: aborting due to previous error >> > >> > My understanding of the error is: G::add_vert is being given a mutable >> > reference to "g", which means it could do something naughty like clear >> > g.edges, which would screw up the loop iteration that is happening in >> > main(). >> > >> > That seems like a pretty reasonable thing to prevent, but it's not >> > clear to me how I should restructure the program to work around the >> > error. In this minimal example I could copy the code out of >> > G::add_vert and stick it directly inside the loop, but that's clearly >> > not the general solution. 
>> > >> > Thanks, >> > -Nicholas >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From zwarich at mozilla.com Sun Jun 1 14:26:06 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sun, 1 Jun 2014 14:26:06 -0700 Subject: [rust-dev] Calling a method while iterating over a field of the object In-Reply-To: References: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> <4F6E6CFD-896C-4E9B-A6CD-075219745C9C@mozilla.com> Message-ID: <379E9AE1-F241-4CF9-8746-01BC575F7E85@mozilla.com> It?s difficult to do something better without somewhat breaking the encapsulation of your graph type, but you could split G into edge and vertex data structures and have the functions that add vertices / edges operate on part of . Then given an &mut G, you could reborrow the vertex data and the edge data with &mut pointers separately. This is tricky because not all implementations of a graph interface allow separate modification of vertex and edge data, so to exploit this you have to expose your representation somewhat. Cameron On Jun 1, 2014, at 2:15 PM, Nicholas Bishop wrote: > Building an intermediate would work, but it implies extra overhead. If > this was a large graph instead of just one edge then it could be > expensive to copy from the intermediate back into the original object. > Are there any alternatives to consider? > > On Sun, Jun 1, 2014 at 4:48 PM, Cameron Zwarich wrote: >> `mut_iter` only gives you mutable references to the elements of the >> container; it doesn?t allow you to reborrow the container itself mutably >> inside of the loop. >> >> Cameron >> >> On Jun 1, 2014, at 1:39 PM, Christophe Pedretti >> wrote: >> >> and using mut_iter() instead of iter() is not enough ? >> >> >> 2014-06-01 22:03 GMT+02:00 Cameron Zwarich : >>> >>> The simplest thing to do is probably to build an intermediate vector of >>> vertices to insert and then push them all after you are done iterating over >>> the edges. >>> >>> Cameron >>> >>>> On Jun 1, 2014, at 12:48 PM, Nicholas Bishop >>>> wrote: >>>> >>>> I'm looking for a little borrow-checker advice. 
Here's a reasonably >>>> minimal program that demonstrates the problem: >>>> >>>> extern crate collections; >>>> >>>> use collections::HashMap; >>>> >>>> struct G { >>>> verts: HashMap, >>>> edges: Vec<(int, int)>, >>>> >>>> next_vert_id: int >>>> } >>>> >>>> impl G { >>>> fn new() -> G { >>>> G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} >>>> } >>>> >>>> fn add_vert(&mut self, s: &str) -> int { >>>> let id = self.next_vert_id; >>>> self.next_vert_id += 1; >>>> self.verts.insert(id, String::from_str(s)); >>>> id >>>> } >>>> >>>> fn add_edge(&mut self, v0: int, v1: int) { >>>> self.edges.push((v0, v1)) >>>> } >>>> } >>>> >>>> fn main() { >>>> let mut g = G::new(); >>>> >>>> { >>>> let v0 = g.add_vert("vert 0"); >>>> let v1 = g.add_vert("vert 1"); >>>> g.add_edge(v0, v1); >>>> } >>>> >>>> for &(v0, v1) in g.edges.iter() { >>>> g.add_vert("edge vert"); >>>> } >>>> } >>>> >>>> This fails to compile: >>>> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v >>>> rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) >>>> host: x86_64-unknown-linux-gnu >>>> >>>> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs >>>> graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because >>>> `g.edges` is also borrowed as immutable >>>> graph.rs:39 g.add_vert("edge vert"); >>>> ^ >>>> graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; >>>> the immutable borrow prevents subsequent moves or mutable borrows of >>>> `g.edges` until the borrow ends >>>> graph.rs:38 for &(v0, v1) in g.edges.iter() { >>>> ^~~~~~~ >>>> graph.rs:41:2: 41:2 note: previous borrow ends here >>>> graph.rs:38 for &(v0, v1) in g.edges.iter() { >>>> graph.rs:39 g.add_vert("edge vert"); >>>> graph.rs:40 } >>>> graph.rs:41 } >>>> ^ >>>> error: aborting due to previous error >>>> >>>> My understanding of the error is: G::add_vert is being given a mutable >>>> reference to "g", which means it could do something naughty like clear >>>> g.edges, which would screw up the loop iteration that is happening in >>>> main(). >>>> >>>> That seems like a pretty reasonable thing to prevent, but it's not >>>> clear to me how I should restructure the program to work around the >>>> error. In this minimal example I could copy the code out of >>>> G::add_vert and stick it directly inside the loop, but that's clearly >>>> not the general solution. 
>>>> >>>> Thanks, >>>> -Nicholas >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> From singingboyo at gmail.com Sun Jun 1 14:33:36 2014 From: singingboyo at gmail.com (Brandon Sanderson) Date: Sun, 1 Jun 2014 14:33:36 -0700 Subject: [rust-dev] Calling a method while iterating over a field of the object In-Reply-To: <379E9AE1-F241-4CF9-8746-01BC575F7E85@mozilla.com> References: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> <4F6E6CFD-896C-4E9B-A6CD-075219745C9C@mozilla.com> <379E9AE1-F241-4CF9-8746-01BC575F7E85@mozilla.com> Message-ID: I haven't tried this at all, but could iterating over the range [ 0, g.edges.len() ) work? On Sun, Jun 1, 2014 at 2:26 PM, Cameron Zwarich wrote: > It?s difficult to do something better without somewhat breaking the > encapsulation of your graph type, but you could split G into edge and > vertex data structures and have the functions that add vertices / edges > operate on part of . Then given an &mut G, you could reborrow the vertex > data and the edge data with &mut pointers separately. > > This is tricky because not all implementations of a graph interface allow > separate modification of vertex and edge data, so to exploit this you have > to expose your representation somewhat. > > Cameron > > On Jun 1, 2014, at 2:15 PM, Nicholas Bishop > wrote: > > > Building an intermediate would work, but it implies extra overhead. If > > this was a large graph instead of just one edge then it could be > > expensive to copy from the intermediate back into the original object. > > Are there any alternatives to consider? > > > > On Sun, Jun 1, 2014 at 4:48 PM, Cameron Zwarich > wrote: > >> `mut_iter` only gives you mutable references to the elements of the > >> container; it doesn?t allow you to reborrow the container itself mutably > >> inside of the loop. > >> > >> Cameron > >> > >> On Jun 1, 2014, at 1:39 PM, Christophe Pedretti > >> wrote: > >> > >> and using mut_iter() instead of iter() is not enough ? > >> > >> > >> 2014-06-01 22:03 GMT+02:00 Cameron Zwarich : > >>> > >>> The simplest thing to do is probably to build an intermediate vector of > >>> vertices to insert and then push them all after you are done iterating > over > >>> the edges. > >>> > >>> Cameron > >>> > >>>> On Jun 1, 2014, at 12:48 PM, Nicholas Bishop < > nicholasbishop at gmail.com> > >>>> wrote: > >>>> > >>>> I'm looking for a little borrow-checker advice. 
Here's a reasonably > >>>> minimal program that demonstrates the problem: > >>>> > >>>> extern crate collections; > >>>> > >>>> use collections::HashMap; > >>>> > >>>> struct G { > >>>> verts: HashMap, > >>>> edges: Vec<(int, int)>, > >>>> > >>>> next_vert_id: int > >>>> } > >>>> > >>>> impl G { > >>>> fn new() -> G { > >>>> G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} > >>>> } > >>>> > >>>> fn add_vert(&mut self, s: &str) -> int { > >>>> let id = self.next_vert_id; > >>>> self.next_vert_id += 1; > >>>> self.verts.insert(id, String::from_str(s)); > >>>> id > >>>> } > >>>> > >>>> fn add_edge(&mut self, v0: int, v1: int) { > >>>> self.edges.push((v0, v1)) > >>>> } > >>>> } > >>>> > >>>> fn main() { > >>>> let mut g = G::new(); > >>>> > >>>> { > >>>> let v0 = g.add_vert("vert 0"); > >>>> let v1 = g.add_vert("vert 1"); > >>>> g.add_edge(v0, v1); > >>>> } > >>>> > >>>> for &(v0, v1) in g.edges.iter() { > >>>> g.add_vert("edge vert"); > >>>> } > >>>> } > >>>> > >>>> This fails to compile: > >>>> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v > >>>> rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) > >>>> host: x86_64-unknown-linux-gnu > >>>> > >>>> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs > >>>> graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because > >>>> `g.edges` is also borrowed as immutable > >>>> graph.rs:39 g.add_vert("edge vert"); > >>>> ^ > >>>> graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; > >>>> the immutable borrow prevents subsequent moves or mutable borrows of > >>>> `g.edges` until the borrow ends > >>>> graph.rs:38 for &(v0, v1) in g.edges.iter() { > >>>> ^~~~~~~ > >>>> graph.rs:41:2: 41:2 note: previous borrow ends here > >>>> graph.rs:38 for &(v0, v1) in g.edges.iter() { > >>>> graph.rs:39 g.add_vert("edge vert"); > >>>> graph.rs:40 } > >>>> graph.rs:41 } > >>>> ^ > >>>> error: aborting due to previous error > >>>> > >>>> My understanding of the error is: G::add_vert is being given a mutable > >>>> reference to "g", which means it could do something naughty like clear > >>>> g.edges, which would screw up the loop iteration that is happening in > >>>> main(). > >>>> > >>>> That seems like a pretty reasonable thing to prevent, but it's not > >>>> clear to me how I should restructure the program to work around the > >>>> error. In this minimal example I could copy the code out of > >>>> G::add_vert and stick it directly inside the loop, but that's clearly > >>>> not the general solution. > >>>> > >>>> Thanks, > >>>> -Nicholas > >>>> _______________________________________________ > >>>> Rust-dev mailing list > >>>> Rust-dev at mozilla.org > >>>> https://mail.mozilla.org/listinfo/rust-dev > >>> _______________________________________________ > >>> Rust-dev mailing list > >>> Rust-dev at mozilla.org > >>> https://mail.mozilla.org/listinfo/rust-dev > >> > >> > >> _______________________________________________ > >> Rust-dev mailing list > >> Rust-dev at mozilla.org > >> https://mail.mozilla.org/listinfo/rust-dev > >> > >> > >> > >> _______________________________________________ > >> Rust-dev mailing list > >> Rust-dev at mozilla.org > >> https://mail.mozilla.org/listinfo/rust-dev > >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicholasbishop at gmail.com Sun Jun 1 15:02:54 2014 From: nicholasbishop at gmail.com (Nicholas Bishop) Date: Sun, 1 Jun 2014 18:02:54 -0400 Subject: [rust-dev] Error while trying to split source code into multiple files Message-ID: Here's example code: /src/main.rs: mod foo; fn main() { foo::foo(); } /src/bar.rs: pub fn bar() { } /src/foo.rs: mod bar; pub fn foo() { bar::bar(); } This fails: $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) host: x86_64-unknown-linux-gnu $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc main.rs foo.rs:1:5: 1:8 error: cannot declare a new module at this location foo.rs:1 mod bar; ^~~ foo.rs:1:5: 1:8 note: maybe move this module `foo` to its own directory via `foo/mod.rs` foo.rs:1 mod bar; ^~~ foo.rs:1:5: 1:8 note: ... or maybe `use` the module `bar` instead of possibly redeclaring it foo.rs:1 mod bar; ^~~ error: aborting due to previous error I tried the first suggestion (moving foo.rs to foo/mod.rs), this fails too: foo/mod.rs:1:5: 1:8 error: file not found for module `bar` foo/mod.rs:1 mod bar; ^~~ The second suggestion, which I took to mean replacing "mod bar" with "use bar", also failed: brokencrate/foo.rs:1:5: 1:8 error: unresolved import: there is no `bar` in `???` brokencrate/foo.rs:1 use bar; ^~~ brokencrate/foo.rs:1:5: 1:8 error: failed to resolve import `bar` brokencrate/foo.rs:1 use bar; ^~~ error: aborting due to 2 previous errors I'm guessing that this failure is related to this RFC: https://github.com/rust-lang/rfcs/blob/master/complete/0016-module-file-system-hierarchy.md Unfortunately the RFC describes "a common newbie mistake" but not what a newbie might do to correct this mistake. I also looked through http://doc.rust-lang.org/tutorial.html#crates-and-the-module-system, but didn't see this question directly addressed. Thanks, -Nicholas From nicholasbishop at gmail.com Sun Jun 1 15:32:36 2014 From: nicholasbishop at gmail.com (Nicholas Bishop) Date: Sun, 1 Jun 2014 18:32:36 -0400 Subject: [rust-dev] Calling a method while iterating over a field of the object In-Reply-To: References: <1621CCF4-E089-4E69-BA0F-4D98C6C87538@mozilla.com> <4F6E6CFD-896C-4E9B-A6CD-075219745C9C@mozilla.com> <379E9AE1-F241-4CF9-8746-01BC575F7E85@mozilla.com> Message-ID: > I haven't tried this at all, but could iterating over the range [ 0, > g.edges.len() ) work? An interesting suggestion, but doesn't extend easily to other types of containers like HashMaps. > It?s difficult to do something better without somewhat breaking the > encapsulation of your graph type, but you could split G into edge and vertex > data structures and have the functions that add vertices / edges operate on > part of . Then given an &mut G, you could reborrow the vertex data and the > edge data with &mut pointers separately. I'll give that a try. It should be enough to get me past immediate problems at least. -Nicholas >> >> This is tricky because not all implementations of a graph interface allow >> separate modification of vertex and edge data, so to exploit this you have >> to expose your representation somewhat. >> >> Cameron >> >> On Jun 1, 2014, at 2:15 PM, Nicholas Bishop >> wrote: >> >> > Building an intermediate would work, but it implies extra overhead. If >> > this was a large graph instead of just one edge then it could be >> > expensive to copy from the intermediate back into the original object. >> > Are there any alternatives to consider? 
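
For reference, a sketch of the field-splitting idea Cameron describes above, adapted from Nicholas's example (the exact split and the names are illustrative, not an agreed design): once the vertex state sits in its own field, borrowing it mutably does not conflict with reading `edges`.

    extern crate collections;

    use collections::HashMap;

    struct VertStore {
        verts: HashMap<int, String>,
        next_vert_id: int,
    }

    impl VertStore {
        fn add_vert(&mut self, s: &str) -> int {
            let id = self.next_vert_id;
            self.next_vert_id += 1;
            self.verts.insert(id, String::from_str(s));
            id
        }
    }

    struct G {
        vert_store: VertStore,
        edges: Vec<(int, int)>,
    }

    fn add_edge_verts(g: &mut G) {
        // g.vert_store and g.edges are distinct fields, so the borrow
        // checker allows a mutable borrow of one alongside an immutable
        // borrow of the other.
        let vs = &mut g.vert_store;
        for _edge in g.edges.iter() {
            vs.add_vert("edge vert");
        }
    }
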
>> > >> > On Sun, Jun 1, 2014 at 4:48 PM, Cameron Zwarich >> > wrote: >> >> `mut_iter` only gives you mutable references to the elements of the >> >> container; it doesn?t allow you to reborrow the container itself >> >> mutably >> >> inside of the loop. >> >> >> >> Cameron >> >> >> >> On Jun 1, 2014, at 1:39 PM, Christophe Pedretti >> >> wrote: >> >> >> >> and using mut_iter() instead of iter() is not enough ? >> >> >> >> >> >> 2014-06-01 22:03 GMT+02:00 Cameron Zwarich : >> >>> >> >>> The simplest thing to do is probably to build an intermediate vector >> >>> of >> >>> vertices to insert and then push them all after you are done iterating >> >>> over >> >>> the edges. >> >>> >> >>> Cameron >> >>> >> >>>> On Jun 1, 2014, at 12:48 PM, Nicholas Bishop >> >>>> >> >>>> wrote: >> >>>> >> >>>> I'm looking for a little borrow-checker advice. Here's a reasonably >> >>>> minimal program that demonstrates the problem: >> >>>> >> >>>> extern crate collections; >> >>>> >> >>>> use collections::HashMap; >> >>>> >> >>>> struct G { >> >>>> verts: HashMap, >> >>>> edges: Vec<(int, int)>, >> >>>> >> >>>> next_vert_id: int >> >>>> } >> >>>> >> >>>> impl G { >> >>>> fn new() -> G { >> >>>> G{verts: HashMap::new(), edges: Vec::new(), next_vert_id: 0} >> >>>> } >> >>>> >> >>>> fn add_vert(&mut self, s: &str) -> int { >> >>>> let id = self.next_vert_id; >> >>>> self.next_vert_id += 1; >> >>>> self.verts.insert(id, String::from_str(s)); >> >>>> id >> >>>> } >> >>>> >> >>>> fn add_edge(&mut self, v0: int, v1: int) { >> >>>> self.edges.push((v0, v1)) >> >>>> } >> >>>> } >> >>>> >> >>>> fn main() { >> >>>> let mut g = G::new(); >> >>>> >> >>>> { >> >>>> let v0 = g.add_vert("vert 0"); >> >>>> let v1 = g.add_vert("vert 1"); >> >>>> g.add_edge(v0, v1); >> >>>> } >> >>>> >> >>>> for &(v0, v1) in g.edges.iter() { >> >>>> g.add_vert("edge vert"); >> >>>> } >> >>>> } >> >>>> >> >>>> This fails to compile: >> >>>> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v >> >>>> rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) >> >>>> host: x86_64-unknown-linux-gnu >> >>>> >> >>>> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc graph.rs >> >>>> graph.rs:39:9: 39:10 error: cannot borrow `g` as mutable because >> >>>> `g.edges` is also borrowed as immutable >> >>>> graph.rs:39 g.add_vert("edge vert"); >> >>>> ^ >> >>>> graph.rs:38:22: 38:29 note: previous borrow of `g.edges` occurs here; >> >>>> the immutable borrow prevents subsequent moves or mutable borrows of >> >>>> `g.edges` until the borrow ends >> >>>> graph.rs:38 for &(v0, v1) in g.edges.iter() { >> >>>> ^~~~~~~ >> >>>> graph.rs:41:2: 41:2 note: previous borrow ends here >> >>>> graph.rs:38 for &(v0, v1) in g.edges.iter() { >> >>>> graph.rs:39 g.add_vert("edge vert"); >> >>>> graph.rs:40 } >> >>>> graph.rs:41 } >> >>>> ^ >> >>>> error: aborting due to previous error >> >>>> >> >>>> My understanding of the error is: G::add_vert is being given a >> >>>> mutable >> >>>> reference to "g", which means it could do something naughty like >> >>>> clear >> >>>> g.edges, which would screw up the loop iteration that is happening in >> >>>> main(). >> >>>> >> >>>> That seems like a pretty reasonable thing to prevent, but it's not >> >>>> clear to me how I should restructure the program to work around the >> >>>> error. In this minimal example I could copy the code out of >> >>>> G::add_vert and stick it directly inside the loop, but that's clearly >> >>>> not the general solution. 
>> >>>> >> >>>> Thanks, >> >>>> -Nicholas >> >>>> _______________________________________________ >> >>>> Rust-dev mailing list >> >>>> Rust-dev at mozilla.org >> >>>> https://mail.mozilla.org/listinfo/rust-dev >> >>> _______________________________________________ >> >>> Rust-dev mailing list >> >>> Rust-dev at mozilla.org >> >>> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> >> >> _______________________________________________ >> >> Rust-dev mailing list >> >> Rust-dev at mozilla.org >> >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> >> >> >> >> _______________________________________________ >> >> Rust-dev mailing list >> >> Rust-dev at mozilla.org >> >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From sfackler at gmail.com Sun Jun 1 15:56:23 2014 From: sfackler at gmail.com (Steven Fackler) Date: Sun, 1 Jun 2014 15:56:23 -0700 Subject: [rust-dev] Error while trying to split source code into multiple files In-Reply-To: References: Message-ID: The directory layout of the project should match the module hierarchy. bar is a submodule of foo so it shouldn't live next to foo in the filesystem. There are a couple of filesystem setups that will work: src/ main.rs foo/ mod.rs bar.rs src/ main.rs foo/ mod.rs bar/ mod.rs The first configuration seems to be what most code uses. If bar ends up having submodules of its own, it would need to move to the second setup. Steven Fackler On Sun, Jun 1, 2014 at 3:02 PM, Nicholas Bishop wrote: > Here's example code: > > /src/main.rs: > mod foo; > fn main() { > foo::foo(); > } > > /src/bar.rs: > pub fn bar() { > } > > /src/foo.rs: > mod bar; > pub fn foo() { > bar::bar(); > } > > This fails: > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v > rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) > host: x86_64-unknown-linux-gnu > > $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc main.rs > foo.rs:1:5: 1:8 error: cannot declare a new module at this location > foo.rs:1 mod bar; > ^~~ > foo.rs:1:5: 1:8 note: maybe move this module `foo` to its own > directory via `foo/mod.rs` > foo.rs:1 mod bar; > ^~~ > foo.rs:1:5: 1:8 note: ... or maybe `use` the module `bar` instead of > possibly redeclaring it > foo.rs:1 mod bar; > ^~~ > error: aborting due to previous error > > I tried the first suggestion (moving foo.rs to foo/mod.rs), this fails > too: > foo/mod.rs:1:5: 1:8 error: file not found for module `bar` > foo/mod.rs:1 mod bar; > ^~~ > > The second suggestion, which I took to mean replacing "mod bar" with > "use bar", also failed: > brokencrate/foo.rs:1:5: 1:8 error: unresolved import: there is no `bar` > in `???` > brokencrate/foo.rs:1 use bar; > ^~~ > brokencrate/foo.rs:1:5: 1:8 error: failed to resolve import `bar` > brokencrate/foo.rs:1 use bar; > ^~~ > error: aborting due to 2 previous errors > > I'm guessing that this failure is related to this RFC: > > https://github.com/rust-lang/rfcs/blob/master/complete/0016-module-file-system-hierarchy.md > > Unfortunately the RFC describes "a common newbie mistake" but not what > a newbie might do to correct this mistake. I also looked through > http://doc.rust-lang.org/tutorial.html#crates-and-the-module-system, > but didn't see this question directly addressed. 
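
To make the first layout above concrete, the files could look like this (a sketch following the layout Steven describes, not code from the thread):

    // src/main.rs
    mod foo;

    fn main() {
        foo::foo();
    }

    // src/foo/mod.rs
    mod bar;

    pub fn foo() {
        bar::bar();
    }

    // src/foo/bar.rs
    pub fn bar() {
    }
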
> > Thanks, > -Nicholas > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholasbishop at gmail.com Sun Jun 1 17:25:48 2014 From: nicholasbishop at gmail.com (Nicholas Bishop) Date: Sun, 1 Jun 2014 20:25:48 -0400 Subject: [rust-dev] Error while trying to split source code into multiple files In-Reply-To: References: Message-ID: My intent wasn't to make bar a submodule of foo, but rather that foo & bar would be sibling modules (and foo just happens to use bar). Is there a way to do that? On Sun, Jun 1, 2014 at 6:56 PM, Steven Fackler wrote: > The directory layout of the project should match the module hierarchy. bar > is a submodule of foo so it shouldn't live next to foo in the filesystem. > There are a couple of filesystem setups that will work: > > src/ > main.rs > foo/ > mod.rs > bar.rs > > src/ > main.rs > foo/ > mod.rs > bar/ > mod.rs > > The first configuration seems to be what most code uses. If bar ends up > having submodules of its own, it would need to move to the second setup. > > Steven Fackler > > > On Sun, Jun 1, 2014 at 3:02 PM, Nicholas Bishop > wrote: >> >> Here's example code: >> >> /src/main.rs: >> mod foo; >> fn main() { >> foo::foo(); >> } >> >> /src/bar.rs: >> pub fn bar() { >> } >> >> /src/foo.rs: >> mod bar; >> pub fn foo() { >> bar::bar(); >> } >> >> This fails: >> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v >> rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) >> host: x86_64-unknown-linux-gnu >> >> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc main.rs >> foo.rs:1:5: 1:8 error: cannot declare a new module at this location >> foo.rs:1 mod bar; >> ^~~ >> foo.rs:1:5: 1:8 note: maybe move this module `foo` to its own >> directory via `foo/mod.rs` >> foo.rs:1 mod bar; >> ^~~ >> foo.rs:1:5: 1:8 note: ... or maybe `use` the module `bar` instead of >> possibly redeclaring it >> foo.rs:1 mod bar; >> ^~~ >> error: aborting due to previous error >> >> I tried the first suggestion (moving foo.rs to foo/mod.rs), this fails >> too: >> foo/mod.rs:1:5: 1:8 error: file not found for module `bar` >> foo/mod.rs:1 mod bar; >> ^~~ >> >> The second suggestion, which I took to mean replacing "mod bar" with >> "use bar", also failed: >> brokencrate/foo.rs:1:5: 1:8 error: unresolved import: there is no `bar` in >> `???` >> brokencrate/foo.rs:1 use bar; >> ^~~ >> brokencrate/foo.rs:1:5: 1:8 error: failed to resolve import `bar` >> brokencrate/foo.rs:1 use bar; >> ^~~ >> error: aborting due to 2 previous errors >> >> I'm guessing that this failure is related to this RFC: >> >> https://github.com/rust-lang/rfcs/blob/master/complete/0016-module-file-system-hierarchy.md >> >> Unfortunately the RFC describes "a common newbie mistake" but not what >> a newbie might do to correct this mistake. I also looked through >> http://doc.rust-lang.org/tutorial.html#crates-and-the-module-system, >> but didn't see this question directly addressed. >> >> Thanks, >> -Nicholas >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > From sfackler at gmail.com Sun Jun 1 17:27:44 2014 From: sfackler at gmail.com (Steven Fackler) Date: Sun, 1 Jun 2014 17:27:44 -0700 Subject: [rust-dev] Error while trying to split source code into multiple files In-Reply-To: References: Message-ID: Yep! 
main.rs: mod foo; mod bar; fn main() { foo::foo(); } foo.rs: use bar; // this use lets you refer to the bar fn as bar::bar() instead of ::bar::bar() pub fn foo() { bar::bar(); } bar.rs: pub fn bar() {} Steven Fackler On Sun, Jun 1, 2014 at 5:25 PM, Nicholas Bishop wrote: > My intent wasn't to make bar a submodule of foo, but rather that foo & > bar would be sibling modules (and foo just happens to use bar). Is > there a way to do that? > > On Sun, Jun 1, 2014 at 6:56 PM, Steven Fackler wrote: > > The directory layout of the project should match the module hierarchy. > bar > > is a submodule of foo so it shouldn't live next to foo in the filesystem. > > There are a couple of filesystem setups that will work: > > > > src/ > > main.rs > > foo/ > > mod.rs > > bar.rs > > > > src/ > > main.rs > > foo/ > > mod.rs > > bar/ > > mod.rs > > > > The first configuration seems to be what most code uses. If bar ends up > > having submodules of its own, it would need to move to the second setup. > > > > Steven Fackler > > > > > > On Sun, Jun 1, 2014 at 3:02 PM, Nicholas Bishop < > nicholasbishop at gmail.com> > > wrote: > >> > >> Here's example code: > >> > >> /src/main.rs: > >> mod foo; > >> fn main() { > >> foo::foo(); > >> } > >> > >> /src/bar.rs: > >> pub fn bar() { > >> } > >> > >> /src/foo.rs: > >> mod bar; > >> pub fn foo() { > >> bar::bar(); > >> } > >> > >> This fails: > >> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v > >> rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) > >> host: x86_64-unknown-linux-gnu > >> > >> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc main.rs > >> foo.rs:1:5: 1:8 error: cannot declare a new module at this location > >> foo.rs:1 mod bar; > >> ^~~ > >> foo.rs:1:5: 1:8 note: maybe move this module `foo` to its own > >> directory via `foo/mod.rs` > >> foo.rs:1 mod bar; > >> ^~~ > >> foo.rs:1:5: 1:8 note: ... or maybe `use` the module `bar` instead of > >> possibly redeclaring it > >> foo.rs:1 mod bar; > >> ^~~ > >> error: aborting due to previous error > >> > >> I tried the first suggestion (moving foo.rs to foo/mod.rs), this fails > >> too: > >> foo/mod.rs:1:5: 1:8 error: file not found for module `bar` > >> foo/mod.rs:1 mod bar; > >> ^~~ > >> > >> The second suggestion, which I took to mean replacing "mod bar" with > >> "use bar", also failed: > >> brokencrate/foo.rs:1:5: 1:8 error: unresolved import: there is no > `bar` in > >> `???` > >> brokencrate/foo.rs:1 use bar; > >> ^~~ > >> brokencrate/foo.rs:1:5: 1:8 error: failed to resolve import `bar` > >> brokencrate/foo.rs:1 use bar; > >> ^~~ > >> error: aborting due to 2 previous errors > >> > >> I'm guessing that this failure is related to this RFC: > >> > >> > https://github.com/rust-lang/rfcs/blob/master/complete/0016-module-file-system-hierarchy.md > >> > >> Unfortunately the RFC describes "a common newbie mistake" but not what > >> a newbie might do to correct this mistake. I also looked through > >> http://doc.rust-lang.org/tutorial.html#crates-and-the-module-system, > >> but didn't see this question directly addressed. > >> > >> Thanks, > >> -Nicholas > >> _______________________________________________ > >> Rust-dev mailing list > >> Rust-dev at mozilla.org > >> https://mail.mozilla.org/listinfo/rust-dev > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicholasbishop at gmail.com Sun Jun 1 17:39:28 2014 From: nicholasbishop at gmail.com (Nicholas Bishop) Date: Sun, 1 Jun 2014 20:39:28 -0400 Subject: [rust-dev] Error while trying to split source code into multiple files In-Reply-To: References: Message-ID: Perfect, thanks Steven. On Sun, Jun 1, 2014 at 8:27 PM, Steven Fackler wrote: > Yep! > > main.rs: > > mod foo; > mod bar; > > fn main() { > foo::foo(); > } > > foo.rs: > > use bar; // this use lets you refer to the bar fn as bar::bar() instead of > ::bar::bar() > > pub fn foo() { > bar::bar(); > } > > bar.rs: > > pub fn bar() {} > > Steven Fackler > > > On Sun, Jun 1, 2014 at 5:25 PM, Nicholas Bishop > wrote: >> >> My intent wasn't to make bar a submodule of foo, but rather that foo & >> bar would be sibling modules (and foo just happens to use bar). Is >> there a way to do that? >> >> On Sun, Jun 1, 2014 at 6:56 PM, Steven Fackler wrote: >> > The directory layout of the project should match the module hierarchy. >> > bar >> > is a submodule of foo so it shouldn't live next to foo in the >> > filesystem. >> > There are a couple of filesystem setups that will work: >> > >> > src/ >> > main.rs >> > foo/ >> > mod.rs >> > bar.rs >> > >> > src/ >> > main.rs >> > foo/ >> > mod.rs >> > bar/ >> > mod.rs >> > >> > The first configuration seems to be what most code uses. If bar ends up >> > having submodules of its own, it would need to move to the second setup. >> > >> > Steven Fackler >> > >> > >> > On Sun, Jun 1, 2014 at 3:02 PM, Nicholas Bishop >> > >> > wrote: >> >> >> >> Here's example code: >> >> >> >> /src/main.rs: >> >> mod foo; >> >> fn main() { >> >> foo::foo(); >> >> } >> >> >> >> /src/bar.rs: >> >> pub fn bar() { >> >> } >> >> >> >> /src/foo.rs: >> >> mod bar; >> >> pub fn foo() { >> >> bar::bar(); >> >> } >> >> >> >> This fails: >> >> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc -v >> >> rustc 0.11.0-pre-nightly (064dbb9 2014-06-01 00:56:42 -0700) >> >> host: x86_64-unknown-linux-gnu >> >> >> >> $ rust-nightly-x86_64-unknown-linux-gnu/bin/rustc main.rs >> >> foo.rs:1:5: 1:8 error: cannot declare a new module at this location >> >> foo.rs:1 mod bar; >> >> ^~~ >> >> foo.rs:1:5: 1:8 note: maybe move this module `foo` to its own >> >> directory via `foo/mod.rs` >> >> foo.rs:1 mod bar; >> >> ^~~ >> >> foo.rs:1:5: 1:8 note: ... or maybe `use` the module `bar` instead of >> >> possibly redeclaring it >> >> foo.rs:1 mod bar; >> >> ^~~ >> >> error: aborting due to previous error >> >> >> >> I tried the first suggestion (moving foo.rs to foo/mod.rs), this fails >> >> too: >> >> foo/mod.rs:1:5: 1:8 error: file not found for module `bar` >> >> foo/mod.rs:1 mod bar; >> >> ^~~ >> >> >> >> The second suggestion, which I took to mean replacing "mod bar" with >> >> "use bar", also failed: >> >> brokencrate/foo.rs:1:5: 1:8 error: unresolved import: there is no `bar` >> >> in >> >> `???` >> >> brokencrate/foo.rs:1 use bar; >> >> ^~~ >> >> brokencrate/foo.rs:1:5: 1:8 error: failed to resolve import `bar` >> >> brokencrate/foo.rs:1 use bar; >> >> ^~~ >> >> error: aborting due to 2 previous errors >> >> >> >> I'm guessing that this failure is related to this RFC: >> >> >> >> >> >> https://github.com/rust-lang/rfcs/blob/master/complete/0016-module-file-system-hierarchy.md >> >> >> >> Unfortunately the RFC describes "a common newbie mistake" but not what >> >> a newbie might do to correct this mistake. 
I also looked through >> >> http://doc.rust-lang.org/tutorial.html#crates-and-the-module-system, >> >> but didn't see this question directly addressed. >> >> >> >> Thanks, >> >> -Nicholas >> >> _______________________________________________ >> >> Rust-dev mailing list >> >> Rust-dev at mozilla.org >> >> https://mail.mozilla.org/listinfo/rust-dev >> > >> > > > From someone at mearie.org Sun Jun 1 21:32:54 2014 From: someone at mearie.org (Kang Seonghoon) Date: Mon, 2 Jun 2014 13:32:54 +0900 Subject: [rust-dev] Using String and StrSlice In-Reply-To: References: Message-ID: If your string is an ASCII, you can convert between `String` and `Vec` using `to_ascii()` and `as_str_ascii()` methods. You can then update an individual character in the vector. See `std::ascii` for related traits. Codepoint-wise operation is prone to error and I don't think such methods would be added in the future. In general, do not think `String` as a sequence of bytes or codepoints (Unicode scalar values to be exact) or characters. Think it as a black box containing some human text. `chars()` (a misnomer!) and `as_bytes()` should be used to extract a list of codepoints and UTF-8 bytes from the string. Codepoint-wise operations can always be done with `Vec`. 2014-06-02 3:20 GMT+09:00 Christophe Pedretti : > Hi all, > > suppose i want to replace the i th character c (this character is ascii, so > represented by exactly one byte) in a String named buf with character 'a' > > i can do this > buf = > buf.as_slice().slice_to(i).to_string().append("a").append(buf.as_slice().slice_from(i+1)) > > if c is any UTF8 character, i can use > buf = > buf.as_slice().slice_to(i).to_string().append("a").append(buf.as_slice().slice_from(i+c.len_utf8_bytes())) > > It's quite complex, no better way to do ? > > Any future additional methods for String are planned ? > > Thanks > > -- > Christophe > > > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- -- Kang Seonghoon | Software Engineer, iPlateia Inc. | http://mearie.org/ -- Opinions expressed in this email do not necessarily represent the views of my employer. -- From rusty.gates at icloud.com Mon Jun 2 00:44:26 2014 From: rusty.gates at icloud.com (Tommi) Date: Mon, 02 Jun 2014 10:44:26 +0300 Subject: [rust-dev] A better type system In-Reply-To: <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> References: <7BB08DE7-7315-4F38-85AF-78BC633141CE@icloud.com> <538A14E5.7070009@mozilla.com> <5CFCDE64-BDEB-483E-BA92-424CA15F4529@icloud.com> <538A518E.90706@mozilla.com> <7D76B3E2-06C9-4382-8E68-F4F2E9003E26@icloud.com> <3acefe6c-bfa2-491e-bf36-40d921efa332@email.android.com> <770D9752-D9A0-4A02-B2F6-AFA0A853DDF7@mozilla.com> <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> Message-ID: <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> In my original post I stated that it feels like there's something wrong with the language when it doesn't allow multiple mutable references to the same data, but I didn't really explain why it feels like that. So, I just want to add this simple example to help explain my position. 
It is just plain obvious to everybody that the following code snippet is memory-safe, but the compiler refuses to compile it due to "cannot borrow `stuff[..]` as mutable more than once at a time": let mut stuff = [1, 2, 3]; let r1 = stuff.mut_slice_to(2); let r2 = stuff.mut_slice_from(1); for i in std::iter::range(0u, 2) { if i % 2 == 0{ r1[i] += 1; } else { r2[i] += 2; } } It's not even possible to forcefully deallocate the memory that is being referenced by multiple reference variables here, and the memory being referenced is just plain old data. Nothing can go wrong here, and yet the compiler thinks something is potentially unsafe and refuses to compile. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Mon Jun 2 02:03:42 2014 From: rusty.gates at icloud.com (Tommi) Date: Mon, 02 Jun 2014 12:03:42 +0300 Subject: [rust-dev] A better type system In-Reply-To: <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> References: <7BB08DE7-7315-4F38-85AF-78BC633141CE@icloud.com> <538A14E5.7070009@mozilla.com> <5CFCDE64-BDEB-483E-BA92-424CA15F4529@icloud.com> <538A518E.90706@mozilla.com> <7D76B3E2-06C9-4382-8E68-F4F2E9003E26@icloud.com> <3acefe6c-bfa2-491e-bf36-40d921efa332@email.android.com> <770D9752-D9A0-4A02-B2F6-AFA0A853DDF7@mozilla.com> <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> Message-ID: <813B4210-A5CA-4DA7-AA47-6050355CE476@icloud.com> On 2014-06-02, at 10:44, Tommi wrote: > Nothing can go wrong here [..] I just watched this video https://www.youtube.com/watch?v=awviiko59p8 and now I get the impression is that perhaps preventing aliasing bugs is more important than the convenience of unrestricted, aliased mutation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Mon Jun 2 08:25:30 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Mon, 02 Jun 2014 08:25:30 -0700 Subject: [rust-dev] A better type system In-Reply-To: <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> References: <7BB08DE7-7315-4F38-85AF-78BC633141CE@icloud.com> <538A14E5.7070009@mozilla.com> <5CFCDE64-BDEB-483E-BA92-424CA15F4529@icloud.com> <538A518E.90706@mozilla.com> <7D76B3E2-06C9-4382-8E68-F4F2E9003E26@icloud.com> <3acefe6c-bfa2-491e-bf36-40d921efa332@email.android.com> <770D9752-D9A0-4A02-B2F6-AFA0A853DDF7@mozilla.com> <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> Message-ID: <538C976A.3020207@mozilla.com> On 6/2/14 12:44 AM, Tommi wrote: > In my original post I stated that it feels like there's something wrong > with the language when it doesn't allow multiple mutable references to > the same data, but I didn't really explain why it feels like that. So, I > just want to add this simple example to help explain my position. It is > just plain obvious to everybody that the following code snippet is > memory-safe, but the compiler refuses to compile it due to "cannot > borrow `stuff[..]` as mutable more than once at a time": > > let mut stuff = [1, 2, 3]; > let r1 = stuff.mut_slice_to(2); > let r2 = stuff.mut_slice_from(1); I'd like to have a function that splits up a vector in that way. That should be doable in the standard library using some unsafe code under the hood. 
Patrick From christophe.pedretti at gmail.com Mon Jun 2 10:49:38 2014 From: christophe.pedretti at gmail.com (Christophe Pedretti) Date: Mon, 2 Jun 2014 19:49:38 +0200 Subject: [rust-dev] Passing arguments bu reference In-Reply-To: References: <-7036542088947732759@unknownmsgid> Message-ID: Great Thanks to all, now i have a more precise idea. So, to summarize if the function takes a vector as argument, transforms it, and return it, with no need for the caller to use the argument fn my_func(src: Vec) -> Vec if the functons takes a vector argument and use it just for a temporary read access, with the need for the caller to use the argument (it was just a borrow by the fcuntion) fn my_func(src: &[u8]) -> Vec -- Christophe -------------- next part -------------- An HTML attachment was scrubbed... URL: From jens at lidestrom.se Sun Jun 1 13:32:52 2014 From: jens at lidestrom.se (=?ISO-8859-1?Q?Jens_Lidestr=F6m?=) Date: Sun, 01 Jun 2014 22:32:52 +0200 Subject: [rust-dev] Confusion about lifetime guide example Message-ID: <538B8DF4.7050804@lidestrom.se> Hi all! Could someone help me with a question about the [References and lifetime guide](http://doc.rust-lang.org/guide-lifetimes.html#borrowing-and-enums) (for 0.11.0) which I can't figure out. In the *Lifetimes* section there is this example: fn example3() -> int { struct House { owner: Box } struct Person { age: int } let mut house = box House { owner: box Person {age: 30} }; let owner_age = &house.owner.age; house = box House {owner: box Person {age: 40}}; // Error house.owner = box Person {age: 50}; // Error *owner_age } Referring to this example the following section, *Borrowing and enums*, says: "The previous example showed that the type system forbids any borrowing of owned boxes found in aliasable, mutable memory." But to me it seems that this example shows that borrowing and thus aliasing is allowed, but prevents modification of the owning reference. (Or conversely, that borrowing is not allowed is the owning reference is modified.) Is the quoted text from the guide incorrect? Or am I confused about the meaning of the quote or the example? BR, Jens Lidestr?m From mozilla at mcpherrin.ca Mon Jun 2 13:09:10 2014 From: mozilla at mcpherrin.ca (Matthew McPherrin) Date: Mon, 2 Jun 2014 13:09:10 -0700 Subject: [rust-dev] A better type system In-Reply-To: <538C976A.3020207@mozilla.com> References: <7BB08DE7-7315-4F38-85AF-78BC633141CE@icloud.com> <538A14E5.7070009@mozilla.com> <5CFCDE64-BDEB-483E-BA92-424CA15F4529@icloud.com> <538A518E.90706@mozilla.com> <7D76B3E2-06C9-4382-8E68-F4F2E9003E26@icloud.com> <3acefe6c-bfa2-491e-bf36-40d921efa332@email.android.com> <770D9752-D9A0-4A02-B2F6-AFA0A853DDF7@mozilla.com> <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> <538C976A.3020207@mozilla.com> Message-ID: Isn't this MutableVector's mut_split_at that we already have? On Mon, Jun 2, 2014 at 8:25 AM, Patrick Walton wrote: > On 6/2/14 12:44 AM, Tommi wrote: > >> In my original post I stated that it feels like there's something wrong >> with the language when it doesn't allow multiple mutable references to >> the same data, but I didn't really explain why it feels like that. So, I >> just want to add this simple example to help explain my position. 
It is >> just plain obvious to everybody that the following code snippet is >> memory-safe, but the compiler refuses to compile it due to "cannot >> borrow `stuff[..]` as mutable more than once at a time": >> >> let mut stuff = [1, 2, 3]; >> let r1 = stuff.mut_slice_to(2); >> let r2 = stuff.mut_slice_from(1); >> > > I'd like to have a function that splits up a vector in that way. That > should be doable in the standard library using some unsafe code under the > hood. > > Patrick > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Tue Jun 3 02:15:38 2014 From: rusty.gates at icloud.com (Tommi) Date: Tue, 03 Jun 2014 12:15:38 +0300 Subject: [rust-dev] Convenience syntax for importing the module itself along with items within Message-ID: I find it somewhat jarring to have to spend two lines for the following kind of imports: use module::Type; use module; So, I suggest we add a nicer syntax for doing the above imports using the following single line: use module::{self, Type}; It would probably be a good idea to force the `self` to be the first item on that list. And if someone writes the following... use module::self; ...that should probably cause at least a warning saying something like "You should write `use module;` instead of `use module::self;`". From s.gesemann at gmail.com Tue Jun 3 02:58:08 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Tue, 3 Jun 2014 11:58:08 +0200 Subject: [rust-dev] A better type system In-Reply-To: References: <7BB08DE7-7315-4F38-85AF-78BC633141CE@icloud.com> <538A14E5.7070009@mozilla.com> <5CFCDE64-BDEB-483E-BA92-424CA15F4529@icloud.com> <538A518E.90706@mozilla.com> <7D76B3E2-06C9-4382-8E68-F4F2E9003E26@icloud.com> <3acefe6c-bfa2-491e-bf36-40d921efa332@email.android.com> <770D9752-D9A0-4A02-B2F6-AFA0A853DDF7@mozilla.com> <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> <538C976A.3020207@mozilla.com> Message-ID: On Mon, Jun 2, 2014 at 10:09 PM, Matthew McPherrin wrote: > On Mon, Jun 2, 2014 at 8:25 AM, Patrick Walton wrote: >> On 6/2/14 12:44 AM, Tommi wrote: >>> >>> In my original post I stated that it feels like there's something wrong >>> with the language when it doesn't allow multiple mutable references to >>> the same data, but I didn't really explain why it feels like that. So, I >>> just want to add this simple example to help explain my position. It is >>> just plain obvious to everybody that the following code snippet is >>> memory-safe, but the compiler refuses to compile it due to "cannot >>> borrow `stuff[..]` as mutable more than once at a time": >>> >>> let mut stuff = [1, 2, 3]; >>> let r1 = stuff.mut_slice_to(2); >>> let r2 = stuff.mut_slice_from(1); >> >> I'd like to have a function that splits up a vector in that way. That >> should be doable in the standard library using some unsafe code under the >> hood. > > Isn't this MutableVector's mut_split_at that we already have? I thought about mentioning mut_split_at just to make people aware of it. But the resulting slices are not overlapping which is apparently what Tommi was interested. 
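For reference, a small self-contained sketch of that non-overlapping split, with the method names from the 0.11 nightlies discussed in this thread:

fn main() {
    let mut stuff = [1i, 2, 3];
    // Two disjoint mutable slices: r1 covers stuff[0..1], r2 covers stuff[1..3].
    let (r1, r2) = stuff.mut_split_at(1);
    r1[0] += 10;
    r2[0] += 20;
    r2[1] += 30;
    // stuff is now [11, 22, 33]; neither half can reach the other's elements.
}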
My understanding is that even if one uses an unsafe block to get two overlapping mutable slices, the use of those might invoke undefined behaviour because it violates some aliasing assumptions the compiler tends to exploit during optimizations. Correct me if I'm wrong. Cheers! sg From igor at mir2.org Tue Jun 3 02:59:00 2014 From: igor at mir2.org (Igor Bukanov) Date: Tue, 3 Jun 2014 11:59:00 +0200 Subject: [rust-dev] Clone and enum<'a> Message-ID: Consider the following enum: #[deriving(Clone)] enum List<'a> { Nil, Next(&'a List<'a>) } It generates en error: :4:10: 4:22 error: mismatched types: expected `&List<>` but found `List<>` (expected &-ptr but found enum List) :4 Next(&'a List<'a>) ^~~~~~~~~~~~ note: in expansion of #[deriving] :1:1: 2:5 note: expansion site error: aborting due to previous error Is it a bug in #[deriving] ? From dbau.pp at gmail.com Tue Jun 3 03:08:26 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Tue, 03 Jun 2014 20:08:26 +1000 Subject: [rust-dev] Clone and enum<'a> In-Reply-To: References: Message-ID: <538D9E9A.8040608@gmail.com> Somewhat. It is due to auto-deref: deriving(Clone) essentially expands to fn clone(&self) -> List<'a> { match *self { Nil => Nil, Next(ref x) => Next(x.clone()) } } `x` is of type `&&List<'a>`, but the `x.clone()` call auto-derefs through both layers of & to be calling List's clone directly (returning a `List<'a>`), rather than duplicating the reference. This will be fixed with UFCS, which will allow deriving to expand to something like `Next(Clone::clone(x))` and this does not undergo auto-deref. You can work around this by writing a Clone implementation by hand. In this case, List is Copy, so the implementation can be written as impl<'a> Clone for List<'a> { fn clone(&self) -> List<'a> { *self } } (Clone for more "interesting" List types (which aren't Copy, in general) will likely need to be implemented with a match and some internal Clones.) Huon On 03/06/14 19:59, Igor Bukanov wrote: > Consider the following enum: > > #[deriving(Clone)] > enum List<'a> { > Nil, > Next(&'a List<'a>) > } > > > It generates en error: > > :4:10: 4:22 error: mismatched types: expected `&List<>` but > found `List<>` (expected &-ptr but found enum List) > :4 Next(&'a List<'a>) > ^~~~~~~~~~~~ > note: in expansion of #[deriving] > :1:1: 2:5 note: expansion site > error: aborting due to previous error > > Is it a bug in #[deriving] ? 
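To illustrate Huon's note above about List types that aren't Copy, a hand-written impl might look like this (a sketch; the String payload is made up purely to take List off the Copy path):

enum List<'a> {
    Nil,
    Next(String, &'a List<'a>)
}

impl<'a> Clone for List<'a> {
    fn clone(&self) -> List<'a> {
        match *self {
            Nil => Nil,
            // clone the owned payload, copy the reference itself
            Next(ref label, next) => Next(label.clone(), next)
        }
    }
}

fn main() {
    let tail = Nil;
    let list = Next("node".to_str(), &tail);
    let _copy = list.clone();
}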
> _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From danielmicay at gmail.com Tue Jun 3 09:32:20 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Tue, 03 Jun 2014 12:32:20 -0400 Subject: [rust-dev] A better type system In-Reply-To: References: <7BB08DE7-7315-4F38-85AF-78BC633141CE@icloud.com> <538A14E5.7070009@mozilla.com> <5CFCDE64-BDEB-483E-BA92-424CA15F4529@icloud.com> <538A518E.90706@mozilla.com> <7D76B3E2-06C9-4382-8E68-F4F2E9003E26@icloud.com> <3acefe6c-bfa2-491e-bf36-40d921efa332@email.android.com> <770D9752-D9A0-4A02-B2F6-AFA0A853DDF7@mozilla.com> <88b869f3-52b7-489c-a197-3a2cff60deb8@email.android.com> <520BA8CB-987E-4FEC-8C64-0F7732D72B34@icloud.com> <538C976A.3020207@mozilla.com> Message-ID: <538DF894.50008@gmail.com> On 03/06/14 05:58 AM, Sebastian Gesemann wrote: > On Mon, Jun 2, 2014 at 10:09 PM, Matthew McPherrin wrote: >> On Mon, Jun 2, 2014 at 8:25 AM, Patrick Walton wrote: >>> On 6/2/14 12:44 AM, Tommi wrote: >>>> >>>> In my original post I stated that it feels like there's something wrong >>>> with the language when it doesn't allow multiple mutable references to >>>> the same data, but I didn't really explain why it feels like that. So, I >>>> just want to add this simple example to help explain my position. It is >>>> just plain obvious to everybody that the following code snippet is >>>> memory-safe, but the compiler refuses to compile it due to "cannot >>>> borrow `stuff[..]` as mutable more than once at a time": >>>> >>>> let mut stuff = [1, 2, 3]; >>>> let r1 = stuff.mut_slice_to(2); >>>> let r2 = stuff.mut_slice_from(1); >>> >>> I'd like to have a function that splits up a vector in that way. That >>> should be doable in the standard library using some unsafe code under the >>> hood. >> >> Isn't this MutableVector's mut_split_at that we already have? > > I thought about mentioning mut_split_at just to make people aware of > it. But the resulting slices are not overlapping which is apparently > what Tommi was interested. My understanding is that even if one uses > an unsafe block to get two overlapping mutable slices, the use of > those might invoke undefined behaviour because it violates some > aliasing assumptions the compiler tends to exploit during > optimizations. Correct me if I'm wrong. > > Cheers! > sg It causes undefined behaviour because the language is defined that way. It's defined that way both to allow for data parallelism and type-based alias analysis. There's potentially room for another form of & reference with aliasing, mutable data but the existing ones need to stay as they are today. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From gsingh_2011 at yahoo.com Tue Jun 3 09:56:35 2014 From: gsingh_2011 at yahoo.com (Gulshan Singh) Date: Tue, 3 Jun 2014 09:56:35 -0700 Subject: [rust-dev] Convenience syntax for importing the module itself along with items within In-Reply-To: References: Message-ID: +1, I was planning on suggesting this as well. 
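Concretely, with a real module from the standard library, the comparison looks like this (the second form is the proposed syntax from this thread, shown commented out because today's rustc does not accept it):

// today: two separate view items
use std::io;
use std::io::File;

// proposed: one view item, with `self` naming the module itself
// use std::io::{self, File};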
On Jun 3, 2014 2:16 AM, "Tommi" wrote: > I find it somewhat jarring to have to spend two lines for the following > kind of imports: > > use module::Type; > use module; > > So, I suggest we add a nicer syntax for doing the above imports using the > following single line: > > use module::{self, Type}; > > It would probably be a good idea to force the `self` to be the first item > on that list. And if someone writes the following... > > use module::self; > > ...that should probably cause at least a warning saying something like > "You should write `use module;` instead of `use module::self;`". > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at mir2.org Wed Jun 4 00:28:57 2014 From: igor at mir2.org (Igor Bukanov) Date: Wed, 4 Jun 2014 09:28:57 +0200 Subject: [rust-dev] syntax for explicit generics when calling static method Message-ID: What is the syntax for calling a static method of a generic struct while selecting the the generic parameters explicitly? Apparently Struct::static_method does not work. For example, consider the following program: #[deriving(Show)] struct Test { i: int } impl Test { fn new() -> Test { Test {i: 1} } fn test(&self) -> int { self.i } } fn main() { let t = Test::new().test(); println!("t={}", t); } The latest nightly compiler generates: s.rs:10:13: 10:17 error: `Test` is a structure name, but this expression uses it like a function name s.rs:10 let t = Test::new().test(); ^~~~ Note that in this case type inference does not work as removing gives: s.rs:10:13: 10:31 error: cannot determine a type for this expression: unconstrained type s.rs:10 let t = Test::new().test(); ^~~~~~~~~~~~~~~~~~ From rusty.gates at icloud.com Wed Jun 4 00:50:36 2014 From: rusty.gates at icloud.com (Tommi) Date: Wed, 04 Jun 2014 10:50:36 +0300 Subject: [rust-dev] syntax for explicit generics when calling static method In-Reply-To: References: Message-ID: I don't know if there's a better way, but this at least works: let tmp: Test = Test::new(); let t = tmp.test(); println!("t={}", t); On 2014-06-04, at 10:28, Igor Bukanov wrote: > What is the syntax for calling a static method of a generic struct > while selecting the the generic parameters explicitly? Apparently > Struct::static_method does not work. For example, consider the > following program: > > #[deriving(Show)] > struct Test { i: int } > > impl Test { > fn new() -> Test { Test {i: 1} } > fn test(&self) -> int { self.i } > } > > fn main() { > let t = Test::new().test(); > println!("t={}", t); > } > > The latest nightly compiler generates: > > s.rs:10:13: 10:17 error: `Test` is a structure name, but this > expression uses it like a function name > s.rs:10 let t = Test::new().test(); > ^~~~ > > Note that in this case type inference does not work as removing gives: > > s.rs:10:13: 10:31 error: cannot determine a type for this expression: > unconstrained type > s.rs:10 let t = Test::new().test(); > ^~~~~~~~~~~~~~~~~~ > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rusty.gates at icloud.com Wed Jun 4 00:52:36 2014 From: rusty.gates at icloud.com (Tommi) Date: Wed, 04 Jun 2014 10:52:36 +0300 Subject: [rust-dev] syntax for explicit generics when calling static method In-Reply-To: References: Message-ID: Apparently this works as well: let t = Test::::new().test(); println!("t={}", t); On 2014-06-04, at 10:50, Tommi wrote: > I don't know if there's a better way, but this at least works: > > let tmp: Test = Test::new(); > let t = tmp.test(); > println!("t={}", t); > > On 2014-06-04, at 10:28, Igor Bukanov wrote: > >> What is the syntax for calling a static method of a generic struct >> while selecting the the generic parameters explicitly? Apparently >> Struct::static_method does not work. For example, consider the >> following program: >> >> #[deriving(Show)] >> struct Test { i: int } >> >> impl Test { >> fn new() -> Test { Test {i: 1} } >> fn test(&self) -> int { self.i } >> } >> >> fn main() { >> let t = Test::new().test(); >> println!("t={}", t); >> } >> >> The latest nightly compiler generates: >> >> s.rs:10:13: 10:17 error: `Test` is a structure name, but this >> expression uses it like a function name >> s.rs:10 let t = Test::new().test(); >> ^~~~ >> >> Note that in this case type inference does not work as removing gives: >> >> s.rs:10:13: 10:31 error: cannot determine a type for this expression: >> unconstrained type >> s.rs:10 let t = Test::new().test(); >> ^~~~~~~~~~~~~~~~~~ >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.gesemann at gmail.com Wed Jun 4 00:52:57 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Wed, 4 Jun 2014 09:52:57 +0200 Subject: [rust-dev] syntax for explicit generics when calling static method In-Reply-To: References: Message-ID: On Wed, Jun 4, 2014 at 9:28 AM, Igor Bukanov wrote: > What is the syntax for calling a static method of a generic struct > while selecting the the generic parameters explicitly? Apparently > Struct::static_method does not work. For example, consider the > following program: > > #[deriving(Show)] > struct Test { i: int } > > impl Test { > fn new() -> Test { Test {i: 1} } > fn test(&self) -> int { self.i } > } > > fn main() { > let t = Test::new().test(); > println!("t={}", t); > } > > The latest nightly compiler generates: > > s.rs:10:13: 10:17 error: `Test` is a structure name, but this > expression uses it like a function name > s.rs:10 let t = Test::new().test(); > ^~~~ Not sure if it helps because I can't test it right now and I'm still a beginner, but here it goes: Try inserting a :: between Test and . AFAIU this is sometimes needed for disambiguation. Otherwise the compiler might think < is a less-than operator. Cheers! sg From igor at mir2.org Wed Jun 4 00:58:52 2014 From: igor at mir2.org (Igor Bukanov) Date: Wed, 4 Jun 2014 09:58:52 +0200 Subject: [rust-dev] syntax for explicit generics when calling static method In-Reply-To: References: Message-ID: Thanks, Test::::new() works indeed. On 4 June 2014 09:52, Sebastian Gesemann wrote: > On Wed, Jun 4, 2014 at 9:28 AM, Igor Bukanov wrote: >> What is the syntax for calling a static method of a generic struct >> while selecting the the generic parameters explicitly? Apparently >> Struct::static_method does not work. 
For example, consider the >> following program: >> >> #[deriving(Show)] >> struct Test { i: int } >> >> impl Test { >> fn new() -> Test { Test {i: 1} } >> fn test(&self) -> int { self.i } >> } >> >> fn main() { >> let t = Test::new().test(); >> println!("t={}", t); >> } >> >> The latest nightly compiler generates: >> >> s.rs:10:13: 10:17 error: `Test` is a structure name, but this >> expression uses it like a function name >> s.rs:10 let t = Test::new().test(); >> ^~~~ > > Not sure if it helps because I can't test it right now and I'm still a > beginner, but here it goes: Try inserting a :: between Test and > . AFAIU this is sometimes needed for disambiguation. Otherwise > the compiler might think < is a less-than operator. > > Cheers! > sg > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From sirinath at sakrio.com Wed Jun 4 07:58:18 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Wed, 4 Jun 2014 20:28:18 +0530 Subject: [rust-dev] Closure Capture Message-ID: Hi, The Box is a welcome change over ~. Any way the closure syntax can improve. Instead of having 2 syntaxes be explicit of what is captured. let x = 3; fn fun_arg (arg: int) -> () { println!("{}", arg + x) } // cannot capturelet closure_arg = (arg: int)|x| -> () { println!("{}", arg + x) }; // x is explicitly captured Suminda -------------- next part -------------- An HTML attachment was scrubbed... URL: From sirinath at sakrio.com Wed Jun 4 08:33:20 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Wed, 4 Jun 2014 21:03:20 +0530 Subject: [rust-dev] GADT Message-ID: Hi, It is great you have ADT but can you extend it to have GADTs? Suminda -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Wed Jun 4 09:26:58 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Wed, 4 Jun 2014 09:26:58 -0700 Subject: [rust-dev] GADT In-Reply-To: References: Message-ID: We'd love to have more advanced type system features (I'm looking forward to HKT myself), but the focus right now (seems to be) is cutting out everything that needs to be cut before 1.0. We can add neat new things after. From hallimanearavind at gmail.com Wed Jun 4 09:33:57 2014 From: hallimanearavind at gmail.com (Aravinda VK) Date: Wed, 4 Jun 2014 22:03:57 +0530 Subject: [rust-dev] How to kill a child task from parent? Message-ID: Hi, I am trying different alternative to kill a task from parent, But I didn't get any ways to kill a task from its parent. In the following approach I started worker2 inside worker1 and worker1 from main. After 1000 miliseconds worker1 dies, but worker2 still continues. use std::io::timer::sleep; fn worker1(){ spawn(proc() { worker2(); }); println!("worker 1"); sleep(1000); fail!("I am done"); } fn worker2(){ loop{ println!("worker 2"); } } fn main(){ spawn(proc() { worker1(); }); } Any suggestions? -- Regards Aravinda | ?????? http://aravindavk.in -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Wed Jun 4 11:17:14 2014 From: alex at crichton.co (Alex Crichton) Date: Wed, 4 Jun 2014 11:17:14 -0700 Subject: [rust-dev] How to kill a child task from parent? In-Reply-To: References: Message-ID: Rust tasks do not support being killed at arbitrary points. You'll have to arrange ahead of time for a "please die" message to be sent a long a channel, or a similar scheme for transmitting this information. 
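A minimal sketch of that arrangement with std::comm channels (the Msg enum and the worker body here are made up for illustration):

enum Msg {
    Work(int),
    Quit
}

fn main() {
    let (tx, rx) = channel();

    // The worker owns the receiving end and decides for itself when to stop.
    spawn(proc() {
        loop {
            match rx.recv() {
                Work(n) => println!("worker: doing job {}", n),
                Quit => {
                    println!("worker: shutting down");
                    break;
                }
            }
        }
    });

    for i in range(0i, 3) {
        tx.send(Work(i));
    }
    // The "please die" message; the worker exits cleanly the next time it
    // checks its mailbox.
    tx.send(Quit);
}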
On Wed, Jun 4, 2014 at 9:33 AM, Aravinda VK wrote: > Hi, > > I am trying different alternative to kill a task from parent, But I didn't > get any ways to kill a task from its parent. > > In the following approach I started worker2 inside worker1 and worker1 from > main. After 1000 miliseconds worker1 dies, but worker2 still continues. > > use std::io::timer::sleep; > > fn worker1(){ > spawn(proc() { > worker2(); > }); > println!("worker 1"); > sleep(1000); > fail!("I am done"); > } > > fn worker2(){ > loop{ > println!("worker 2"); > } > } > > fn main(){ > spawn(proc() { > worker1(); > }); > } > Any suggestions? > > -- > Regards > Aravinda | ?????? > http://aravindavk.in > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From farcaller at gmail.com Wed Jun 4 14:05:35 2014 From: farcaller at gmail.com (Vladimir Pouzanov) Date: Wed, 4 Jun 2014 22:05:35 +0100 Subject: [rust-dev] How do I bootstrap rust form armhf? Message-ID: I'm trying to run rustc on an arm board, but obviously there's no precompiled stage0 to build the compiler. Is there a procedure to cross-compile stage0 on other host machine where I do have rustc? -- Sincerely, Vladimir "Farcaller" Pouzanov http://farcaller.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Wed Jun 4 16:01:18 2014 From: banderson at mozilla.com (Brian Anderson) Date: Wed, 04 Jun 2014 16:01:18 -0700 Subject: [rust-dev] 7 high priority Rust libraries that need to be written Message-ID: <538FA53E.1030309@mozilla.com> Greetings, all. Looking for ways to have an impact on Rust? The current plan for Rust defers the creation of some key libraries until after Rust 1.0, but that doesn't mean we can't start on them now if the interest is out there. Here are 7 libraries that need to be created soon rather than later. Since these are all destined to be either incorporated directly into the Rust distribution or to be officially-maintained cargo packages, and since they are targeted for inclusion in the post-1.0 timeframe, they need to be designed carefully and implemented thoroughly. # Internationalization (https://github.com/mozilla/rust/issues/14494) ECMA 402 is a standard for internationalization, dealing with the automatic conversion of various information based on locale. Rust's core libraries provide *no* internationalization. A core problem here will be determining how Rust should think about locales. # Localization (https://github.com/mozilla/rust/issues/14495) This may depend on the previous for locale support, if nothing else. This is largely about the human-assisted translation of strings. We would like to experiment with a new Moco-developed standard for this, called L20N. This project will be about figuring out how the L20N API can be adapted to Rust. # Unicode (ICU) (https://github.com/mozilla/rust/issues/14656) The exact path forward here may require a bit of discussion still, but I think the most viable approach starts with binding libicu and wrapping in a Rust API. # Date/Time (https://github.com/mozilla/rust/issues/14657) Our time crate is very minimal, and the API looks dated. This is a hard problem and JodaTime seems to be well regarded so let's just copy it. # HTTP (https://github.com/teepee/teepee) ChrisMorgan is leading an effort to implement HTTP for Rust and I'm sure he would love more contributions. 
# Crypto (https://github.com/mozilla/rust/issues/14655) We've previously made the decision not to distribute any crypto with Rust at all, but this is probably not tenable since crypto is used everywhere. My current opinion is that we should not distribute any crypto /written in Rust/, but that distributing bindings to proven crypto is fine. Figure out a strategy here, build consensus, then start implementing a robust crypto library out of tree, with the goal of merging into the main distribution someday, and possibly - far in the future - reimplementing in Rust. There are some existing efforts along these lines that should be evaluated for this purpose. There are a lot of people interested in, and working on, this subject, and crypto potentially interacts with many libraries (like HTTP) so coordination is needed. # SQL (https://github.com/mozilla/rust/issues/14658) Generic SQL bindings. I'm told SqlAlchemy core is a good system to learn from. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sirinath at sakrio.com Wed Jun 4 21:54:43 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Thu, 5 Jun 2014 10:24:43 +0530 Subject: [rust-dev] GADT In-Reply-To: <1401899512940.891127227@boxbe> References: <1401899512940.891127227@boxbe> Message-ID: Hi, My thinking is that not to be too premature to get to version 1.0. My feeling is that there are still quite a few areas where the language syntax and features can evolve before version 1.0 Suminda -------------- next part -------------- An HTML attachment was scrubbed... URL: From eg1290 at gmail.com Wed Jun 4 22:06:15 2014 From: eg1290 at gmail.com (Evan G) Date: Thu, 5 Jun 2014 00:06:15 -0500 Subject: [rust-dev] GADT In-Reply-To: References: <1401899512940.891127227@boxbe> Message-ID: The thought-process is (as I know it) A) Taking things out is hard, and breaks code B) 1.0 should be stable, and supported without breakage for a long time C) Adding things is pretty easy, and doesn't break code D) A stable release should happen as soon as is reasonable, to get Rust used and tested in production enviroments with production code *Therefore, *get the language in a state that's as minimal as possible, so things can be added, but not removed. On Wed, Jun 4, 2014 at 11:54 PM, Suminda Dharmasena wrote: > Hi, > > My thinking is that not to be too premature to get to version 1.0. > > My feeling is that there are still quite a few areas where the language > syntax and features can evolve before version 1.0 > > Suminda > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sirinath at sakrio.com Wed Jun 4 22:09:59 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Thu, 5 Jun 2014 10:39:59 +0530 Subject: [rust-dev] GADT In-Reply-To: References: <1401899512940.891127227@boxbe> Message-ID: Some features like 2 closure syntaxes is not appealing for version 1.0 of the language. You should have a different way to specify capture. I send a mail reading this also. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sirinath at sakrio.com Wed Jun 4 22:11:09 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Thu, 5 Jun 2014 10:41:09 +0530 Subject: [rust-dev] Bring Back Type State Message-ID: Hi, The initial Type State implementation in Rust was not a great way to get about it. 
Please reconsider adding type state like it has been done in the Plaid language. Basically you can use traits mechanism to mixin and remove the trait when methods marked as having state transitions. Suminda Plaid: http://www.cs.cmu.edu/~aldrich/plaid/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Wed Jun 4 22:14:00 2014 From: banderson at mozilla.com (Brian Anderson) Date: Wed, 04 Jun 2014 22:14:00 -0700 Subject: [rust-dev] Bring Back Type State In-Reply-To: References: Message-ID: <538FFC98.1090908@mozilla.com> Thank you for your suggestion, but typestate is not coming back. There is no room in the complexity budget for another major piece of type system, and linear types can serve much the same purpose. On 06/04/2014 10:11 PM, Suminda Dharmasena wrote: > Hi, > > The initial Type State implementation in Rust was not a great way to > get about it. Please reconsider adding type state like it has been > done in the Plaid language. > > Basically you can use traits mechanism to mixin and remove the trait > when methods marked as having state transitions. > > Suminda > > Plaid: http://www.cs.cmu.edu/~aldrich/plaid/ > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From eg1290 at gmail.com Wed Jun 4 22:16:22 2014 From: eg1290 at gmail.com (Evan G) Date: Thu, 5 Jun 2014 00:16:22 -0500 Subject: [rust-dev] GADT In-Reply-To: References: <1401899512940.891127227@boxbe> Message-ID: I'm pretty sure closure*s are *on the list to be addressed before 1.0 See https://github.com/mozilla/rust/issues?milestone=20&page=1&state=open for a good idea of our roadmap is before 1.0 On Thu, Jun 5, 2014 at 12:09 AM, Suminda Dharmasena wrote: > Some features like 2 closure syntaxes is not appealing for version 1.0 of > the language. You should have a different way to specify capture. I send a > mail reading this also. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Wed Jun 4 22:39:12 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Wed, 4 Jun 2014 22:39:12 -0700 Subject: [rust-dev] GADT In-Reply-To: References: Message-ID: <200897F1-5DFA-4980-8CDB-3E44348E3B4E@mozilla.com> There are at least two tricky aspects to adding GADTs in Rust: 1) Rust implements parametric polymorphism via monomorphization (duplicating polymorphic functions for each type), but GADTs are really only useful with polymorphic recursion, which requires a polymorphic function to be applied to a potentially unbounded number of types at runtime. 2) The interaction between GADTs and subtyping (e.g determining the variance of GADT constructors) is nontrivial. Cameron > On Jun 4, 2014, at 8:33 AM, Suminda Dharmasena wrote: > > Hi, > > It is great you have ADT but can you extend it to have GADTs? 
> > Suminda > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From zwarich at mozilla.com Wed Jun 4 22:40:15 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Wed, 4 Jun 2014 22:40:15 -0700 Subject: [rust-dev] Bring Back Type State In-Reply-To: <538FFC98.1090908@mozilla.com> References: <538FFC98.1090908@mozilla.com> Message-ID: <7AECE984-3CE8-4B05-8339-37F23C057B14@mozilla.com> Is there a canonical example of encoding a state machine into Rust's substructural types? Cameron > On Jun 4, 2014, at 10:14 PM, Brian Anderson wrote: > > Thank you for your suggestion, but typestate is not coming back. There is no room in the complexity budget for another major piece of type system, and linear types can serve much the same purpose. > >> On 06/04/2014 10:11 PM, Suminda Dharmasena wrote: >> Hi, >> >> The initial Type State implementation in Rust was not a great way to get about it. Please reconsider adding type state like it has been done in the Plaid language. >> >> Basically you can use traits mechanism to mixin and remove the trait when methods marked as having state transitions. >> >> Suminda >> >> Plaid: http://www.cs.cmu.edu/~aldrich/plaid/ >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecreed at cs.washington.edu Wed Jun 4 23:03:57 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Wed, 4 Jun 2014 23:03:57 -0700 Subject: [rust-dev] Bring Back Type State In-Reply-To: <7AECE984-3CE8-4B05-8339-37F23C057B14@mozilla.com> References: <538FFC98.1090908@mozilla.com> <7AECE984-3CE8-4B05-8339-37F23C057B14@mozilla.com> Message-ID: I'm not going to claim canonicity, but I used the type system to encode the socket state machine (see std::io::net::{tcp,udp}). TcpListener consumes itself when you start listening and becomes a TcpAcceptor. UdpSocket can "connect" (i.e. ignore messages from other sources) and become a UdpStream, which can "disconnect" (i.e. stop ignoring) and become a UdpSocket again. It's actually very easy to do. Make every state a distinct affine type. Implement state transitions as methods that take self by value (consume old state) and return the new state. On Wed, Jun 4, 2014 at 10:40 PM, Cameron Zwarich wrote: > Is there a canonical example of encoding a state machine into Rust's > substructural types? > > Cameron > > On Jun 4, 2014, at 10:14 PM, Brian Anderson wrote: > > Thank you for your suggestion, but typestate is not coming back. There is > no room in the complexity budget for another major piece of type system, > and linear types can serve much the same purpose. > > On 06/04/2014 10:11 PM, Suminda Dharmasena wrote: > > Hi, > > The initial Type State implementation in Rust was not a great way to get > about it. Please reconsider adding type state like it has been done in the > Plaid language. > > Basically you can use traits mechanism to mixin and remove the trait > when methods marked as having state transitions. 
> > Suminda > > Plaid: http://www.cs.cmu.edu/~aldrich/plaid/ > > > _______________________________________________ > Rust-dev mailing listRust-dev at mozilla.orghttps://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pnathan.software at gmail.com Wed Jun 4 23:27:22 2014 From: pnathan.software at gmail.com (Paul Nathan) Date: Wed, 4 Jun 2014 23:27:22 -0700 Subject: [rust-dev] Designing long-running subtasks Message-ID: Hi, I'm designing a system which, for the purposes of the email, is a job/worker assignment queuing system (it's more nuanced than this, but this is what the approach is). Also, it's not just a "toy" system, I'd like to use it daily. Fundamentally, the task is as follows: The dispatcher receives a job list, sends out jobs to workers, occasionally gets a new job list, assigns any unperformed task to free workers, and monitors workers for being still around. Imagine if you would that the workers are threads that do calls out to, say, java processes or SSH subcommands - not pure Rust code. One approach in C might be to have a list of jobs with mutexes around them, some worker threads with mutexed state, and to have the dispatched check through the lists and verify that everything is assigned correctly. The workers' busyness would be indicated by a flag on the worker struct. So: the worker and the dispatcher pass the job back and forth (but the dispatcher can still look at the job state), and the dispatcher can eyeball a worker's state. All very shared state-y (and managable with some mutex discipline ). This won't work in a CSP system. So.... Let's say we have our main task, which spawns worker tasks which synchronously idle for new work to do. The dispatcher reads a new job, sets the "working" flag to true, and sends it off to the pool. On receipt of a do-work message, the worker begins his duties. After he's completed them, he sends a message back to the dispatch that he is done and the results can be collected. The job state is set to "finished", and the worker is then freed. Several difficulties present themselves: how does the dispatcher do its checkups on the worker? The worker task will be presumably chugging along and only flagging if something goes pear-shaped. Similarily, the job state should correspond roughly on both the dispatcher and the worker; the job should only be allocated to a free worker, which will then essentially take write-capabilities on the job away from the dispatcher. I think the shape of the problem is illustrated now. Design ideas are requested here. Regards, Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From asb at asbradbury.org Thu Jun 5 00:11:40 2014 From: asb at asbradbury.org (Alex Bradbury) Date: Thu, 5 Jun 2014 08:11:40 +0100 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: <538FA53E.1030309@mozilla.com> References: <538FA53E.1030309@mozilla.com> Message-ID: On 5 June 2014 00:01, Brian Anderson wrote: > # Date/Time (https://github.com/mozilla/rust/issues/14657) > > Our time crate is very minimal, and the API looks dated. 
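A small self-contained sketch of the pattern Eric describes above (the state names are made up; this is not the actual std::io::net code):

// Each state is its own type; a transition consumes the old state by value.
struct Closed;
struct Listening;

impl Closed {
    fn listen(self) -> Listening {
        // ... bind a socket here ...
        Listening
    }
}

impl Listening {
    fn accept(&mut self) {
        // ... accept one connection ...
    }

    fn close(self) -> Closed {
        // ... tear the socket down ...
        Closed
    }
}

fn main() {
    let s = Closed;
    let mut l = s.listen();
    l.accept();
    let _closed = l.close();
    // l.accept();   // error: use of moved value: `l`
    // s.listen();   // error: use of moved value: `s`
}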
This is a hard > problem and JodaTime seems to be well regarded so let's just copy it. Or indeed, JSR310 which is written by the Joda author http://blog.joda.org/2009/11/why-jsr-310-isn-joda-time_4941.html http://www.slideshare.net/dcsobral/a-jsr310-date-beyond-joda-time. Alex From me at chrismorgan.info Thu Jun 5 01:20:48 2014 From: me at chrismorgan.info (Chris Morgan) Date: Thu, 5 Jun 2014 18:20:48 +1000 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: <538FA53E.1030309@mozilla.com> References: <538FA53E.1030309@mozilla.com> Message-ID: On Thu, Jun 5, 2014 at 9:01 AM, Brian Anderson wrote: > # Date/Time (https://github.com/mozilla/rust/issues/14657) > > Our time crate is very minimal, and the API looks dated. This is a hard > problem and JodaTime seems to be well regarded so let's just copy it. I suggest that anyone interested in doing this read https://github.com/mozilla/rust/wiki/Lib-datetime and also, following on from that, https://github.com/luisbg/rust-datetime and https://mail.mozilla.org/pipermail/rust-dev/2013-September/005528.html > # SQL (https://github.com/mozilla/rust/issues/14658) > > Generic SQL bindings. I'm told SqlAlchemy core is a good system to learn > from. This is still an area for significant research. I believe that a system more similar to LINQ to SQL than to SQLAlchemy is appropriate for our language; we have a good type system and the ability to do fancy compile-time things, so we should be using it. Also, incidentally, both of these things are higher-level tools; I would also like to see an attempt at safe SQL, e.g. `sql!(SELECT bar FROM foo)`. There are various things that Rust can do in that way, efficiently and correctly, that other languages can?t, and I?d like to see it tried out as an approach. The basic idea would be that any SQL would be permitted there and would be mapped onto type-safe constructs in Rust, all the way down to producing a struct type for each query with just the appropriate fields and so on. It might turn out to be a dud idea in the end, but I think it should be tried. As an aside, I am intending to post more about my vision for HTTP/web/SQL in Rust in the coming week which will have a few more details on these things. From jcd at sdf.org Thu Jun 5 01:43:41 2014 From: jcd at sdf.org (J. Cliff Dyer) Date: Thu, 5 Jun 2014 10:43:41 +0200 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: References: <538FA53E.1030309@mozilla.com> Message-ID: <20140605104341.13fdf8a5@gdoba.domain.local> On Thu, 5 Jun 2014 18:20:48 +1000 Chris Morgan wrote: > Also, incidentally, both of these things are higher-level tools; I > would also like to see an attempt at safe SQL, e.g. `sql!(SELECT bar > FROM foo)`. To be clear, SQLAlchemy is "higher level" in the sense that it is database agnostic, but SQLAlchemy *core*, is not an ORM, it is a set of tools for programmatically generating sql expressions, and managing connections and sessions. SQLAlchemy has an ORM, but it is a separate layer, built on top of the expression language, which is fairly low level. It is a very robust system that is worth looking into for inspiration, whether the path Rust takes is more type-oriented or not. Cheers, Cliff From sirinath at sakrio.com Thu Jun 5 02:00:02 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Thu, 5 Jun 2014 14:30:02 +0530 Subject: [rust-dev] Object Protocols | Structural Typing Message-ID: Hi, It is possible to support object protocols. 
Some cases are: - Developer want to pass an object to untrusted library. In such a case restrict functions that can be called on the object - Once the restriction is applied there should be not way to evoke the un exposed functions in the normal course of programming including unsafe code which are written in Rust. If it is passed to another language then this need not apply. - Structural typing can achieve the above on a less restrictive way - You want certain functions called in certain order, be able to enforce this. This can be achieved if you have Plaid Language style type state with gradual typing Suminda -------------- next part -------------- An HTML attachment was scrubbed... URL: From kimhyunkang at gmail.com Thu Jun 5 02:11:45 2014 From: kimhyunkang at gmail.com (kimhyunkang at gmail.com) Date: Thu, 5 Jun 2014 18:11:45 +0900 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: References: <538FA53E.1030309@mozilla.com> Message-ID: Hi list. 2014-06-05 17:20 GMT+09:00 Chris Morgan : > On Thu, Jun 5, 2014 at 9:01 AM, Brian Anderson > wrote: > > # Date/Time (https://github.com/mozilla/rust/issues/14657) > > > > Our time crate is very minimal, and the API looks dated. This is a hard > > problem and JodaTime seems to be well regarded so let's just copy it. > > I suggest that anyone interested in doing this read > https://github.com/mozilla/rust/wiki/Lib-datetime and also, following > on from that, https://github.com/luisbg/rust-datetime and > https://mail.mozilla.org/pipermail/rust-dev/2013-September/005528.html > > > # SQL (https://github.com/mozilla/rust/issues/14658) > > > > Generic SQL bindings. I'm told SqlAlchemy core is a good system to learn > > from. > This is still an area for significant research. I believe that a > system more similar to LINQ to SQL > than to > SQLAlchemy is appropriate for our language; we have a good type system > and the ability to do fancy compile-time things, so we should be using > it. > > Also, incidentally, both of these things are higher-level tools; I > would also like to see an attempt at safe SQL, e.g. `sql!(SELECT bar > FROM foo)`. There are various things that Rust can do in that way, > efficiently and correctly, that other languages can?t, and I?d like to > see it tried out as an approach. The basic idea would be that any SQL > would be permitted there and would be mapped onto type-safe constructs > in Rust, all the way down to producing a struct type for each query > with just the appropriate fields and so on. It might turn out to be a > dud idea in the end, but I think it should be tried. > > As an aside, I am intending to post more about my vision for > HTTP/web/SQL in Rust in the coming week which will have a few more > details on these things. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > Actually, I was working on a small project, just to see if it's possible to create type-safe SQL library for Rust. https://github.com/kimhyunkang/rust-sql I did not intend to open this up until it has more features, but it turns out some people were thinking about similar projects anyway. So far, it supports very basic mappings between POD struct and SQL table #[sql_table] pub struct TestTable { pub a: Option, pub b: String } I implemented basic insert and "select * from table" operations for above table. 
    let db = sqlite3::open("insert_test.sqlite3").unwrap();
    let records = [
        TestTable { a: None, b: "Hello, world!".to_str() },
        TestTable { a: Some(1), b: "Goodbye, world!".to_str() }
    ];

    db.create_table_if_not_exists::<TestTable>();
    db.insert_many(records.iter());

    let select_records: Vec<TestTable> = db.select_all().collect();
    assert_eq!(select_records, records);

I was also planning to add an sql!() macro almost exactly the same as Chris Morgan suggests. However, you can't directly access the type-checking part of rustc in #![phase(syntax)] modules, which means you need some dirty hacks to properly type-check such macros.

Maybe a LINQ-like Haskell project, esqueleto ( https://hackage.haskell.org/package/esqueleto ), might be a better path for us, but I don't know how to implement an equivalent in Rust, as we don't have GADTs (yet).

I'd love any comments, suggestions or a better idea for this project. I also intend to add my current (very hackish) implementation plan for sql!() macros a few days later.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sirinath at sakrio.com Thu Jun 5 02:14:12 2014
From: sirinath at sakrio.com (Suminda Dharmasena)
Date: Thu, 5 Jun 2014 14:44:12 +0530
Subject: [rust-dev] Dependent Type | Dependent Object Types
Message-ID:

Hi,

Another aspect that can be considered is Dependent Types.

S
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From danielmicay at gmail.com Thu Jun 5 02:21:01 2014
From: danielmicay at gmail.com (Daniel Micay)
Date: Thu, 05 Jun 2014 05:21:01 -0400
Subject: [rust-dev] Designing long-running subtasks
In-Reply-To:
References:
Message-ID: <5390367D.60407@gmail.com>

On 05/06/14 02:27 AM, Paul Nathan wrote:
>
> This won't work in a CSP system. So....

Rust isn't a pure CSP system. It has both immutable and mutable shared memory alongside various forms of channels. The best tool for the job will vary based on the specific use case. Channels will often be the right choice, but there's no need to prefer them when shared state maps better to the problem. Either way, Rust prevents data races.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL:

From dbau.pp at gmail.com Thu Jun 5 02:21:06 2014
From: dbau.pp at gmail.com (Huon Wilson)
Date: Thu, 05 Jun 2014 19:21:06 +1000
Subject: [rust-dev] 7 high priority Rust libraries that need to be written
In-Reply-To:
References: <538FA53E.1030309@mozilla.com>
Message-ID: <53903682.6000307@gmail.com>

On 05/06/14 19:11, kimhyunkang at gmail.com wrote:
>
> I was also planning to add an sql!() macro almost exactly the same as Chris
> Morgan suggests. However, you can't directly access the type-checking part
> of rustc in #![phase(syntax)] modules, which means you need some dirty
> hacks to properly type-check such macros.

The conventional approach is to expand to something that uses certain traits, meaning any external data has to satisfy those traits for the macro invocation to work. This technique is used by `println!` and `#[deriving]`, for example.

(I don't know if you regard this as a dirty hack or not.)
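For readers who haven't seen that technique, here is a minimal, self-contained sketch of what "expand to trait-bounded code" means. The names (SqlValue, sql_literal) are invented for illustration only; they do not come from rustc or from any existing SQL crate:

    // Hypothetical trait: anything that can be rendered as a SQL literal.
    // (Invented name; real code would also need escaping and parameter binding.)
    trait SqlValue {
        fn to_sql(&self) -> String;
    }

    impl SqlValue for i32 {
        fn to_sql(&self) -> String { format!("{}", *self) }
    }

    impl SqlValue for String {
        // Sketch only: no escaping.
        fn to_sql(&self) -> String { format!("'{}'", *self) }
    }

    // A macro such as `sql!(insert id, name)` could simply expand to an
    // ordinary call like this one. The `T: SqlValue` bound is where type
    // checking happens: if an expression's type has no SqlValue impl, the
    // expanded code fails to compile at the call site, without the macro
    // ever talking to the type checker directly.
    fn sql_literal<T: SqlValue>(value: &T) -> String {
        value.to_sql()
    }

    fn main() {
        let id = 42i32;
        let name = "Goodbye, world!".to_string();
        println!("VALUES ({}, {})", sql_literal(&id), sql_literal(&name));
    }

This is the same trick that lets `println!` report a missing formatting implementation as an ordinary type error rather than as a macro error.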
Huon From eduard.bopp at aepsil0n.de Thu Jun 5 02:29:27 2014 From: eduard.bopp at aepsil0n.de (Eduard Bopp) Date: Thu, 05 Jun 2014 11:29:27 +0200 Subject: [rust-dev] Dependent Type | Dependent Object Types In-Reply-To: References: Message-ID: <53903877.5050100@aepsil0n.de> On 06/05/2014 11:14 AM, Suminda Dharmasena wrote: > Hi, > > Another aspect that can be considered is Dependent Types. > > S I had the same concern and opened an RFC heading into that direction a couple of weeks ago. [1] But as far as I understand this is quite a technical challenge. For instance, this requires compile-time execution of arbitrary functions, which has been discussed [2] in the past. Also the scope of this type system extension needs to be clarified. I think, my RFC is not complete in that regard yet, and would appreciate constructive feedback very much. [1]: https://github.com/rust-lang/rfcs/pull/56 [2]: https://mail.mozilla.org/pipermail/rust-dev/2014-January/008252.html From kimhyunkang at gmail.com Thu Jun 5 02:47:10 2014 From: kimhyunkang at gmail.com (kimhyunkang at gmail.com) Date: Thu, 5 Jun 2014 18:47:10 +0900 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: <53903682.6000307@gmail.com> References: <538FA53E.1030309@mozilla.com> <53903682.6000307@gmail.com> Message-ID: I was thinking about something like this #[sql_table] pub struct TestTable { pub a: Option, pub b: String } let selector = sql!( select a from TestTable ); let mut iter: SqlRows> = selector.fetch(); let result: Vec> = iter.collect(); I first intended to convert sql! macro to some Iterator>. However, we don't have typeof(TestTable::a) syntax yet, which means it's impossible to get the type of "a" column without creating a dummy instance. So I plan to convert above macro into rough equivalent of below. pub fn new_selector(_f: fn || -> T) -> SqlSelector { // selector initialization code. // _f should not be executed anywhere } let selector = { // This code is unsafe and should not be executed let not_executed = fn || { unsafe { let dummy_instance = TestTable::uninitialized(); dummy_instance.a } }; // This automatically type-checks to SqlSelector> new_selector(not_executed); }; I'd love to know if there's a better alternative of this hack. 2014-06-05 18:21 GMT+09:00 Huon Wilson : > On 05/06/14 19:11, kimhyunkang at gmail.com wrote: > >> >> I was also planning to add sql!() macro almost exactly same as Chris >> Morgan suggests. However, you can't directly access type-checking part of >> rustc in #![phase(syntax)] modules, which means you need some dirty hacks >> to peroperly type-check such macros. >> > > > The conventional approach is to expand to something that uses certain > traits, meaning any external data has to satisfy those traits for the macro > invocation to work. This technique is used by `println!` and `#[deriving]`, > for example. > > (I don't know if you regard this as a dirty hack or not.) > > > Huon > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mneumann at ntecs.de Thu Jun 5 05:52:04 2014 From: mneumann at ntecs.de (Michael Neumann) Date: Thu, 05 Jun 2014 14:52:04 +0200 Subject: [rust-dev] Lifetimes and iterators Message-ID: <539067F4.9020609@ntecs.de> Hi all, I want to implement an iterator that returns a reference into the iterator struct from the next() call. 
Usually I'd write something like:

    fn next<'a>(&'a mut self) -> Option<&'a ...>

but this breaks the Iterator trait. I now have the following code that works, but I am unsure if it is correct or not.

    use std::mem::transmute; // std::cast::transmute

    struct Iter {
        current: uint
    }

    impl<'a> Iterator<&'a uint> for Iter {
        fn next(&mut self) -> Option<&'a uint> {
            Some(unsafe { transmute(&self.current) })
        }
    }

    fn main() {
        let mut iter = Iter { current: 2 };
        for &i in iter {
            println!("{}", i);
        }
    }

Actually what I want is to declare that 'a (&self.current) is of shorter or equal lifetime than &self, but I don't know how. I found out that my code is of course wrong, as the following code produces a memory protection fault.

    use std::mem::transmute;

    struct Iter {
        current: ~str
    }

    impl<'a> Iterator<&'a ~str> for Iter {
        fn next(&mut self) -> Option<&'a ~str> {
            Some(unsafe { transmute(&self.current) })
        }
    }

    fn foo<'a>() -> &'a ~str {
        let mut iter = Iter { current: "hello".to_owned() };
        return iter.next().unwrap();
    }

    fn main() {
        println!("{}", foo());
    }

Any hints?

Regards,
Michael

From mneumann at ntecs.de Thu Jun 5 06:07:54 2014
From: mneumann at ntecs.de (Michael Neumann)
Date: Thu, 05 Jun 2014 15:07:54 +0200
Subject: [rust-dev] Lifetimes and iterators
In-Reply-To: <539067F4.9020609@ntecs.de>
References: <539067F4.9020609@ntecs.de>
Message-ID: <53906BAA.8000204@ntecs.de>

Am 05.06.2014 14:52, schrieb Michael Neumann:
> Hi all,
>
> I want to implement an iterator that returns a reference into the
> iterator struct from the next() call.

Ok, I found out this is issue #8355 and it's closed.

Regards,
Michael

From willi.t1 at gmail.com Thu Jun 5 08:06:54 2014
From: willi.t1 at gmail.com (Wilhansen Li)
Date: Thu, 5 Jun 2014 23:06:54 +0800
Subject: [rust-dev] 7 high priority Rust libraries that need to be written
Message-ID:

>
> # SQL (https://github.com/mozilla/rust/issues/14658)
>
> Generic SQL bindings. I'm told SqlAlchemy core is a good system to learn
> from.

What about basing it off Anorm?
http://www.playframework.com/documentation/2.3.x/ScalaAnorm

It's not an ORM, so it's pretty lightweight, but it provides a nice API for submitting SQL queries and retrieving results. ORMs could be built on top of it, and it uses some functional programming concepts.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From explodingmind at gmail.com Thu Jun 5 09:07:33 2014
From: explodingmind at gmail.com (Ian Daniher)
Date: Thu, 5 Jun 2014 12:07:33 -0400
Subject: [rust-dev] Function Definition Type Inference
Message-ID:

Hey All,

I have a Rust program that generates Rust source code, including function definitions.[1] I don't yet have much in the way of type inference baked into my code, relying instead upon the compile-time analysis provided by rustc. This works well, except when dealing with the types of function arguments, which cannot be inferred (except via trait bounds[2]) at compile time.

Is there a mechanism through which I can hand off an ast::Item object with a FnDecl's Vec having TyInfer[3] and get it back with the types inferred?

Thanks!
--
Ian

[1] https://github.com/itdaniher/ratpak/blob/master/stage2.rs
[2] currently exploring this, but looking for alternatives first
[3] http://static.rust-lang.org/doc/master/syntax/ast/type.Ty_.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From explodingmind at gmail.com Thu Jun 5 11:13:44 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Thu, 5 Jun 2014 14:13:44 -0400 Subject: [rust-dev] Building rustc @ 1GB RAM? Message-ID: 1GB is close-ish to the 1.4GB last reported (over a month ago!) by http://huonw.github.io/isrustfastyet/mem/. Are there any workarounds to push the compilation memory down? I'm also exploring distcc, but IRFY has a bit of semantic ambiguity as to whether or not it's 1.4GB simultaneous or net total. Thanks! -- Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Thu Jun 5 11:15:44 2014 From: corey at octayn.net (Corey Richardson) Date: Thu, 5 Jun 2014 11:15:44 -0700 Subject: [rust-dev] Building rustc @ 1GB RAM? In-Reply-To: References: Message-ID: 1.4GB peak, consumed by rustc and every child process. The bencher is currently running after being down for a while, so it will fill in today. There are no real workarounds. On Thu, Jun 5, 2014 at 11:13 AM, Ian Daniher wrote: > 1GB is close-ish to the 1.4GB last reported (over a month ago!) by > http://huonw.github.io/isrustfastyet/mem/. > > Are there any workarounds to push the compilation memory down? I'm also > exploring distcc, but IRFY has a bit of semantic ambiguity as to whether or > not it's 1.4GB simultaneous or net total. > > Thanks! > -- > Ian > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- http://octayn.net/ From igor at mir2.org Thu Jun 5 11:25:10 2014 From: igor at mir2.org (Igor Bukanov) Date: Thu, 5 Jun 2014 20:25:10 +0200 Subject: [rust-dev] Building rustc @ 1GB RAM? In-Reply-To: References: Message-ID: Have you considered to use zram? Typically the compression for compiler memory is over a factor of 3 so that can be an option as the performance degradation under swapping could be tolerable. A similar option is to enable zswap, but as the max compression with it is effectively limited by factor of 2, it may not be enough to avoid swapping. On 5 June 2014 20:13, Ian Daniher wrote: > 1GB is close-ish to the 1.4GB last reported (over a month ago!) by > http://huonw.github.io/isrustfastyet/mem/. > > Are there any workarounds to push the compilation memory down? I'm also > exploring distcc, but IRFY has a bit of semantic ambiguity as to whether or > not it's 1.4GB simultaneous or net total. > > Thanks! > -- > Ian > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From banderson at mozilla.com Thu Jun 5 12:50:44 2014 From: banderson at mozilla.com (Brian Anderson) Date: Thu, 05 Jun 2014 12:50:44 -0700 Subject: [rust-dev] Dependent Type | Dependent Object Types In-Reply-To: References: Message-ID: <5390CA14.3030704@mozilla.com> I appreciate your enthusiasm, but please stop creating new threads that simply suggest adding major new features to the type system. The vast majority of type system features that might benefit Rust have been discussed many times, in excruciating depth, for years. On 06/05/2014 02:14 AM, Suminda Dharmasena wrote: > Hi, > > Another aspect that can be considered is Dependent Types. > > S > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erick.tryzelaar at gmail.com Thu Jun 5 20:17:50 2014 From: erick.tryzelaar at gmail.com (Erick Tryzelaar) Date: Thu, 5 Jun 2014 20:17:50 -0700 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: References: <538FA53E.1030309@mozilla.com> <53903682.6000307@gmail.com> Message-ID: On Thursday, June 5, 2014, wrote: > > > I first intended to convert sql! macro to some Iterator>. > However, we don't have typeof(TestTable::a) syntax yet, which means it's > impossible to get the type of "a" column without creating a dummy instance. > There may be things we could do with the Encodable/Decodable infrastructure here. For example, we could require implementations to provide a schema so that a macro could get the type of a value without creating a dummy value. -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Thu Jun 5 21:20:21 2014 From: me at kevincantu.org (Kevin Cantu) Date: Thu, 5 Jun 2014 21:20:21 -0700 Subject: [rust-dev] Patterns that'll never match In-Reply-To: References: <31703434-2E80-4A2C-83A2-98CF60AE9405@icloud.com> Message-ID: Could be an interesting library! Kevin On Jun 1, 2014 4:29 AM, "Matthieu Monrocq" wrote: > > > > On Sun, Jun 1, 2014 at 1:04 PM, Tommi wrote: > >> On 2014-06-01, at 13:48, G?bor Lehel wrote: >> >> It would be possible in theory to teach the compiler about e.g. the >> comparison operators on built-in integral types, which don't involve any >> user code. It would only be appropriate as a warning rather than an error >> due to the inherent incompleteness of the analysis and the arbitrariness of >> what things to include in it. No opinion about whether it would be worth >> doing. >> >> >> Perhaps this kind of thing would be better suited for a separate tool >> that could (contrary to a compiler) run this and other kinds of heuristics >> without having to worry about blowing up compilation times. >> >> > This is typically the domain of either static analysis or runtime > instrumentation (branch coverage tools) in the arbitrary case, indeed. > > -- Matthieu > > >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Thu Jun 5 21:34:32 2014 From: me at kevincantu.org (Kevin Cantu) Date: Thu, 5 Jun 2014 21:34:32 -0700 Subject: [rust-dev] Object Protocols | Structural Typing In-Reply-To: References: Message-ID: I really don't know Plaid, and am no expert, but I'd want to implement that with messages to an agent of some sort, rather than by trying to fit gradual typing into Rust. Is somebody here more familiar with the literature? Kevin On Jun 5, 2014 2:00 AM, "Suminda Dharmasena" wrote: > Hi, > > It is possible to support object protocols. Some cases are: > > - Developer want to pass an object to untrusted library. In such a > case restrict functions that can be called on the object > - Once the restriction is applied there should be not way to evoke > the un exposed functions in the normal course of programming including > unsafe code which are written in Rust. If it is passed to another language > then this need not apply. 
> - Structural typing can achieve the above on a less restrictive way > - You want certain functions called in certain order, be able to > enforce this. This can be achieved if you have Plaid Language style type > state with gradual typing > > Suminda > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Thu Jun 5 21:42:27 2014 From: me at kevincantu.org (Kevin Cantu) Date: Thu, 5 Jun 2014 21:42:27 -0700 Subject: [rust-dev] Dependent Type | Dependent Object Types In-Reply-To: <5390CA14.3030704@mozilla.com> References: <5390CA14.3030704@mozilla.com> Message-ID: Rust is likely to be a great platform on which to implement such new languages, eventually, though. Be patient. :D Kevin On Jun 5, 2014 12:50 PM, "Brian Anderson" wrote: > I appreciate your enthusiasm, but please stop creating new threads that > simply suggest adding major new features to the type system. The vast > majority of type system features that might benefit Rust have been > discussed many times, in excruciating depth, for years. > > > On 06/05/2014 02:14 AM, Suminda Dharmasena wrote: > > Hi, > > Another aspect that can be considered is Dependent Types. > > S > > > _______________________________________________ > Rust-dev mailing listRust-dev at mozilla.orghttps://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sirinath at sakrio.com Thu Jun 5 22:13:39 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Fri, 6 Jun 2014 10:43:39 +0530 Subject: [rust-dev] Object Protocols | Structural Typing In-Reply-To: <1402029299379.1049458079@boxbe> References: <1402029299379.1049458079@boxbe> Message-ID: Link to literature: http://www.cs.cmu.edu/~aldrich/plaid/ BTW, Gradual typing, dependent typing, type state (as in Plaid) will be a cool addition and really harp for this. -- Suminda Sirinath Salpitikorala Dharmasena, B.Sc. Comp. & I.S. (Hon.) Lond., P.G.Dip. Ind. Maths. J'Pura, MIEEE, MACM, CEO Sakr??! ? *Address*: 6G ? 1st Lane ? Pagoda Road ? Nugegoda 10250 ? Sri Lanka. ? *Mobile* : +94-(0)711007945 ? *Office*: +94-(0) 11 2 199766 ? *Home Office*: +94-(0) 11-5 875614 ? *Home*: +94-(0)11-5 864614 / 2 825908 ? *Web*: http://www.sakrio.com ? This email is subjected to the email Terms of Use and Disclaimer: http://www.sakrio.com/email-legal. Please read this first. -- On 6 June 2014 10:04, Kevin Cantu wrote: > [image: Boxbe] Kevin Cantu ( > me at kevincantu.org) is not on your Guest List > > | Approve sender > > | Approve domain > > > I really don't know Plaid, and am no expert, but I'd want to implement > that with messages to an agent of some sort, rather than by trying to fit > gradual typing into Rust. > > Is somebody here more familiar with the literature? > > Kevin > On Jun 5, 2014 2:00 AM, "Suminda Dharmasena" wrote: > >> Hi, >> >> It is possible to support object protocols. Some cases are: >> >> - Developer want to pass an object to untrusted library. 
In such a >> case restrict functions that can be called on the object >> - Once the restriction is applied there should be not way to evoke >> the un exposed functions in the normal course of programming including >> unsafe code which are written in Rust. If it is passed to another language >> then this need not apply. >> - Structural typing can achieve the above on a less restrictive way >> - You want certain functions called in certain order, be able to >> enforce this. This can be achieved if you have Plaid Language style type >> state with gradual typing >> >> Suminda >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sirinath at sakrio.com Thu Jun 5 23:00:20 2014 From: sirinath at sakrio.com (Suminda Dharmasena) Date: Fri, 6 Jun 2014 11:30:20 +0530 Subject: [rust-dev] Type System Message-ID: Hi, The concept of Rust is definitely appealing at a high level. One area that can improve is the Type System. Instead of ignoring the developments and research in this area please find a way to embrace it. Also a proper active objects based OO system would be appealing. Suminda -------------- next part -------------- An HTML attachment was scrubbed... URL: From someone at mearie.org Thu Jun 5 23:12:38 2014 From: someone at mearie.org (Kang Seonghoon) Date: Fri, 6 Jun 2014 15:12:38 +0900 Subject: [rust-dev] Type System In-Reply-To: References: Message-ID: I hope you aren't offended with this reply, but as Brian Anderson pointed out, Rust is a programming language being converged for the eventual backward compatibility ("1.0"). Any major feature suggestion or even the full proposal at this stage requires a good support from developers and community, and has to be formally written in the form of RFCs. I'd also like to point out that Rust's type system (and many static analyses) is already quite complex to design and verify. Many new features to the type system are not orthogonal to the existing features and need much efforts to harmonize. This is why the backward compatibility requirement for 1.0 is important; we don't want to simply throw the features out, we need a stable platform to work with. Please keep this in mind when suggesting features. If you are willing to prove that the features are not *that* hard to integrate, you can always fork the language to show your points. 2014-06-06 15:00 GMT+09:00 Suminda Dharmasena : > Hi, > > The concept of Rust is definitely appealing at a high level. One area that > can improve is the Type System. Instead of ignoring the developments and > research in this area please find a way to embrace it. > > Also a proper active objects based OO system would be appealing. > > Suminda > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- -- Kang Seonghoon | http://mearie.org/ -- Opinions expressed in this email do not necessarily represent the views of my employer. 
-- From me at kevincantu.org Fri Jun 6 02:35:01 2014 From: me at kevincantu.org (Kevin Cantu) Date: Fri, 6 Jun 2014 02:35:01 -0700 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: Message-ID: Since C# allows overloaded methods, but F# doesn't want them, what F# does is somewhat interesting: "overloaded methods are permitted in the language, provided that the arguments are in tuple form, not curried form." [ http://msdn.microsoft.com/en-us/library/dd483468.aspx] In practice, this means that all the calls to C# (tupled arguments) can be resolved, but idiomatic F# doesn't have overloaded methods. // tuple calling convention: looks like C# let aa = csharp_library.mx(1, 2) let bb = csharp_library.mx(1) // curried calling convention: makes dd, below, a function not a value let cc = fsharp_library.m2 1 2 let dd = fsharp_library.m2 1 Would it be useful to use pattern matching over some generic sort of tuples to implement something similar in Rust? Kevin On Sat, May 24, 2014 at 3:45 AM, Matthieu Monrocq < matthieu.monrocq at gmail.com> wrote: > > > > On Sat, May 24, 2014 at 9:06 AM, Zolt?n T?th wrote: > >> Alexander, your option 2 could be done automatically. By appending >> postfixes to the overloaded name depending on the parameter types. >> Increasing the number of letters used till the ambiguity is fully resolved. >> >> What do you think? >> >> >> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, Qt::BrushStyle >> style ) >> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush & >> brush ) >> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor & >> color ) >> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >> Qt::GlobalColor color ) >> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >> >> >> I believe this alternative was considered in the original blog post > Alexander wrote: this is, in essence, mangling. It makes for ugly function > names, although the prefix helps in locating them I guess. > > > Before we talk about generation though, I would start about investigating > where those overloads come from. > > First, there are two different objects being manipulated here: > > + QRect is a rectangle with integral coordinates > + QRectF is a rectangle with floating point coordinates > > > Second, a QRect may already be build from "(int* x*, int* y*, int* width*, > int* height*)"; thus all overloads taking 4 hints instead of a QRect are > pretty useless in a sense. > > Third, in a similar vein, QBrush can be build from "(Qt::BrushStyle)", > "(Qt::GlobalColor)" or "(QColor const&)". So once again those overloads are > pretty useless. > > > This leaves us with: > > + fillRect(QRect const&, QBrush const&) > + fillRect(QRectF const&, QBrush const&) > > Yep, that's it. Of all those inconsistent overloads (missing 4 taking 4 > floats, by the way...) only 2 are ever useful. The other 10 can be safely > discarded without impacting the expressiveness. 
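To make that reduction concrete: after it, a Rust binding only needs two methods, one per rectangle type (Rust has no function overloading, so the two survivors need distinct names anyway). The sketch below uses invented stand-in types rather than an actual Qt binding, purely to show the shape of the surviving API:

    // Invented stand-in types; a real binding would wrap the C++ classes.
    struct QRect  { x: i32, y: i32, w: i32, h: i32 }
    struct QRectF { x: f64, y: f64, w: f64, h: f64 }
    struct QBrush { rgb: u32 }

    struct Painter;

    impl Painter {
        // The two surviving overloads get distinct names, mirroring
        // fillRect(QRect const&, QBrush const&) and the QRectF variant.
        // Ints, colors and brush styles are handled by constructing a
        // QRect/QRectF or QBrush first, as in the C++ API.
        fn fill_rect(&self, r: &QRect, b: &QBrush) {
            println!("fill rect at ({}, {}) size {}x{} color {}", r.x, r.y, r.w, r.h, b.rgb);
        }

        fn fill_rect_f(&self, r: &QRectF, b: &QBrush) {
            println!("fill rect at ({}, {}) size {}x{} color {}", r.x, r.y, r.w, r.h, b.rgb);
        }
    }

    fn main() {
        let p = Painter;
        p.fill_rect(&QRect { x: 0, y: 0, w: 10, h: 20 }, &QBrush { rgb: 0xff0000 });
        p.fill_rect_f(&QRectF { x: 0.5, y: 0.5, w: 9.0, h: 19.0 }, &QBrush { rgb: 0x00ff00 });
    }

Everything else in the original overload list is recovered by building the QRect/QRectF or QBrush up front, which is exactly the reduction argued for above.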
> > > Now, of course, the real question is how well a tool could perform this > reduction step. I would note here that the position and names of the > "coordinate" arguments of "fillRect" is exactly that of those to "QRect"; > maybe a simple exhaustive search would thus suffice (though it does require > semantic understanding of what a constructor and default arguments are). > > It would be interesting checking how many overloads remain *after* this > reduction step. Here we got a factor of 6 already (should have been 8 if > the interface had been complete). > > It would also be interesting checking if the distinction int/float often > surfaces, there might be an opportunity here. > > > -- Matthieu > > > Alexander Tsvyashchenko wrote: >> >>> So far I can imagine several possible answers: >>> >>> 1. "We don't care, your legacy C++ libraries are bad and you should >>> feel bad!" - I think this stance would be bad for Rust and would hinder its >>> adoption, but if that's the ultimate answer - I'd personally prefer it said >>> loud and clear, so that at least nobody has any illusions. >>> >>> 2. "Define & maintain the mapping between C++ and Rust function >>> names" (I assume this is what you're alluding to with "define meaningful >>> unique function names" above?) While this might be possible for smaller >>> libraries, this is out of the question for large libraries like Qt5 - at >>> least I won't create and maintain this mapping for sure, and I doubt others >>> will: just looking at the stats from 3 Qt5 libraries (QtCore, QtGui and >>> QtWidgets) out of ~30 Qt libraries in total, from the 50745 wrapped >>> methods 9601 were overloads and required renaming. >>> >>> Besides that, this has a disadvantage of throwing away majority of >>> the experience people have with particular library and forcing them to >>> le-learn its API. >>> >>> On top of that, not for every overload it's easy to come up with >>> short, meaningful, memorable and distinctive names - you can try that >>> exercise for http://qt-project.org/doc/qt-4.8/qpainter.html#fillRect >>> ;-) >>> >>> 3. "Come up with some way to allow overloading / default parameters" >>> - possibly with reduced feature set, i.e. if type inference is difficult in >>> the presence of overloads, as suggested in some overloads discussions >>> (although not unsolvable, as proven by other languages that allow both type >>> inference & overloading?), possibly exclude overloads from the type >>> inference by annotating overloaded methods with special attributes? >>> >>> 4. Possibly some other options I'm missing? >>> >>> -- >>> Good luck! Alexander >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From me at kevincantu.org Fri Jun 6 02:44:24 2014 From: me at kevincantu.org (Kevin Cantu) Date: Fri, 6 Jun 2014 02:44:24 -0700 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: Message-ID: I imagine a macro like the following, which is NOT a macro, because I don't know how to write macros yet: macro fillRect(args...) { fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) fillRect_I_I_I_I_BS ( int x, int y, int width, int height, Qt::BrushStyle style ) fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) fillRect_R_C ( const QRect & rectangle, const QColor & color ) fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush & brush ) fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor & color ) fillRect_I_I_I_I_GC ( int x, int y, int width, int height, Qt::GlobalColor color ) fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) On Fri, Jun 6, 2014 at 2:35 AM, Kevin Cantu wrote: > Since C# allows overloaded methods, but F# doesn't want them, what F# does > is somewhat interesting: "overloaded methods are permitted in the language, > provided that the arguments are in tuple form, not curried form." [ > http://msdn.microsoft.com/en-us/library/dd483468.aspx] > > In practice, this means that all the calls to C# (tupled arguments) can be > resolved, but idiomatic F# doesn't have overloaded methods. > > // tuple calling convention: looks like C# > let aa = csharp_library.mx(1, 2) > let bb = csharp_library.mx(1) > > // curried calling convention: makes dd, below, a function not a value > let cc = fsharp_library.m2 1 2 > let dd = fsharp_library.m2 1 > > Would it be useful to use pattern matching over some generic sort of > tuples to implement something similar in Rust? > > > Kevin > > > > On Sat, May 24, 2014 at 3:45 AM, Matthieu Monrocq < > matthieu.monrocq at gmail.com> wrote: > >> >> >> >> On Sat, May 24, 2014 at 9:06 AM, Zolt?n T?th wrote: >> >>> Alexander, your option 2 could be done automatically. By appending >>> postfixes to the overloaded name depending on the parameter types. >>> Increasing the number of letters used till the ambiguity is fully resolved. >>> >>> What do you think? >>> >>> >>> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >>> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, >>> Qt::BrushStyle style ) >>> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >>> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >>> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >>> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >>> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >>> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush & >>> brush ) >>> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor & >>> color ) >>> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >>> Qt::GlobalColor color ) >>> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >>> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >>> >>> >>> I believe this alternative was considered in the original blog post >> Alexander wrote: this is, in essence, mangling. 
It makes for ugly function >> names, although the prefix helps in locating them I guess. >> >> >> Before we talk about generation though, I would start about investigating >> where those overloads come from. >> >> First, there are two different objects being manipulated here: >> >> + QRect is a rectangle with integral coordinates >> + QRectF is a rectangle with floating point coordinates >> >> >> Second, a QRect may already be build from "(int* x*, int* y*, int* width*, >> int* height*)"; thus all overloads taking 4 hints instead of a QRect are >> pretty useless in a sense. >> >> Third, in a similar vein, QBrush can be build from "(Qt::BrushStyle)", >> "(Qt::GlobalColor)" or "(QColor const&)". So once again those overloads are >> pretty useless. >> >> >> This leaves us with: >> >> + fillRect(QRect const&, QBrush const&) >> + fillRect(QRectF const&, QBrush const&) >> >> Yep, that's it. Of all those inconsistent overloads (missing 4 taking 4 >> floats, by the way...) only 2 are ever useful. The other 10 can be safely >> discarded without impacting the expressiveness. >> >> >> Now, of course, the real question is how well a tool could perform this >> reduction step. I would note here that the position and names of the >> "coordinate" arguments of "fillRect" is exactly that of those to "QRect"; >> maybe a simple exhaustive search would thus suffice (though it does require >> semantic understanding of what a constructor and default arguments are). >> >> It would be interesting checking how many overloads remain *after* this >> reduction step. Here we got a factor of 6 already (should have been 8 if >> the interface had been complete). >> >> It would also be interesting checking if the distinction int/float often >> surfaces, there might be an opportunity here. >> >> >> -- Matthieu >> >> >> Alexander Tsvyashchenko wrote: >>> >>>> So far I can imagine several possible answers: >>>> >>>> 1. "We don't care, your legacy C++ libraries are bad and you should >>>> feel bad!" - I think this stance would be bad for Rust and would hinder its >>>> adoption, but if that's the ultimate answer - I'd personally prefer it said >>>> loud and clear, so that at least nobody has any illusions. >>>> >>>> 2. "Define & maintain the mapping between C++ and Rust function >>>> names" (I assume this is what you're alluding to with "define meaningful >>>> unique function names" above?) While this might be possible for smaller >>>> libraries, this is out of the question for large libraries like Qt5 - at >>>> least I won't create and maintain this mapping for sure, and I doubt others >>>> will: just looking at the stats from 3 Qt5 libraries (QtCore, QtGui and >>>> QtWidgets) out of ~30 Qt libraries in total, from the 50745 wrapped >>>> methods 9601 were overloads and required renaming. >>>> >>>> Besides that, this has a disadvantage of throwing away majority of >>>> the experience people have with particular library and forcing them to >>>> le-learn its API. >>>> >>>> On top of that, not for every overload it's easy to come up with >>>> short, meaningful, memorable and distinctive names - you can try that >>>> exercise for http://qt-project.org/doc/qt-4.8/qpainter.html#fillRect >>>> ;-) >>>> >>>> 3. "Come up with some way to allow overloading / default >>>> parameters" - possibly with reduced feature set, i.e. 
if type inference is >>>> difficult in the presence of overloads, as suggested in some overloads >>>> discussions (although not unsolvable, as proven by other languages that >>>> allow both type inference & overloading?), possibly exclude overloads from >>>> the type inference by annotating overloaded methods with special attributes? >>>> >>>> 4. Possibly some other options I'm missing? >>>> >>>> -- >>>> Good luck! Alexander >>>> >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Fri Jun 6 02:49:49 2014 From: rusty.gates at icloud.com (Tommi) Date: Fri, 06 Jun 2014 12:49:49 +0300 Subject: [rust-dev] Convenience syntax for importing the module itself along with items within In-Reply-To: References: Message-ID: <4CB13F44-AAEB-4FFB-AB09-085414290842@icloud.com> Due to a lack of objections, I made an RFC at: https://github.com/rust-lang/rfcs/pull/108 On 2014-06-03, at 19:56, Gulshan Singh wrote: > +1, I was planning on suggesting this as well. > > On Jun 3, 2014 2:16 AM, "Tommi" wrote: > I find it somewhat jarring to have to spend two lines for the following kind of imports: > > use module::Type; > use module; > > So, I suggest we add a nicer syntax for doing the above imports using the following single line: > > use module::{self, Type}; > > It would probably be a good idea to force the `self` to be the first item on that list. And if someone writes the following... > > use module::self; > > ...that should probably cause at least a warning saying something like "You should write `use module;` instead of `use module::self;`". > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Fri Jun 6 02:54:17 2014 From: me at kevincantu.org (Kevin Cantu) Date: Fri, 6 Jun 2014 02:54:17 -0700 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: Message-ID: Apologies for the accidentally sent email. Not sure what GMail just did for me there. Anyways, with a macro it should be possible to use the types given to choose between mangled names, for example, at compile time. Kevin On Fri, Jun 6, 2014 at 2:44 AM, Kevin Cantu wrote: > I imagine a macro like the following, which is NOT a macro, because I > don't know how to write macros yet: > > > macro fillRect(args...) 
{ > > fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) > fillRect_I_I_I_I_BS ( int x, int y, int width, int height, Qt::BrushStyle > style ) > fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) > fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) > fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) > fillRect_R_C ( const QRect & rectangle, const QColor & color ) > fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) > fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush & > brush ) > fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor & > color ) > fillRect_I_I_I_I_GC ( int x, int y, int width, int height, Qt::GlobalColor > color ) > fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) > fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) > > > > > > > > > On Fri, Jun 6, 2014 at 2:35 AM, Kevin Cantu wrote: > >> Since C# allows overloaded methods, but F# doesn't want them, what F# >> does is somewhat interesting: "overloaded methods are permitted in the >> language, provided that the arguments are in tuple form, not curried form." >> [http://msdn.microsoft.com/en-us/library/dd483468.aspx] >> >> In practice, this means that all the calls to C# (tupled arguments) can >> be resolved, but idiomatic F# doesn't have overloaded methods. >> >> // tuple calling convention: looks like C# >> let aa = csharp_library.mx(1, 2) >> let bb = csharp_library.mx(1) >> >> // curried calling convention: makes dd, below, a function not a value >> let cc = fsharp_library.m2 1 2 >> let dd = fsharp_library.m2 1 >> >> Would it be useful to use pattern matching over some generic sort of >> tuples to implement something similar in Rust? >> >> >> Kevin >> >> >> >> On Sat, May 24, 2014 at 3:45 AM, Matthieu Monrocq < >> matthieu.monrocq at gmail.com> wrote: >> >>> >>> >>> >>> On Sat, May 24, 2014 at 9:06 AM, Zolt?n T?th wrote: >>> >>>> Alexander, your option 2 could be done automatically. By appending >>>> postfixes to the overloaded name depending on the parameter types. >>>> Increasing the number of letters used till the ambiguity is fully resolved. >>>> >>>> What do you think? >>>> >>>> >>>> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >>>> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, >>>> Qt::BrushStyle style ) >>>> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >>>> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >>>> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >>>> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >>>> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >>>> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush >>>> & brush ) >>>> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor >>>> & color ) >>>> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >>>> Qt::GlobalColor color ) >>>> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >>>> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >>>> >>>> >>>> I believe this alternative was considered in the original blog post >>> Alexander wrote: this is, in essence, mangling. It makes for ugly function >>> names, although the prefix helps in locating them I guess. >>> >>> >>> Before we talk about generation though, I would start about >>> investigating where those overloads come from. 
>>> >>> First, there are two different objects being manipulated here: >>> >>> + QRect is a rectangle with integral coordinates >>> + QRectF is a rectangle with floating point coordinates >>> >>> >>> Second, a QRect may already be build from "(int* x*, int* y*, int* >>> width*, int* height*)"; thus all overloads taking 4 hints instead of a >>> QRect are pretty useless in a sense. >>> >>> Third, in a similar vein, QBrush can be build from "(Qt::BrushStyle)", >>> "(Qt::GlobalColor)" or "(QColor const&)". So once again those overloads are >>> pretty useless. >>> >>> >>> This leaves us with: >>> >>> + fillRect(QRect const&, QBrush const&) >>> + fillRect(QRectF const&, QBrush const&) >>> >>> Yep, that's it. Of all those inconsistent overloads (missing 4 taking 4 >>> floats, by the way...) only 2 are ever useful. The other 10 can be safely >>> discarded without impacting the expressiveness. >>> >>> >>> Now, of course, the real question is how well a tool could perform this >>> reduction step. I would note here that the position and names of the >>> "coordinate" arguments of "fillRect" is exactly that of those to "QRect"; >>> maybe a simple exhaustive search would thus suffice (though it does require >>> semantic understanding of what a constructor and default arguments are). >>> >>> It would be interesting checking how many overloads remain *after* this >>> reduction step. Here we got a factor of 6 already (should have been 8 if >>> the interface had been complete). >>> >>> It would also be interesting checking if the distinction int/float often >>> surfaces, there might be an opportunity here. >>> >>> >>> -- Matthieu >>> >>> >>> Alexander Tsvyashchenko wrote: >>>> >>>>> So far I can imagine several possible answers: >>>>> >>>>> 1. "We don't care, your legacy C++ libraries are bad and you >>>>> should feel bad!" - I think this stance would be bad for Rust and would >>>>> hinder its adoption, but if that's the ultimate answer - I'd personally >>>>> prefer it said loud and clear, so that at least nobody has any illusions. >>>>> >>>>> 2. "Define & maintain the mapping between C++ and Rust function >>>>> names" (I assume this is what you're alluding to with "define meaningful >>>>> unique function names" above?) While this might be possible for smaller >>>>> libraries, this is out of the question for large libraries like Qt5 - at >>>>> least I won't create and maintain this mapping for sure, and I doubt others >>>>> will: just looking at the stats from 3 Qt5 libraries (QtCore, QtGui and >>>>> QtWidgets) out of ~30 Qt libraries in total, from the 50745 wrapped >>>>> methods 9601 were overloads and required renaming. >>>>> >>>>> Besides that, this has a disadvantage of throwing away majority of >>>>> the experience people have with particular library and forcing them to >>>>> le-learn its API. >>>>> >>>>> On top of that, not for every overload it's easy to come up with >>>>> short, meaningful, memorable and distinctive names - you can try that >>>>> exercise for >>>>> http://qt-project.org/doc/qt-4.8/qpainter.html#fillRect ;-) >>>>> >>>>> 3. "Come up with some way to allow overloading / default >>>>> parameters" - possibly with reduced feature set, i.e. if type inference is >>>>> difficult in the presence of overloads, as suggested in some overloads >>>>> discussions (although not unsolvable, as proven by other languages that >>>>> allow both type inference & overloading?), possibly exclude overloads from >>>>> the type inference by annotating overloaded methods with special attributes? >>>>> >>>>> 4. 
Possibly some other options I'm missing? >>>>> >>>>> -- >>>>> Good luck! Alexander >>>>> >>>>> >>>>> _______________________________________________ >>>>> Rust-dev mailing list >>>>> Rust-dev at mozilla.org >>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Fri Jun 6 03:24:54 2014 From: banderson at mozilla.com (Brian Anderson) Date: Fri, 06 Jun 2014 03:24:54 -0700 Subject: [rust-dev] Type System In-Reply-To: References: Message-ID: <539196F6.1070709@mozilla.com> Again, I must ask you to stop. Claiming that we are 'ignoring the developments of research' is *extremely* insulting. I've turned moderation on for your account. Further messages to this mailing list will need to be approved by the admins. On 06/05/2014 11:00 PM, Suminda Dharmasena wrote: > Hi, > > The concept of Rust is definitely appealing at a high level. One area > that can improve is the Type System. Instead of ignoring the > developments and research in this area please find a way to embrace it. > > Also a proper active objects based OO system would be appealing. > > Suminda > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.krishnaswami at cs.bham.ac.uk Fri Jun 6 06:23:25 2014 From: n.krishnaswami at cs.bham.ac.uk (Neelakantan Krishnaswami) Date: Fri, 06 Jun 2014 14:23:25 +0100 Subject: [rust-dev] Final call for talk proposals: HOPE'14 (Workshop on Higher-Order Programming with Effects, affiliated with ICFP'14) Message-ID: <5391C0CD.8040609@cs.bham.ac.uk> ---------------------------------------------------------------------- CALL FOR TALK PROPOSALS HOPE 2014 The 3rd ACM SIGPLAN Workshop on Higher-Order Programming with Effects August 31, 2014 Gothenburg, Sweden (the day before ICFP 2014) http://hope2014.mpi-sws.org ---------------------------------------------------------------------- HOPE 2014 aims at bringing together researchers interested in the design, semantics, implementation, and verification of higher-order effectful programs. It will be *informal*, consisting of invited talks, contributed talks on work in progress, and open-ended discussion sessions. --------------------- Goals of the Workshop --------------------- A recurring theme in many papers at ICFP, and in the research of many ICFP attendees, is the interaction of higher-order programming with various kinds of effects: storage effects, I/O, control effects, concurrency, etc. While effects are of critical importance in many applications, they also make it hard to build, maintain, and reason about one's code. Higher-order languages (both functional and object-oriented) provide a variety of abstraction mechanisms to help "tame" or "encapsulate" effects (e.g. 
monads, ADTs, ownership types, typestate, first-class events, transactions, Hoare Type Theory, session types, substructural and region-based type systems), and a number of different semantic models and verification technologies have been developed in order to codify and exploit the benefits of this encapsulation (e.g. bisimulations, step-indexed Kripke logical relations, higher-order separation logic, game semantics, various modal logics). But there remain many open problems, and the field is highly active. The goal of the HOPE workshop is to bring researchers from a variety of different backgrounds and perspectives together to exchange new and exciting ideas concerning the design, semantics, implementation, and verification of higher-order effectful programs. We want HOPE to be as informal and interactive as possible. The program will thus involve a combination of invited talks, contributed talks about work in progress, and open-ended discussion sessions. There will be no published proceedings, but participants will be invited to submit working documents, talk slides, etc. to be posted on this website. ----------------------- Call for Talk Proposals ----------------------- We solicit proposals for contributed talks. Proposals should be at most 2 pages, in either plain text or PDF format, and should specify how long a talk the speaker wishes to give. By default, contributed talks will be 30 minutes long, but proposals for shorter or longer talks will also be considered. Speakers may also submit supplementary material (e.g. a full paper, talk slides) if they desire, which PC members are free (but not expected) to read. We are interested in talks on all topics related to the interaction of higher-order programming and computational effects. Talks about work in progress are particularly encouraged. If you have any questions about the relevance of a particular topic, please contact the PC chairs at the address hope2014 AT mpi-sws.org. Deadline for talk proposals: June 13, 2014 (Friday) Notification of acceptance: July 4, 2014 (Friday) Workshop: August 31, 2014 (Sunday) The submission website is now open: https://www.easychair.org/conferences/?conf=hope2014 --------------- Invited Speaker --------------- Verifying Security Properties of SES Programs Philippa Gardner, Imperial College London Secure ECMAScript (SES) is a subset of JavaScript, designed in such a way that untrusted code can safety co-exist with trusted code. We introduce a program logic for verifying security properties of SES programs. It follows separation logic in that we can make local assertions about local state. It is different from separation logic in that we can also make global assertions about the global state and its interface with the local state. For example, we can globally assert that untrusted objects do not contain pointers to local trusted objects. Such assertions are key for describing security properties of common SES programs. This logic builds on the work of Gardner, Maffeis and Smith on reasoning about a core fragment of JavaScript (POPL2012), and the recent work of Smith on extending the logic to handle higher-order functions. This is joint work with Gareth Smith and Thomas Wood, Imperial. 
--------------------- Workshop Organization --------------------- Program Co-Chairs: Neel Krishnaswami (University of Birmingham) Hongseok Yang (University of Oxford) Program Committee: Zena Ariola (University of Oregon) Ohad Kammar (University of Cambridge) Ioannis Kassios (ETH Zurich) Naoki Kobayashi (University of Tokyo) Paul Blain Levy (University of Birmingham) Aleks Nanevski (IMDEA) Scott Owens (University of Kent) Sam Staton (Radboud University Nijmegen) Steve Zdancewic (University of Pennsylvania) From lists at dhardy.name Fri Jun 6 06:40:14 2014 From: lists at dhardy.name (Diggory Hardy) Date: Fri, 06 Jun 2014 15:40:14 +0200 Subject: [rust-dev] strings in sets/maps Message-ID: <1839434.JlW78sHfkb@tph-l13071> Dear List, I want to use strings as map keys, but couldn't find any mention of this in my understanding common use-case. The following works but as far as I understand requires a copy of the potential key to be made to call `contains()`, is this correct? let mut set: HashSet = HashSet::new(); set.insert( "x".into_string() ); println!( "set contains x: {}", set.contains( &"x".into_string() ) ); Note: I would normally be storing/testing &str types with non-static lifetime, but I don't think this makes a difference. I notice that HashSet<&str> also works (if lifetimes of the inserted strings are sufficient). Regards, Diggory Hardy -------------- next part -------------- An HTML attachment was scrubbed... URL: From cg.wowus.cg at gmail.com Fri Jun 6 06:42:32 2014 From: cg.wowus.cg at gmail.com (Clark Gaebel) Date: Fri, 6 Jun 2014 09:42:32 -0400 Subject: [rust-dev] strings in sets/maps In-Reply-To: <1839434.JlW78sHfkb@tph-l13071> References: <1839434.JlW78sHfkb@tph-l13071> Message-ID: &str and string are "equivalent", so use the _equiv version of functions you need. I'll send a patch to better-document this common use case later today. On 2014-06-06 9:40 AM, "Diggory Hardy" wrote: > Dear List, > > > > I want to use strings as map keys, but couldn't find any mention of this > in my understanding common use-case. The following works but as far as I > understand requires a copy of the potential key to be made to call > `contains()`, is this correct? > > > > let mut set: HashSet = HashSet::new(); > > set.insert( "x".into_string() ); > > println!( "set contains x: {}", set.contains( &"x".into_string() ) ); > > > > Note: I would normally be storing/testing &str types with non-static > lifetime, but I don't think this makes a difference. > > > > I notice that HashSet<&str> also works (if lifetimes of the inserted > strings are sufficient). > > > > Regards, > > Diggory Hardy > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manishsmail at gmail.com Fri Jun 6 07:54:48 2014 From: manishsmail at gmail.com (Manish Goregaokar) Date: Fri, 6 Jun 2014 20:24:48 +0530 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: Message-ID: FWIW you can achieve overloading by means of a parameter enum that captures all the various ways of entering data. Usually an overloaded function is just that -- each variant does the same thing, just that the initial data is in a different format. 
So you drive down to the essence of the overloading -- here the overloading boils down to "give me a rectangle, in some format, and a color, in some format" -- so do just that, give it enum parameters. One that is "a rectangle, in some format", and the other is "a color, in some format". This is actually cleaner, IMO, and as an added bonus you only have to define the function once with a match statement inside to normalize the data. This also lets one "bubble" the overloading out, you can call this from within a similar "rust-overloaded" function that needs, say, a rectangle, a circle, and a color, without having to tweak the parameters for every overloaded instance True, in some cases overloaded functions have different behavior based on which one is used, but that can be handled here too, and for that matter if they behave differently it might be best to give them different names :) For example, the two ways of specifying a rect can be managed via: enum Rectangle { Rect(Qrect), Coord(int, int, int, int) } and for the colors enum ColorProvider { Brush(QBrush), Color(QColor), GColor(GlobalColor) } -Manish Goregaokar On Fri, Jun 6, 2014 at 3:24 PM, Kevin Cantu wrote: > Apologies for the accidentally sent email. Not sure what GMail just did > for me there. Anyways, with a macro it should be possible to use the types > given to choose between mangled names, for example, at compile time. > > Kevin > > > > > > > > On Fri, Jun 6, 2014 at 2:44 AM, Kevin Cantu wrote: > >> I imagine a macro like the following, which is NOT a macro, because I >> don't know how to write macros yet: >> >> >> macro fillRect(args...) { >> >> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, Qt::BrushStyle >> style ) >> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush & >> brush ) >> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor & >> color ) >> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >> Qt::GlobalColor color ) >> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >> >> >> >> >> >> >> >> >> On Fri, Jun 6, 2014 at 2:35 AM, Kevin Cantu wrote: >> >>> Since C# allows overloaded methods, but F# doesn't want them, what F# >>> does is somewhat interesting: "overloaded methods are permitted in the >>> language, provided that the arguments are in tuple form, not curried form." >>> [http://msdn.microsoft.com/en-us/library/dd483468.aspx] >>> >>> In practice, this means that all the calls to C# (tupled arguments) can >>> be resolved, but idiomatic F# doesn't have overloaded methods. >>> >>> // tuple calling convention: looks like C# >>> let aa = csharp_library.mx(1, 2) >>> let bb = csharp_library.mx(1) >>> >>> // curried calling convention: makes dd, below, a function not a value >>> let cc = fsharp_library.m2 1 2 >>> let dd = fsharp_library.m2 1 >>> >>> Would it be useful to use pattern matching over some generic sort of >>> tuples to implement something similar in Rust? 
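To make the parameter-enum idea above concrete, here is a minimal, self-contained sketch. The QRect/QBrush/QColor structs are stand-ins for the generated Qt wrapper types rather than the real bindings, and the syntax is the mid-2014 dialect used elsewhere in this thread (bare variant names, `int`), so treat it as a shape to discuss, not a drop-in implementation:

```rust
// Stand-ins for the generated Qt wrapper types -- not the real bindings.
struct QRect { x: int, y: int, w: int, h: int }
struct QBrush { style: int }
struct QColor { rgb: int }

// One enum per "overloaded" parameter, as suggested above.
enum Rectangle {
    Rect(QRect),
    Coord(int, int, int, int),
}

enum ColorProvider {
    Brush(QBrush),
    Color(QColor),
}

// The single underlying binding; in a real wrapper this would be the FFI call.
fn fill_rect_raw(_rect: &QRect, _brush: &QBrush) { /* extern call goes here */ }

// One public entry point: normalize the inputs once, then call the one binding.
fn fill_rect(rect: Rectangle, color: ColorProvider) {
    let r = match rect {
        Rect(r) => r,
        Coord(x, y, w, h) => QRect { x: x, y: y, w: w, h: h },
    };
    let b = match color {
        Brush(b) => b,
        Color(c) => QBrush { style: c.rgb }, // stand-in for "make a brush from a color"
    };
    fill_rect_raw(&r, &b)
}

fn main() {
    fill_rect(Coord(0, 0, 10, 10), Color(QColor { rgb: 0xff0000 }));
    fill_rect(Rect(QRect { x: 1, y: 1, w: 5, h: 5 }), Brush(QBrush { style: 1 }));
}
```

The match arms are the only place the "overloads" live, which is the point: callers keep one function name, and supporting a new input format means adding a variant and an arm, not exporting another symbol.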
>>> >>> >>> Kevin >>> >>> >>> >>> On Sat, May 24, 2014 at 3:45 AM, Matthieu Monrocq < >>> matthieu.monrocq at gmail.com> wrote: >>> >>>> >>>> >>>> >>>> On Sat, May 24, 2014 at 9:06 AM, Zolt?n T?th wrote: >>>> >>>>> Alexander, your option 2 could be done automatically. By appending >>>>> postfixes to the overloaded name depending on the parameter types. >>>>> Increasing the number of letters used till the ambiguity is fully resolved. >>>>> >>>>> What do you think? >>>>> >>>>> >>>>> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >>>>> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, >>>>> Qt::BrushStyle style ) >>>>> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >>>>> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >>>>> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >>>>> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >>>>> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >>>>> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush >>>>> & brush ) >>>>> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor >>>>> & color ) >>>>> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >>>>> Qt::GlobalColor color ) >>>>> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >>>>> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >>>>> >>>>> >>>>> I believe this alternative was considered in the original blog post >>>> Alexander wrote: this is, in essence, mangling. It makes for ugly function >>>> names, although the prefix helps in locating them I guess. >>>> >>>> >>>> Before we talk about generation though, I would start about >>>> investigating where those overloads come from. >>>> >>>> First, there are two different objects being manipulated here: >>>> >>>> + QRect is a rectangle with integral coordinates >>>> + QRectF is a rectangle with floating point coordinates >>>> >>>> >>>> Second, a QRect may already be build from "(int* x*, int* y*, int* >>>> width*, int* height*)"; thus all overloads taking 4 hints instead of a >>>> QRect are pretty useless in a sense. >>>> >>>> Third, in a similar vein, QBrush can be build from "(Qt::BrushStyle)", >>>> "(Qt::GlobalColor)" or "(QColor const&)". So once again those overloads are >>>> pretty useless. >>>> >>>> >>>> This leaves us with: >>>> >>>> + fillRect(QRect const&, QBrush const&) >>>> + fillRect(QRectF const&, QBrush const&) >>>> >>>> Yep, that's it. Of all those inconsistent overloads (missing 4 taking 4 >>>> floats, by the way...) only 2 are ever useful. The other 10 can be safely >>>> discarded without impacting the expressiveness. >>>> >>>> >>>> Now, of course, the real question is how well a tool could perform this >>>> reduction step. I would note here that the position and names of the >>>> "coordinate" arguments of "fillRect" is exactly that of those to "QRect"; >>>> maybe a simple exhaustive search would thus suffice (though it does require >>>> semantic understanding of what a constructor and default arguments are). >>>> >>>> It would be interesting checking how many overloads remain *after* this >>>> reduction step. Here we got a factor of 6 already (should have been 8 if >>>> the interface had been complete). >>>> >>>> It would also be interesting checking if the distinction int/float >>>> often surfaces, there might be an opportunity here. 
>>>> >>>> >>>> -- Matthieu >>>> >>>> >>>> Alexander Tsvyashchenko wrote: >>>>> >>>>>> So far I can imagine several possible answers: >>>>>> >>>>>> 1. "We don't care, your legacy C++ libraries are bad and you >>>>>> should feel bad!" - I think this stance would be bad for Rust and would >>>>>> hinder its adoption, but if that's the ultimate answer - I'd personally >>>>>> prefer it said loud and clear, so that at least nobody has any illusions. >>>>>> >>>>>> 2. "Define & maintain the mapping between C++ and Rust function >>>>>> names" (I assume this is what you're alluding to with "define meaningful >>>>>> unique function names" above?) While this might be possible for smaller >>>>>> libraries, this is out of the question for large libraries like Qt5 - at >>>>>> least I won't create and maintain this mapping for sure, and I doubt others >>>>>> will: just looking at the stats from 3 Qt5 libraries (QtCore, QtGui and >>>>>> QtWidgets) out of ~30 Qt libraries in total, from the 50745 wrapped >>>>>> methods 9601 were overloads and required renaming. >>>>>> >>>>>> Besides that, this has a disadvantage of throwing away majority >>>>>> of the experience people have with particular library and forcing them to >>>>>> le-learn its API. >>>>>> >>>>>> On top of that, not for every overload it's easy to come up with >>>>>> short, meaningful, memorable and distinctive names - you can try that >>>>>> exercise for >>>>>> http://qt-project.org/doc/qt-4.8/qpainter.html#fillRect ;-) >>>>>> >>>>>> 3. "Come up with some way to allow overloading / default >>>>>> parameters" - possibly with reduced feature set, i.e. if type inference is >>>>>> difficult in the presence of overloads, as suggested in some overloads >>>>>> discussions (although not unsolvable, as proven by other languages that >>>>>> allow both type inference & overloading?), possibly exclude overloads from >>>>>> the type inference by annotating overloaded methods with special attributes? >>>>>> >>>>>> 4. Possibly some other options I'm missing? >>>>>> >>>>>> -- >>>>>> Good luck! Alexander >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Rust-dev mailing list >>>>>> Rust-dev at mozilla.org >>>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Rust-dev mailing list >>>>> Rust-dev at mozilla.org >>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>>> >>> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From valerii.hiora at gmail.com Fri Jun 6 08:19:28 2014 From: valerii.hiora at gmail.com (Valerii Hiora) Date: Fri, 06 Jun 2014 18:19:28 +0300 Subject: [rust-dev] How do I bootstrap rust form armhf? In-Reply-To: References: Message-ID: <5391DC00.9080708@gmail.com> > I'm trying to run rustc on an arm board, but obviously there's no > precompiled stage0 to build the compiler. > Is there a procedure to cross-compile stage0 on other host machine where > I do have rustc? 
Disclaimer: haven't tried anything like this, but just a couple of hints: - configure script checks for CFG_ENABLE_LOCAL_RUST, so it should be possible to use any rustc - in src/etc there are make_snapshot.py, snapshot.py, local_stage0.sh which might be useful for study - there are a couple of mentions of cfg(stage0) in sources, probably compiling rust for stage0 will require additionally setting --cfg stage0 -- Valerii From lists at dhardy.name Fri Jun 6 08:34:26 2014 From: lists at dhardy.name (Diggory Hardy) Date: Fri, 06 Jun 2014 17:34:26 +0200 Subject: [rust-dev] strings in sets/maps In-Reply-To: References: <1839434.JlW78sHfkb@tph-l13071> Message-ID: <3541942.zqScxjU9uJ@tph-l13071> contains_equiv ? thanks Related: how to do this in a match? let s = "abc".to_owned(); match s { "abc" => ... On Friday 06 Jun 2014 09:42:32 you wrote: > &str and string are "equivalent", so use the _equiv version of functions > you need. I'll send a patch to better-document this common use case later > today. > > On 2014-06-06 9:40 AM, "Diggory Hardy" wrote: > > Dear List, > > > > I want to use strings as map keys, but couldn't find any mention of this > > in my understanding common use-case. The following works but as far as I > > understand requires a copy of the potential key to be made to call > > `contains()`, is this correct? > > > > > > > > let mut set: HashSet = HashSet::new(); > > > > set.insert( "x".into_string() ); > > > > println!( "set contains x: {}", set.contains( &"x".into_string() ) ); -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Fri Jun 6 08:44:12 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Fri, 06 Jun 2014 08:44:12 -0700 Subject: [rust-dev] strings in sets/maps In-Reply-To: <1839434.JlW78sHfkb@tph-l13071> References: <1839434.JlW78sHfkb@tph-l13071> Message-ID: <5391E1CC.8050604@mozilla.com> On 6/6/14 6:40 AM, Diggory Hardy wrote: > Dear List, > > I want to use strings as map keys, but couldn't find any mention of this > in my understanding common use-case. The following works but as far as I > understand requires a copy of the potential key to be made to call > `contains()`, is this correct? I've been thinking for a while that we should provide a `StringMap` for this common use case, to make the `equiv()` stuff easier to sort out. "Easy things should be easy; hard things should be possible." Patrick From farcaller at gmail.com Fri Jun 6 10:10:20 2014 From: farcaller at gmail.com (Vladimir Pouzanov) Date: Fri, 6 Jun 2014 18:10:20 +0100 Subject: [rust-dev] How do I bootstrap rust form armhf? In-Reply-To: <5391DC00.9080708@gmail.com> References: <5391DC00.9080708@gmail.com> Message-ID: Thanks, I managed to bootstrap my rustc already with a few hints. On Fri, Jun 6, 2014 at 4:19 PM, Valerii Hiora wrote: > I'm trying to run rustc on an arm board, but obviously there's no >> precompiled stage0 to build the compiler. >> Is there a procedure to cross-compile stage0 on other host machine where >> I do have rustc? 
>> > > Disclaimer: haven't tried anything like this, but just a couple of > hints: > > - configure script checks for CFG_ENABLE_LOCAL_RUST, so it should be > possible to use any rustc > - in src/etc there are make_snapshot.py, snapshot.py, local_stage0.sh > which might be useful for study > - there are a couple of mentions of cfg(stage0) in sources, probably > compiling rust for stage0 will require additionally setting --cfg stage0 > > -- > > Valerii > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- Sincerely, Vladimir "Farcaller" Pouzanov http://farcaller.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Fri Jun 6 13:53:38 2014 From: me at kevincantu.org (Kevin Cantu) Date: Fri, 6 Jun 2014 13:53:38 -0700 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: Message-ID: Yeah, that's what I'd want to do for a user-developed binding. But for automatically generated bindings, giving users a macro which inferred function choice from the arguments -- and complained at compile time when unable to -- would be really cool. Let me try to some sort of proof of concept later this weekend, and then we can talk about automatically generating my hypothetical macro. Kevin On Fri, Jun 6, 2014 at 7:54 AM, Manish Goregaokar wrote: > FWIW you can achieve overloading by means of a parameter enum that > captures all the various ways of entering data. Usually an overloaded > function is just that -- each variant does the same thing, just that the > initial data is in a different format. > > So you drive down to the essence of the overloading -- here the > overloading boils down to "give me a rectangle, in some format, and a > color, in some format" -- so do just that, give it enum parameters. One > that is "a rectangle, in some format", and the other is "a color, in some > format". > > This is actually cleaner, IMO, and as an added bonus you only have to > define the function once with a match statement inside to normalize the > data. This also lets one "bubble" the overloading out, you can call this > from within a similar "rust-overloaded" function that needs, say, a > rectangle, a circle, and a color, without having to tweak the parameters > for every overloaded instance > > True, in some cases overloaded functions have different behavior based on > which one is used, but that can be handled here too, and for that matter if > they behave differently it might be best to give them different names :) > > For example, the two ways of specifying a rect can be managed via: > enum Rectangle { > Rect(Qrect), > Coord(int, int, int, int) > } > > and for the colors > enum ColorProvider { > Brush(QBrush), > Color(QColor), > GColor(GlobalColor) > } > > -Manish Goregaokar > > > On Fri, Jun 6, 2014 at 3:24 PM, Kevin Cantu wrote: > >> Apologies for the accidentally sent email. Not sure what GMail just did >> for me there. Anyways, with a macro it should be possible to use the types >> given to choose between mangled names, for example, at compile time. >> >> Kevin >> >> >> >> >> >> >> >> On Fri, Jun 6, 2014 at 2:44 AM, Kevin Cantu wrote: >> >>> I imagine a macro like the following, which is NOT a macro, because I >>> don't know how to write macros yet: >>> >>> >>> macro fillRect(args...) 
{ >>> >>> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >>> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, >>> Qt::BrushStyle style ) >>> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >>> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >>> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >>> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >>> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >>> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush & >>> brush ) >>> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor & >>> color ) >>> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >>> Qt::GlobalColor color ) >>> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >>> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >>> >>> >>> >>> >>> >>> >>> >>> >>> On Fri, Jun 6, 2014 at 2:35 AM, Kevin Cantu wrote: >>> >>>> Since C# allows overloaded methods, but F# doesn't want them, what F# >>>> does is somewhat interesting: "overloaded methods are permitted in the >>>> language, provided that the arguments are in tuple form, not curried form." >>>> [http://msdn.microsoft.com/en-us/library/dd483468.aspx] >>>> >>>> In practice, this means that all the calls to C# (tupled arguments) can >>>> be resolved, but idiomatic F# doesn't have overloaded methods. >>>> >>>> // tuple calling convention: looks like C# >>>> let aa = csharp_library.mx(1, 2) >>>> let bb = csharp_library.mx(1) >>>> >>>> // curried calling convention: makes dd, below, a function not a value >>>> let cc = fsharp_library.m2 1 2 >>>> let dd = fsharp_library.m2 1 >>>> >>>> Would it be useful to use pattern matching over some generic sort of >>>> tuples to implement something similar in Rust? >>>> >>>> >>>> Kevin >>>> >>>> >>>> >>>> On Sat, May 24, 2014 at 3:45 AM, Matthieu Monrocq < >>>> matthieu.monrocq at gmail.com> wrote: >>>> >>>>> >>>>> >>>>> >>>>> On Sat, May 24, 2014 at 9:06 AM, Zolt?n T?th wrote: >>>>> >>>>>> Alexander, your option 2 could be done automatically. By appending >>>>>> postfixes to the overloaded name depending on the parameter types. >>>>>> Increasing the number of letters used till the ambiguity is fully resolved. >>>>>> >>>>>> What do you think? >>>>>> >>>>>> >>>>>> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >>>>>> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, >>>>>> Qt::BrushStyle style ) >>>>>> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >>>>>> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >>>>>> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >>>>>> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >>>>>> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >>>>>> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const >>>>>> QBrush & brush ) >>>>>> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const >>>>>> QColor & color ) >>>>>> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >>>>>> Qt::GlobalColor color ) >>>>>> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >>>>>> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >>>>>> >>>>>> >>>>>> I believe this alternative was considered in the original blog post >>>>> Alexander wrote: this is, in essence, mangling. It makes for ugly function >>>>> names, although the prefix helps in locating them I guess. 
>>>>> >>>>> >>>>> Before we talk about generation though, I would start about >>>>> investigating where those overloads come from. >>>>> >>>>> First, there are two different objects being manipulated here: >>>>> >>>>> + QRect is a rectangle with integral coordinates >>>>> + QRectF is a rectangle with floating point coordinates >>>>> >>>>> >>>>> Second, a QRect may already be build from "(int* x*, int* y*, int* >>>>> width*, int* height*)"; thus all overloads taking 4 hints instead of >>>>> a QRect are pretty useless in a sense. >>>>> >>>>> Third, in a similar vein, QBrush can be build from "(Qt::BrushStyle)", >>>>> "(Qt::GlobalColor)" or "(QColor const&)". So once again those overloads are >>>>> pretty useless. >>>>> >>>>> >>>>> This leaves us with: >>>>> >>>>> + fillRect(QRect const&, QBrush const&) >>>>> + fillRect(QRectF const&, QBrush const&) >>>>> >>>>> Yep, that's it. Of all those inconsistent overloads (missing 4 taking >>>>> 4 floats, by the way...) only 2 are ever useful. The other 10 can be safely >>>>> discarded without impacting the expressiveness. >>>>> >>>>> >>>>> Now, of course, the real question is how well a tool could perform >>>>> this reduction step. I would note here that the position and names of the >>>>> "coordinate" arguments of "fillRect" is exactly that of those to "QRect"; >>>>> maybe a simple exhaustive search would thus suffice (though it does require >>>>> semantic understanding of what a constructor and default arguments are). >>>>> >>>>> It would be interesting checking how many overloads remain *after* >>>>> this reduction step. Here we got a factor of 6 already (should have been 8 >>>>> if the interface had been complete). >>>>> >>>>> It would also be interesting checking if the distinction int/float >>>>> often surfaces, there might be an opportunity here. >>>>> >>>>> >>>>> -- Matthieu >>>>> >>>>> >>>>> Alexander Tsvyashchenko wrote: >>>>>> >>>>>>> So far I can imagine several possible answers: >>>>>>> >>>>>>> 1. "We don't care, your legacy C++ libraries are bad and you >>>>>>> should feel bad!" - I think this stance would be bad for Rust and would >>>>>>> hinder its adoption, but if that's the ultimate answer - I'd personally >>>>>>> prefer it said loud and clear, so that at least nobody has any illusions. >>>>>>> >>>>>>> 2. "Define & maintain the mapping between C++ and Rust function >>>>>>> names" (I assume this is what you're alluding to with "define meaningful >>>>>>> unique function names" above?) While this might be possible for smaller >>>>>>> libraries, this is out of the question for large libraries like Qt5 - at >>>>>>> least I won't create and maintain this mapping for sure, and I doubt others >>>>>>> will: just looking at the stats from 3 Qt5 libraries (QtCore, QtGui and >>>>>>> QtWidgets) out of ~30 Qt libraries in total, from the 50745 wrapped >>>>>>> methods 9601 were overloads and required renaming. >>>>>>> >>>>>>> Besides that, this has a disadvantage of throwing away majority >>>>>>> of the experience people have with particular library and forcing them to >>>>>>> le-learn its API. >>>>>>> >>>>>>> On top of that, not for every overload it's easy to come up with >>>>>>> short, meaningful, memorable and distinctive names - you can try that >>>>>>> exercise for >>>>>>> http://qt-project.org/doc/qt-4.8/qpainter.html#fillRect ;-) >>>>>>> >>>>>>> 3. "Come up with some way to allow overloading / default >>>>>>> parameters" - possibly with reduced feature set, i.e. 
if type inference is >>>>>>> difficult in the presence of overloads, as suggested in some overloads >>>>>>> discussions (although not unsolvable, as proven by other languages that >>>>>>> allow both type inference & overloading?), possibly exclude overloads from >>>>>>> the type inference by annotating overloaded methods with special attributes? >>>>>>> >>>>>>> 4. Possibly some other options I'm missing? >>>>>>> >>>>>>> -- >>>>>>> Good luck! Alexander >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Rust-dev mailing list >>>>>>> Rust-dev at mozilla.org >>>>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Rust-dev mailing list >>>>>> Rust-dev at mozilla.org >>>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Rust-dev mailing list >>>>> Rust-dev at mozilla.org >>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>> >>>>> >>>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.sapin at exyr.org Fri Jun 6 14:59:53 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Fri, 06 Jun 2014 22:59:53 +0100 Subject: [rust-dev] strings in sets/maps In-Reply-To: <3541942.zqScxjU9uJ@tph-l13071> References: <1839434.JlW78sHfkb@tph-l13071> <3541942.zqScxjU9uJ@tph-l13071> Message-ID: <539239D9.3090302@exyr.org> On 06/06/2014 16:34, Diggory Hardy wrote: > Related: how to do this in a match? > > let s = "abc".to_owned(); > match s { > "abc" => ... match s.as_slice() { // ... -- Simon Sapin From spam at scientician.net Sat Jun 7 09:01:16 2014 From: spam at scientician.net (Bardur Arantsson) Date: Sat, 07 Jun 2014 18:01:16 +0200 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: <538FA53E.1030309@mozilla.com> References: <538FA53E.1030309@mozilla.com> Message-ID: On 2014-06-05 01:01, Brian Anderson wrote: > # Date/Time (https://github.com/mozilla/rust/issues/14657) > > Our time crate is very minimal, and the API looks dated. This is a hard > problem and JodaTime seems to be well regarded so let's just copy it. JSR310 has already been mentioned in the thread, but I didn't see anyone mentioning that it was accepted into the (relatively) recently finalized JDK8: http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html The important thing to note is basically that it was simplified quite a lot relative to JodaTime, in particular by removing non-Gregorian chronologies. Regards, From axel.viala at darnuria.eu Sat Jun 7 10:46:56 2014 From: axel.viala at darnuria.eu (Axel Viala) Date: Sat, 07 Jun 2014 19:46:56 +0200 Subject: [rust-dev] How do I bootstrap rust form armhf? In-Reply-To: References: <5391DC00.9080708@gmail.com> Message-ID: <53935010.6080602@darnuria.eu> On 06/06/2014 19:10, Vladimir Pouzanov wrote: > Thanks, I managed to bootstrap my rustc already with a few hints. > > Yeah! Try to contribute about it, for packaging rust into debian and other distribution this would be a great contribution! :) > On Fri, Jun 6, 2014 at 4:19 PM, Valerii Hiora > wrote: > > I'm trying to run rustc on an arm board, but obviously there's no > precompiled stage0 to build the compiler. > Is there a procedure to cross-compile stage0 on other host > machine where > I do have rustc? 
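Pulling the strings-in-sets/maps thread above together: Clark's `_equiv` suggestion plus Simon's `as_slice()` answer come out to roughly the sketch below. It assumes the mid-2014 library (the `contains_equiv` name Diggory guessed, the `std::collections` path), so the import and method names may need adjusting against a current build; the element type in Diggory's snippet was eaten by the archive's HTML scrubbing and is restored here as `HashSet<String>`, which the surrounding code implies.

```rust
use std::collections::HashSet;

fn main() {
    let mut set: HashSet<String> = HashSet::new();
    set.insert("x".into_string());

    // The _equiv lookups accept anything Equiv to the stored key type, so a
    // &str literal can be tested without allocating a temporary String.
    println!("set contains x: {}", set.contains_equiv(&"x"));

    // Matching an owned string against literal patterns goes through a
    // borrowed slice, as in Simon's reply.
    let s = "abc".to_owned();
    match s.as_slice() {
        "abc" => println!("matched abc"),
        _ => println!("no match"),
    }
}
```

If the `StringMap` wrapper Patrick mentions materializes, the `_equiv` call would presumably disappear behind it, but the slice-based match would stay the same.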
> > > Disclaimer: haven't tried anything like this, but just a couple > of hints: > > - configure script checks for CFG_ENABLE_LOCAL_RUST, so it > should be > possible to use any rustc > - in src/etc there are make_snapshot.py, snapshot.py, > local_stage0.sh > which might be useful for study > - there are a couple of mentions of cfg(stage0) in sources, > probably > compiling rust for stage0 will require additionally setting --cfg > stage0 > -- My Mozillian profile because I believe in an Open Internet. https://mozillians.org/en-US/u/darnuria/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From freekh at gmail.com Sat Jun 7 12:28:59 2014 From: freekh at gmail.com (Fredrik Ekholdt) Date: Sat, 7 Jun 2014 21:28:59 +0200 Subject: [rust-dev] Deprecating rustpkg Message-ID: <5FAC0072-E2DB-4194-91A4-62C24CFC4B08@gmail.com> Hi! Seems like this thread has been dead for a while. I am new to rust, but was playing with it today, looking for rustpkg and ended up reading this thread. I have tried to read through this thread, but it is possible that there is a newer more relevant thread covering this topic in which case I excuse myself. I am currently working on a language agnostic dependency/package manager and I was wondering whether it might suit Rusts requirements. Right now we are targeting it as a replacement to Maven/Ivy on the JVM, but the idea all along is to make it platform independent (and native) by having a small and predicable algorithm for resolution. The resolution engine is written 200 LOCs, and the overall logic (excluding metadata reading and so on) is about 400 LOCs more. (I wonder if the Rust implementation will be faster than the one in Scala? :) If there is interest I might start looking more into how to port it (I will need help though) - if not, I guess it was worth a shot! :) It is called Adept (https://github.com/adept-dm/adept) and it is in alpha for the Scala/JVM. The docs (listed below) are still unfortunately a bit scattered so I am summarising here. Some features that might be of interest: - Fast and reliable resolution of metadata using versioned metadata (which is easily & safely cacheable). - Fast and reliable artifact (binary files/sources) downloads, i.e. can download from multiple sources in parallel. - Authors can describe compatibility matrixes of their modules. The short story is that authors can rank modules and thereby define which ones are compatible (or can be replaced) and in multiple files thereby having many ?series? of compatible modules. Using this scheme we can emulate: ?normal? versioning, (what is called) "semantic? versioning and backward compatibility matrixes, but also other, more exotic, version schemes as well. AdeptHub (see below) will make it easy for authors to use the standard ones, but also make it possible to customise this. - Adept?s engine is flexible enough so that authors can publish multiple packages to multiple platforms and based on user input figure out which package & platform a user should get. - Adept?s engine is flexible enough to emulate the concept of scopes/configurations/views so an author can publish different scopes/configurations/views of the same package: one for compile, one for runtime, one for testing, etc etc. - Supports resolution through multiple attributes author-defined attributes. You can require a specific version, a binary-version, but also ?maturity? (or "release-type?) or whatever other attribute that might be relevant. 
- Does not require a user to resolve (figure out which packages you need), when they check out code a project. The way this works is that Adept generates a file after resolution that contains all artifacts (their locations, their hashes, filenames) that is required, as well as current requirements and the context (which metadata and where it should be downloaded from). Users/build server will therefore get exactly the same artifacts each time they build (using the SHA-256 hashes), but using compatibility matrixes it is possible to upgrade to a later compatible version easily/programmatically. This file currently called the "lockfile" , but it is not to be confused with Rubygem lockfiles. Note the name ?lockfile' will change of because of this confusion. - Is decentralized (as Git), but has a central hub, adepthub.com, (as GitHub) so first-time users can easily find things. - Decentralization makes it possible for users to change metadata (and contribute it or put it somewhere else), but also makes it possible to support a dsl/api in the build tool, where users can create requirements on build time (version ranges is supported through this). - Works offline (i.e not connected to the intertubes) provided that metadata/artifacts is locally available. It knows exactly what it needs so if something is not available it can give easy-to-understand error messages for when something is missing (which is different from Ivy/Maven although I am not sure what the story for cargo or rustpkg was?). - Supports sandboxed projects reliably (no global/changing artifacts). When you checkout a project that uses Adept, you can be sure it has the same artifacts as the one you used on your dev machine. - CLI-like search for packages (through Scalas sbt, but can be extended to a pure CLI tool). Works locally and on online repositories. - Repository metadata is decoupled from a projects source code, which is feature for me, because you might have different workflows etc etc for source code and actual releases. Stuff we are working on the next weeks: - Easy publishing to adepthub.com. - A better web app including browsing and non-CLI searches. - Online resolution on AdeptHub. - Notifications for new compatible releases. - Native web-?stuff" support (i.e. css, js, ?) made possible by importing for bower. This is important for web projects that want 1 package manager for rust but also wants to use css, js and the like. A common use case for the JVM, but perhaps less so for Rust? - AdeptHub will also provide enterprisy features such as support with private repositories and an on-premise version, which I think is important for companies to embrace an ecosystem. I guess some might not like this aspect, but I think this is an important aspect of software development as well. Note we have put effort into decoupling Adept and AdeptHub from each other and want an ecosystem similar to Git/GitHub in this regard. If you want to read more about it have a look below: (we compare heavily to Maven/Ivy though) - Adept?s concepts: https://github.com/adepthub/adepthub-ext/blob/master/concepts.md - Adept?s high-level features and a QA: https://github.com/adepthub/adepthub-ext/blob/master/README.md Also here is a programmatic guide (for Scala) which might help you get a better understanding: https://github.com/adepthub/adepthub-ext/blob/master/guide.md I am more than happy to discuss more details around Adept, rust or just dependency managers in general - I guess you might see that it is a subject that is of interest to me. 
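Purely as an illustration of the resolution-result ("lockfile") idea described above: this is not Adept's actual format, just a sketch of the kind of pinned data such a file has to carry so that every machine sees byte-identical artifacts.

```rust
// Illustrative only: NOT Adept's real schema, just the shape of the idea.
struct PinnedArtifact {
    filename: String,
    sha256: String,         // content hash used to verify the download
    locations: Vec<String>, // mirrors the artifact may be fetched from
}

struct ResolutionResult {
    requirements: Vec<String>,     // what the project asked for
    context: Vec<String>,          // which metadata repositories were used
    artifacts: Vec<PinnedArtifact>,
}
```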
Best regards, Fredrik -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Sat Jun 7 12:32:52 2014 From: bascule at gmail.com (Tony Arcieri) Date: Sat, 7 Jun 2014 12:32:52 -0700 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <5FAC0072-E2DB-4194-91A4-62C24CFC4B08@gmail.com> References: <5FAC0072-E2DB-4194-91A4-62C24CFC4B08@gmail.com> Message-ID: You might want to check out this thread... Mozilla is sponsoring work on a new Rust package manager called Cargo: https://mail.mozilla.org/pipermail/rust-dev/2014-March/009090.html -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpx.infinity at gmail.com Sat Jun 7 12:38:35 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Sat, 7 Jun 2014 23:38:35 +0400 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <5FAC0072-E2DB-4194-91A4-62C24CFC4B08@gmail.com> References: <5FAC0072-E2DB-4194-91A4-62C24CFC4B08@gmail.com> Message-ID: Hi, Fredrik, Currently a new package manager designed specifically for Rust is under active development. It is called Cargo, and you can find it here [1]. It is pretty much in very alpha stage now, but one day it will become a full package manager and build system for Rust. [1]: https://github.com/carlhuda/cargo On 07 ???? 2014 ?., at 23:28, Fredrik Ekholdt wrote: > Hi! > Seems like this thread has been dead for a while. I am new to rust, but was playing with it today, looking for rustpkg and ended up reading this thread. I have tried to read through this thread, but it is possible that there is a newer more relevant thread covering this topic in which case I excuse myself. > > I am currently working on a language agnostic dependency/package manager and I was wondering whether it might suit Rusts requirements. Right now we are targeting it as a replacement to Maven/Ivy on the JVM, but the idea all along is to make it platform independent (and native) by having a small and predicable algorithm for resolution. > The resolution engine is written 200 LOCs, and the overall logic (excluding metadata reading and so on) is about 400 LOCs more. (I wonder if the Rust implementation will be faster than the one in Scala? :) > > If there is interest I might start looking more into how to port it (I will need help though) - if not, I guess it was worth a shot! :) > > It is called Adept (https://github.com/adept-dm/adept) and it is in alpha for the Scala/JVM. The docs (listed below) are still unfortunately a bit scattered so I am summarising here. > > Some features that might be of interest: > - Fast and reliable resolution of metadata using versioned metadata (which is easily & safely cacheable). > - Fast and reliable artifact (binary files/sources) downloads, i.e. can download from multiple sources in parallel. > - Authors can describe compatibility matrixes of their modules. The short story is that authors can rank modules and thereby define which ones are compatible (or can be replaced) and in multiple files thereby having many ?series? of compatible modules. Using this scheme we can emulate: ?normal? versioning, (what is called) "semantic? versioning and backward compatibility matrixes, but also other, more exotic, version schemes as well. AdeptHub (see below) will make it easy for authors to use the standard ones, but also make it possible to customise this. 
> - Adept?s engine is flexible enough so that authors can publish multiple packages to multiple platforms and based on user input figure out which package & platform a user should get. > - Adept?s engine is flexible enough to emulate the concept of scopes/configurations/views so an author can publish different scopes/configurations/views of the same package: one for compile, one for runtime, one for testing, etc etc. > - Supports resolution through multiple attributes author-defined attributes. You can require a specific version, a binary-version, but also ?maturity? (or "release-type?) or whatever other attribute that might be relevant. > - Does not require a user to resolve (figure out which packages you need), when they check out code a project. The way this works is that Adept generates a file after resolution that contains all artifacts (their locations, their hashes, filenames) that is required, as well as current requirements and the context (which metadata and where it should be downloaded from). Users/build server will therefore get exactly the same artifacts each time they build (using the SHA-256 hashes), but using compatibility matrixes it is possible to upgrade to a later compatible version easily/programmatically. This file currently called the "lockfile" , but it is not to be confused with Rubygem lockfiles. Note the name ?lockfile' will change of because of this confusion. > - Is decentralized (as Git), but has a central hub, adepthub.com, (as GitHub) so first-time users can easily find things. > - Decentralization makes it possible for users to change metadata (and contribute it or put it somewhere else), but also makes it possible to support a dsl/api in the build tool, where users can create requirements on build time (version ranges is supported through this). > - Works offline (i.e not connected to the intertubes) provided that metadata/artifacts is locally available. It knows exactly what it needs so if something is not available it can give easy-to-understand error messages for when something is missing (which is different from Ivy/Maven although I am not sure what the story for cargo or rustpkg was?). > - Supports sandboxed projects reliably (no global/changing artifacts). When you checkout a project that uses Adept, you can be sure it has the same artifacts as the one you used on your dev machine. > - CLI-like search for packages (through Scalas sbt, but can be extended to a pure CLI tool). Works locally and on online repositories. > - Repository metadata is decoupled from a projects source code, which is feature for me, because you might have different workflows etc etc for source code and actual releases. > > Stuff we are working on the next weeks: > - Easy publishing to adepthub.com. > - A better web app including browsing and non-CLI searches. > - Online resolution on AdeptHub. > - Notifications for new compatible releases. > - Native web-?stuff" support (i.e. css, js, ?) made possible by importing for bower. This is important for web projects that want 1 package manager for rust but also wants to use css, js and the like. A common use case for the JVM, but perhaps less so for Rust? > - AdeptHub will also provide enterprisy features such as support with private repositories and an on-premise version, which I think is important for companies to embrace an ecosystem. I guess some might not like this aspect, but I think this is an important aspect of software development as well. 
Note we have put effort into decoupling Adept and AdeptHub from each other and want an ecosystem similar to Git/GitHub in this regard. > > If you want to read more about it have a look below: (we compare heavily to Maven/Ivy though) > - Adept?s concepts: https://github.com/adepthub/adepthub-ext/blob/master/concepts.md > - Adept?s high-level features and a QA: https://github.com/adepthub/adepthub-ext/blob/master/README.md > > Also here is a programmatic guide (for Scala) which might help you get a better understanding: https://github.com/adepthub/adepthub-ext/blob/master/guide.md > > I am more than happy to discuss more details around Adept, rust or just dependency managers in general - I guess you might see that it is a subject that is of interest to me. > > > Best regards, > Fredrik > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From me at kevincantu.org Sat Jun 7 13:27:33 2014 From: me at kevincantu.org (Kevin Cantu) Date: Sat, 7 Jun 2014 13:27:33 -0700 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <5FAC0072-E2DB-4194-91A4-62C24CFC4B08@gmail.com> Message-ID: Further historical disambiguation: * there used to be no package manager * Cargo was created * rustpkg replaced Cargo * Cargo' replaced rustpkg Why I'm the only one who calls it Cargo', I don't know. ;D Kevin On Sat, Jun 7, 2014 at 12:38 PM, Vladimir Matveev wrote: > Hi, Fredrik, > > Currently a new package manager designed specifically for Rust is under > active development. It is called Cargo, and you can find it here [1]. It is > pretty much in very alpha stage now, but one day it will become a full > package manager and build system for Rust. > > [1]: https://github.com/carlhuda/cargo > > On 07 ???? 2014 ?., at 23:28, Fredrik Ekholdt wrote: > > > Hi! > > Seems like this thread has been dead for a while. I am new to rust, but > was playing with it today, looking for rustpkg and ended up reading this > thread. I have tried to read through this thread, but it is possible that > there is a newer more relevant thread covering this topic in which case I > excuse myself. > > > > I am currently working on a language agnostic dependency/package manager > and I was wondering whether it might suit Rusts requirements. Right now we > are targeting it as a replacement to Maven/Ivy on the JVM, but the idea all > along is to make it platform independent (and native) by having a small and > predicable algorithm for resolution. > > The resolution engine is written 200 LOCs, and the overall logic > (excluding metadata reading and so on) is about 400 LOCs more. (I wonder if > the Rust implementation will be faster than the one in Scala? :) > > > > If there is interest I might start looking more into how to port it (I > will need help though) - if not, I guess it was worth a shot! :) > > > > It is called Adept (https://github.com/adept-dm/adept) and it is in > alpha for the Scala/JVM. The docs (listed below) are still unfortunately a > bit scattered so I am summarising here. > > > > Some features that might be of interest: > > - Fast and reliable resolution of metadata using versioned metadata > (which is easily & safely cacheable). > > - Fast and reliable artifact (binary files/sources) downloads, i.e. can > download from multiple sources in parallel. > > - Authors can describe compatibility matrixes of their modules. 
The > short story is that authors can rank modules and thereby define which ones > are compatible (or can be replaced) and in multiple files thereby having > many ?series? of compatible modules. Using this scheme we can emulate: > ?normal? versioning, (what is called) "semantic? versioning and backward > compatibility matrixes, but also other, more exotic, version schemes as > well. AdeptHub (see below) will make it easy for authors to use the > standard ones, but also make it possible to customise this. > > - Adept?s engine is flexible enough so that authors can publish multiple > packages to multiple platforms and based on user input figure out which > package & platform a user should get. > > - Adept?s engine is flexible enough to emulate the concept of > scopes/configurations/views so an author can publish different > scopes/configurations/views of the same package: one for compile, one for > runtime, one for testing, etc etc. > > - Supports resolution through multiple attributes author-defined > attributes. You can require a specific version, a binary-version, but also > ?maturity? (or "release-type?) or whatever other attribute that might be > relevant. > > - Does not require a user to resolve (figure out which packages you > need), when they check out code a project. The way this works is that Adept > generates a file after resolution that contains all artifacts (their > locations, their hashes, filenames) that is required, as well as current > requirements and the context (which metadata and where it should be > downloaded from). Users/build server will therefore get exactly the same > artifacts each time they build (using the SHA-256 hashes), but using > compatibility matrixes it is possible to upgrade to a later compatible > version easily/programmatically. This file currently called the "lockfile" > , but it is not to be confused with Rubygem lockfiles. Note the name > ?lockfile' will change of because of this confusion. > > - Is decentralized (as Git), but has a central hub, adepthub.com, (as > GitHub) so first-time users can easily find things. > > - Decentralization makes it possible for users to change metadata (and > contribute it or put it somewhere else), but also makes it possible to > support a dsl/api in the build tool, where users can create requirements on > build time (version ranges is supported through this). > > - Works offline (i.e not connected to the intertubes) provided that > metadata/artifacts is locally available. It knows exactly what it needs so > if something is not available it can give easy-to-understand error > messages for when something is missing (which is different from Ivy/Maven > although I am not sure what the story for cargo or rustpkg was?). > > - Supports sandboxed projects reliably (no global/changing artifacts). > When you checkout a project that uses Adept, you can be sure it has the > same artifacts as the one you used on your dev machine. > > - CLI-like search for packages (through Scalas sbt, but can be extended > to a pure CLI tool). Works locally and on online repositories. > > - Repository metadata is decoupled from a projects source code, which is > feature for me, because you might have different workflows etc etc for > source code and actual releases. > > > > Stuff we are working on the next weeks: > > - Easy publishing to adepthub.com. > > - A better web app including browsing and non-CLI searches. > > - Online resolution on AdeptHub. > > - Notifications for new compatible releases. > > - Native web-?stuff" support (i.e. 
css, js, ?) made possible by > importing for bower. This is important for web projects that want 1 package > manager for rust but also wants to use css, js and the like. A common use > case for the JVM, but perhaps less so for Rust? > > - AdeptHub will also provide enterprisy features such as support with > private repositories and an on-premise version, which I think is important > for companies to embrace an ecosystem. I guess some might not like this > aspect, but I think this is an important aspect of software development as > well. Note we have put effort into decoupling Adept and AdeptHub from each > other and want an ecosystem similar to Git/GitHub in this regard. > > > > If you want to read more about it have a look below: (we compare heavily > to Maven/Ivy though) > > - Adept?s concepts: > https://github.com/adepthub/adepthub-ext/blob/master/concepts.md > > - Adept?s high-level features and a QA: > https://github.com/adepthub/adepthub-ext/blob/master/README.md > > > > Also here is a programmatic guide (for Scala) which might help you get a > better understanding: > https://github.com/adepthub/adepthub-ext/blob/master/guide.md > > > > I am more than happy to discuss more details around Adept, rust or just > dependency managers in general - I guess you might see that it is a subject > that is of interest to me. > > > > > > Best regards, > > Fredrik > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Sat Jun 7 14:43:22 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sat, 7 Jun 2014 17:43:22 -0400 Subject: [rust-dev] Rust NYC Meetup Message-ID: I remember there being a thread about this before, but my search-fu is weak. Is there a meetup in NYC yet? If not, it'll be just me at some random coffee shop every month to start. :) From depp at zdome.net Sun Jun 8 01:10:15 2014 From: depp at zdome.net (Dietrich Epp) Date: Sun, 8 Jun 2014 01:10:15 -0700 Subject: [rust-dev] Taking ownership of the datetime library Message-ID: <84C6D5B3-6BA2-4ABE-BE12-59F2ADD8949E@zdome.net> Unless there?s a good objection I?m taking ownership of the datetime library, as luisbg?s efforts seem to be abandoned. I?ve read the wiki, the last mailing list discussion, and I?ve familiarized myself with JSR 310, Joda Time, Noda Time, the C++ proposal, and others. Here is the repository: https://github.com/depp/datetime-rs If you want to do any bikeshedding, please review the /doc folder, and feel free to edit the wiki or post issues or pull requests through GitHub. Code review is also welcome, although there is only one module implemented yet. ?Dietrich From me at kevincantu.org Sun Jun 8 01:28:02 2014 From: me at kevincantu.org (Kevin Cantu) Date: Sun, 8 Jun 2014 01:28:02 -0700 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: Message-ID: Well, I've experimented a bit, and read some helpful things -- http://tomlee.co/2014/04/03/a-more-detailed-tour-of-the-rust-compiler/ -- and now I'm pretty sure that I could make what I imagined work for literal arguments, but not for arguments which we need the type system for. 
That is, if I understand correctly, macros are expanded before either that token (or AST node, e.g., in a procedural macro), has its type known: ``` let xx = ... my_terrible_macro!(xx); // xx is of type ??? ``` So better to work on smarter binding generators using more sophisticated generics... Kevin On Fri, Jun 6, 2014 at 2:54 AM, Kevin Cantu wrote: > Apologies for the accidentally sent email. Not sure what GMail just did > for me there. Anyways, with a macro it should be possible to use the types > given to choose between mangled names, for example, at compile time. > > Kevin > > > > > > > > On Fri, Jun 6, 2014 at 2:44 AM, Kevin Cantu wrote: > >> I imagine a macro like the following, which is NOT a macro, because I >> don't know how to write macros yet: >> >> >> macro fillRect(args...) { >> >> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, Qt::BrushStyle >> style ) >> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush & >> brush ) >> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor & >> color ) >> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >> Qt::GlobalColor color ) >> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >> >> >> >> >> >> >> >> >> On Fri, Jun 6, 2014 at 2:35 AM, Kevin Cantu wrote: >> >>> Since C# allows overloaded methods, but F# doesn't want them, what F# >>> does is somewhat interesting: "overloaded methods are permitted in the >>> language, provided that the arguments are in tuple form, not curried form." >>> [http://msdn.microsoft.com/en-us/library/dd483468.aspx] >>> >>> In practice, this means that all the calls to C# (tupled arguments) can >>> be resolved, but idiomatic F# doesn't have overloaded methods. >>> >>> // tuple calling convention: looks like C# >>> let aa = csharp_library.mx(1, 2) >>> let bb = csharp_library.mx(1) >>> >>> // curried calling convention: makes dd, below, a function not a value >>> let cc = fsharp_library.m2 1 2 >>> let dd = fsharp_library.m2 1 >>> >>> Would it be useful to use pattern matching over some generic sort of >>> tuples to implement something similar in Rust? >>> >>> >>> Kevin >>> >>> >>> >>> On Sat, May 24, 2014 at 3:45 AM, Matthieu Monrocq < >>> matthieu.monrocq at gmail.com> wrote: >>> >>>> >>>> >>>> >>>> On Sat, May 24, 2014 at 9:06 AM, Zolt?n T?th wrote: >>>> >>>>> Alexander, your option 2 could be done automatically. By appending >>>>> postfixes to the overloaded name depending on the parameter types. >>>>> Increasing the number of letters used till the ambiguity is fully resolved. >>>>> >>>>> What do you think? 
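Kevin's point above is the crux: a macro cannot branch on argument types because expansion happens before type checking, but trait-bounded generics can. Here is a hypothetical sketch of the "more sophisticated generics" route (the type names and the mangled binding names are invented, and the syntax is the mid-2014 dialect used in this thread); it is essentially the F#-style "tupled arguments" idea quoted earlier, expressed as a trait so the compiler, not a macro, picks the binding from the argument types:

```rust
// Stand-in wrapper types and hypothetical generator-emitted bindings.
struct QRect { x: int, y: int, w: int, h: int }
struct QBrush { style: int }
struct QColor { rgb: int }

fn fill_rect_r_b(_r: &QRect, _b: &QBrush) { /* ffi call */ }
fn fill_rect_r_c(_r: &QRect, _c: &QColor) { /* ffi call */ }

// One trait per overload set: the argument types pick the binding at compile
// time -- exactly the information a macro never sees.
trait FillRectArgs {
    fn dispatch(self);
}

impl FillRectArgs for (QRect, QBrush) {
    fn dispatch(self) {
        let (r, b) = self;
        fill_rect_r_b(&r, &b)
    }
}

impl FillRectArgs for (QRect, QColor) {
    fn dispatch(self) {
        let (r, c) = self;
        fill_rect_r_c(&r, &c)
    }
}

// The single user-facing name: overload resolution happens on a tuple.
fn fill_rect<A: FillRectArgs>(args: A) {
    args.dispatch()
}

fn main() {
    fill_rect((QRect { x: 0, y: 0, w: 10, h: 10 }, QBrush { style: 1 }));
    fill_rect((QRect { x: 0, y: 0, w: 10, h: 10 }, QColor { rgb: 255 }));
}
```

A binding generator could emit the trait impls mechanically, one per original C++ overload, keeping the mangled names out of user-facing code.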
>>>>> >>>>> >>>>> fillRect_RF_B ( const QRectF & rectangle, const QBrush & brush ) >>>>> fillRect_I_I_I_I_BS ( int x, int y, int width, int height, >>>>> Qt::BrushStyle style ) >>>>> fillRect_Q_BS ( const QRect & rectangle, Qt::BrushStyle style ) >>>>> fillRect_RF_BS ( const QRectF & rectangle, Qt::BrushStyle style ) >>>>> fillRect_R_B ( const QRect & rectangle, const QBrush & brush ) >>>>> fillRect_R_C ( const QRect & rectangle, const QColor & color ) >>>>> fillRect_RF_C ( const QRectF & rectangle, const QColor & color ) >>>>> fillRect_I_I_I_I_B ( int x, int y, int width, int height, const QBrush >>>>> & brush ) >>>>> fillRect_I_I_I_I_C ( int x, int y, int width, int height, const QColor >>>>> & color ) >>>>> fillRect_I_I_I_I_GC ( int x, int y, int width, int height, >>>>> Qt::GlobalColor color ) >>>>> fillRect_R_GC ( const QRect & rectangle, Qt::GlobalColor color ) >>>>> fillRect_RF_GC ( const QRectF & rectangle, Qt::GlobalColor color ) >>>>> >>>>> >>>>> I believe this alternative was considered in the original blog post >>>> Alexander wrote: this is, in essence, mangling. It makes for ugly function >>>> names, although the prefix helps in locating them I guess. >>>> >>>> >>>> Before we talk about generation though, I would start about >>>> investigating where those overloads come from. >>>> >>>> First, there are two different objects being manipulated here: >>>> >>>> + QRect is a rectangle with integral coordinates >>>> + QRectF is a rectangle with floating point coordinates >>>> >>>> >>>> Second, a QRect may already be build from "(int* x*, int* y*, int* >>>> width*, int* height*)"; thus all overloads taking 4 hints instead of a >>>> QRect are pretty useless in a sense. >>>> >>>> Third, in a similar vein, QBrush can be build from "(Qt::BrushStyle)", >>>> "(Qt::GlobalColor)" or "(QColor const&)". So once again those overloads are >>>> pretty useless. >>>> >>>> >>>> This leaves us with: >>>> >>>> + fillRect(QRect const&, QBrush const&) >>>> + fillRect(QRectF const&, QBrush const&) >>>> >>>> Yep, that's it. Of all those inconsistent overloads (missing 4 taking 4 >>>> floats, by the way...) only 2 are ever useful. The other 10 can be safely >>>> discarded without impacting the expressiveness. >>>> >>>> >>>> Now, of course, the real question is how well a tool could perform this >>>> reduction step. I would note here that the position and names of the >>>> "coordinate" arguments of "fillRect" is exactly that of those to "QRect"; >>>> maybe a simple exhaustive search would thus suffice (though it does require >>>> semantic understanding of what a constructor and default arguments are). >>>> >>>> It would be interesting checking how many overloads remain *after* this >>>> reduction step. Here we got a factor of 6 already (should have been 8 if >>>> the interface had been complete). >>>> >>>> It would also be interesting checking if the distinction int/float >>>> often surfaces, there might be an opportunity here. >>>> >>>> >>>> -- Matthieu >>>> >>>> >>>> Alexander Tsvyashchenko wrote: >>>>> >>>>>> So far I can imagine several possible answers: >>>>>> >>>>>> 1. "We don't care, your legacy C++ libraries are bad and you >>>>>> should feel bad!" - I think this stance would be bad for Rust and would >>>>>> hinder its adoption, but if that's the ultimate answer - I'd personally >>>>>> prefer it said loud and clear, so that at least nobody has any illusions. >>>>>> >>>>>> 2. 
"Define & maintain the mapping between C++ and Rust function >>>>>> names" (I assume this is what you're alluding to with "define meaningful >>>>>> unique function names" above?) While this might be possible for smaller >>>>>> libraries, this is out of the question for large libraries like Qt5 - at >>>>>> least I won't create and maintain this mapping for sure, and I doubt others >>>>>> will: just looking at the stats from 3 Qt5 libraries (QtCore, QtGui and >>>>>> QtWidgets) out of ~30 Qt libraries in total, from the 50745 wrapped >>>>>> methods 9601 were overloads and required renaming. >>>>>> >>>>>> Besides that, this has a disadvantage of throwing away majority >>>>>> of the experience people have with particular library and forcing them to >>>>>> le-learn its API. >>>>>> >>>>>> On top of that, not for every overload it's easy to come up with >>>>>> short, meaningful, memorable and distinctive names - you can try that >>>>>> exercise for >>>>>> http://qt-project.org/doc/qt-4.8/qpainter.html#fillRect ;-) >>>>>> >>>>>> 3. "Come up with some way to allow overloading / default >>>>>> parameters" - possibly with reduced feature set, i.e. if type inference is >>>>>> difficult in the presence of overloads, as suggested in some overloads >>>>>> discussions (although not unsolvable, as proven by other languages that >>>>>> allow both type inference & overloading?), possibly exclude overloads from >>>>>> the type inference by annotating overloaded methods with special attributes? >>>>>> >>>>>> 4. Possibly some other options I'm missing? >>>>>> >>>>>> -- >>>>>> Good luck! Alexander >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Rust-dev mailing list >>>>>> Rust-dev at mozilla.org >>>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Rust-dev mailing list >>>>> Rust-dev at mozilla.org >>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eli at zigr.org Sun Jun 8 01:53:06 2014 From: eli at zigr.org (Eli Green) Date: Sun, 8 Jun 2014 10:53:06 +0200 Subject: [rust-dev] Generic Database Bindings Message-ID: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> Hi everyone, Is there an active project for these database bindings*? There were some good comments at https://github.com/mozilla/rust/issues/14658 but I don't know if there's a repository or wiki where people can comment on specific designs or requirements. I'm a rust newbie but have dealt with database access in C, C++, Java, Python and, several lifetimes ago, in Perl, so I'm relatively familiar with the problem space. If nobody else is working on this, I might start an empty repo just to have an open wiki that people can comment on. Or I could start the discussion here and move off-list when things starts to take shape and get down to the nitty-gritty. There may be people without a particular interest in the library who can still offer insight into a high-level design. Eli * I'm avoiding the term SQL here because I don't think there's any good reason to rule out supporting things like Cassandra since their result sets are tabular and should fit in with an API designed for relational databases. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From lists at dhardy.name Sun Jun 8 03:50:51 2014 From: lists at dhardy.name (Diggory Hardy) Date: Sun, 08 Jun 2014 12:50:51 +0200 Subject: [rust-dev] strings in sets/maps In-Reply-To: <5391E1CC.8050604@mozilla.com> References: <1839434.JlW78sHfkb@tph-l13071> <5391E1CC.8050604@mozilla.com> Message-ID: <2693144.yKj6Pdr1aK@tph-l13071> Just adding doc for common cases is enough in my opinion. If you add StringMap you've got four possible variants (map/set and tree/hash); even without that another "class" just to rename a few methods. Diggory On Friday 06 Jun 2014 08:44:12 Patrick Walton wrote: > On 6/6/14 6:40 AM, Diggory Hardy wrote: > > Dear List, > > > > I want to use strings as map keys, but couldn't find any mention of this > > in my understanding common use-case. The following works but as far as I > > understand requires a copy of the potential key to be made to call > > `contains()`, is this correct? > > I've been thinking for a while that we should provide a `StringMap` for > this common use case, to make the `equiv()` stuff easier to sort out. > "Easy things should be easy; hard things should be possible." > > Patrick > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Sun Jun 8 05:19:39 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sun, 8 Jun 2014 05:19:39 -0700 Subject: [rust-dev] Generic Database Bindings In-Reply-To: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> Message-ID: There isn't no. If you want to build a binding, just do it! The only one I'm really aware of right now is https://github.com/sfackler/rust-postgres From dpx.infinity at gmail.com Sun Jun 8 05:29:31 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Sun, 8 Jun 2014 16:29:31 +0400 Subject: [rust-dev] Generic Database Bindings In-Reply-To: References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> Message-ID: <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> There is also rustsqlite[1]. It would be great to have generic bindings for databases, like in Go or in Java. In Rust, however, reflective approaches of these won?t work because Rust lacks structural reflection. I guess, generic bindings will have to follow type classes approach, like Encodable/Decodable (maybe even use them, taking advantage of automatic deriving). [1]: https://github.com/linuxfood/rustsqlite On 08 ???? 2014 ?., at 16:19, Steve Klabnik wrote: > There isn't no. If you want to build a binding, just do it! The only > one I'm really aware of right now is > https://github.com/sfackler/rust-postgres > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From steve at steveklabnik.com Sun Jun 8 06:00:54 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sun, 8 Jun 2014 06:00:54 -0700 Subject: [rust-dev] Generic Database Bindings In-Reply-To: References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> Message-ID: Like any open source, start throwing some code together and then tell us all about it! 
:) From dbp at dbpmail.net Sun Jun 8 06:53:03 2014 From: dbp at dbpmail.net (Daniel Patterson) Date: Sun, 08 Jun 2014 09:53:03 -0400 Subject: [rust-dev] Rust NYC Meetup In-Reply-To: References: Message-ID: <874mzvpkxs.fsf@intel.home.dbpmail.net> Hi Steve, There was a presentation at a C++ meetup, but no actual meetup, IIRC. I'd be interested in a Rust only meetup, and probably others would as well. Daniel Steve Klabnik writes: > I remember there being a thread about this before, but my search-fu is weak. > > Is there a meetup in NYC yet? If not, it'll be just me at some random coffee shop every month to start. :) > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From steve at steveklabnik.com Sun Jun 8 07:14:05 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sun, 8 Jun 2014 07:14:05 -0700 Subject: [rust-dev] Rust NYC Meetup In-Reply-To: <874mzvpkxs.fsf@intel.home.dbpmail.net> References: <874mzvpkxs.fsf@intel.home.dbpmail.net> Message-ID: Ah! Great! Give me a few days, and I'll come up with something. :) From ben.striegel at gmail.com Sun Jun 8 10:30:50 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sun, 8 Jun 2014 13:30:50 -0400 Subject: [rust-dev] Taking ownership of the datetime library In-Reply-To: <84C6D5B3-6BA2-4ABE-BE12-59F2ADD8949E@zdome.net> References: <84C6D5B3-6BA2-4ABE-BE12-59F2ADD8949E@zdome.net> Message-ID: This is quite an undertaking. Thanks for taking the initiative! On Sun, Jun 8, 2014 at 4:10 AM, Dietrich Epp wrote: > Unless there?s a good objection I?m taking ownership of the datetime > library, as luisbg?s efforts seem to be abandoned. I?ve read the wiki, the > last mailing list discussion, and I?ve familiarized myself with JSR 310, > Joda Time, Noda Time, the C++ proposal, and others. > > Here is the repository: https://github.com/depp/datetime-rs > > If you want to do any bikeshedding, please review the /doc folder, and > feel free to edit the wiki or post issues or pull requests through GitHub. > Code review is also welcome, although there is only one module implemented > yet. > > ?Dietrich > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nit.dgp673 at gmail.com Sun Jun 8 05:47:04 2014 From: nit.dgp673 at gmail.com (Laxmi Narayan NIT DGP) Date: Sun, 8 Jun 2014 18:17:04 +0530 Subject: [rust-dev] Generic Database Bindings In-Reply-To: <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> Message-ID: hi , if i start working on this idea .. where can i get support ? * Laxmi Narayan Patel* * MCA NIT Durgapur ( Final year)* * Mob:- 8345847473 * On Sun, Jun 8, 2014 at 5:59 PM, Vladimir Matveev wrote: > There is also rustsqlite[1]. > > It would be great to have generic bindings for databases, like in Go or in > Java. In Rust, however, reflective approaches of these won?t work because > Rust lacks structural reflection. I guess, generic bindings will have to > follow type classes approach, like Encodable/Decodable (maybe even use > them, taking advantage of automatic deriving). 
> > [1]: https://github.com/linuxfood/rustsqlite > > On 08 ???? 2014 ?., at 16:19, Steve Klabnik > wrote: > > > There isn't no. If you want to build a binding, just do it! The only > > one I'm really aware of right now is > > https://github.com/sfackler/rust-postgres > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Sun Jun 8 13:14:33 2014 From: banderson at mozilla.com (Brian Anderson) Date: Sun, 08 Jun 2014 13:14:33 -0700 Subject: [rust-dev] Taking ownership of the datetime library In-Reply-To: <84C6D5B3-6BA2-4ABE-BE12-59F2ADD8949E@zdome.net> References: <84C6D5B3-6BA2-4ABE-BE12-59F2ADD8949E@zdome.net> Message-ID: <5394C429.9000902@mozilla.com> Thanks! On 06/08/2014 01:10 AM, Dietrich Epp wrote: > Unless there?s a good objection I?m taking ownership of the datetime library, as luisbg?s efforts seem to be abandoned. I?ve read the wiki, the last mailing list discussion, and I?ve familiarized myself with JSR 310, Joda Time, Noda Time, the C++ proposal, and others. > > Here is the repository: https://github.com/depp/datetime-rs > > If you want to do any bikeshedding, please review the /doc folder, and feel free to edit the wiki or post issues or pull requests through GitHub. Code review is also welcome, although there is only one module implemented yet. > > ?Dietrich > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From me at kevincantu.org Sun Jun 8 13:25:55 2014 From: me at kevincantu.org (Kevin Cantu) Date: Sun, 8 Jun 2014 13:25:55 -0700 Subject: [rust-dev] Generic Database Bindings In-Reply-To: References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> Message-ID: Worth mentioning, too, that the IRC channel is *way* more active at odd hours now than it used to be. :) irc.mozilla.org #rust Kevin On Sun, Jun 8, 2014 at 6:00 AM, Steve Klabnik wrote: > Like any open source, start throwing some code together and then tell > us all about it! :) > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amindfv at gmail.com Sun Jun 8 13:39:14 2014 From: amindfv at gmail.com (amindfv at gmail.com) Date: Sun, 8 Jun 2014 16:39:14 -0400 Subject: [rust-dev] Rust NYC Meetup In-Reply-To: <874mzvpkxs.fsf@intel.home.dbpmail.net> References: <874mzvpkxs.fsf@intel.home.dbpmail.net> Message-ID: <5F23C361-02E4-4BC2-937D-472EA27E20A2@gmail.com> +1 Tom El Jun 8, 2014, a las 9:53, Daniel Patterson escribi?: > Hi Steve, > > There was a presentation at a C++ meetup, but no actual meetup, IIRC. > > I'd be interested in a Rust only meetup, and probably others would as > well. > > Daniel > > Steve Klabnik writes: > >> I remember there being a thread about this before, but my search-fu is weak. >> >> Is there a meetup in NYC yet? If not, it'll be just me at some random coffee shop every month to start. 
:) >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From Skirmantas.Kligys at gmail.com Sun Jun 8 14:57:54 2014 From: Skirmantas.Kligys at gmail.com (Skirmantas Kligys) Date: Sun, 8 Jun 2014 21:57:54 +0000 (UTC) Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM References: <52F3B889.7010008@inf.u-szeged.hu> Message-ID: Luqman Aden writes: > > Building a Rust cross compiler that can target arm isn't too hard. You just need the right toolchain installed. I personally use Debian with the gcc-4.7-arm-linux-gnueabi package from the Emdebian repo. (I believe Ubuntu and other distros have similar packages). From there it's just a simple matter of passing the right triple to the configure script. > > ./configure?--target=arm-unknown-linux-gnueabi && make > > That'll build a rustc that can target arm as well as all the libraries. Then you can run it like so: > > rustc?--target=arm-unknown-linux-gnueabi --linker=arm-linux-gnueabi-gcc hello.rs > > That'll give you a binary, hello, which will run on arm/linux. So, that's the basic gist of it. I am trying to follow these instructions and also https://gist.github.com/amatus/6665852 unsuccessfully. export PATH=$PWD/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian- x64/bin:$PATH cd rust ./configure --prefix=/usr/local/stow/rust-pi-20140608 --target=arm-unknown- linux-gnueabihf make -j2 sudo make install cfg: build triple x86_64-unknown-linux-gnu cfg: host triples x86_64-unknown-linux-gnu cfg: target triples x86_64-unknown-linux-gnu arm-unknown-linux-gnueabihf cfg: non-build target triples arm-unknown-linux-gnueabihf cfg: enabling more debugging (CFG_ENABLE_DEBUG) cfg: host for x86_64-unknown-linux-gnu is x86_64 cfg: host for arm-unknown-linux-gnueabihf is arm cfg: os for x86_64-unknown-linux-gnu is unknown-linux-gnu cfg: os for arm-unknown-linux-gnueabihf is unknown-linux-gnueabihf cfg: using CC=gcc (CFG_CC) cfg: no pdflatex found, deferring to xelatex cfg: no xelatex found, deferring to lualatex cfg: no lualatex found, disabling LaTeX docs cfg: no pandoc found, omitting PDF and EPUB docs cfg: no llnextgen found, omitting grammar-verification ... A rustc gets built, but it targets Intel: $ rustc -C target-cpu=help hello.rs Available CPUs for this target: amdfam10 - Select the amdfam10 processor. athlon - Select the athlon processor. athlon-4 - Select the athlon-4 processor. athlon-fx - Select the athlon-fx processor. athlon-mp - Select the athlon-mp processor. athlon-tbird - Select the athlon-tbird processor. athlon-xp - Select the athlon-xp processor. athlon64 - Select the athlon64 processor. athlon64-sse3 - Select the athlon64-sse3 processor. ... Any ideas? From corey at octayn.net Sun Jun 8 15:10:34 2014 From: corey at octayn.net (Corey Richardson) Date: Sun, 8 Jun 2014 15:10:34 -0700 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: References: <52F3B889.7010008@inf.u-szeged.hu> Message-ID: You need to change the target, not just the target-cpu. `rustc --target arm-unknown-linux-gnueabihf ...` On Sun, Jun 8, 2014 at 2:57 PM, Skirmantas Kligys wrote: > Luqman Aden writes: >> >> Building a Rust cross compiler that can target arm isn't too hard. You > just need the right toolchain installed. I personally use Debian with the > gcc-4.7-arm-linux-gnueabi package from the Emdebian repo. 
(I believe Ubuntu > and other distros have similar packages). From there it's just a simple > matter of passing the right triple to the configure script. >> >> ./configure --target=arm-unknown-linux-gnueabi && make >> >> That'll build a rustc that can target arm as well as all the libraries. > Then you can run it like so: >> >> rustc --target=arm-unknown-linux-gnueabi --linker=arm-linux-gnueabi-gcc > hello.rs >> >> That'll give you a binary, hello, which will run on arm/linux. So, that's > the basic gist of it. > > I am trying to follow these instructions and also > > https://gist.github.com/amatus/6665852 > > unsuccessfully. > > export PATH=$PWD/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian- > x64/bin:$PATH > cd rust > ./configure --prefix=/usr/local/stow/rust-pi-20140608 --target=arm-unknown- > linux-gnueabihf > make -j2 > sudo make install > > cfg: build triple x86_64-unknown-linux-gnu > cfg: host triples x86_64-unknown-linux-gnu > cfg: target triples x86_64-unknown-linux-gnu arm-unknown-linux-gnueabihf > cfg: non-build target triples arm-unknown-linux-gnueabihf > cfg: enabling more debugging (CFG_ENABLE_DEBUG) > cfg: host for x86_64-unknown-linux-gnu is x86_64 > cfg: host for arm-unknown-linux-gnueabihf is arm > cfg: os for x86_64-unknown-linux-gnu is unknown-linux-gnu > cfg: os for arm-unknown-linux-gnueabihf is unknown-linux-gnueabihf > cfg: using CC=gcc (CFG_CC) > cfg: no pdflatex found, deferring to xelatex > cfg: no xelatex found, deferring to lualatex > cfg: no lualatex found, disabling LaTeX docs > cfg: no pandoc found, omitting PDF and EPUB docs > cfg: no llnextgen found, omitting grammar-verification > ... > > A rustc gets built, but it targets Intel: > > $ rustc -C target-cpu=help hello.rs > Available CPUs for this target: > > amdfam10 - Select the amdfam10 processor. > athlon - Select the athlon processor. > athlon-4 - Select the athlon-4 processor. > athlon-fx - Select the athlon-fx processor. > athlon-mp - Select the athlon-mp processor. > athlon-tbird - Select the athlon-tbird processor. > athlon-xp - Select the athlon-xp processor. > athlon64 - Select the athlon64 processor. > athlon64-sse3 - Select the athlon64-sse3 processor. > ... > > Any ideas? > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- http://octayn.net/ From steve at steveklabnik.com Sun Jun 8 15:10:45 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sun, 8 Jun 2014 18:10:45 -0400 Subject: [rust-dev] Rust NYC Meetup In-Reply-To: <5F23C361-02E4-4BC2-937D-472EA27E20A2@gmail.com> References: <874mzvpkxs.fsf@intel.home.dbpmail.net> <5F23C361-02E4-4BC2-937D-472EA27E20A2@gmail.com> Message-ID: <758BB267-5393-4118-ACAE-ED6F1DBA8948@steveklabnik.com> Cool. Expect to hear more from me very soon. From gsingh_2011 at yahoo.com Sun Jun 8 19:33:16 2014 From: gsingh_2011 at yahoo.com (Gulshan Singh) Date: Sun, 8 Jun 2014 19:33:16 -0700 Subject: [rust-dev] Value may contain references; add `'static` bound to `I` Message-ID: I'm getting this error but I don't completely understand why: value may contain references; add `'static` bound to `I`. Here are the relevant snippets of code, I can add more if required: https://gist.github.com/gsingh93/ca7da693d98936dec10b The general idea is I have a Simulator object that stores a reference to an Automaton and a boxed Iterator<&I>, where I is the type that is input to the automaton. 
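Roughly, the shape of it is something like this (a trimmed-down sketch only, not the exact code from the gist; the names here are simplified):

```
// Simplified illustration only; the real code is in the gist linked above.
trait Automaton<I> {
    fn step(&mut self, input: &I) -> bool;
}

struct Simulator<'a, I> {
    automaton: &'a Automaton<I>,
    // This is the part the compiler complains about unless `I` is `'static`.
    inputs: Box<Iterator<&'a I>>,
}
```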
I have the `'a` lifetime on all of the references because I want them to have the same lifetime, so I don't know why it's telling me to add `'static`. I've asked about this in IRC, but I didn't get any response. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skirmantas.kligys at gmail.com Sun Jun 8 20:17:41 2014 From: skirmantas.kligys at gmail.com (Skirmantas Kligys) Date: Sun, 8 Jun 2014 20:17:41 -0700 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: References: <52F3B889.7010008@inf.u-szeged.hu> Message-ID: On Sun, Jun 8, 2014 at 3:10 PM, Corey Richardson wrote: > You need to change the target, not just the target-cpu. `rustc > --target arm-unknown-linux-gnueabihf ...` Oh, apparently it built a compiler and a cross-compiler in the same binary. That was unexpected. For the record, this is the correct way to compile: rustc --target arm-unknown-linux-gnueabihf -C linker=arm-linux-gnueabihf-g++ hello.rs Thanks for help! > On Sun, Jun 8, 2014 at 2:57 PM, Skirmantas Kligys > wrote: >> Luqman Aden writes: >>> >>> Building a Rust cross compiler that can target arm isn't too hard. You >> just need the right toolchain installed. I personally use Debian with the >> gcc-4.7-arm-linux-gnueabi package from the Emdebian repo. (I believe Ubuntu >> and other distros have similar packages). From there it's just a simple >> matter of passing the right triple to the configure script. >>> >>> ./configure --target=arm-unknown-linux-gnueabi && make >>> >>> That'll build a rustc that can target arm as well as all the libraries. >> Then you can run it like so: >>> >>> rustc --target=arm-unknown-linux-gnueabi --linker=arm-linux-gnueabi-gcc >> hello.rs >>> >>> That'll give you a binary, hello, which will run on arm/linux. So, that's >> the basic gist of it. >> >> I am trying to follow these instructions and also >> >> https://gist.github.com/amatus/6665852 >> >> unsuccessfully. >> >> export PATH=$PWD/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian- >> x64/bin:$PATH >> cd rust >> ./configure --prefix=/usr/local/stow/rust-pi-20140608 --target=arm-unknown- >> linux-gnueabihf >> make -j2 >> sudo make install >> >> cfg: build triple x86_64-unknown-linux-gnu >> cfg: host triples x86_64-unknown-linux-gnu >> cfg: target triples x86_64-unknown-linux-gnu arm-unknown-linux-gnueabihf >> cfg: non-build target triples arm-unknown-linux-gnueabihf >> cfg: enabling more debugging (CFG_ENABLE_DEBUG) >> cfg: host for x86_64-unknown-linux-gnu is x86_64 >> cfg: host for arm-unknown-linux-gnueabihf is arm >> cfg: os for x86_64-unknown-linux-gnu is unknown-linux-gnu >> cfg: os for arm-unknown-linux-gnueabihf is unknown-linux-gnueabihf >> cfg: using CC=gcc (CFG_CC) >> cfg: no pdflatex found, deferring to xelatex >> cfg: no xelatex found, deferring to lualatex >> cfg: no lualatex found, disabling LaTeX docs >> cfg: no pandoc found, omitting PDF and EPUB docs >> cfg: no llnextgen found, omitting grammar-verification >> ... >> >> A rustc gets built, but it targets Intel: >> >> $ rustc -C target-cpu=help hello.rs >> Available CPUs for this target: >> >> amdfam10 - Select the amdfam10 processor. >> athlon - Select the athlon processor. >> athlon-4 - Select the athlon-4 processor. >> athlon-fx - Select the athlon-fx processor. >> athlon-mp - Select the athlon-mp processor. >> athlon-tbird - Select the athlon-tbird processor. >> athlon-xp - Select the athlon-xp processor. >> athlon64 - Select the athlon64 processor. >> athlon64-sse3 - Select the athlon64-sse3 processor. >> ... 
>> >> Any ideas? >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > > > -- > http://octayn.net/ From mozilla at mcpherrin.ca Mon Jun 9 01:35:38 2014 From: mozilla at mcpherrin.ca (Matthew McPherrin) Date: Mon, 9 Jun 2014 01:35:38 -0700 Subject: [rust-dev] Value may contain references; add `'static` bound to `I` In-Reply-To: References: Message-ID: A boxed value must own its contents, thus the only type of reference they may contain are 'static ones. You should probably store an &'a Iterator instead, maybe, or a generic T: Iterator On Sun, Jun 8, 2014 at 7:33 PM, Gulshan Singh wrote: > I'm getting this error but I don't completely understand why: value may > contain references; add `'static` bound to `I`. Here are the relevant > snippets of code, I can add more if required: > https://gist.github.com/gsingh93/ca7da693d98936dec10b > > The general idea is I have a Simulator object that stores a reference to > an Automaton and a boxed Iterator<&I>, where I is the type that is input to > the automaton. I have the `'a` lifetime on all of the references because I > want them to have the same lifetime, so I don't know why it's telling me to > add `'static`. I've asked about this in IRC, but I didn't get any response. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.pedretti at gmail.com Mon Jun 9 01:57:12 2014 From: christophe.pedretti at gmail.com (Christophe Pedretti) Date: Mon, 9 Jun 2014 10:57:12 +0200 Subject: [rust-dev] Generic Database Bindings In-Reply-To: References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> Message-ID: i have started a small personal project, with for the moment, an only (but working) SQLite suport, you can find it here mainpage : http://chris-pe.github.io/Rustic/ github : https://github.com/chris-pe/Rustic documentation : http://www.rust-ci.org/chris-pe/Rustic/doc/rustic/ An example of how to use my library here https://github.com/chris-pe/Rustic/blob/master/test-db.rs -- Christophe 2014-06-08 22:25 GMT+02:00 Kevin Cantu : > Worth mentioning, too, that the IRC channel is *way* more active at odd > hours now than it used to be. :) > > irc.mozilla.org #rust > > > Kevin > > > On Sun, Jun 8, 2014 at 6:00 AM, Steve Klabnik > wrote: > >> Like any open source, start throwing some code together and then tell >> us all about it! :) >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zo1980 at gmail.com Mon Jun 9 03:12:24 2014 From: zo1980 at gmail.com (=?UTF-8?B?Wm9sdMOhbiBUw7N0aA==?=) Date: Mon, 9 Jun 2014 12:12:24 +0200 Subject: [rust-dev] how is Rust bootstrapped? Message-ID: My question is rather theoretical, from the libre-and-open-source-software point of view. Bootstrapping needs an already existing language to compile the first executable version of Rust. I read that this was OCaml at some time. I do not have OCaml on my machine, but still managed to build from a cloned Rust repo. 
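(For reference, the build I did was just the standard sequence, roughly:

```
git clone https://github.com/mozilla/rust.git
cd rust
./configure
make
```

and it completed without OCaml being installed on my machine.)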
The documentation says that building requires a C++ compiler. These suggest that the project moved from OCaml to C++. But there are also some texts on the web and in the source that suggests that stage0 is actually not compiled from the source repository, but is downloaded as a binary snapshot. If this latter is the case, then can someone compile a suitable stage0 from [C++|OCaml] source himself? -------------- next part -------------- An HTML attachment was scrubbed... URL: From owen.shepherd at e43.eu Mon Jun 9 03:16:11 2014 From: owen.shepherd at e43.eu (Owen Shepherd) Date: Mon, 9 Jun 2014 11:16:11 +0100 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: Message-ID: The Rust compiler is written in Rust. The build process downloads a prebuilt rustc binary to use for bootstrapping. The C++ compiler dependency is for LLVM. Owen Shepherd http://owenshepherd.net | owen.shepherd at e43.eu On 9 June 2014 11:12, Zolt?n T?th wrote: > My question is rather theoretical, from the libre-and-open-source-software > point of view. > > Bootstrapping needs an already existing language to compile the first > executable version of Rust. > > I read that this was OCaml at some time. I do not have OCaml on my > machine, but still managed to build from a cloned Rust repo. The > documentation says that building requires a C++ compiler. These suggest > that the project moved from OCaml to C++. > > But there are also some texts on the web and in the source that suggests > that stage0 is actually not compiled from the source repository, but is > downloaded as a binary snapshot. If this latter is the case, then can > someone compile a suitable stage0 from [C++|OCaml] source himself? > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leo.testard at gmail.com Mon Jun 9 03:16:54 2014 From: leo.testard at gmail.com (Leo Testard) Date: Mon, 09 Jun 2014 12:16:54 +0200 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: Message-ID: <34f60039-689f-4862-942a-e2aa506de9a8@email.android.com> Hello, The OCaml compiler is not used anymore for years. As you said, the Rust build system now downloads a precompiled snapshot of the new implementation, which is written in Rust. Unfortunately, you won't be able to compile it yourself.if you don't have Rust already setup on your machine. For C++, I believe it's used to compile the modified LLVM Rustc uses. Others may confirm this. Leo -- Envoy? de mon t?l?phone Android avec K-9 Mail. Excusez la bri?vet?. -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Mon Jun 9 03:32:58 2014 From: me at kevincantu.org (Kevin Cantu) Date: Mon, 9 Jun 2014 03:32:58 -0700 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: <34f60039-689f-4862-942a-e2aa506de9a8@email.android.com> References: <34f60039-689f-4862-942a-e2aa506de9a8@email.android.com> Message-ID: Perl and Python may still be dependencies for small things on some platforms, IIRC, too. Anyways, to get `rustc` on a new architecture, it must be one LLVM supports, and then you should cross-compile `rustc` onto it. I don't remember for sure whether `rustc` even runs on ARM, or if people just cross-compile binaries for it, actually, though... Kevin On Mon, Jun 9, 2014 at 3:16 AM, Leo Testard wrote: > Hello, > > The OCaml compiler is not used anymore for years. 
As you said, the Rust > build system now downloads a precompiled snapshot of the new > implementation, which is written in Rust. Unfortunately, you won't be able > to compile it yourself.if you don't have Rust already setup on your machine. > > For C++, I believe it's used to compile the modified LLVM Rustc uses. > Others may confirm this. > > Leo > -- > Envoy? de mon t?l?phone Android avec K-9 Mail. Excusez la bri?vet?. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbau.pp at gmail.com Mon Jun 9 03:36:16 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Mon, 09 Jun 2014 20:36:16 +1000 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: Message-ID: <53958E20.1000201@gmail.com> On 09/06/14 20:12, Zolt?n T?th wrote: > My question is rather theoretical, from the > libre-and-open-source-software point of view. > > Bootstrapping needs an already existing language to compile the first > executable version of Rust. > > I read that this was OCaml at some time. I do not have OCaml on my > machine, but still managed to build from a cloned Rust repo. The > documentation says that building requires a C++ compiler. These > suggest that the project moved from OCaml to C++. > > But there are also some texts on the web and in the source that > suggests that stage0 is actually not compiled from the source > repository, but is downloaded as a binary snapshot. If this latter is > the case, then can someone compile a suitable stage0 from [C++|OCaml] > source himself? > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev Yes, those texts are correct, one downloads a stage0 compiler as a binary snapshot to compile Rust from source. The stage0 compiler is just stored binaries compiled from some commit in the past. Every so often someone makes a new stage0 by making the buildbots build a snapshot of a more recent commit, allowing the libraries to be completely written in a newer iteration of Rust (they no longer have to be able to be compiled by the old snapshot). There's a wiki page about this snapshot process: https://github.com/mozilla/rust/wiki/Note-compiler-snapshots If one was really interested, one could theoretically backtrace through history, all the way back to the last version[1] of rustboot (the OCaml compiler), and use this to do a "full bootstrap". That is, use the rustboot compiler to build the first written-in-Rust compiler as a snapshot, and then use this snapshot to build the next one, following the chain of snapshotted commits[2] through to eventually get to modern Rust. As others have said, the C++ dependency is just for building LLVM, which is linked into rustc as a library, it's not used by the snapshot (that is, LLVM is a dependency required when building librustc to get a rustc compiler for the next stage; one can use the snapshot to compile libraries like libstd etc. without needing LLVM). Huon [1]: https://github.com/mozilla/rust/tree/ef75860a0a72f79f97216f8aaa5b388d98da6480/src/boot [2]: https://github.com/mozilla/rust/blob/master/src/snapshots.txt From zo1980 at gmail.com Mon Jun 9 04:40:18 2014 From: zo1980 at gmail.com (=?UTF-8?B?Wm9sdMOhbiBUw7N0aA==?=) Date: Mon, 9 Jun 2014 13:40:18 +0200 Subject: [rust-dev] how is Rust bootstrapped? 
Message-ID: TY Huon for your explanation, this is what i was interested in. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zo1980 at gmail.com Mon Jun 9 05:04:37 2014 From: zo1980 at gmail.com (=?UTF-8?B?Wm9sdMOhbiBUw7N0aA==?=) Date: Mon, 9 Jun 2014 14:04:37 +0200 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: Message-ID: Do you plan to create a cleaner full-bootstrap process? By "cleaner" I mean dividing stage-0 to more [sub-]stages, which would be well-defined and documented in terms of the set of language features it implements. Currently these sub-stages are defined by a team member's mood to instruct the build-bots to make a snapshot. This kind of bootstrap seems to be a black-box. I understand you do not spend resource on such tasks before 1.0, but do you think this is a legitimate|sensible request at all? Would it be worth the work? -------------- next part -------------- An HTML attachment was scrubbed... URL: From farcaller at gmail.com Mon Jun 9 05:09:00 2014 From: farcaller at gmail.com (Vladimir Pouzanov) Date: Mon, 9 Jun 2014 13:09:00 +0100 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: <34f60039-689f-4862-942a-e2aa506de9a8@email.android.com> Message-ID: Rustc runs on arm just fine. On Mon, Jun 9, 2014 at 11:32 AM, Kevin Cantu wrote: > Perl and Python may still be dependencies for small things on some > platforms, IIRC, too. > > Anyways, to get `rustc` on a new architecture, it must be one LLVM > supports, and then you should cross-compile `rustc` onto it. I don't > remember for sure whether `rustc` even runs on ARM, or if people just > cross-compile binaries for it, actually, though... > > > Kevin > > > On Mon, Jun 9, 2014 at 3:16 AM, Leo Testard wrote: > >> Hello, >> >> The OCaml compiler is not used anymore for years. As you said, the Rust >> build system now downloads a precompiled snapshot of the new >> implementation, which is written in Rust. Unfortunately, you won't be able >> to compile it yourself.if you don't have Rust already setup on your machine. >> >> For C++, I believe it's used to compile the modified LLVM Rustc uses. >> Others may confirm this. >> >> Leo >> -- >> Envoy? de mon t?l?phone Android avec K-9 Mail. Excusez la bri?vet?. >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -- Sincerely, Vladimir "Farcaller" Pouzanov http://farcaller.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sh4.seo at samsung.com Mon Jun 9 05:55:00 2014 From: sh4.seo at samsung.com (Sanghyeon Seo) Date: Mon, 09 Jun 2014 12:55:00 +0000 (GMT) Subject: [rust-dev] how is Rust bootstrapped? Message-ID: <24519353.205871402318500390.JavaMail.weblogic@epml09> > Do you plan to create a cleaner full-bootstrap process? > By "cleaner" I mean dividing stage-0 to more [sub-]stages, > which would be well-defined and documented in terms of > the set of language features it implements. Currently these > sub-stages are defined by a team member's mood to instruct > the build-bots to make a snapshot. This kind of bootstrap > seems to be a black-box. As I understand, there is no plan to do this. "Bootstrap" you are talking about is purely theoretical, and I don't think anyone actually performed it. 
In practice, Rust is bootstrapped from the downloaded binary. > I understand you do not spend resource on such tasks before 1.0, > but do you think this is a legitimate|sensible request at all? > Would it be worth the work? Personally, I don't see any value in doing this work. C compilers are bootstrapped from C compiler binaries. Analogously, the Rust compiler is bootstrapped from the Rust compiler binary. Trying to bootstrap from rustboot would be akin to trying to bootstrap GCC from last1120c (the oldest C compiler with surviving source code). An interesting feat of computer archaeology, but not really useful for anything. According to http://cm.bell-labs.com/who/dmr/primevalC.html someone actually managed to run last1120c, which is quite cool, I think. From qfire+rustdev at qfire.net Mon Jun 9 06:34:33 2014 From: qfire+rustdev at qfire.net (James Cassidy) Date: Mon, 9 Jun 2014 09:34:33 -0400 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: <24519353.205871402318500390.JavaMail.weblogic@epml09> References: <24519353.205871402318500390.JavaMail.weblogic@epml09> Message-ID: <20140609133433.GA12302@eventhorizon> On Mon, Jun 09, 2014 at 12:55:00PM +0000, Sanghyeon Seo wrote: > > Do you plan to create a cleaner full-bootstrap process? > > By "cleaner" I mean dividing stage-0 to more [sub-]stages, > > which would be well-defined and documented in terms of > > the set of language features it implements. Currently these > > sub-stages are defined by a team member's mood to instruct > > the build-bots to make a snapshot. This kind of bootstrap > > seems to be a black-box. > > As I understand, there is no plan to do this. "Bootstrap" you are talking > about is purely theoretical, and I don't think anyone actually performed it. > In practice, Rust is bootstrapped from the downloaded binary. > > > I understand you do not spend resource on such tasks before 1.0, > > but do you think this is a legitimate|sensible request at all? > > Would it be worth the work? > > Personally, I don't see any value in doing this work. C compilers are > bootstrapped from C compiler binaries. Analogously, the Rust compiler > is bootstrapped from the Rust compiler binary. > > Trying to bootstrap from rustboot would be akin to trying to bootstrap > GCC from last1120c (the oldest C compiler with surviving source code). > An interesting feat of computer archaeology, but not really useful for > anything. > I think he was more referring to what language features will be allowed in the rust compiler itself where earlier stages would be more restricted so they can be compiled with older rust compilers, for example hopefully rustc 2.0 can be compiled by rustc 1.0. Then later stages could use more features since it will be compiled with the more up to date earlier stage. Currently what features can be used in the compiler itself are just limited to whenever someone decides to compiler a newer stage0 compiler. -- Jim From nit.dgp673 at gmail.com Mon Jun 9 07:17:42 2014 From: nit.dgp673 at gmail.com (Laxmi Narayan NIT DGP) Date: Mon, 9 Jun 2014 19:47:42 +0530 Subject: [rust-dev] Generic Database Bindings In-Reply-To: References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> Message-ID: hey chris , can you guide me on this project ... i would like to work on it . 
* Laxmi Narayan Patel* * MCA NIT Durgapur ( Final year)* * Mob:- 8345847473 * On Mon, Jun 9, 2014 at 2:27 PM, Christophe Pedretti < christophe.pedretti at gmail.com> wrote: > i have started a small personal project, with for the moment, an only (but > working) SQLite suport, you can find it here > mainpage : http://chris-pe.github.io/Rustic/ > github : https://github.com/chris-pe/Rustic > documentation : http://www.rust-ci.org/chris-pe/Rustic/doc/rustic/ > > An example of how to use my library here > https://github.com/chris-pe/Rustic/blob/master/test-db.rs > > -- > Christophe > > > 2014-06-08 22:25 GMT+02:00 Kevin Cantu : > > Worth mentioning, too, that the IRC channel is *way* more active at odd >> hours now than it used to be. :) >> >> irc.mozilla.org #rust >> >> >> Kevin >> >> >> On Sun, Jun 8, 2014 at 6:00 AM, Steve Klabnik >> wrote: >> >>> Like any open source, start throwing some code together and then tell >>> us all about it! :) >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Mon Jun 9 10:00:03 2014 From: banderson at mozilla.com (Brian Anderson) Date: Mon, 09 Jun 2014 10:00:03 -0700 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: <20140609133433.GA12302@eventhorizon> References: <24519353.205871402318500390.JavaMail.weblogic@epml09> <20140609133433.GA12302@eventhorizon> Message-ID: <5395E813.2020505@mozilla.com> This is an interesting idea, but I don't see it happening for a long time if ever: * The current process is working fine * rustc depends on many of the standard libraries, so restricting rustc means figuring out how to stick to a fixed subset of those libraries * It's a lot of work to make the bootstrap process even *more* complicated * For some minor benefits On 06/09/2014 06:34 AM, James Cassidy wrote: > On Mon, Jun 09, 2014 at 12:55:00PM +0000, Sanghyeon Seo wrote: >>> Do you plan to create a cleaner full-bootstrap process? >>> By "cleaner" I mean dividing stage-0 to more [sub-]stages, >>> which would be well-defined and documented in terms of >>> the set of language features it implements. Currently these >>> sub-stages are defined by a team member's mood to instruct >>> the build-bots to make a snapshot. This kind of bootstrap >>> seems to be a black-box. >> As I understand, there is no plan to do this. "Bootstrap" you are talking >> about is purely theoretical, and I don't think anyone actually performed it. >> In practice, Rust is bootstrapped from the downloaded binary. >> >>> I understand you do not spend resource on such tasks before 1.0, >>> but do you think this is a legitimate|sensible request at all? >>> Would it be worth the work? >> Personally, I don't see any value in doing this work. C compilers are >> bootstrapped from C compiler binaries. Analogously, the Rust compiler >> is bootstrapped from the Rust compiler binary. >> >> Trying to bootstrap from rustboot would be akin to trying to bootstrap >> GCC from last1120c (the oldest C compiler with surviving source code). 
>> An interesting feat of computer archaeology, but not really useful for >> anything. >> > I think he was more referring to what language features will be allowed in the > rust compiler itself where earlier stages would be more restricted so they can > be compiled with older rust compilers, for example hopefully rustc 2.0 can be > compiled by rustc 1.0. Then later stages could use more features since it will > be compiled with the more up to date earlier stage. > > Currently what features can be used in the compiler itself are just limited to > whenever someone decides to compiler a newer stage0 compiler. > > > -- Jim > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From steve at steveklabnik.com Mon Jun 9 10:16:45 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 9 Jun 2014 13:16:45 -0400 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: <5395E813.2020505@mozilla.com> References: <24519353.205871402318500390.JavaMail.weblogic@epml09> <20140609133433.GA12302@eventhorizon> <5395E813.2020505@mozilla.com> Message-ID: I have this pipe dream of compiling every Rust version ever and GPG signing them though.... heh. From corey at octayn.net Mon Jun 9 10:20:18 2014 From: corey at octayn.net (Corey Richardson) Date: Mon, 9 Jun 2014 10:20:18 -0700 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: <24519353.205871402318500390.JavaMail.weblogic@epml09> <20140609133433.GA12302@eventhorizon> <5395E813.2020505@mozilla.com> Message-ID: I currently have 4246 builds of rustc, going back to a little bit before bors started being used. On Mon, Jun 9, 2014 at 10:16 AM, Steve Klabnik wrote: > I have this pipe dream of compiling every Rust version ever and GPG > signing them though.... heh. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- http://octayn.net/ From steve at steveklabnik.com Mon Jun 9 10:24:23 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 9 Jun 2014 13:24:23 -0400 Subject: [rust-dev] [ANN] Brooklyn.rs Message-ID: Hey all! So, I've moved to NYC, and one of the things I'm gonna miss the most about SF is they Bay Area Rust Meetup... so let's do this! Once my DNS resolves, the site will exist at http://www.brooklyn.rs . Until then, you can check it out at http://steveklabnik.github.io/brooklyn.rs/ TL;DR: The first meeting will be Saturday, June 21, at 1pm. I know weekends are hard for some people, so I plan on moving it to a weekday later, but Saturday hacking is a special thing to me[1] So the first one will be there. No talks, just hacking on some code. Keep it nice and simple at first. I hope to see you all there! 1: http://words.steveklabnik.com/keep-saturdays-sacred From rick.richardson at gmail.com Mon Jun 9 10:31:16 2014 From: rick.richardson at gmail.com (Rick Richardson) Date: Mon, 9 Jun 2014 13:31:16 -0400 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: References: Message-ID: Wow. Thanks for seeing this up! As you point out, Saturdays are hard. I don't think I'll be able to make this one, but I definitely look forward to hacking with you in the future. Hey all! So, I've moved to NYC, and one of the things I'm gonna miss the most about SF is they Bay Area Rust Meetup... so let's do this! Once my DNS resolves, the site will exist at http://www.brooklyn.rs . 
Until then, you can check it out at http://steveklabnik.github.io/brooklyn.rs/ TL;DR: The first meeting will be Saturday, June 21, at 1pm. I know weekends are hard for some people, so I plan on moving it to a weekday later, but Saturday hacking is a special thing to me[1] So the first one will be there. No talks, just hacking on some code. Keep it nice and simple at first. I hope to see you all there! 1: http://words.steveklabnik.com/keep-saturdays-sacred _______________________________________________ Rust-dev mailing list Rust-dev at mozilla.org https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Mon Jun 9 10:33:20 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 9 Jun 2014 13:33:20 -0400 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: References: Message-ID: I want to ask everyone who DOES make it what their preferred day would be, but I don't want to bias it towards the people who show up for the first one, so if you're interested, please let me know in this thread when's good for you. You can't make it easy for everyone, but I can hope... From dbp at dbpmail.net Mon Jun 9 10:35:48 2014 From: dbp at dbpmail.net (Daniel Patterson) Date: Mon, 09 Jun 2014 13:35:48 -0400 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: References: Message-ID: <87mwdmm1e3.fsf@xps13.home.dbpmail.net> Similarly, weekends are hard, and I can't make this one, but I'm definitely interested in future ones. Rick Richardson writes: > Wow. Thanks for seeing this up! > As you point out, Saturdays are hard. I don't think I'll be able to make > this one, but I definitely look forward to hacking with you in the future. > Hey all! > > So, I've moved to NYC, and one of the things I'm gonna miss the most > about SF is they Bay Area Rust Meetup... so let's do this! > > Once my DNS resolves, the site will exist at http://www.brooklyn.rs . > Until then, you can check it out at > http://steveklabnik.github.io/brooklyn.rs/ > > TL;DR: The first meeting will be Saturday, June 21, at 1pm. I know > weekends are hard for some people, so I plan on moving it to a weekday > later, but Saturday hacking is a special thing to me[1] So the first > one will be there. No talks, just hacking on some code. Keep it nice > and simple at first. > > I hope to see you all there! > > > 1: http://words.steveklabnik.com/keep-saturdays-sacred > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From ben.striegel at gmail.com Mon Jun 9 11:15:32 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Mon, 9 Jun 2014 14:15:32 -0400 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: <5395E813.2020505@mozilla.com> References: <24519353.205871402318500390.JavaMail.weblogic@epml09> <20140609133433.GA12302@eventhorizon> <5395E813.2020505@mozilla.com> Message-ID: This does raise a good question though: post-1.0, will we continue the current procedure of snapshotting whenever we feel like it, or will we restrict snapshots to stable releases, as Go plans to do? 
https://docs.google.com/document/d/1P3BLR31VA8cvLJLfMibSuTdwTuF7WWLux71CYD0eeD8/preview?sle=true "The rule we plan to adopt is that the Go 1.3 compiler must compile using Go 1.2, Go 1.4 must compile using Go 1.3, and so on." On Mon, Jun 9, 2014 at 1:00 PM, Brian Anderson wrote: > This is an interesting idea, but I don't see it happening for a long time > if ever: > > * The current process is working fine > * rustc depends on many of the standard libraries, so restricting rustc > means figuring out how to stick to a fixed subset of those libraries > * It's a lot of work to make the bootstrap process even *more* complicated > * For some minor benefits > > > On 06/09/2014 06:34 AM, James Cassidy wrote: > >> On Mon, Jun 09, 2014 at 12:55:00PM +0000, Sanghyeon Seo wrote: >> >>> Do you plan to create a cleaner full-bootstrap process? >>>> By "cleaner" I mean dividing stage-0 to more [sub-]stages, >>>> which would be well-defined and documented in terms of >>>> the set of language features it implements. Currently these >>>> sub-stages are defined by a team member's mood to instruct >>>> the build-bots to make a snapshot. This kind of bootstrap >>>> seems to be a black-box. >>>> >>> As I understand, there is no plan to do this. "Bootstrap" you are talking >>> about is purely theoretical, and I don't think anyone actually performed >>> it. >>> In practice, Rust is bootstrapped from the downloaded binary. >>> >>> I understand you do not spend resource on such tasks before 1.0, >>>> but do you think this is a legitimate|sensible request at all? >>>> Would it be worth the work? >>>> >>> Personally, I don't see any value in doing this work. C compilers are >>> bootstrapped from C compiler binaries. Analogously, the Rust compiler >>> is bootstrapped from the Rust compiler binary. >>> >>> Trying to bootstrap from rustboot would be akin to trying to bootstrap >>> GCC from last1120c (the oldest C compiler with surviving source code). >>> An interesting feat of computer archaeology, but not really useful for >>> anything. >>> >>> I think he was more referring to what language features will be allowed >> in the >> rust compiler itself where earlier stages would be more restricted so >> they can >> be compiled with older rust compilers, for example hopefully rustc 2.0 >> can be >> compiled by rustc 1.0. Then later stages could use more features since >> it will >> be compiled with the more up to date earlier stage. >> >> Currently what features can be used in the compiler itself are just >> limited to >> whenever someone decides to compiler a newer stage0 compiler. >> >> >> -- Jim >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bascule at gmail.com Mon Jun 9 11:50:22 2014 From: bascule at gmail.com (Tony Arcieri) Date: Mon, 9 Jun 2014 11:50:22 -0700 Subject: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0 In-Reply-To: <52D0D385.1080200@gmail.com> References: <52D0D385.1080200@gmail.com> Message-ID: On Fri, Jan 10, 2014 at 9:15 PM, Lee Braiden wrote: > > http://blog.irukado.org/2014/01/an-appeal-for-correct-capable-future-proof-math-in-nascent-programming-languages/ > Just wanted to mention that Swift's approach seems interesting here: by default check for overflow consider it an error if it happened, but also provide special operators that give you normal overflow semantics when you need performance: &+, &-, &*, &/ and &% https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/AdvancedOperators.html#//apple_ref/doc/uid/TP40014097-CH27-XID_37 -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.pedretti at gmail.com Mon Jun 9 11:53:55 2014 From: christophe.pedretti at gmail.com (Christophe Pedretti) Date: Mon, 9 Jun 2014 20:53:55 +0200 Subject: [rust-dev] Generic Database Bindings In-Reply-To: References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> <8586B490-B4B2-46E7-8240-5F772A9A70DF@gmail.com> Message-ID: Hi Laxmi, to compile the project, just use the nightly version of rust and run 'rustc rustic.rs' To compile and run my sample test file, just : - download the sqlite3.dll from the SQLite web site, i use " http://www.sqlite.org/2014/sqlite-dll-win32-x86-3080500.zip" - compile my sample 'rustc test-db.rs -L.' - and run it 'test-db.exe' everything has been tested from a Windows environment (mingw shell), fell free to test on linux or mac 2014-06-09 16:17 GMT+02:00 Laxmi Narayan NIT DGP : > hey chris , can you guide me on this project ... i would like to work on > it . > > > > > > * Laxmi Narayan Patel* > > * MCA NIT Durgapur ( Final year)* > > * Mob:- 8345847473 * > > > On Mon, Jun 9, 2014 at 2:27 PM, Christophe Pedretti < > christophe.pedretti at gmail.com> wrote: > >> i have started a small personal project, with for the moment, an only >> (but working) SQLite suport, you can find it here >> mainpage : http://chris-pe.github.io/Rustic/ >> github : https://github.com/chris-pe/Rustic >> documentation : http://www.rust-ci.org/chris-pe/Rustic/doc/rustic/ >> >> An example of how to use my library here >> https://github.com/chris-pe/Rustic/blob/master/test-db.rs >> >> -- >> Christophe >> >> >> 2014-06-08 22:25 GMT+02:00 Kevin Cantu : >> >> Worth mentioning, too, that the IRC channel is *way* more active at odd >>> hours now than it used to be. :) >>> >>> irc.mozilla.org #rust >>> >>> >>> Kevin >>> >>> >>> On Sun, Jun 8, 2014 at 6:00 AM, Steve Klabnik >>> wrote: >>> >>>> Like any open source, start throwing some code together and then tell >>>> us all about it! :) >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
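Picking up Tony Arcieri's note above about Swift's overflow handling: the same split between checked-by-default and explicitly wrapping arithmetic can be sketched with Rust's integer `checked_*` and `wrapping_*` methods. This is only an illustration in present-day Rust syntax, not a proposal for any particular API:

```
fn main() {
    let x: u8 = 250;

    // Checked arithmetic: overflow is reported instead of silently wrapping,
    // roughly the "error by default" behaviour described for Swift.
    match x.checked_add(10) {
        Some(sum) => println!("sum = {}", sum),
        None => println!("overflow detected"),
    }

    // Explicitly wrapping arithmetic, the moral equivalent of Swift's `&+`.
    println!("wrapped = {}", x.wrapping_add(10)); // prints "wrapped = 4"
}
```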
URL: From bwmaister at gmail.com Mon Jun 9 12:32:09 2014 From: bwmaister at gmail.com (Brandon W Maister) Date: Mon, 9 Jun 2014 15:32:09 -0400 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: <87mwdmm1e3.fsf@xps13.home.dbpmail.net> References: <87mwdmm1e3.fsf@xps13.home.dbpmail.net> Message-ID: Hard as Saturdays are, I've been wanting one of these for so long that I'll be there. Actually that's not fair, they're pretty easy for me. bwm -------------- next part -------------- An HTML attachment was scrubbed... URL: From flaper87 at gmail.com Mon Jun 9 12:35:14 2014 From: flaper87 at gmail.com (Flaper87) Date: Mon, 9 Jun 2014 21:35:14 +0200 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: References: Message-ID: 2014-06-09 19:24 GMT+02:00 Steve Klabnik : > Hey all! > > So, I've moved to NYC, and one of the things I'm gonna miss the most > about SF is they Bay Area Rust Meetup... so let's do this! > > Once my DNS resolves, the site will exist at http://www.brooklyn.rs . > Until then, you can check it out at > http://steveklabnik.github.io/brooklyn.rs/ > > TL;DR: The first meeting will be Saturday, June 21, at 1pm. I know > weekends are hard for some people, so I plan on moving it to a weekday > later, but Saturday hacking is a special thing to me[1] So the first > one will be there. No talks, just hacking on some code. Keep it nice > and simple at first. > > I hope to see you all there! > > > 1: http://words.steveklabnik.com/keep-saturdays-sacred > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > I'll be in NYC that day so, count me in! Flavio -- Flavio (@flaper87) Percoco http://www.flaper87.com http://github.com/FlaPer87 -------------- next part -------------- An HTML attachment was scrubbed... URL: From eli at zigr.org Mon Jun 9 13:39:01 2014 From: eli at zigr.org (Eli Green) Date: Mon, 9 Jun 2014 22:39:01 +0200 Subject: [rust-dev] Generic Database Bindings In-Reply-To: References: <1C71FA69-5B5F-4A2E-B1D3-01296C089F3A@zigr.org> Message-ID: <05D62420-7556-4F80-9CCC-BA4F91792DF5@zigr.org> Having looked at this library and the other options out there, I have to say the designers of rust-postgres have built a very comfortable API and it would be an excellent place to start. The two pieces I see missing are: 1. A generic way to specify bindings inside queries. JDBC and ODBC use ? as a placeholder for parameters whereas Python's DB-API lets you use a number of different formats. This was a mistake (the API came after several modules that implemented a similar interface) and one of the things that SQLAchemy Core does for users is to define a single style for passing parameters. This part seems easy and by making a macro out of it, could even make rust-postgres' API slightly nicer: // current syntax conn.execute("SELECT a FROM b WHERE foo=$1 OR bar=$2", [&foo as &ToSql, &bar as &ToSql]); // possible syntax - handles the casting to ToSql for you conn.execute(sql!("SELECT a FROM b WHERE foo=$1 OR bar=$2", foo, bar)); 2. rust-postgres defines two traits - ToSql and FromSql - which are what let the API do magical things as shown in their code snippet on their github page. I'm still learning about rust's type system but at the moment I don't see a way to make this work in a polymorphic environment. Not only that, some database drivers may support types that others do not. The geographic extension for PostgreSQL, PostGIS, can store geometries and send them to the user in a textual or binary format. 
This requirement could disappear if there was no need for the option to select a new driver at run-time, which is a feature common to all the other libraries I'm familiar with (though python technically doesn't do this - each module is completely stand-alone and there's no common code between them, the dynamic nature of python makes it trivial to load different modules based on runtime configuration). Does rust have any run-time type information built into the language? I've been assuming the answer is "no" given that one of the main design goals of the language is to avoid having a costly runtime. Eli On Jun 8, 2014, at 14:19, Steve Klabnik wrote: > There isn't no. If you want to build a binding, just do it! The only > one I'm really aware of right now is > https://github.com/sfackler/rust-postgres -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From me at kevincantu.org Mon Jun 9 14:01:30 2014 From: me at kevincantu.org (Kevin Cantu) Date: Mon, 9 Jun 2014 14:01:30 -0700 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: <24519353.205871402318500390.JavaMail.weblogic@epml09> <20140609133433.GA12302@eventhorizon> <5395E813.2020505@mozilla.com> Message-ID: There is a scary amount of stuff we could do with cfg flags... But it sounds like asking a lot to commit to that now, while there are still only a handful of Rust users. Kevin On Mon, Jun 9, 2014 at 11:15 AM, Benjamin Striegel wrote: > This does raise a good question though: post-1.0, will we continue the > current procedure of snapshotting whenever we feel like it, or will we > restrict snapshots to stable releases, as Go plans to do? > > > https://docs.google.com/document/d/1P3BLR31VA8cvLJLfMibSuTdwTuF7WWLux71CYD0eeD8/preview?sle=true > > "The rule we plan to adopt is that the Go 1.3 compiler must compile using > Go 1.2, Go 1.4 must compile using Go 1.3, and so on." > > > > > > On Mon, Jun 9, 2014 at 1:00 PM, Brian Anderson > wrote: > >> This is an interesting idea, but I don't see it happening for a long time >> if ever: >> >> * The current process is working fine >> * rustc depends on many of the standard libraries, so restricting rustc >> means figuring out how to stick to a fixed subset of those libraries >> * It's a lot of work to make the bootstrap process even *more* complicated >> * For some minor benefits >> >> >> On 06/09/2014 06:34 AM, James Cassidy wrote: >> >>> On Mon, Jun 09, 2014 at 12:55:00PM +0000, Sanghyeon Seo wrote: >>> >>>> Do you plan to create a cleaner full-bootstrap process? >>>>> By "cleaner" I mean dividing stage-0 to more [sub-]stages, >>>>> which would be well-defined and documented in terms of >>>>> the set of language features it implements. Currently these >>>>> sub-stages are defined by a team member's mood to instruct >>>>> the build-bots to make a snapshot. This kind of bootstrap >>>>> seems to be a black-box. >>>>> >>>> As I understand, there is no plan to do this. "Bootstrap" you are >>>> talking >>>> about is purely theoretical, and I don't think anyone actually >>>> performed it. >>>> In practice, Rust is bootstrapped from the downloaded binary. >>>> >>>> I understand you do not spend resource on such tasks before 1.0, >>>>> but do you think this is a legitimate|sensible request at all? >>>>> Would it be worth the work? >>>>> >>>> Personally, I don't see any value in doing this work. C compilers are >>>> bootstrapped from C compiler binaries. 
Analogously, the Rust compiler >>>> is bootstrapped from the Rust compiler binary. >>>> >>>> Trying to bootstrap from rustboot would be akin to trying to bootstrap >>>> GCC from last1120c (the oldest C compiler with surviving source code). >>>> An interesting feat of computer archaeology, but not really useful for >>>> anything. >>>> >>>> I think he was more referring to what language features will be >>> allowed in the >>> rust compiler itself where earlier stages would be more restricted so >>> they can >>> be compiled with older rust compilers, for example hopefully rustc 2.0 >>> can be >>> compiled by rustc 1.0. Then later stages could use more features since >>> it will >>> be compiled with the more up to date earlier stage. >>> >>> Currently what features can be used in the compiler itself are just >>> limited to >>> whenever someone decides to compiler a newer stage0 compiler. >>> >>> >>> -- Jim >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Mon Jun 9 14:59:49 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 9 Jun 2014 17:59:49 -0400 Subject: [rust-dev] how is Rust bootstrapped? In-Reply-To: References: <24519353.205871402318500390.JavaMail.weblogic@epml09> <20140609133433.GA12302@eventhorizon> <5395E813.2020505@mozilla.com> Message-ID: I like Go's rule, as it also should hopefully prevent accidental compatibility breakage. From tom at crystae.net Mon Jun 9 20:50:33 2014 From: tom at crystae.net (Tom Jakubowski) Date: Mon, 9 Jun 2014 20:50:33 -0700 Subject: [rust-dev] Preserving formatting for slice's Show impl Message-ID: I would expect that `println!("{:_>4}", [1].as_slice());` would print either `[___1]` (where the format is "mapped" over the slice) or `_[1]` (where the format is applied to the slice as a whole), but instead no formatting is applied at all and it simply prints `[1]`.? I can see uses and arguments for both the "mapping" and "whole? interpretations of the format string on slices. On the one hand this ambiguity makes a case for leaving the behavior as-is for backwards compatibility. On the other hand it would be useful to be able to format slices (and other collections, of course). Would it be appropriate to expand the syntax for format strings to allow for nested format strings, so that separate formatting can be applied to the entire collection and to its contents? I assume it this would require an RFC. (The "mapped" variant can be very easily implemented, by the way, by replacing `try!(write!("{}", x))` with `try!(x.fmt(f))` in the `impl Show for &[T]`.) Tom -------------- next part -------------- An HTML attachment was scrubbed... 
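For what it's worth, the "mapped" interpretation is easy to demonstrate with a small wrapper type that forwards the caller's format spec to each element. A minimal sketch in present-day Rust syntax (the `Mapped` wrapper is purely illustrative, not an existing standard-library type):

```
use std::fmt;

struct Mapped<'a, T>(&'a [T]);

impl<'a, T: fmt::Display> fmt::Display for Mapped<'a, T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "[")?;
        for (i, x) in self.0.iter().enumerate() {
            if i > 0 {
                write!(f, ", ")?;
            }
            // Reusing the caller's Formatter is what lets the fill, alignment
            // and width apply to each element instead of being dropped.
            x.fmt(f)?;
        }
        write!(f, "]")
    }
}

fn main() {
    println!("{:_>4}", Mapped(&[1])); // prints [___1]
}
```

Forwarding the same `Formatter` inside the loop is exactly the one-line change described above for the slice impl, applied here to a standalone wrapper.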
URL: From matthieu.monrocq at gmail.com Tue Jun 10 09:47:23 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Tue, 10 Jun 2014 18:47:23 +0200 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: References: <538FA53E.1030309@mozilla.com> Message-ID: Could there be a risk in using JSR310 as a basis seeing the "recent" judgement of the Federal Circuit Court that judged that APIs were copyrightable (in the Google vs Oracle fight over the Java API) ? -- Matthieu On Sat, Jun 7, 2014 at 6:01 PM, Bardur Arantsson wrote: > On 2014-06-05 01:01, Brian Anderson wrote: > > # Date/Time (https://github.com/mozilla/rust/issues/14657) > > > > Our time crate is very minimal, and the API looks dated. This is a hard > > problem and JodaTime seems to be well regarded so let's just copy it. > > JSR310 has already been mentioned in the thread, but I didn't see anyone > mentioning that it was accepted into the (relatively) recently finalized > JDK8: > > http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html > > The important thing to note is basically that it was simplified quite a > lot relative to JodaTime, in particular by removing non-Gregorian > chronologies. > > Regards, > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpx.infinity at gmail.com Tue Jun 10 10:01:25 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Tue, 10 Jun 2014 21:01:25 +0400 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: References: <538FA53E.1030309@mozilla.com> Message-ID: <86DF7A86-A7BF-4299-B33E-D5AFD9004546@gmail.com> Well, JSR-310 is implemented here [1], and it is licensed under GPL2 license. As far as I remember, in that case Google reproduced some internal Java API, so this seems to be a different thing. BTW, one of the implementors of JSR-310 suggested [3] looking into an older implementation which is now a backport of JSR-310 to JavaSE 7 [2]. It is licensed under BSD license, which is even more permissive. Also because Rust is a different language with completely different idioms and approaches to API design, I think we?ll have no problems in this regard - the actual API is going to be quite different from the original JSR-310. [1]: http://hg.openjdk.java.net/threeten/threeten/jdk [2]: https://github.com/ThreeTen/threetenbp [3]: https://github.com/mozilla/rust/issues/14657#issuecomment-45240889 On 10 ???? 2014 ?., at 20:47, Matthieu Monrocq wrote: > Could there be a risk in using JSR310 as a basis seeing the "recent" judgement of the Federal Circuit Court that judged that APIs were copyrightable (in the Google vs Oracle fight over the Java API) ? > > -- Matthieu > > > On Sat, Jun 7, 2014 at 6:01 PM, Bardur Arantsson wrote: > On 2014-06-05 01:01, Brian Anderson wrote: > > # Date/Time (https://github.com/mozilla/rust/issues/14657) > > > > Our time crate is very minimal, and the API looks dated. This is a hard > > problem and JodaTime seems to be well regarded so let's just copy it. 
> > JSR310 has already been mentioned in the thread, but I didn't see anyone > mentioning that it was accepted into the (relatively) recently finalized > JDK8: > > http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html > > The important thing to note is basically that it was simplified quite a > lot relative to JodaTime, in particular by removing non-Gregorian > chronologies. > > Regards, > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From depp at zdome.net Tue Jun 10 12:04:42 2014 From: depp at zdome.net (Dietrich Epp) Date: Tue, 10 Jun 2014 12:04:42 -0700 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: <86DF7A86-A7BF-4299-B33E-D5AFD9004546@gmail.com> References: <538FA53E.1030309@mozilla.com> <86DF7A86-A7BF-4299-B33E-D5AFD9004546@gmail.com> Message-ID: I?m writing datetime-rs, and I have been reading the JSR-310 source code. The code is mostly helpful in the sense that it precisely expresses the relationships between concepts, e.g, ?to convert an instant to a date you need to choose a calendar?. Translating the API to Rust would result in an unhappy mess, and the implementation equally so. So, the JSR-310 API and code is not actually a very good resource. What is a good resource is Stephen Colebourne?s blog and the issue tracker for JSR-310 on GitHub. So, we may not be copying anything, but neither will we have to forge ahead and discover anything new. ?Dietrich On Jun 10, 2014, at 10:01 AM, Vladimir Matveev wrote: > Well, JSR-310 is implemented here [1], and it is licensed under GPL2 license. As far as I remember, in that case Google reproduced some internal Java API, so this seems to be a different thing. BTW, one of the implementors of JSR-310 suggested [3] looking into an older implementation which is now a backport of JSR-310 to JavaSE 7 [2]. It is licensed under BSD license, which is even more permissive. > > Also because Rust is a different language with completely different idioms and approaches to API design, I think we?ll have no problems in this regard - the actual API is going to be quite different from the original JSR-310. > > [1]: http://hg.openjdk.java.net/threeten/threeten/jdk > [2]: https://github.com/ThreeTen/threetenbp > [3]: https://github.com/mozilla/rust/issues/14657#issuecomment-45240889 > > On 10 ???? 2014 ?., at 20:47, Matthieu Monrocq wrote: > >> Could there be a risk in using JSR310 as a basis seeing the "recent" judgement of the Federal Circuit Court that judged that APIs were copyrightable (in the Google vs Oracle fight over the Java API) ? >> >> -- Matthieu >> >> >> On Sat, Jun 7, 2014 at 6:01 PM, Bardur Arantsson wrote: >> On 2014-06-05 01:01, Brian Anderson wrote: >>> # Date/Time (https://github.com/mozilla/rust/issues/14657) >>> >>> Our time crate is very minimal, and the API looks dated. This is a hard >>> problem and JodaTime seems to be well regarded so let's just copy it. 
>> >> JSR310 has already been mentioned in the thread, but I didn't see anyone >> mentioning that it was accepted into the (relatively) recently finalized >> JDK8: >> >> http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html >> >> The important thing to note is basically that it was simplified quite a >> lot relative to JodaTime, in particular by removing non-Gregorian >> chronologies. >> >> Regards, >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From learnopengles at gmail.com Tue Jun 10 13:20:57 2014 From: learnopengles at gmail.com (learnopengles) Date: Tue, 10 Jun 2014 16:20:57 -0400 Subject: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations Message-ID: Hi all, With the recent release of Swift, I've been interested in modern alternatives to C & C++, and I've recently been reading more about Rust; I've been especially encouraged by the comments and posts about it by Patrick Walton. I develop primarily in Java for Android, but I also do some C & C++ for fun in my spare time, just to learn something else and because I think it's useful to learn for certain areas of app development that need more power & control than what Java alone can give. Rust looks very cool, and to try it out, I decided to port a small DSP test that I've been working on for another life in app dev, and wanted to share my initial experiences and hiccups that I ran across. I didn't see any other mailing lists, so my apologies if this is not the appropriate place to post this. This test involves mainly double floating-point math, and uses a Chebyshev filter created with the help of mkfilter. I checked that the results are the same between the C++ and the Rust implementations. There were a few gotchas / items I didn't quite understand while I was porting the code to Rust: * What would be the replacement for a struct-scoped static constant, so I could put a static inside a struct instead of making it a global? * Is there a better way of doing a static_assert? The way I did it wasn't very nice to use, and the compiler complained about unused variables. * Rust doesn't have prefix/postfix increment? Or, I just didn't find the right syntax of using it? * My biggest problem was figuring out how to use arrays. Originally, things just weren't working and I think it's because I was inadvertently copying an array instead of referring to the original. t just couldn't figure out how to create a mutable alias to an array passed into a function by reference. * I understand the reasoning behind explicit integer conversions, but depending on what one is doing, it can add to a lot of explicit conversions, and I also didn't figure out a way to do an unsigned for loop. * When creating / using arrays, there is sometimes duplication of the size parameter. Is there a way to reduce that? * This isn't the fault of the Rust's team, but my learning was complicated by the fact that a lot of the info on the web is out of date compared to the latest release of Rust. 
;) *Performance observations* The rust code, compiled with: rustc --opt-level 3 -Z lto PerformanceTest.rs rustc 0.11.0-pre-nightly (0ee6a8e 2014-06-09 23:41:53 -0700) host: x86_64-apple-darwin Iterations: 885 Rust Results: 92,765,420 shorts per second. The C++ code, compiled with: clang PerformanceTest.cpp dsp.cpp -std=c++11 -ffast-math -flto -O3 -o PerformanceTest Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn) Target: x86_64-apple-darwin13.2.0 Thread model: posix Iterations: 586 C results: 61,349,033 shorts per second. This is just a test I'm doing to experiment, so it's quite possible I'm doing something silly in either code. Still, this is quite impressive and encouraging! I wonder what Rust is doing to optimize this above and beyond the C++ version. Here is the code: dsp.rs: use std::cmp::max; use std::cmp::min; use std::i16; static FILTER_SIZE : int = 16; pub struct FilterState { input: [f64, ..FILTER_SIZE], output: [f64, ..FILTER_SIZE], current: uint } impl FilterState { pub fn new() -> FilterState { FilterState {input:[0.0, ..FILTER_SIZE], output:[0.0, ..FILTER_SIZE], current:0} } } #[inline] fn clamp(input: int) -> i16 { return max(i16::MIN as int, min(i16::MAX as int, input)) as i16; } #[inline] fn get_offset(filter_state : &FilterState, relative_offset : int) -> uint { #[static_assert] static t: bool = (FILTER_SIZE & (FILTER_SIZE - 1)) == 0; return (filter_state.current + relative_offset as uint) % FILTER_SIZE as uint; } #[inline] fn push_sample(filter_state : &mut FilterState, sample: i16) { filter_state.input[get_offset(filter_state, 0)] = sample as f64; filter_state.current = filter_state.current + 1; } #[inline] fn get_output_sample(filter_state : &FilterState) -> i16 { return clamp(filter_state.output[get_offset(filter_state, 0)] as int); } // This is an implementation of a Chebyshev lowpass filter at 5000hz with ripple -0.50dB, // 10th order, and for an input sample rate of 44100hz. #[inline] fn apply_lowpass_single(filter_state : &mut FilterState) { #[static_assert] static t: bool = FILTER_SIZE >= 10; //let x = filter_state.input; let x = &filter_state.input; // Note: I didn't understand how to reference y; I couldn't make it work without either errors or silently dropping // the result (was copying the array?). 
// let y = &mut filter_state.output; // let y = mut filter_state.output; filter_state.output[get_offset(filter_state, 0)] = ( 1.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -10)] + x[get_offset(filter_state, -0)])) + ( 10.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -9)] + x[get_offset(filter_state, -1)])) + ( 45.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -8)] + x[get_offset(filter_state, -2)])) + (120.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -7)] + x[get_offset(filter_state, -3)])) + (210.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -6)] + x[get_offset(filter_state, -4)])) + (252.0 * (1.0 / 6.928330802e+06) * x[get_offset(filter_state, -5)]) + ( -0.4441854896 * filter_state.output[get_offset(filter_state, -10)]) + ( 4.2144719035 * filter_state.output[get_offset(filter_state, -9)]) + ( -18.5365677633 * filter_state.output[get_offset(filter_state, -8)]) + ( 49.7394321983 * filter_state.output[get_offset(filter_state, -7)]) + ( -90.1491003509 * filter_state.output[get_offset(filter_state, -6)]) + ( 115.3235358151 * filter_state.output[get_offset(filter_state, -5)]) + (-105.4969191433 * filter_state.output[get_offset(filter_state, -4)]) + ( 68.1964705422 * filter_state.output[get_offset(filter_state, -3)]) + ( -29.8484881821 * filter_state.output[get_offset(filter_state, -2)]) + ( 8.0012026712 * filter_state.output[get_offset(filter_state, -1)]); } #[inline] pub fn apply_lowpass(filter_state: &mut FilterState, input: &[i16], output: &mut [i16], length: int) { // Better way to do uint range? for i in range(0, length) { push_sample(filter_state, input[i as uint]); apply_lowpass_single(filter_state); output[i as uint] = get_output_sample(filter_state); } } PerformanceTest.rs: extern crate time; extern crate num; use time::precise_time_s; use std::num::FloatMath; mod dsp; static LENGTH : int = 524288; fn do_rust_test(inData: &[i16], outData: &mut[i16]) -> int { let start = precise_time_s(); let end = start + 5.0; let mut iterations = 0; let mut filter_state = dsp::FilterState::new(); let mut dummy = 0; while precise_time_s() < end { dsp::apply_lowpass(&mut filter_state, inData, outData, LENGTH); // Avoid some over-optimization dummy += outData[0]; iterations = iterations + 1; } println!("Dummy:{}", dummy); println!("Iterations:{}",iterations); let elapsed_time = precise_time_s() - start; let shorts_per_second = ((iterations * LENGTH) as f64 / elapsed_time) as int; return shorts_per_second; } fn main() { let mut inData = box [0, ..LENGTH]; let mut outData = box [0, ..LENGTH]; for i in range(0, LENGTH) { inData[i as uint] = ((i as f32).sin() * 1000.0) as i16; } println!("Beginning Rust tests...\n\n"); let rustResult = do_rust_test(inData, outData); println!("Rust Results: {} shorts per second.\n\n", rustResult); } And the C++ version: dsp.h: #include struct FilterState { static constexpr int size = 16; double input[size]; double output[size]; unsigned int current; FilterState() : input{}, output{}, current{} {} }; void apply_lowpass(FilterState& filter_state, const int16_t* input, int16_t* output, int length); dsp.cpp: #include "dsp.h" #include #include #include static constexpr int int16_min = std::numeric_limits::min(); static constexpr int int16_max = std::numeric_limits::max(); static inline int16_t clamp(int input) { return std::max(int16_min, std::min(int16_max, input)); } static inline int get_offset(const FilterState& filter_state, int relative_offset) { static_assert(!(FilterState::size & (FilterState::size - 1)), "size 
must be a power of two."); return (filter_state.current + relative_offset) % filter_state.size; } static inline void push_sample(FilterState& filter_state, int16_t sample) { filter_state.input[get_offset(filter_state, 0)] = sample; ++filter_state.current; } static inline int16_t get_output_sample(const FilterState& filter_state) { return clamp(filter_state.output[get_offset(filter_state, 0)]); } // This is an implementation of a Chebyshev lowpass filter at 5000hz with ripple -0.50dB, // 10th order, and for an input sample rate of 44100hz. static inline void apply_lowpass(FilterState& filter_state) { static_assert(FilterState::size >= 10, "FilterState::size must be at least 10."); double* x = filter_state.input; double* y = filter_state.output; y[get_offset(filter_state, 0)] = ( 1.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -10)] + x[get_offset(filter_state, -0)])) + ( 10.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -9)] + x[get_offset(filter_state, -1)])) + ( 45.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -8)] + x[get_offset(filter_state, -2)])) + (120.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -7)] + x[get_offset(filter_state, -3)])) + (210.0 * (1.0 / 6.928330802e+06) * (x[get_offset(filter_state, -6)] + x[get_offset(filter_state, -4)])) + (252.0 * (1.0 / 6.928330802e+06) * x[get_offset(filter_state, -5)]) + ( -0.4441854896 * y[get_offset(filter_state, -10)]) + ( 4.2144719035 * y[get_offset(filter_state, -9)]) + ( -18.5365677633 * y[get_offset(filter_state, -8)]) + ( 49.7394321983 * y[get_offset(filter_state, -7)]) + ( -90.1491003509 * y[get_offset(filter_state, -6)]) + ( 115.3235358151 * y[get_offset(filter_state, -5)]) + (-105.4969191433 * y[get_offset(filter_state, -4)]) + ( 68.1964705422 * y[get_offset(filter_state, -3)]) + ( -29.8484881821 * y[get_offset(filter_state, -2)]) + ( 8.0012026712 * y[get_offset(filter_state, -1)]); } void apply_lowpass(FilterState& filter_state, const int16_t* input, int16_t* output, int length) { for (int i = 0; i < length; ++i) { push_sample(filter_state, input[i]); apply_lowpass(filter_state); output[i] = get_output_sample(filter_state); } } PerformanceTest.cpp: #include #include #include #include #include "dsp.h" using namespace std; static const int LENGTH = 524288; static short inData[LENGTH]; static short outData[LENGTH]; static void populateData() { for (int i = 0; i < LENGTH; ++i) { inData[i] = sin(i) * 1000.0; } } static long doCTest() { clock_t start = clock(); clock_t end = clock() + (CLOCKS_PER_SEC * 5); int iterations = 0; int dummy = 0; FilterState filter_state{}; while (clock() < end) { apply_lowpass(filter_state, inData, outData, LENGTH); // Avoid some over-optimization dummy += outData[0]; iterations++; } printf("Dummy:%d\n", dummy); printf("Iterations:%d\n",iterations); clock_t elapsed_ticks = clock() - start; long shortsPerSecond = (long) ((iterations * (long)LENGTH) / (elapsed_ticks / (double) CLOCKS_PER_SEC)); return shortsPerSecond; } int main() { populateData(); printf("Beginning C tests...\n\n"); long cResult = doCTest(); printf("C results: %ld shorts per second.\n\n", cResult); } Sincerely, Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at mir2.org Tue Jun 10 14:19:09 2014 From: igor at mir2.org (Igor Bukanov) Date: Tue, 10 Jun 2014 23:19:09 +0200 Subject: [rust-dev] Building rustc @ 1GB RAM? 
In-Reply-To: <1401994016963.bf6ab09@Nodemailer> References: <1401994016963.bf6ab09@Nodemailer> Message-ID: I tried building rust in a VM with 1GB of memory and it seems only zswap works. With zram-only solution without any real swap I was not able to compile rust at all. The compiler generated out-of-memory exception with zram configured to take 30-70% of memory. With zswap enabled, zswap.max_pool_percent=70 and the real swap of 2.5 GB the compilation time for the latest tip was about 2 hours. This is on Mac Air and Linux inside VirtualBox. On 5 June 2014 20:46, Ian Daniher wrote: > zram is a great suggestion, thanks! I'll give it a shot. > ? > From My Tiny Glowing Screen > > > On Thu, Jun 5, 2014 at 2:25 PM, Igor Bukanov wrote: >> >> Have you considered to use zram? Typically the compression for >> compiler memory is over a factor of 3 so that can be an option as the >> performance degradation under swapping could be tolerable. A similar >> option is to enable zswap, but as the max compression with it is >> effectively limited by factor of 2, it may not be enough to avoid >> swapping. >> >> On 5 June 2014 20:13, Ian Daniher wrote: >> > 1GB is close-ish to the 1.4GB last reported (over a month ago!) by >> > http://huonw.github.io/isrustfastyet/mem/. >> > >> > Are there any workarounds to push the compilation memory down? I'm also >> > exploring distcc, but IRFY has a bit of semantic ambiguity as to whether >> > or >> > not it's 1.4GB simultaneous or net total. >> > >> > Thanks! >> > -- >> > Ian >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > > > From steve at steveklabnik.com Tue Jun 10 14:42:21 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Tue, 10 Jun 2014 17:42:21 -0400 Subject: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations In-Reply-To: References: Message-ID: Hey Kevin! Thanks so much for sharing! This is the right place, though the Reddit may be interested, too. I don't have a lot to say, but I _do_ have one or two things: > Rust doesn't have prefix/postfix increment? Or, I just didn't find the right syntax of using it? It does not. x = x + 1. Much more clear, no confusion about what comes back. Tricky code leads to bugs. :) > This isn't the fault of the Rust's team, but my learning was complicated by the fact that a lot of the info on the web is out of date compared to the latest release of Rust. ;) We've been trying to encourage people to put Rust version information in posts, but it can be hard. Expect the official stuff to improve rapidly as we approach 1.0 as well. From steve at steveklabnik.com Tue Jun 10 15:01:44 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Tue, 10 Jun 2014 18:01:44 -0400 Subject: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations In-Reply-To: References: Message-ID: > Ah, that makes sense. Really looking forward to the 1.0 release; I don't > know if it's on the roadmap, but if there's a way to plug it into Android or > iOS builds via LLVM, that would be really neat. Also went ahead and posted > on Reddit. ;) Builds are already tested against Android, and I think iOS is in the works, if I remember right. 
:) From learnopengles at gmail.com Tue Jun 10 14:44:53 2014 From: learnopengles at gmail.com (learnopengles) Date: Tue, 10 Jun 2014 17:44:53 -0400 Subject: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations In-Reply-To: References: Message-ID: Hi Steve, Ah, that makes sense. Really looking forward to the 1.0 release; I don't know if it's on the roadmap, but if there's a way to plug it into Android or iOS builds via LLVM, that would be really neat. Also went ahead and posted on Reddit. ;) Cheers, and thank you to you and the team for your hard work. I was always wondering why there hasn't been an alternative to C++ in the same space until now, and Rust might finally be the answer especially if it also enables cross-platform with low penalty and easy interop (no more difficult than C). Kevin On Tue, Jun 10, 2014 at 5:42 PM, Steve Klabnik wrote: > Hey Kevin! > > Thanks so much for sharing! This is the right place, though the Reddit > may be interested, too. > > I don't have a lot to say, but I _do_ have one or two things: > > > Rust doesn't have prefix/postfix increment? Or, I just didn't find the > right syntax of using it? > > It does not. x = x + 1. Much more clear, no confusion about what comes > back. Tricky code leads to bugs. :) > > > This isn't the fault of the Rust's team, but my learning was complicated > by the fact that a lot of the info on the web is out of date compared to > the latest release of Rust. ;) > > We've been trying to encourage people to put Rust version information > in posts, but it can be hard. Expect the official stuff to improve > rapidly as we approach 1.0 as well. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Tue Jun 10 15:57:34 2014 From: bascule at gmail.com (Tony Arcieri) Date: Tue, 10 Jun 2014 15:57:34 -0700 Subject: [rust-dev] 7 high priority Rust libraries that need to be written In-Reply-To: <538FA53E.1030309@mozilla.com> References: <538FA53E.1030309@mozilla.com> Message-ID: On Wed, Jun 4, 2014 at 4:01 PM, Brian Anderson wrote: > # Crypto (https://github.com/mozilla/rust/issues/14655) > > We've previously made the decision not to distribute any crypto with > Rust at all, but this is probably not tenable since crypto is used > everywhere. My current opinion is that we should not distribute any crypto *written > in Rust*, but that distributing bindings to proven crypto is fine. > > Figure out a strategy here, build consensus, then start implementing a > robust crypto library out of tree, with the goal of merging into the main > distribution someday, and possibly - far in the future - reimplementing in > Rust. There are some existing efforts along these lines that should be > evaluated for this purpose > There's two directions to go on this. I will label them "short-term" and "long-term". Short-term, I think Rust should embrace wrappers around existing, "well-audited" crypto libraries. To that end, projects like Rust OpenSSL (despite OpenSSL's numerous and recently infamous problems) are probably our best bet: https://github.com/sfackler/rust-openssl Long term, I would love to see pure-Rust crypto libraries, as I believe Rust's safety is exactly what cryptography needs to protect us from Heartbleed-style screw ups. 
The most complete one of these I've seen so far is rust-crypto, however it's missing many common algorithms like RSA and Diffie-Hellman: https://github.com/DaGenix/rust-crypto I'd probably suggest people use rust-openssl over rust-crypto for the time being, as much more work has gone into OpenSSL at this point and there are better chances that existing algorithm implementations will be constant time. I would love to see organizations who use Rust (*wink* *wink* *nudge* *nudge* Mozilla) contribute to and help fund professional security audits like rust-crypto! :D Sidebar: I am also working on a Rust crypto library (ClearCrypt) but its goals are somewhat orthogonal to the needs of your average Rust user (modern/minimalistic, self-contained, C ABI, easily embeddable) -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbau.pp at gmail.com Tue Jun 10 16:08:33 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Wed, 11 Jun 2014 09:08:33 +1000 Subject: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations In-Reply-To: References: Message-ID: <53978FF1.20704@gmail.com> On 11/06/14 07:42, Steve Klabnik wrote: >> Rust doesn't have prefix/postfix increment? Or, I just didn't find the right syntax of using it? > It does not. x = x + 1. Much more clear, no confusion about what comes > back. Tricky code leads to bugs. :) FWIW, primitive types offer `x += 1`, and #5992 covers extending this to all types. https://github.com/mozilla/rust/issues/5992 Huon From dpx.infinity at gmail.com Wed Jun 11 00:09:03 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Wed, 11 Jun 2014 11:09:03 +0400 Subject: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations In-Reply-To: References: Message-ID: Hi, Kevin! > * What would be the replacement for a struct-scoped static constant, so I could put a static inside a struct instead of making it a global? It is not possible now. There are some suggestions on associated items, but I don?t think they are active. Currently Rust module system is used to control scopes of statics. > * Rust doesn't have prefix/postfix increment? Or, I just didn't find the right syntax of using it? Yes, Rust doesn?t have it. You should use composite assignment: x += 1. It is not an expression, though. > * My biggest problem was figuring out how to use arrays. Originally, things just weren't working and I think it's because I was inadvertently copying an array instead of referring to the original. t just couldn't figure out how to create a mutable alias to an array passed into a function by reference. Well, you have correctly figured out that it is done using slices :) > * I understand the reasoning behind explicit integer conversions, but depending on what one is doing, it can add to a lot of explicit conversions, and I also didn't figure out a way to do an unsigned for loop. Yes, explicit conversions may sometimes be too verbose. As for unsigned for loop, it is easy. Remember, Rust uses type inference to find out correct types of all local variables. `range()` function which creates range iterators is generic and looks like this: fn range(from: T, until: T) -> Range { ? } (actual definition is different because T is not arbitrary but bounded with some traits) The `T` type parameter is determined automatically from the use site of the function. In your case it is deduced as `int` because of `length` variable (which is of `int` type). 
So you can just cast `length` to `uint`: for i in range(0, length as uint) { ? } and `i` variable will be unsigned. BTW, why did you define `length` parameter as `int` at all? You can make it `uint` and you won?t need to do this cast. > * When creating / using arrays, there is sometimes duplication of the size parameter. Is there a way to reduce that? I don?t think so. They are statically sized arrays, so they just need their size specified. When you don?t care about their size, you usually use slices anyway. From noamraph at gmail.com Wed Jun 11 04:35:26 2014 From: noamraph at gmail.com (Noam Yorav-Raphael) Date: Wed, 11 Jun 2014 14:35:26 +0300 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: <1be6b7e90decf085737738bc47269ce5@endl.ch> References: <1be6b7e90decf085737738bc47269ce5@endl.ch> Message-ID: You can achieve overloading which is equivalent to C++ by defining a trait for all the types a specific argument can get: ``` enum IntOrFloatEnum { Int, F64, } trait IntOrFloat { fn get_type(&self) -> IntOrFloatEnum; fn get_int(self) -> int { fail!(); } fn get_f64(self) -> f64 { fail!(); } } impl IntOrFloat for int { fn get_type(&self) -> IntOrFloatEnum { Int } fn get_int(self) -> int { self } } impl IntOrFloat for f64 { fn get_type(&self) -> IntOrFloatEnum { F64 } fn get_f64(self) -> f64 { self } } fn overloaded(x: T) { match x.get_type() { Int => println!("got int: {}", x.get_int()), F64 => println!("got f64: {}", x.get_f64()), } } fn main() { overloaded(5i); // prints: got int: 5 overloaded(3.5); // prints: got f64: 3.5 } ``` This is equivalent to having to functions, overloaded(int) and overloaded(f64). From what I see, the compiler even optimizes away the logic, so the generated code is actually equivalent to this: ``` fn overloaded_int(x: int) { println!("got int: {}", x); } fn overloaded_f64(x: f64) { println!("got f64: {}", x); } fn main() { overloaded_int(5i); overloaded_f64(3.5); } ``` (I actually think that if Rust gains one day some support for overloading, it should be syntactic sugar for the above, which will allow you to define a function whose argument can be of multiple types. I don't like the C++ style of defining several different functions with the same name and letting the compiler choose which function should actually be called). Using this method you can solve both the problem of overloading and default arguments. For every possible number of arguments that C++ would allow, define a function funcN(arg0: T0, arg1: T1, ..., argN-1: TN-1). The function would check the actual types of the arguments and call the right C++ function, filling default arguments on the way. So the only difference between C++ and Rust code would be that you'd have to add the number of arguments to the method name. It would probably not be easy to generate the required code, but I think it would solve the problem perfectly. Cheers, Noam On Thu, May 22, 2014 at 11:27 PM, Alexander Tsvyashchenko wrote: > Hi All, > > Recently I was playing with bindings generator from C++ to Rust. I managed > to make things work for Qt5 wrapping, but stumbled into multiple issues > along the way. > > I tried to summarize my "pain points" in the following blog post: > http://endl.ch/content/cxx2rust-pains-wrapping-c-rust-example-qt5 > > I hope that others might benefit from my experience and that some of these > "pain points" can be fixed in Rust. 
> > I'll try to do my best in answering questions / acting on feedback, if > any, but I have very limited amount of free time right now so sorry in > advance if answers take some time. > > Thanks! > > -- > Good luck! Alexander > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slabode at aim.com Wed Jun 11 06:27:21 2014 From: slabode at aim.com (SiegeLord) Date: Wed, 11 Jun 2014 09:27:21 -0400 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) Message-ID: <53985939.3010603@aim.com> First, let me begin with a small discussion about C++ rvalue references. As some of you know, they were introduced to C++ in part to solve problems like this: Matrix m; m.data = {1.0, 2.0, 3.0}; Matrix m2 = m * 2.0 * 5.0 * 10.0; Before C++11, most implementations the multiplications on the third line would create two (unnecessary) temporary copies of the Matrix, causing widespread inefficiency if Matrix was large. By using rvalue references (see the implementation in this gist: https://gist.github.com/SiegeLord/85ced65ab220a3fdc1fc we can reduce the number of copies to one. What the C++ does is that the first multiplication (* 2.0) creates a copy of the matrix, and the remaining multiplications move that copy around. If you look at the implementation, you'll note how complicated the C++ move semantics are compared to Rust's (you have to use std::move everywhere, define move-constructors and move-assignment with easy-to-get-wrong implementations etc.). Since Rust has simpler move semantics, can we do the same thing in Rust? It turns out we cannot, because the operator overloading in Rust is done by overloading a trait with a method that takes self by reference: pub trait Mul { fn mul(&self, rhs: &RHS) -> Result; } This means that the crucial step of moving out from the temporary cannot be done without complicated alternatives (explained at the end of this email). If we define an a multiplication trait that takes self by value, however then this is possible and indeed relatively trivial (see implementation here: https://gist.github.com/SiegeLord/11456760237781442cfe ). This code will act just like the C++ did: it will copy during the first move_mul call, and then move the temporary around: let m = Matrix{ data: vec![1.0f32, 2.0, 3.0] }; let m2 = (&m).move_mul(2.0).move_mul(5.0).move_mul(10.0); So there's nothing in Rust move semantics which prevents this useful pattern, and it'd be possible to do that with syntax sugar if the operator overload traits did not sabotage it. Pretty much all the existing users (e.g. num::BigInt and sebcrozet's nalgebra) of operator overloading traits take the inefficient route of creating a temporary copy for each operation (see https://github.com/mozilla/rust/blob/master/src/libnum/bigint.rs#L283 and https://github.com/sebcrozet/nalgebra/blob/master/src/structs/dmat.rs#L593 ). If the operator overloading traits do not allow you to create efficient implementations of BigNums and linear algebra operations, the two use cases why you'd even *have* operator overloading as a language feature, why even have that feature? I think this goes beyond just operator overloading, however, as these kinds of situations may arise in many other traits. By defining trait methods as taking &self and &mut self, we are preventing these useful optimizations. 
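To make the pattern concrete, here is a minimal sketch of the kind of by-value `MoveMul` being described, in present-day Rust syntax; the element-wise scaling body is an assumption for illustration rather than the gist's actual code:

```
#[derive(Clone)]
struct Matrix {
    data: Vec<f32>,
}

trait MoveMul<RHS, Result> {
    fn move_mul(self, rhs: RHS) -> Result;
}

// First link in a chain: borrow the original and pay for exactly one copy.
impl<'a> MoveMul<f32, Matrix> for &'a Matrix {
    fn move_mul(self, rhs: f32) -> Matrix {
        self.clone().move_mul(rhs)
    }
}

// Every later link: take the temporary by value and reuse its buffer.
impl MoveMul<f32, Matrix> for Matrix {
    fn move_mul(mut self, rhs: f32) -> Matrix {
        for v in self.data.iter_mut() {
            *v *= rhs;
        }
        self
    }
}

fn main() {
    let m = Matrix { data: vec![1.0f32, 2.0, 3.0] };
    let m2 = (&m).move_mul(2.0).move_mul(5.0).move_mul(10.0);
    println!("{:?} -> {:?}", m.data, m2.data);
}
```

Only the first call pays for a copy; each later `move_mul` moves the temporary along and reuses its buffer, which is what the C++ rvalue-reference version achieves with considerably more ceremony.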
Aside from somewhat more complicated impl's, are there any downsides to never using anything but by value 'self' in traits? If not, then I think that's what they should be using to allow people to create efficient APIs. In fact, this probably should extend to every member generic function argument: you should never force the user to tie their hands by using a reference. Rust has amazing move semantics, I just don't see what is gained by abandoning them whenever you use most traits. Now, I did say there are complicated alternatives to this. First, you actually *can* move out through a borrowed pointer using RefCell>. You can see what this looks like here: https://gist.github.com/SiegeLord/e09c32b8cf2df72b2422 . I don't know how efficient that is, but it is certainly more fragile. With my by-value MoveMul implementation, the moves are checked by the compiler... in this case, they are not. It's easy to end up with a moved-out, dangling Matrix. This is what essentially has to be done, however, if you want to preserve the general semantic of the code. Alternatively, you can use lazy evaluation/expression templates. This is the route I take in my linear algebra library. Essentially, each operation returns a struct (akin to what happens with many Iterator methods) that stores the arguments by reference. When it comes time to perform assignment, the chained operations are performed element-wise. There are no unnecessary copies and it optimizes well. The problem is that its a lot more complicated to implement and it pretty much forces you to use interior mutability (just Cell this time) if you don't want a crippled API. The latter bit introduces a whole slew of subtle bugs (in my opinion they are less common than the ones introduced by RefCell). Also, I don't think expression templates are the correct way to wrap, e.g., a LAPACK library. I.e. they only work well when you're implementing the math yourself which is not ideal for the more complicated algorithms. Along the same lines, it is not immediately obvious to me how to extend this lazy evaluation idea to something like num::BigInt. So far, it seems like lazy evaluation will force dynamic dispatch in that case which is a big shame (i.e. you'd store the operations in one array, arguments in another and then play them back at the assignment time). So, I think the situation is pretty bad. What can be done to fix it? -SL From learnopengles at gmail.com Wed Jun 11 05:51:05 2014 From: learnopengles at gmail.com (Learn OpenGL ES) Date: Wed, 11 Jun 2014 08:51:05 -0400 Subject: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations In-Reply-To: References: Message-ID: <15BDEC9F-E191-4FF7-AEA6-9F1B113A3363@gmail.com> So I feel a little sheepish now, as I tried the C++ code again with LTO turned off, and it?s now slightly faster than the Rust version. I guess it?s a performance regression that gets triggered by LTO in this specific case. Here are the results I get now: clang PerformanceTest.cpp dsp.cpp -std=c++11 -ffast-math -O3 -o PerformanceTest Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn) Target: x86_64-apple-darwin13.2.0 Thread model: posix Iterations: 955 C results: 100,043,846 shorts per second. Rust is still very competitive with 93M shorts/second, and I would hope to see the gap narrowed or eliminated as the compiler & language continue to be improved. 
The code is also now available here if anyone is interested: https://gist.github.com/learnopengles/004ff4eee75057ca006c -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.gesemann at gmail.com Wed Jun 11 07:10:14 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Wed, 11 Jun 2014 16:10:14 +0200 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <53985939.3010603@aim.com> References: <53985939.3010603@aim.com> Message-ID: On Wed, Jun 11, 2014 at 3:27 PM, SiegeLord wrote: > [...] Along the same lines, it is not immediately obvious > to me how to extend this lazy evaluation idea to something like num::BigInt. > So far, it seems like lazy evaluation will force dynamic dispatch in that > case which is a big shame (i.e. you'd store the operations in one array, > arguments in another and then play them back at the assignment time). I havn't tried something like expression templates in Rust yet. How did you come to the conclusion that it would require dynamic dispatch? From dbau.pp at gmail.com Wed Jun 11 07:17:12 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Thu, 12 Jun 2014 00:17:12 +1000 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <53985939.3010603@aim.com> References: <53985939.3010603@aim.com> Message-ID: <539864E8.5050308@gmail.com> On 11/06/14 23:27, SiegeLord wrote: > Aside from somewhat more complicated impl's, are there any downsides > to never using anything but by value 'self' in traits? Currently trait objects do not support `self` methods (#10672), and, generally, the interactions with trait objects seem peculiar, e.g. if you've implemented Trait for &Type, then you would want to be coercing a `&Type` to a `&Trait`, *not* a `&(&Type)` as is currently required. However, I don't think these concerns affect the operator overloading traits. https://github.com/mozilla/rust/issues/10672 Huon From slabode at aim.com Wed Jun 11 08:58:48 2014 From: slabode at aim.com (SiegeLord) Date: Wed, 11 Jun 2014 11:58:48 -0400 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> Message-ID: <53987CB8.5020106@aim.com> On 06/11/2014 10:10 AM, Sebastian Gesemann wrote: > On Wed, Jun 11, 2014 at 3:27 PM, SiegeLord wrote: >> [...] Along the same lines, it is not immediately obvious >> to me how to extend this lazy evaluation idea to something like num::BigInt. >> So far, it seems like lazy evaluation will force dynamic dispatch in that >> case which is a big shame (i.e. you'd store the operations in one array, >> arguments in another and then play them back at the assignment time). > > I havn't tried something like expression templates in Rust yet. How > did you come to the conclusion that it would require dynamic dispatch? It's just the first idea I had with how this could work, but you're right, I can envision a way to do this without using dynamic dispatch. It'd look something like something like this: https://gist.github.com/SiegeLord/f1af81195df89ec04d10 . So, if nothing comes out of this discussion, at least you'd be able to do that. Note that the API is uglier, since you need to call 'eval' explicitly. Additionally, you need to manually borrow 'm' because you can't specify a lifetime of the &self argument in mul (another problem with by-ref-self methods). 
-SL From explodingmind at gmail.com Wed Jun 11 09:15:16 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Wed, 11 Jun 2014 12:15:16 -0400 Subject: [rust-dev] Building rustc @ 1GB RAM? In-Reply-To: References: <1401994016963.bf6ab09@Nodemailer> Message-ID: I have a dual core arm machine with 1GB of RAM keeping up with rust master - every 8hrs, it updates git, runs "make install," and 8hrs later I have an up-to-date rustc w/ libs. No swap, no compression kmods, just a build of rustc & libs that passes (almost) all tests. root at debian-0d0dd:/mnt/armscratch/node-v0.10.28# free -h; uname -a; cat > /proc/cpuinfo; rustc -v > total used free shared buffers cached > Mem: 1.0G 982M 24M 0B 155M 750M > -/+ buffers/cache: 76M 930M > Swap: 0B 0B 0B > Linux debian-0d0dd 3.4.79-r0-s20-rm2+ #54 SMP Tue Feb 18 01:09:07 YEKT > 2014 armv7l GNU/Linux > Processor : ARMv7 Processor rev 4 (v7l) > processor : 0 > BogoMIPS : 1819.52 > processor : 1 > BogoMIPS : 1819.52 > Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls > vfpv4 idiva idivt > CPU implementer : 0x41 > CPU architecture: 7 > CPU variant : 0x0 > CPU part : 0xc07 > CPU revision : 4 > Hardware : sun7i > Revision : 0000 > Serial : 0000000000000000 > rustc 0.11.0-pre (f92a8fa 2014-06-10 18:07:07 -0700) > host: arm-unknown-linux-gnueabihf On Tue, Jun 10, 2014 at 5:19 PM, Igor Bukanov wrote: > I tried building rust in a VM with 1GB of memory and it seems only > zswap works. With zram-only solution without any real swap I was not > able to compile rust at all. The compiler generated out-of-memory > exception with zram configured to take 30-70% of memory. With zswap > enabled, zswap.max_pool_percent=70 and the real swap of 2.5 GB the > compilation time for the latest tip was about 2 hours. This is on Mac > Air and Linux inside VirtualBox. > > On 5 June 2014 20:46, Ian Daniher wrote: > > zram is a great suggestion, thanks! I'll give it a shot. > > ? > > From My Tiny Glowing Screen > > > > > > On Thu, Jun 5, 2014 at 2:25 PM, Igor Bukanov wrote: > >> > >> Have you considered to use zram? Typically the compression for > >> compiler memory is over a factor of 3 so that can be an option as the > >> performance degradation under swapping could be tolerable. A similar > >> option is to enable zswap, but as the max compression with it is > >> effectively limited by factor of 2, it may not be enough to avoid > >> swapping. > >> > >> On 5 June 2014 20:13, Ian Daniher wrote: > >> > 1GB is close-ish to the 1.4GB last reported (over a month ago!) by > >> > http://huonw.github.io/isrustfastyet/mem/. > >> > > >> > Are there any workarounds to push the compilation memory down? I'm > also > >> > exploring distcc, but IRFY has a bit of semantic ambiguity as to > whether > >> > or > >> > not it's 1.4GB simultaneous or net total. > >> > > >> > Thanks! > >> > -- > >> > Ian > >> > > >> > _______________________________________________ > >> > Rust-dev mailing list > >> > Rust-dev at mozilla.org > >> > https://mail.mozilla.org/listinfo/rust-dev > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Wed Jun 11 10:23:11 2014 From: me at kevincantu.org (Kevin Cantu) Date: Wed, 11 Jun 2014 10:23:11 -0700 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: <1be6b7e90decf085737738bc47269ce5@endl.ch> Message-ID: Noam, that's awesome. 
It even works for tuples like so (I didn't think it would): ``` enum AaBbEnum { Aa, Bb, } trait AaBb { fn get_type(&self) -> AaBbEnum; fn get_aa(self) -> (int, f64) { fail!(); } fn get_bb(self) -> (f64) { fail!(); } } impl AaBb for (int, f64) { fn get_type(&self) -> AaBbEnum { Aa } fn get_aa(self) -> (int, f64) { self } } impl AaBb for (f64) { fn get_type(&self) -> AaBbEnum { Bb } fn get_bb(self) -> (f64) { self } } #[cfg(not(test))] fn overloaded(x: T) { match x.get_type() { Aa => println!("got Aa: {}", x.get_aa()), Bb => println!("got Bb: {}", x.get_bb()), } } fn overloaded_format(x: T) -> String { match x.get_type() { Aa => format!("got Aa: {}", x.get_aa()), Bb => format!("got Bb: {}", x.get_bb()), } } #[cfg(not(test))] #[main] fn main() { overloaded((5i, 7.3243)); // prints: got Aa: (5, 7.3243) overloaded((3.5)); // prints: got Bb: 3.5 } #[test] fn overloaded_with_same_return_works() { // now with a shared return let x: String = overloaded_format((5i, 7.3243)); let y: String = overloaded_format((3.5)); assert_eq!(x, "got Aa: (5, 7.3243)".to_string()); assert_eq!(y, "got Bb: 3.5".to_string()); } ``` I imagine if the functions being overloaded have different return types, this gets uglier to use, but this is pretty good! Kevin On Wed, Jun 11, 2014 at 4:35 AM, Noam Yorav-Raphael wrote: > You can achieve overloading which is equivalent to C++ by defining a trait > for all the types a specific argument can get: > > ``` > enum IntOrFloatEnum { > Int, > F64, > } > > trait IntOrFloat { > fn get_type(&self) -> IntOrFloatEnum; > fn get_int(self) -> int { fail!(); } > fn get_f64(self) -> f64 { fail!(); } > } > > impl IntOrFloat for int { > fn get_type(&self) -> IntOrFloatEnum { Int } > fn get_int(self) -> int { self } > } > > impl IntOrFloat for f64 { > fn get_type(&self) -> IntOrFloatEnum { F64 } > fn get_f64(self) -> f64 { self } > } > > fn overloaded(x: T) { > match x.get_type() { > Int => println!("got int: {}", x.get_int()), > F64 => println!("got f64: {}", x.get_f64()), > } > } > > fn main() { > overloaded(5i); // prints: got int: 5 > overloaded(3.5); // prints: got f64: 3.5 > } > ``` > > This is equivalent to having to functions, overloaded(int) and > overloaded(f64). From what I see, the compiler even optimizes away the > logic, so the generated code is actually equivalent to this: > > ``` > fn overloaded_int(x: int) { println!("got int: {}", x); } > fn overloaded_f64(x: f64) { println!("got f64: {}", x); } > fn main() { > overloaded_int(5i); > overloaded_f64(3.5); > } > ``` > > (I actually think that if Rust gains one day some support for overloading, > it should be syntactic sugar for the above, which will allow you to define > a function whose argument can be of multiple types. I don't like the C++ > style of defining several different functions with the same name and > letting the compiler choose which function should actually be called). > > Using this method you can solve both the problem of overloading and > default arguments. For every possible number of arguments that C++ would > allow, define a function funcN(arg0: T0, arg1: T1, ..., > argN-1: TN-1). The function would check the actual types of the arguments > and call the right C++ function, filling default arguments on the way. So > the only difference between C++ and Rust code would be that you'd have to > add the number of arguments to the method name. > > It would probably not be easy to generate the required code, but I think > it would solve the problem perfectly. 
> > Cheers, > Noam > > > On Thu, May 22, 2014 at 11:27 PM, Alexander Tsvyashchenko > wrote: > >> Hi All, >> >> Recently I was playing with bindings generator from C++ to Rust. I >> managed to make things work for Qt5 wrapping, but stumbled into multiple >> issues along the way. >> >> I tried to summarize my "pain points" in the following blog post: >> http://endl.ch/content/cxx2rust-pains-wrapping-c-rust-example-qt5 >> >> I hope that others might benefit from my experience and that some of >> these "pain points" can be fixed in Rust. >> >> I'll try to do my best in answering questions / acting on feedback, if >> any, but I have very limited amount of free time right now so sorry in >> advance if answers take some time. >> >> Thanks! >> >> -- >> Good luck! Alexander >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Wed Jun 11 10:54:01 2014 From: rusty.gates at icloud.com (Tommi) Date: Wed, 11 Jun 2014 20:54:01 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <53987CB8.5020106@aim.com> References: <53985939.3010603@aim.com> <53987CB8.5020106@aim.com> Message-ID: If the `Mul` trait and similar were changed to take `self` by value, perhaps the following kind of language design would make more sense: If a variable of a type that has a destructor is passed to a function by value (moved), and the variable is used after the function call, the variable would be implicitly cloned before passing it to the function. From danielmicay at gmail.com Wed Jun 11 11:33:40 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Wed, 11 Jun 2014 14:33:40 -0400 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <53987CB8.5020106@aim.com> Message-ID: <5398A104.40803@gmail.com> On 11/06/14 01:54 PM, Tommi wrote: > If the `Mul` trait and similar were changed to take `self` by value, perhaps the following kind of language design would make more sense: > > If a variable of a type that has a destructor is passed to a function by value (moved), and the variable is used after the function call, the variable would be implicitly cloned before passing it to the function. Cloning big integers, rationals based on big integers or arbitrary precision floating point values for every single operation has a high cost. One of Rust's strength's is that it doesn't have implicit cloning as C++ does due to copy constructors. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From rusty.gates at icloud.com Wed Jun 11 11:47:44 2014 From: rusty.gates at icloud.com (Tommi) Date: Wed, 11 Jun 2014 21:47:44 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) 
In-Reply-To: <5398A104.40803@gmail.com> References: <53985939.3010603@aim.com> <53987CB8.5020106@aim.com> <5398A104.40803@gmail.com> Message-ID: <62711649-3E7F-443C-AFF3-5EE8FA3182E2@icloud.com> On 2014-06-11, at 21:33, Daniel Micay wrote: > Cloning big integers, rationals based on big integers or arbitrary > precision floating point values for every single operation has a high > cost. I didn't say that all functions should start taking their arguments by value. I said `Mul` and similar should do it, i.e. functions that take a variable and return a variable of that same type. Instead of passing by reference and making a clone of the passed reference inside those functions, you force the caller to make the clone and mutate the passed argument in place. This enables the C++ like rvalue reference optimization for functions like multiplication. -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Wed Jun 11 11:52:47 2014 From: corey at octayn.net (Corey Richardson) Date: Wed, 11 Jun 2014 11:52:47 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <62711649-3E7F-443C-AFF3-5EE8FA3182E2@icloud.com> References: <53985939.3010603@aim.com> <53987CB8.5020106@aim.com> <5398A104.40803@gmail.com> <62711649-3E7F-443C-AFF3-5EE8FA3182E2@icloud.com> Message-ID: Keeping in mind that the `self` value here can be a reference. Ie, implementing the traits also for references to a type. On Wed, Jun 11, 2014 at 11:47 AM, Tommi wrote: > On 2014-06-11, at 21:33, Daniel Micay wrote: > > Cloning big integers, rationals based on big integers or arbitrary > precision floating point values for every single operation has a high > cost. > > > I didn't say that all functions should start taking their arguments by > value. I said `Mul` and similar should do it, i.e. functions that take a > variable and return a variable of that same type. Instead of passing by > reference and making a clone of the passed reference inside those functions, > you force the caller to make the clone and mutate the passed argument in > place. This enables the C++ like rvalue reference optimization for functions > like multiplication. > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- http://octayn.net/ From explodingmind at gmail.com Wed Jun 11 13:26:19 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Wed, 11 Jun 2014 16:26:19 -0400 Subject: [rust-dev] Building rustc @ 1GB RAM? In-Reply-To: References: <1401994016963.bf6ab09@Nodemailer> Message-ID: Output of "make check" for those of you who are interested. failures: [run-pass] run-pass/intrinsic-alignment.rs [run-pass] run-pass/rec-align-u64.rs [run-pass] run-pass/stat.rs test result: FAILED. 1469 passed; 3 failed; 32 ignored; 0 measured On Wed, Jun 11, 2014 at 12:15 PM, Ian Daniher wrote: > I have a dual core arm machine with 1GB of RAM keeping up with rust master > - every 8hrs, it updates git, runs "make install," and 8hrs later I have an > up-to-date rustc w/ libs. > > No swap, no compression kmods, just a build of rustc & libs that passes > (almost) all tests. 
> > root at debian-0d0dd:/mnt/armscratch/node-v0.10.28# free -h; uname -a; cat >> /proc/cpuinfo; rustc -v >> total used free shared buffers cached >> Mem: 1.0G 982M 24M 0B 155M 750M >> -/+ buffers/cache: 76M 930M >> Swap: 0B 0B 0B >> Linux debian-0d0dd 3.4.79-r0-s20-rm2+ #54 SMP Tue Feb 18 01:09:07 YEKT >> 2014 armv7l GNU/Linux >> Processor : ARMv7 Processor rev 4 (v7l) >> processor : 0 >> BogoMIPS : 1819.52 >> processor : 1 >> BogoMIPS : 1819.52 >> Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls >> vfpv4 idiva idivt >> CPU implementer : 0x41 >> CPU architecture: 7 >> CPU variant : 0x0 >> CPU part : 0xc07 >> CPU revision : 4 >> Hardware : sun7i >> Revision : 0000 >> Serial : 0000000000000000 >> rustc 0.11.0-pre (f92a8fa 2014-06-10 18:07:07 -0700) >> host: arm-unknown-linux-gnueabihf > > > > > On Tue, Jun 10, 2014 at 5:19 PM, Igor Bukanov wrote: > >> I tried building rust in a VM with 1GB of memory and it seems only >> zswap works. With zram-only solution without any real swap I was not >> able to compile rust at all. The compiler generated out-of-memory >> exception with zram configured to take 30-70% of memory. With zswap >> enabled, zswap.max_pool_percent=70 and the real swap of 2.5 GB the >> compilation time for the latest tip was about 2 hours. This is on Mac >> Air and Linux inside VirtualBox. >> >> On 5 June 2014 20:46, Ian Daniher wrote: >> > zram is a great suggestion, thanks! I'll give it a shot. >> > ? >> > From My Tiny Glowing Screen >> > >> > >> > On Thu, Jun 5, 2014 at 2:25 PM, Igor Bukanov wrote: >> >> >> >> Have you considered to use zram? Typically the compression for >> >> compiler memory is over a factor of 3 so that can be an option as the >> >> performance degradation under swapping could be tolerable. A similar >> >> option is to enable zswap, but as the max compression with it is >> >> effectively limited by factor of 2, it may not be enough to avoid >> >> swapping. >> >> >> >> On 5 June 2014 20:13, Ian Daniher wrote: >> >> > 1GB is close-ish to the 1.4GB last reported (over a month ago!) by >> >> > http://huonw.github.io/isrustfastyet/mem/. >> >> > >> >> > Are there any workarounds to push the compilation memory down? I'm >> also >> >> > exploring distcc, but IRFY has a bit of semantic ambiguity as to >> whether >> >> > or >> >> > not it's 1.4GB simultaneous or net total. >> >> > >> >> > Thanks! >> >> > -- >> >> > Ian >> >> > >> >> > _______________________________________________ >> >> > Rust-dev mailing list >> >> > Rust-dev at mozilla.org >> >> > https://mail.mozilla.org/listinfo/rust-dev >> >> > >> > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noamraph at gmail.com Wed Jun 11 13:30:40 2014 From: noamraph at gmail.com (Noam Yorav-Raphael) Date: Wed, 11 Jun 2014 23:30:40 +0300 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: <1be6b7e90decf085737738bc47269ce5@endl.ch> Message-ID: Thanks! I looked at http://qt-project.org/doc/qt-4.8/qwidget.html, which has several overloaded functions, and didn't find overloaded functions with different return types. If there are, there are probably rare - I really think that overloaded functions with different return types are an abomination. On Wed, Jun 11, 2014 at 8:23 PM, Kevin Cantu wrote: > Noam, that's awesome. 
It even works for tuples like so (I didn't think it > would): > > ``` > enum AaBbEnum { > Aa, > Bb, > } > > trait AaBb { > fn get_type(&self) -> AaBbEnum; > fn get_aa(self) -> (int, f64) { fail!(); } > fn get_bb(self) -> (f64) { fail!(); } > } > > impl AaBb for (int, f64) { > fn get_type(&self) -> AaBbEnum { Aa } > fn get_aa(self) -> (int, f64) { self } > } > > impl AaBb for (f64) { > fn get_type(&self) -> AaBbEnum { Bb } > fn get_bb(self) -> (f64) { self } > } > > #[cfg(not(test))] > fn overloaded(x: T) { > match x.get_type() { > Aa => println!("got Aa: {}", x.get_aa()), > Bb => println!("got Bb: {}", x.get_bb()), > } > } > > fn overloaded_format(x: T) -> String { > match x.get_type() { > Aa => format!("got Aa: {}", x.get_aa()), > Bb => format!("got Bb: {}", x.get_bb()), > } > } > > #[cfg(not(test))] > #[main] > fn main() { > overloaded((5i, 7.3243)); // prints: got Aa: (5, 7.3243) > overloaded((3.5)); // prints: got Bb: 3.5 > } > > #[test] > fn overloaded_with_same_return_works() { > // now with a shared return > let x: String = overloaded_format((5i, 7.3243)); > let y: String = overloaded_format((3.5)); > assert_eq!(x, "got Aa: (5, 7.3243)".to_string()); > assert_eq!(y, "got Bb: 3.5".to_string()); > } > ``` > > I imagine if the functions being overloaded have different return types, > this gets uglier to use, but this is pretty good! > > > Kevin > > > On Wed, Jun 11, 2014 at 4:35 AM, Noam Yorav-Raphael > wrote: > >> You can achieve overloading which is equivalent to C++ by defining a >> trait for all the types a specific argument can get: >> >> ``` >> enum IntOrFloatEnum { >> Int, >> F64, >> } >> >> trait IntOrFloat { >> fn get_type(&self) -> IntOrFloatEnum; >> fn get_int(self) -> int { fail!(); } >> fn get_f64(self) -> f64 { fail!(); } >> } >> >> impl IntOrFloat for int { >> fn get_type(&self) -> IntOrFloatEnum { Int } >> fn get_int(self) -> int { self } >> } >> >> impl IntOrFloat for f64 { >> fn get_type(&self) -> IntOrFloatEnum { F64 } >> fn get_f64(self) -> f64 { self } >> } >> >> fn overloaded(x: T) { >> match x.get_type() { >> Int => println!("got int: {}", x.get_int()), >> F64 => println!("got f64: {}", x.get_f64()), >> } >> } >> >> fn main() { >> overloaded(5i); // prints: got int: 5 >> overloaded(3.5); // prints: got f64: 3.5 >> } >> ``` >> >> This is equivalent to having to functions, overloaded(int) and >> overloaded(f64). From what I see, the compiler even optimizes away the >> logic, so the generated code is actually equivalent to this: >> >> ``` >> fn overloaded_int(x: int) { println!("got int: {}", x); } >> fn overloaded_f64(x: f64) { println!("got f64: {}", x); } >> fn main() { >> overloaded_int(5i); >> overloaded_f64(3.5); >> } >> ``` >> >> (I actually think that if Rust gains one day some support for >> overloading, it should be syntactic sugar for the above, which will allow >> you to define a function whose argument can be of multiple types. I don't >> like the C++ style of defining several different functions with the same >> name and letting the compiler choose which function should actually be >> called). >> >> Using this method you can solve both the problem of overloading and >> default arguments. For every possible number of arguments that C++ would >> allow, define a function funcN(arg0: T0, arg1: T1, ..., >> argN-1: TN-1). The function would check the actual types of the arguments >> and call the right C++ function, filling default arguments on the way. 
So >> the only difference between C++ and Rust code would be that you'd have to >> add the number of arguments to the method name. >> >> It would probably not be easy to generate the required code, but I think >> it would solve the problem perfectly. >> >> Cheers, >> Noam >> >> >> On Thu, May 22, 2014 at 11:27 PM, Alexander Tsvyashchenko >> wrote: >> >>> Hi All, >>> >>> Recently I was playing with bindings generator from C++ to Rust. I >>> managed to make things work for Qt5 wrapping, but stumbled into multiple >>> issues along the way. >>> >>> I tried to summarize my "pain points" in the following blog post: >>> http://endl.ch/content/cxx2rust-pains-wrapping-c-rust-example-qt5 >>> >>> I hope that others might benefit from my experience and that some of >>> these "pain points" can be fixed in Rust. >>> >>> I'll try to do my best in answering questions / acting on feedback, if >>> any, but I have very limited amount of free time right now so sorry in >>> advance if answers take some time. >>> >>> Thanks! >>> >>> -- >>> Good luck! Alexander >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Wed Jun 11 14:41:15 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 00:41:15 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <62711649-3E7F-443C-AFF3-5EE8FA3182E2@icloud.com> References: <53985939.3010603@aim.com> <53987CB8.5020106@aim.com> <5398A104.40803@gmail.com> <62711649-3E7F-443C-AFF3-5EE8FA3182E2@icloud.com> Message-ID: <8AA00226-3E5A-499A-99A5-072D65EAC0D5@icloud.com> On 2014-06-11, at 21:47, Tommi wrote: > I said `Mul` and similar should do it, i.e. functions that take a variable and return a variable of that same type. Although, a larger issue of genericity is that multiplication doesn't always return the same type as one of its arguments. -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at kevincantu.org Wed Jun 11 16:13:35 2014 From: me at kevincantu.org (Kevin Cantu) Date: Wed, 11 Jun 2014 16:13:35 -0700 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: <1be6b7e90decf085737738bc47269ce5@endl.ch> Message-ID: Matthew Monrocq suggests this improvement, which looks even cleaner to use, although slightly more complicated to implement generation of: On Wed, Jun 11, 2014 at 11:38 AM, Matthieu Monrocq < matthieu.monrocq at gmail.com> wrote: > [snip] > > I do like the idea of the trait, however I would rather do away with all > the `get_aa`: why not directly wrap the parameters ? 
> > enum AaBbEnum { > Aa(int, f64), > Bb(f64), > } > trait AaBb { > fn get(&self) -> AaBbEnum; > > } > > impl AaBb for (int, f64) { > fn get(&self) -> AaBbEnum { match *self { (i, f) => Aa(i, f), } } > } > > impl AaBb for (f64) { > fn get(&self) -> AaBbEnum { Bb(*self) } > > } > > fn overloaded(x: T) { > match x.get() { > Aa(i, f) => println!("got Aa: {}", (i, f)), > Bb(f) => println!("got Bb: {}", f), > > } > } > > #[main] > fn main() { > overloaded((5i, 7.3243)); // prints: got Aa: (5, 7.3243) > overloaded((3.5)); // prints: got Bb: 3.5 > } > > Now, there is no runtime failure => you cannot accidentally match on `Bb` > and requests `get_aa`! > Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hayakawa at valinux.co.jp Tue Jun 10 23:43:54 2014 From: hayakawa at valinux.co.jp (Akira Hayakawa) Date: Wed, 11 Jun 2014 15:43:54 +0900 Subject: [rust-dev] Is there a Parsec equivalent in Rust? Message-ID: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> Hi, Haskell's Parsec is really a good tool to parse languages. Scala also has the equivalent. What about Rust? -- Akira Hayakawa From noamraph at gmail.com Wed Jun 11 23:25:16 2014 From: noamraph at gmail.com (Noam Yorav-Raphael) Date: Thu, 12 Jun 2014 09:25:16 +0300 Subject: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback In-Reply-To: References: <1be6b7e90decf085737738bc47269ce5@endl.ch> Message-ID: Cool. I was afraid that it will be harder for the compiler to optimize away the enum, but it seems to be doing fine. (If it does turn out that it's harder for the compiler, I don't see a real problem with the approach I suggested, as a runtime failure can only be caused by a bug in the code generator, not by user code) On Thu, Jun 12, 2014 at 2:13 AM, Kevin Cantu wrote: > Matthew Monrocq suggests this improvement, which looks even cleaner to > use, although slightly more complicated to implement generation of: > > > On Wed, Jun 11, 2014 at 11:38 AM, Matthieu Monrocq < > matthieu.monrocq at gmail.com> wrote: > >> [snip] >> >> I do like the idea of the trait, however I would rather do away with all >> the `get_aa`: why not directly wrap the parameters ? >> >> enum AaBbEnum { >> Aa(int, f64), >> Bb(f64), >> } >> trait AaBb { >> fn get(&self) -> AaBbEnum; >> >> } >> >> impl AaBb for (int, f64) { >> fn get(&self) -> AaBbEnum { match *self { (i, f) => Aa(i, f), } } >> } >> >> impl AaBb for (f64) { >> fn get(&self) -> AaBbEnum { Bb(*self) } >> >> } >> >> fn overloaded(x: T) { >> match x.get() { >> Aa(i, f) => println!("got Aa: {}", (i, f)), >> Bb(f) => println!("got Bb: {}", f), >> >> } >> } >> >> #[main] >> fn main() { >> overloaded((5i, 7.3243)); // prints: got Aa: (5, 7.3243) >> overloaded((3.5)); // prints: got Bb: 3.5 >> } >> >> Now, there is no runtime failure => you cannot accidentally match on `Bb` >> and requests `get_aa`! >> > > > > Kevin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Thu Jun 12 01:02:33 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 11:02:33 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <53985939.3010603@aim.com> References: <53985939.3010603@aim.com> Message-ID: On 2014-06-11, at 16:27, SiegeLord wrote: > So, I think the situation is pretty bad. What can be done to fix it? I agree that this seems like a serious regression from C++. If it won't be fixed, I think I'll rather stick with C++. Better the devil you know... 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From clonearmy at gmail.com Thu Jun 12 02:52:21 2014 From: clonearmy at gmail.com (Meredith L. Patterson) Date: Thu, 12 Jun 2014 02:52:21 -0700 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> Message-ID: I have been meaning to write Rust bindings for Hammer ( https://github.com/UpstandingHackers/hammer), my C parser-combinator library which is loosely inspired by Parsec and gratuitously rips off Scala's packrat parser implementation. There is an issue open for it ( https://github.com/UpstandingHackers/hammer/issues/64), which left off in December 2013 with the blocking problem that the Rust FFI didn't support unions yet. I haven't been following the development of the FFI; are unions supported yet? Cheers, --mlp On Tue, Jun 10, 2014 at 11:43 PM, Akira Hayakawa wrote: > Hi, > > Haskell's Parsec is really a good tool to parse languages. > Scala also has the equivalent. > > What about Rust? > > -- > Akira Hayakawa > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From explodingmind at gmail.com Thu Jun 12 05:50:05 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Thu, 12 Jun 2014 08:50:05 -0400 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> Message-ID: Might be worth checking out https://github.com/kevinmehall/rust-peg. On Wed, Jun 11, 2014 at 2:43 AM, Akira Hayakawa wrote: > Hi, > > Haskell's Parsec is really a good tool to parse languages. > Scala also has the equivalent. > > What about Rust? > > -- > Akira Hayakawa > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rick.richardson at gmail.com Thu Jun 12 08:03:24 2014 From: rick.richardson at gmail.com (Rick Richardson) Date: Thu, 12 Jun 2014 11:03:24 -0400 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> Message-ID: I am by no means a SWIG expert. But would it be possible to work around the missing discriminated union functionality by supplying a typemap and using that to generate a set of Enums? That would would likely result in a more Rust-ish interface as well. On Thu, Jun 12, 2014 at 8:50 AM, Ian Daniher wrote: > Might be worth checking out https://github.com/kevinmehall/rust-peg. > > > On Wed, Jun 11, 2014 at 2:43 AM, Akira Hayakawa > wrote: > >> Hi, >> >> Haskell's Parsec is really a good tool to parse languages. >> Scala also has the equivalent. >> >> What about Rust? 
>> >> -- >> Akira Hayakawa >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -- "Historically, the most terrible things - war, genocide, and slavery - have resulted not from disobedience, but from obedience" -- Howard Zinn -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Thu Jun 12 09:05:52 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Thu, 12 Jun 2014 09:05:52 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> Message-ID: <5399CFE0.5070505@mozilla.com> On 6/12/14 1:02 AM, Tommi wrote: > On 2014-06-11, at 16:27, SiegeLord > wrote: > >> So, I think the situation is pretty bad. What can be done to fix it? > > I agree that this seems like a serious regression from C++. If it won't > be fixed, I think I'll rather stick with C++. Better the devil you know... This message is unhelpful. Patrick From pcwalton at mozilla.com Thu Jun 12 09:07:45 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Thu, 12 Jun 2014 09:07:45 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <539864E8.5050308@gmail.com> References: <53985939.3010603@aim.com> <539864E8.5050308@gmail.com> Message-ID: <5399D051.8000101@mozilla.com> On 6/11/14 7:17 AM, Huon Wilson wrote: > On 11/06/14 23:27, SiegeLord wrote: >> Aside from somewhat more complicated impl's, are there any downsides >> to never using anything but by value 'self' in traits? > > Currently trait objects do not support `self` methods (#10672) We'll have to fix this for unboxed closures, so it's a 1.0 thing. Patrick From pcwalton at mozilla.com Thu Jun 12 09:08:24 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Thu, 12 Jun 2014 09:08:24 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <53985939.3010603@aim.com> References: <53985939.3010603@aim.com> Message-ID: <5399D078.2040508@mozilla.com> On 6/11/14 6:27 AM, SiegeLord wrote: > So, I think the situation is pretty bad. What can be done to fix it? Seems to me we can just make the overloaded operator traits take by-value self. Patrick From rusty.gates at icloud.com Thu Jun 12 09:41:57 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 19:41:57 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <5399CFE0.5070505@mozilla.com> References: <53985939.3010603@aim.com> <5399CFE0.5070505@mozilla.com> Message-ID: <79681714-D913-4A0D-91E9-53005B1BC456@icloud.com> On 2014-06-12, at 19:05, Patrick Walton wrote: > On 6/12/14 1:02 AM, Tommi wrote: >> On 2014-06-11, at 16:27, SiegeLord > > wrote: >> >>> So, I think the situation is pretty bad. What can be done to fix it? >> >> I agree that this seems like a serious regression from C++. If it won't >> be fixed, I think I'll rather stick with C++. Better the devil you know... > > This message is unhelpful. My post was potentially helpful in conveying the seriousness of this issue to someone who might not have considered it serious. I thought of it as advertisement for an important thread that wasn't getting much traction. 
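For readers skimming the thread, the change Patrick suggests above ("make the overloaded operator traits take by-value self") amounts to roughly the following shift in the trait's shape. This is a sketch only, not the text of any RFC, with the traits renamed here so that both shapes can sit side by side:

```
// The shape the operator traits had at the time: receiver and operand by reference.
pub trait MulByRef<RHS, Result> {
    fn mul(&self, rhs: &RHS) -> Result;
}

// The by-value shape under discussion; some variants keep `rhs` by reference.
pub trait MulByValue<RHS, Result> {
    fn mul(self, rhs: RHS) -> Result;
}
```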
From rusty.gates at icloud.com Thu Jun 12 10:03:40 2014
From: rusty.gates at icloud.com (Tommi)
Date: Thu, 12 Jun 2014 20:03:40 +0300
Subject: [rust-dev] &self/&mut self in traits considered harmful(?)
In-Reply-To: <5399D078.2040508@mozilla.com>
References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com>
Message-ID: <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com>

On 2014-06-12, at 19:08, Patrick Walton wrote:

> On 6/11/14 6:27 AM, SiegeLord wrote:
>> So, I think the situation is pretty bad. What can be done to fix it?
>
> Seems to me we can just make the overloaded operator traits take by-value self.

I definitely wouldn't want to see something like the following:

pub trait GreaterByOne {
    fn greater_by_one(self) -> Self;
}

pub fn my_algorithm<T: GreaterByOne + Add<T, T>>(value: T) -> T {
    value.greater_by_one() +
    value.greater_by_one() // error: use of moved value: `value`
}

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pwalton at mozilla.com Thu Jun 12 10:17:19 2014
From: pwalton at mozilla.com (Patrick Walton)
Date: Thu, 12 Jun 2014 10:17:19 -0700
Subject: [rust-dev] &self/&mut self in traits considered harmful(?)
In-Reply-To: <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com>
References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com>
Message-ID: <650d6a4f-86bf-4ec3-a998-d634c05af34b@email.android.com>

You could just clone the value to get around that error.

On June 12, 2014 10:03:40 AM PDT, Tommi wrote:
> On 2014-06-12, at 19:08, Patrick Walton wrote:
>
>> On 6/11/14 6:27 AM, SiegeLord wrote:
>>> So, I think the situation is pretty bad. What can be done to fix it?
>>
>> Seems to me we can just make the overloaded operator traits take
>> by-value self.
>
> I definitely wouldn't want to see something like the following:
>
> pub trait GreaterByOne {
>     fn greater_by_one(self) -> Self;
> }
>
> pub fn my_algorithm<T: GreaterByOne + Add<T, T>>(value: T) -> T {
>     value.greater_by_one() +
>     value.greater_by_one() // error: use of moved value: `value`
> }

--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From corey at octayn.net Thu Jun 12 10:18:56 2014
From: corey at octayn.net (Corey Richardson)
Date: Thu, 12 Jun 2014 10:18:56 -0700
Subject: [rust-dev] &self/&mut self in traits considered harmful(?)
In-Reply-To: <650d6a4f-86bf-4ec3-a998-d634c05af34b@email.android.com>
References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <650d6a4f-86bf-4ec3-a998-d634c05af34b@email.android.com>
Message-ID:

Or bound by Copy.

On Thu, Jun 12, 2014 at 10:17 AM, Patrick Walton wrote:
> You could just clone the value to get around that error.
>
>
> On June 12, 2014 10:03:40 AM PDT, Tommi wrote:
>>
>> On 2014-06-12, at 19:08, Patrick Walton wrote:
>>
>> On 6/11/14 6:27 AM, SiegeLord wrote:
>>
>> So, I think the situation is pretty bad. What can be done to fix it?
>>
>>
>> Seems to me we can just make the overloaded operator traits take by-value
>> self.
>>
>>
>> I definitely wouldn't want to see something like the following:
>>
>> pub trait GreaterByOne {
>>     fn greater_by_one(self) -> Self;
>> }
>>
>> pub fn my_algorithm<T: GreaterByOne + Add<T, T>>(value: T) -> T {
>>     value.greater_by_one() +
>>     value.greater_by_one() // error: use of moved value: `value`
>> }
>>
>
> --
> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
> > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- http://octayn.net/ From rusty.gates at icloud.com Thu Jun 12 10:26:07 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 20:26:07 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> Message-ID: I think a new keyword, something like `stable`, is needed for specifying that an argument passed to a trait function is guaranteed to be logically unchanged after the function call. For example: trait Foo { fn foo(stable self); } impl Foo for int { fn foo(&self) {} // OK } impl Foo for uint { fn foo(self) {} // OK } impl Foo for Box { fn foo(stable self) {} // OK (implicitly clones self) } fn main() { let x: Box = box 42; x.foo(); // `x` is implicitly cloned x.foo(); // OK } From corey at octayn.net Thu Jun 12 10:30:26 2014 From: corey at octayn.net (Corey Richardson) Date: Thu, 12 Jun 2014 10:30:26 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> Message-ID: It's called Copy. `trait Foo: Copy { ... }`. On Thu, Jun 12, 2014 at 10:26 AM, Tommi wrote: > I think a new keyword, something like `stable`, is needed for specifying that an argument passed to a trait function is guaranteed to be logically unchanged after the function call. For example: > > trait Foo { > fn foo(stable self); > } > > impl Foo for int { > fn foo(&self) {} // OK > } > > impl Foo for uint { > fn foo(self) {} // OK > } > > impl Foo for Box { > fn foo(stable self) {} // OK (implicitly clones self) > } > > > fn main() { > let x: Box = box 42; > x.foo(); // `x` is implicitly cloned > x.foo(); // OK > } > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- http://octayn.net/ From rusty.gates at icloud.com Thu Jun 12 10:46:18 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 20:46:18 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> Message-ID: `Copy` types aren't really relevant to a discussion about adding to Rust the C++ like optimization of moving rvalues (of non-Copy types) when they're passed to certain functions. On 2014-06-12, at 20:30, Corey Richardson wrote: > It's called Copy. `trait Foo: Copy { ... }`. > > On Thu, Jun 12, 2014 at 10:26 AM, Tommi wrote: >> I think a new keyword, something like `stable`, is needed for specifying that an argument passed to a trait function is guaranteed to be logically unchanged after the function call. 
For example: >> >> trait Foo { >> fn foo(stable self); >> } >> >> impl Foo for int { >> fn foo(&self) {} // OK >> } >> >> impl Foo for uint { >> fn foo(self) {} // OK >> } >> >> impl Foo for Box { >> fn foo(stable self) {} // OK (implicitly clones self) >> } >> >> >> fn main() { >> let x: Box = box 42; >> x.foo(); // `x` is implicitly cloned >> x.foo(); // OK >> } >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > > > -- > http://octayn.net/ From pcwalton at mozilla.com Thu Jun 12 10:51:25 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Thu, 12 Jun 2014 10:51:25 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> Message-ID: <5399E89D.8040905@mozilla.com> On 6/12/14 10:46 AM, Tommi wrote: > `Copy` types aren't really relevant to a discussion about adding to > Rust the C++ like optimization of moving rvalues (of non-Copy types) > when they're passed to certain functions. There's nothing to add to Rust. Rust supports moves. Patrick From corey at octayn.net Thu Jun 12 10:59:22 2014 From: corey at octayn.net (Corey Richardson) Date: Thu, 12 Jun 2014 10:59:22 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> Message-ID: Implicit cloning is a non-starter. Clones can be very expensive. Hiding that cost is undesirable and would require adding Clone to the language (it's currently a normal library feature). On Thu, Jun 12, 2014 at 10:46 AM, Tommi wrote: > `Copy` types aren't really relevant to a discussion about adding to Rust the C++ like optimization of moving rvalues (of non-Copy types) when they're passed to certain functions. > > On 2014-06-12, at 20:30, Corey Richardson wrote: > >> It's called Copy. `trait Foo: Copy { ... }`. >> >> On Thu, Jun 12, 2014 at 10:26 AM, Tommi wrote: >>> I think a new keyword, something like `stable`, is needed for specifying that an argument passed to a trait function is guaranteed to be logically unchanged after the function call. For example: >>> >>> trait Foo { >>> fn foo(stable self); >>> } >>> >>> impl Foo for int { >>> fn foo(&self) {} // OK >>> } >>> >>> impl Foo for uint { >>> fn foo(self) {} // OK >>> } >>> >>> impl Foo for Box { >>> fn foo(stable self) {} // OK (implicitly clones self) >>> } >>> >>> >>> fn main() { >>> let x: Box = box 42; >>> x.foo(); // `x` is implicitly cloned >>> x.foo(); // OK >>> } >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> -- >> http://octayn.net/ > -- http://octayn.net/ From rusty.gates at icloud.com Thu Jun 12 10:59:51 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 20:59:51 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) 
In-Reply-To: <5399E89D.8040905@mozilla.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <5399E89D.8040905@mozilla.com> Message-ID: On 2014-06-12, at 20:51, Patrick Walton wrote: > On 6/12/14 10:46 AM, Tommi wrote: >> `Copy` types aren't really relevant to a discussion about adding to >> Rust the C++ like optimization of moving rvalues (of non-Copy types) >> when they're passed to certain functions. > > There's nothing to add to Rust. Rust supports moves. You're right, I said it wrong (the last part, not the `Copy` being irrelevant part). There's no need to add to the language the ability to move (obviously it has that), but perhaps there's a need to add the ability to not move certain arguments by default, for convenience. I wouldn't want to see generic code littered with explicit .clone()'s all over the place. From romanovda at gmail.com Thu Jun 12 11:05:25 2014 From: romanovda at gmail.com (Dmitry Romanov) Date: Thu, 12 Jun 2014 22:05:25 +0400 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <5399E89D.8040905@mozilla.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <5399E89D.8040905@mozilla.com> Message-ID: Sorry, Im a little new to Rust, but I'm,as many, now considering moving from c++ to Rust and the topic is really important for my tasks. Could you give an Rust example for the concern listed above? Thank you! On Jun 12, 2014 9:51 PM, "Patrick Walton" wrote: > On 6/12/14 10:46 AM, Tommi wrote: > >> `Copy` types aren't really relevant to a discussion about adding to >> Rust the C++ like optimization of moving rvalues (of non-Copy types) >> when they're passed to certain functions. >> > > There's nothing to add to Rust. Rust supports moves. > > Patrick > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Thu Jun 12 11:15:07 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 21:15:07 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> Message-ID: <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> On 2014-06-12, at 20:59, Corey Richardson wrote: > Implicit cloning is a non-starter. Clones can be very expensive. > Hiding that cost is undesirable and would require adding Clone to the > language (it's currently a normal library feature). But I think it will be easy to make the error of writing the explicit .clone() in places where it's not needed. For example: fn foo(value: T) {} let x = box 123; x.clone().foo(); x.clone().foo(); ...given that `x` is not used after those lines, the last call to .clone() is unnecessary. Whereas, if the task of cloning (implicitly) is assigned to the compiler, then the compiler can use static analysis to make sure such programming errors never occur. The example above would become something like: fn foo(stable value: T) {} let x = box 123; x.foo(); // here `x` gets cloned here x.foo(); // here `x` doesn't get cloned because this is the last use of `x` -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pcwalton at mozilla.com Thu Jun 12 11:15:42 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Thu, 12 Jun 2014 11:15:42 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> Message-ID: <5399EE4E.60808@mozilla.com> On 6/12/14 11:15 AM, Tommi wrote: > On 2014-06-12, at 20:59, Corey Richardson > wrote: > >> Implicit cloning is a non-starter. Clones can be very expensive. >> Hiding that cost is undesirable and would require adding Clone to the >> language (it's currently a normal library feature). > > But I think it will be easy to make the error of writing the explicit > .clone() in places where it's not needed. For example: > > fn foo(value: T) {} > > let x = box 123; > x.clone().foo(); > x.clone().foo(); > > ...given that `x` is not used after those lines, the last call to > .clone() is unnecessary. Whereas, if the task of cloning (implicitly) is > assigned to the compiler, then the compiler can use static analysis to > make sure such programming errors never occur. The example above would > become something like: > > fn foo(stable value: T) {} > > let x = box 123; > x.foo(); // here `x` gets cloned here > x.foo(); // here `x` doesn't get cloned because this is the last use of `x` We tried that in earlier versions of Rust. There were way too many clones. Patrick From rusty.gates at icloud.com Thu Jun 12 11:23:35 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 21:23:35 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <5399EE4E.60808@mozilla.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> Message-ID: <106143AF-C804-43D2-94F7-AFD7DAAFF0DB@icloud.com> On 2014-06-12, at 21:15, Patrick Walton wrote: > On 6/12/14 11:15 AM, Tommi wrote: >> On 2014-06-12, at 20:59, Corey Richardson > > wrote: >> >>> Implicit cloning is a non-starter. Clones can be very expensive. >>> Hiding that cost is undesirable and would require adding Clone to the >>> language (it's currently a normal library feature). >> >> But I think it will be easy to make the error of writing the explicit >> .clone() in places where it's not needed. For example: >> >> fn foo(value: T) {} >> >> let x = box 123; >> x.clone().foo(); >> x.clone().foo(); >> >> ...given that `x` is not used after those lines, the last call to >> .clone() is unnecessary. Whereas, if the task of cloning (implicitly) is >> assigned to the compiler, then the compiler can use static analysis to >> make sure such programming errors never occur. The example above would >> become something like: >> >> fn foo(stable value: T) {} >> >> let x = box 123; >> x.foo(); // here `x` gets cloned here >> x.foo(); // here `x` doesn't get cloned because this is the last use of `x` > > We tried that in earlier versions of Rust. There were way too many clones. Oh, okay. But I bet you didn't have the `stable` keyword back then, and thus, all by-value arguments were implicitly `stable`, which I wouldn't suggest. Some functions clearly need to take a hold of the ownership of their argument and shouldn't need to guarantee that the variable passed in stays the same from the caller's point of view. 
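To make that last distinction concrete, here is a small sketch (mine, not code from the thread, in the same pre-1.0 syntax): a function that stores its argument really does need to own it, while a function that only inspects it can borrow it, and only the first kind forces the caller to clone when the value is still needed afterwards.

```
struct Registry {
    names: Vec<String>,
}

impl Registry {
    // Takes ownership: the String is kept inside the Registry.
    fn register(&mut self, name: String) {
        self.names.push(name);
    }

    // Only reads: a borrow is enough, and the caller keeps its value.
    fn contains(&self, name: &str) -> bool {
        self.names.iter().any(|existing| existing.as_slice() == name)
    }
}

fn main() {
    let mut registry = Registry { names: Vec::new() };
    let n = "alpha".to_string();
    registry.register(n.clone()); // cloned because `n` is still used below
    println!("{}", registry.contains(n.as_slice()));
}
```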
From rusty.gates at icloud.com Thu Jun 12 11:32:37 2014 From: rusty.gates at icloud.com (Tommi) Date: Thu, 12 Jun 2014 21:32:37 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <106143AF-C804-43D2-94F7-AFD7DAAFF0DB@icloud.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> <106143AF-C804-43D2-94F7-AFD7DAAFF0DB@icloud.com> Message-ID: For some reason I got really side-tracked here. The whole point of that `stable` keyword I proposed was not syntax sugar, but that it allows the implementor of such a trait to pass by reference when the operator shouldn't move the passed in argument(s). Like, when you multiply two matrices and the returned type is not the same size as neither of the arguments types, there's no point in modifying either of those arguments in place, but rather you need to allocate a new matrix. From laden at csclub.uwaterloo.ca Thu Jun 12 12:51:25 2014 From: laden at csclub.uwaterloo.ca (Luqman Aden) Date: Thu, 12 Jun 2014 12:51:25 -0700 Subject: [rust-dev] Building rustc @ 1GB RAM? In-Reply-To: References: <1401994016963.bf6ab09@Nodemailer> Message-ID: On Wed, Jun 11, 2014 at 1:26 PM, Ian Daniher wrote: > Output of "make check" for those of you who are interested. > > failures: > [run-pass] run-pass/intrinsic-alignment.rs > [run-pass] run-pass/rec-align-u64.rs > These are failing because it seems like we just overlooked adding the right cases for non-android arm & mips. I filed a bug here: https://github.com/mozilla/rust/issues/14848 > [run-pass] run-pass/stat.rs > I'm not too sure as to why this one is failing. I'll try and figure it out later. > > test result: FAILED. 1469 passed; 3 failed; 32 ignored; 0 measured > > > > On Wed, Jun 11, 2014 at 12:15 PM, Ian Daniher > wrote: > >> I have a dual core arm machine with 1GB of RAM keeping up with rust >> master - every 8hrs, it updates git, runs "make install," and 8hrs later I >> have an up-to-date rustc w/ libs. >> >> No swap, no compression kmods, just a build of rustc & libs that passes >> (almost) all tests. >> >> root at debian-0d0dd:/mnt/armscratch/node-v0.10.28# free -h; uname -a; cat >>> /proc/cpuinfo; rustc -v >>> total used free shared buffers cached >>> Mem: 1.0G 982M 24M 0B 155M 750M >>> -/+ buffers/cache: 76M 930M >>> Swap: 0B 0B 0B >>> Linux debian-0d0dd 3.4.79-r0-s20-rm2+ #54 SMP Tue Feb 18 01:09:07 YEKT >>> 2014 armv7l GNU/Linux >>> Processor : ARMv7 Processor rev 4 (v7l) >>> processor : 0 >>> BogoMIPS : 1819.52 >>> processor : 1 >>> BogoMIPS : 1819.52 >>> Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 >>> tls vfpv4 idiva idivt >>> CPU implementer : 0x41 >>> CPU architecture: 7 >>> CPU variant : 0x0 >>> CPU part : 0xc07 >>> CPU revision : 4 >>> Hardware : sun7i >>> Revision : 0000 >>> Serial : 0000000000000000 >>> rustc 0.11.0-pre (f92a8fa 2014-06-10 18:07:07 -0700) >>> host: arm-unknown-linux-gnueabihf >> >> >> >> >> On Tue, Jun 10, 2014 at 5:19 PM, Igor Bukanov wrote: >> >>> I tried building rust in a VM with 1GB of memory and it seems only >>> zswap works. With zram-only solution without any real swap I was not >>> able to compile rust at all. The compiler generated out-of-memory >>> exception with zram configured to take 30-70% of memory. With zswap >>> enabled, zswap.max_pool_percent=70 and the real swap of 2.5 GB the >>> compilation time for the latest tip was about 2 hours. 
This is on Mac >>> Air and Linux inside VirtualBox. >>> >>> On 5 June 2014 20:46, Ian Daniher wrote: >>> > zram is a great suggestion, thanks! I'll give it a shot. >>> > ? >>> > From My Tiny Glowing Screen >>> > >>> > >>> > On Thu, Jun 5, 2014 at 2:25 PM, Igor Bukanov wrote: >>> >> >>> >> Have you considered to use zram? Typically the compression for >>> >> compiler memory is over a factor of 3 so that can be an option as the >>> >> performance degradation under swapping could be tolerable. A similar >>> >> option is to enable zswap, but as the max compression with it is >>> >> effectively limited by factor of 2, it may not be enough to avoid >>> >> swapping. >>> >> >>> >> On 5 June 2014 20:13, Ian Daniher wrote: >>> >> > 1GB is close-ish to the 1.4GB last reported (over a month ago!) by >>> >> > http://huonw.github.io/isrustfastyet/mem/. >>> >> > >>> >> > Are there any workarounds to push the compilation memory down? I'm >>> also >>> >> > exploring distcc, but IRFY has a bit of semantic ambiguity as to >>> whether >>> >> > or >>> >> > not it's 1.4GB simultaneous or net total. >>> >> > >>> >> > Thanks! >>> >> > -- >>> >> > Ian >>> >> > >>> >> > _______________________________________________ >>> >> > Rust-dev mailing list >>> >> > Rust-dev at mozilla.org >>> >> > https://mail.mozilla.org/listinfo/rust-dev >>> >> > >>> > >>> > >>> >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richo at psych0tik.net Thu Jun 12 15:28:03 2014 From: richo at psych0tik.net (richo) Date: Thu, 12 Jun 2014 15:28:03 -0700 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> Message-ID: <20140612222803.GA36014@xenia.local> On 11/06/14 15:43 +0900, Akira Hayakawa wrote: >Hi, > >Haskell's Parsec is really a good tool to parse languages. >Scala also has the equivalent. > >What about Rust? > Largely offtopic, but I've spent the last few days thinking seriously about working on a ragel backend for rust, which while orthagonal to Parsec (which I love), would be another option for solving a lot of the same problems. From corey at octayn.net Thu Jun 12 15:31:33 2014 From: corey at octayn.net (Corey Richardson) Date: Thu, 12 Jun 2014 15:31:33 -0700 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: <20140612222803.GA36014@xenia.local> References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> <20140612222803.GA36014@xenia.local> Message-ID: We have a ragel backend. https://github.com/erickt/ragel On Thu, Jun 12, 2014 at 3:28 PM, richo wrote: > On 11/06/14 15:43 +0900, Akira Hayakawa wrote: >> >> Hi, >> >> Haskell's Parsec is really a good tool to parse languages. >> Scala also has the equivalent. >> >> What about Rust? >> > > Largely offtopic, but I've spent the last few days thinking seriously about > working on a ragel backend for rust, which while orthagonal to Parsec (which > I love), would be another option for solving a lot of the same problems. 
> > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- http://octayn.net/ From richo at psych0tik.net Thu Jun 12 16:44:31 2014 From: richo at psych0tik.net (richo) Date: Thu, 12 Jun 2014 16:44:31 -0700 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> <20140612222803.GA36014@xenia.local> Message-ID: <20140612234431.GA46695@xenia.local> On 12/06/14 15:31 -0700, Corey Richardson wrote: >We have a ragel backend. https://github.com/erickt/ragel > Ask and ye shall receive. Amazing! \o/ From steve at steveklabnik.com Thu Jun 12 21:41:22 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Fri, 13 Jun 2014 00:41:22 -0400 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: <20140612234431.GA46695@xenia.local> References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> <20140612222803.GA36014@xenia.local> <20140612234431.GA46695@xenia.local> Message-ID: It's not possible to directly write a Parsec port because we don't have HKT and therefore monads. Ragel is probably the best bet for now. From clonearmy at gmail.com Fri Jun 13 02:55:43 2014 From: clonearmy at gmail.com (Meredith L. Patterson) Date: Fri, 13 Jun 2014 02:55:43 -0700 Subject: [rust-dev] Is there a Parsec equivalent in Rust? In-Reply-To: References: <20140611154354.5e65a88f836529b874c551a8@valinux.co.jp> <20140612222803.GA36014@xenia.local> <20140612234431.GA46695@xenia.local> Message-ID: That seems a bit excessive. C doesn't have higher-kinded types or monads either, *and* it's strict, but we cloned Parsec effectively enough. Neither parser combinators nor PEG/packrat *require* monads, or even lazy evaluation for that matter; they're just easier to implement that way. You can even do Iteratees in C. http://code.khjk.org/citer/ Regarding Rick's SWIG question, really I'd rather avoid using SWIG for any future Hammer bindings, and eventually it'll be eliminated from the existing ones. But could someone explain what makes generating a set of Enums more Rust-ish than using a discriminated union? Don't get me wrong, I love Ragel, but it has its limitations (character-oriented, not really intended to handle recursion, extra code-generation step), and Hammer was partially written to address those. Cheers, --mlp On Fri, Jun 13, 2014 at 6:41 AM, Steve Klabnik wrote: > It's not possible to directly write a Parsec port because we don't > have HKT and therefore monads. Ragel is probably the best bet for now. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Fri Jun 13 03:14:42 2014 From: rusty.gates at icloud.com (Tommi) Date: Fri, 13 Jun 2014 13:14:42 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> <106143AF-C804-43D2-94F7-AFD7DAAFF0DB@icloud.com> Message-ID: <813E8D58-E58A-4C79-8BC4-2BB5839F42FC@icloud.com> The problem: Chained calls to certain operators such as binary `*` and `+` may cause unnecessary memory allocations. 
For example:

struct Vector {
    coordinates: Vec<int>
}

impl Mul<int, Vector> for Vector {
    fn mul(&self, rhs: &int) -> Vector {
        let mut new_coordinates = self.coordinates.clone();
        for c in new_coordinates.mut_iter() {
            *c *= *rhs;
        }
        Vector { coordinates: new_coordinates }
    }
}

fn get_vector() -> Vector {
    let v = Vector { coordinates: vec!(1, 2, 3) };
    v * 2 * 5
}

The last line of `get_vector` causes two new memory allocations. Preferably that line wouldn't allocate at all; it should take the guts out of `v` and multiply the coordinates in place.

The goal:

We want to be able to write the following function `calculate` and have it be guaranteed that `calculate` doesn't cause unnecessary memory allocations.

fn calculate<X, T: Mul<X, T> + Add<T, T>>(value: T, mult: X) -> T {
    value * mult * mult + value
}

Insufficient first idea for a solution:

Change the definition of the `Mul` trait to:

pub trait Mul<RHS, Result> {
    fn mul(self, rhs: &RHS) -> Result;
}

And then, change the implementation of `Mul` for `Vector` to:

impl Mul<int, Vector> for Vector {
    fn mul(self, rhs: &int) -> Vector {
        for c in self.coordinates.mut_iter() {
            *c *= *rhs;
        }
        self
    }
}

First of all, as a result of these changes, the `calculate` function wouldn't compile; it would complain about the last use of `value` with "error: use of moved value: `value`". This could be fixed by changing the definition of `calculate`, but this is not the main problem. The real problem is that for some types, the binary `*` operator shouldn't move `self` into the `mul` method. For example, when the return type of the `mul` method is different from the type of `self` (and both are heap allocated), then the `mul` method is forced to allocate a new value which it returns, and it should take `self` by reference.

My proposed solution:

Add a new keyword `stable` to the language. Marking a function argument `stable` gives the guarantee to the caller of that function that a variable passed in as that argument is logically unchanged after the function call ends. Then, change the definition of the `Mul` trait to:

pub trait Mul<RHS, Result> {
    fn mul(stable self, rhs: &RHS) -> Result;
}

Note: any other syntax for marking `self` as `stable` would be illegal.

One could implement `Mul` for any type by taking `self` by shared reference:

impl Mul<RHS, Result> for T {
    fn mul(&self, rhs: &RHS) -> Result { ... }
}

Or, one could implement `Mul` for any type that implements `Copy` by taking `self` by value:

impl Mul<RHS, Result> for T {
    fn mul(self, rhs: &RHS) -> Result { ... }
}

Or, one could implement `Mul` for any type that implements `Clone` by taking `self` by "stable value":

impl Mul<RHS, Result> for T {
    fn mul(stable self, rhs: &RHS) -> Result { ... }
}

Taking an argument by "stable value" (as `self` above) means that any (clonable) variable passed in as that argument is implicitly cloned before it's passed in, if the variable is potentially used after being passed in. For example:

impl Mul<int, Vector> for Vector {
    fn mul(stable self, rhs: &int) -> Vector {
        for c in self.coordinates.mut_iter() {
            *c *= *rhs;
        }
        self
    }
}

fn testing() {
    let mut v = Vector { coordinates: vec!(1, 2, 3) };
    v * 1; // Cloned due to not last use
    v * 1; // Not cloned due to last use before assignment
    v = Vector { coordinates: vec!(2, 4, 6) };
    v * 1; // Cloned due to not last use
    v = v * 1; // Not cloned due to last use before assignment
    v * 1; // Not cloned due to last use
}

Open questions:

What should happen for example with `Rc` types w.r.t. `stable`:

impl Mul<RHS, Result> for Rc<T> {
    fn mul(stable self, rhs: &RHS) -> Result { ... }
}
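To make the intended caller-side semantics concrete, this is what the `testing` function above would amount to if the clones the compiler is supposed to insert were written out by hand. It is a sketch of one reading of the proposal, assuming the plain by-value `impl Mul<int, Vector> for Vector` from the "insufficient first idea" section and a `Vector` that derives `Clone` (as the follow-up below notes):

```
fn testing_written_out() {
    let mut v = Vector { coordinates: vec!(1, 2, 3) };
    v.clone() * 1;  // `v` is used again below, so a clone is inserted
    v * 1;          // last use before the reassignment: moved, no clone
    v = Vector { coordinates: vec!(2, 4, 6) };
    v.clone() * 1;  // `v` is used again below, so a clone is inserted
    v = v * 1;      // last use before the assignment: moved, no clone
    v * 1;          // last use: moved, no clone
}
```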
URL:

From rusty.gates at icloud.com Fri Jun 13 03:39:51 2014
From: rusty.gates at icloud.com (Tommi)
Date: Fri, 13 Jun 2014 13:39:51 +0300
Subject: [rust-dev] &self/&mut self in traits considered harmful(?)
In-Reply-To: <813E8D58-E58A-4C79-8BC4-2BB5839F42FC@icloud.com>
References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> <106143AF-C804-43D2-94F7-AFD7DAAFF0DB@icloud.com> <813E8D58-E58A-4C79-8BC4-2BB5839F42FC@icloud.com>
Message-ID: <50AAC5BB-BAAD-4E37-8AB5-329DFCB159BF@icloud.com>

That was assuming `Vector` implements `Clone`. For example:

    #[deriving(Clone)]
    struct Vector {
        coordinates: Vec<int>
    }

On 2014-06-13, at 13:14, Tommi wrote:

> Taking an argument by "stable value" (as `self` above) means that any (clonable) variable passed in as that argument is implicitly cloned before it's passed in, if the variable is potentially used after being passed in. For example:
>
> impl Mul<int, Vector> for Vector {
>     fn mul(stable self, rhs: &int) -> Vector {
>         for c in self.coordinates.mut_iter() {
>             *c *= *rhs;
>         }
>         self
>     }
> }
>
> fn testing() {
>     let mut v = Vector { coordinates: vec!(1, 2, 3) };
>     v * 1; // Cloned due to not last use
>     v * 1; // Not cloned due to last use before assignment
>     v = Vector { coordinates: vec!(2, 4, 6) };
>     v * 1; // Cloned due to not last use
>     v = v * 1; // Not cloned due to last use before assignment
>     v * 1; // Not cloned due to last use
> }

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From rusty.gates at icloud.com Fri Jun 13 05:39:46 2014
From: rusty.gates at icloud.com (Tommi)
Date: Fri, 13 Jun 2014 15:39:46 +0300
Subject: [rust-dev] &self/&mut self in traits considered harmful(?)
In-Reply-To: <813E8D58-E58A-4C79-8BC4-2BB5839F42FC@icloud.com>
References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> <106143AF-C804-43D2-94F7-AFD7DAAFF0DB@icloud.com> <813E8D58-E58A-4C79-8BC4-2BB5839F42FC@icloud.com>
Message-ID: <3A166BB1-0236-4F58-A447-46129128F7D4@icloud.com>

On 2014-06-13, at 13:14, Tommi wrote:

> pub trait Mul<RHS, Result> {
>     fn mul(stable self, rhs: &RHS) -> Result;
> }
>
> Note: any other syntax for marking `self` as `stable` would be illegal.

Although, I could see this kind of syntax being allowed as well:

    pub trait Mul<RHS, Result> {
        fn mul(stable lhs: Self, rhs: &RHS) -> Result;
    }

..and allowing similar syntax for an implementation like:

    impl Mul<RHS, Result> for T {
        fn mul(stable lhs: T, rhs: &RHS) -> Result { ... }
    }

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From kevin at sb.org Fri Jun 13 12:24:02 2014
From: kevin at sb.org (Kevin Ballard)
Date: Fri, 13 Jun 2014 12:24:02 -0700
Subject: [rust-dev] Preserving formatting for slice's Show impl
In-Reply-To:
References:
Message-ID: <18AAB7F0-45DF-4B15-9CAD-D7BEA9CF5770@sb.org>

I would not expect this to be "mapped" over the slice. I encourage you to come up with an appropriate syntax to describe that and submit an RFC, although I wonder how you plan on dealing with things like key vs value in Maps, and further nesting (e.g. slices of slices, etc).

As for applying it to the slice as a whole, that would be the appropriate way to handle this format parameter.
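For concreteness, here is a small sketch of the behaviour under discussion (pre-1.0 syntax as used elsewhere in this thread; the slice output is the one Tom reports below, and the integer case shows the usual fill/align result, so treat the exact output as an illustration rather than a spec):

    // Types printed in a single operation honour the padding controls:
    println!("{:_>4}", 1i);              // prints "___1"
    // but the current Show impl for slices ignores them:
    println!("{:_>4}", [1].as_slice());  // prints "[1]"

Applying the padding to the whole slice would instead mean producing "_[1]" here.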
The problem is doing that requires building an intermediate string first, because you have to know the length of the full output before you can know how to pad it, and it?s generally considered to be undesired work to do that. As far as I?m aware, the only types right now that actually support the various padding-related formatting controls are the ones that can be printed in a single operation (such as numbers, or strings). -Kevin On Jun 9, 2014, at 8:50 PM, Tom Jakubowski wrote: > I would expect that `println!("{:_>4}", [1].as_slice());` would print either `[___1]` (where the format is "mapped" over the slice) or `_[1]` (where the format is applied to the slice as a whole), but instead no formatting is applied at all and it simply prints `[1]`. > > I can see uses and arguments for both the "mapping" and "whole? interpretations of the format string on slices. On the one hand this ambiguity makes a case for leaving the behavior as-is for backwards compatibility. On the other hand it would be useful to be able to format slices (and other collections, of course). Would it be appropriate to expand the syntax for format strings to allow for nested format strings, so that separate formatting can be applied to the entire collection and to its contents? I assume it this would require an RFC. > > (The "mapped" variant can be very easily implemented, by the way, by replacing `try!(write!("{}", x))` with `try!(x.fmt(f))` in the `impl Show for &[T]`.) > > Tom > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Fri Jun 13 17:46:42 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Fri, 13 Jun 2014 17:46:42 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <53985939.3010603@aim.com> References: <53985939.3010603@aim.com> Message-ID: <539B9B72.7020509@mozilla.com> I have filed RFC #118 for this: https://github.com/rust-lang/rfcs/pull/118 Patrick From valerii.hiora at gmail.com Fri Jun 13 18:55:30 2014 From: valerii.hiora at gmail.com (Valerii Hiora) Date: Sat, 14 Jun 2014 04:55:30 +0300 Subject: [rust-dev] Nightly docs for Dash Message-ID: <539BAB92.1090304@gmail.com> Hi, Being a big fan of offline documentation I've prepared a fresh docset for Dash (zeal, helm-dash, any other compatible software). Here is the link for subscription: dash-feed://https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fnet.vhbit.rust-doc%2FRustNightly.xml It's a beta and has a couple of known issues in it. If there would be enough interest - it could be also integrated with existing buildbots. -- Valerii From cg.wowus.cg at gmail.com Fri Jun 13 19:49:53 2014 From: cg.wowus.cg at gmail.com (Clark Gaebel) Date: Fri, 13 Jun 2014 22:49:53 -0400 Subject: [rust-dev] Nightly docs for Dash In-Reply-To: <539BAB92.1090304@gmail.com> References: <539BAB92.1090304@gmail.com> Message-ID: Whoa this is cool stuff. I'll have you know it's useful to at least one person! - Clark On Fri, Jun 13, 2014 at 9:55 PM, Valerii Hiora wrote: > Hi, > > Being a big fan of offline documentation I've prepared a fresh docset > for Dash (zeal, helm-dash, any other compatible software). > > Here is the link for subscription: > > dash-feed://https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fnet. > vhbit.rust-doc%2FRustNightly.xml > > It's a beta and has a couple of known issues in it. 
If there would be
> enough interest - it could be also integrated with existing buildbots.
>
> --
>
> Valerii
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>

--
Clark.
Key ID : 0x78099922
Fingerprint: B292 493C 51AE F3AB D016 DD04 E5E3 C36F 5534 F907
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From farcaller at gmail.com Sat Jun 14 02:35:44 2014
From: farcaller at gmail.com (Vladimir Pouzanov)
Date: Sat, 14 Jun 2014 10:35:44 +0100
Subject: [rust-dev] Nightly docs for Dash
In-Reply-To: <539BAB92.1090304@gmail.com>
References: <539BAB92.1090304@gmail.com>
Message-ID:

Funnily enough, I did the same yesterday. I made a small extension to https://github.com/indirect/dash-rust; my fork can be found here: https://github.com/farcaller/dash-rust

It removes the left side navigation panel from the docs and adds TOC generation.

On Sat, Jun 14, 2014 at 2:55 AM, Valerii Hiora wrote:
> Hi,
>
> Being a big fan of offline documentation I've prepared a fresh docset
> for Dash (zeal, helm-dash, any other compatible software).
>
> Here is the link for subscription:
>
> dash-feed://https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fnet.
> vhbit.rust-doc%2FRustNightly.xml
>
> It's a beta and has a couple of known issues in it. If there would be
> enough interest - it could be also integrated with existing buildbots.
>
> --
>
> Valerii
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>

--
Sincerely,
Vladimir "Farcaller" Pouzanov
http://farcaller.net/
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From rusty.gates at icloud.com Sat Jun 14 08:37:45 2014
From: rusty.gates at icloud.com (Tommi)
Date: Sat, 14 Jun 2014 18:37:45 +0300
Subject: [rust-dev] &self/&mut self in traits considered harmful(?)
In-Reply-To: <539B9B72.7020509@mozilla.com>
References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com>
Message-ID:

I think that the deeper and larger language design issue here is that traits in some cases force you to impose on the trait-implementing type some implementation details that should remain hidden in the specification of the trait and should be left to the trait-implementing type to specify.

The high-level (of abstraction) description of an expression like:

    a * b

...is that it should evaluate to the product of `a` and `b` without modifying either `a` or `b`. The manner in which the expression accomplishes this task is an implementation detail. If the expression above evaluates to a function call where `a` and `b` are passed as arguments, then the manner in which the two arguments are passed into such a function is an implementation detail, assuming there are multiple ways in which the arguments could be passed and the function call could still fulfill the high-level requirements and guarantees of the expression.

For example, POD `a` and `b` could be passed to such a product-evaluating function in multiple ways, namely by value and by reference. If the definition of the `Mul` trait specifies the exact manner in which the arguments must be passed to this product-evaluating function, then the trait is revealing and imposing an implementation detail that should be left to the judgement of the type which implements the trait.
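To make that contrast concrete, here are the two shapes of `mul` that have come up earlier in this thread for the same logical operation, side by side (a sketch only, in the pre-1.0 trait style used above and lightly adapted, with `mut self` added so the by-value version is well-formed; each shape is only available under the corresponding definition of the trait):

    // Under today's `Mul<RHS, Result>`, every implementor must borrow,
    // so a chained `v * 2 * 5` allocates a fresh Vector at each step:
    impl Mul<int, Vector> for Vector {
        fn mul(&self, rhs: &int) -> Vector {
            let mut new_coordinates = self.coordinates.clone();
            for c in new_coordinates.mut_iter() { *c *= *rhs; }
            Vector { coordinates: new_coordinates }
        }
    }

    // Under a by-value `mul`, every implementor must consume the receiver,
    // which lets Vector reuse its buffer but forces a move on every caller:
    impl Mul<int, Vector> for Vector {
        fn mul(mut self, rhs: &int) -> Vector {
            for c in self.coordinates.mut_iter() { *c *= *rhs; }
            self
        }
    }

Whichever of the two signatures the trait writes down, it has picked a calling convention on behalf of every implementor, which is exactly the implementation detail being objected to above.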
From ben.striegel at gmail.com Sat Jun 14 23:56:30 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sun, 15 Jun 2014 02:56:30 -0400 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: > The manner in which the expression accomplishes this task is an implementation detail. You're welcome to draft a proposal if you think that you have an idea to make this possible. Though all of the solutions that I can envision require abandoning the idea of operators-as-traits and introducing a whole lot of magic in their place. -------------- next part -------------- An HTML attachment was scrubbed... URL: From farcaller at gmail.com Sun Jun 15 00:14:55 2014 From: farcaller at gmail.com (Vladimir Pouzanov) Date: Sun, 15 Jun 2014 08:14:55 +0100 Subject: [rust-dev] Flatten a tree into HashMap, how do I pass a &mut around? Message-ID: I have a tree of Nodes where each node might have a name. I'm trying to convert the tree into a HashMap of named nodes, but my code is getting extremely complex due to the fact that I cannot pass &mut HashMap around. let mut named_nodes = HashMap::new(); let nodes = vec!( ... ); named_nodes = self.collect_node_names(&named_nodes, &nodes); println!('{}', named_nodes); fn collect_node_names(&self, map: &HashMap>, nodes: &Vec>) -> HashMap> { let mut local_map: HashMap> = HashMap::new(); for (k,v) in map.iter() { local_map.insert(k.clone(), *v); } for n in nodes.iter() { for (k,v) in self.collect_node_names(&local_map, &n.subnodes).iter() { local_map.insert(k.clone(), *v); } match n.name { Some(ref name) => { if local_map.contains_key(name) { } else { local_map.insert(name.clone(), *n); } }, None => (), } } local_map } this one works, but it's bloated and slow. Any hints on how to improve the code? -- Sincerely, Vladimir "Farcaller" Pouzanov http://farcaller.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Sun Jun 15 00:37:39 2014 From: rusty.gates at icloud.com (Tommi) Date: Sun, 15 Jun 2014 10:37:39 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: On 2014-06-15, at 9:56, Benjamin Striegel wrote: > You're welcome to draft a proposal if you think that you have an idea to make this possible. The idea of the `stable` keyword was designed specifically as a bandage on the current trait-system to allow a trait to say that: "this function argument can be passed however you like as long as the caller of this function won't be able to see it modified". I introduced this idea on a long, previous post which began with: > The problem: > Chained calls to certain operators such as binary `*` and `+` may cause unnecessary memory allocations. [..] -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty.gates at icloud.com Sun Jun 15 01:14:54 2014 From: rusty.gates at icloud.com (Tommi) Date: Sun, 15 Jun 2014 11:14:54 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: I realized that, in my proposal, there's a potentially pretty confusing asymmetry of what `stable` means. 
In the trait definition: pub trait Mul { fn mul(stable self, rhs: &RHS) -> Result; } ...the keyword `stable` means that however the type which implements this trait decides to pass `self` to `mul`, it must be guaranteed that the caller of `mul` won't be able to observe the variable passed in as `self` being modified by this call to `mul`. Also, a trait function that has at least one argument marked as `stable` wouldn't be allowed to have a provided (default) definition, because its signature doesn't say how that `stable` argument should be passed in. But on the actual implementation, like here: impl Mul for T { fn mul(stable self, rhs: &RHS) -> Result { ... } } ...the keyword `stable` means that the variable which the caller of `mul` passes in as the `self` argument will be implicitly cloned before it's passed in if it is necessary to do so in order to ensure that the caller of `mul` won't see that variable modified by `mul`. Static analysis will be used by the compiler to determine when this implicit cloning can be omitted (it may potentially be omitted due to the caller not observing the state of the passed in variable after the function call and (possibly) before the variable getting assigned a new value). Perhaps another keyword would be needed for this second meaning of `stable`, since it's completely different from the meaning of `stable` in trait functions. Maybe something like `cloned`. -------------- next part -------------- An HTML attachment was scrubbed... URL: From farcaller at gmail.com Sun Jun 15 02:50:22 2014 From: farcaller at gmail.com (Vladimir Pouzanov) Date: Sun, 15 Jun 2014 10:50:22 +0100 Subject: [rust-dev] Flatten a tree into HashMap, how do I pass a &mut around? In-Reply-To: References: Message-ID: After a few hints on IRC I managed to simplify it to: fn collect_node_names(&self, map: &mut HashMap>, nodes: &Vec>) -> bool { for n in nodes.iter() { if !self.collect_node_names(map, &n.subnodes) { return false; } match n.name { Some(ref name) => { if map.contains_key(name) { self.sess.span_diagnostic.span_err(n.name_span, format!( "duplicate `{}` definition", name).as_slice()); self.sess.span_diagnostic.span_warn( map.get(name).name_span, "previously defined here"); return false; } else { map.insert(name.clone(), *n); } }, None => (), } } true } My failure point was that I didn't realise you cannot access &mut while yo have a reference to anything you pass &mut to. On Sun, Jun 15, 2014 at 8:14 AM, Vladimir Pouzanov wrote: > I have a tree of Nodes where each node might have a name. I'm trying to > convert the tree into a HashMap of named nodes, but my code is getting > extremely complex due to the fact that I cannot pass &mut HashMap around. > > let mut named_nodes = HashMap::new(); > let nodes = vec!( ... ); > named_nodes = self.collect_node_names(&named_nodes, &nodes); > println!('{}', named_nodes); > > fn collect_node_names(&self, map: &HashMap>, > nodes: &Vec>) -> HashMap> { > let mut local_map: HashMap> = HashMap::new(); > for (k,v) in map.iter() { > local_map.insert(k.clone(), *v); > } > for n in nodes.iter() { > for (k,v) in self.collect_node_names(&local_map, &n.subnodes).iter() { > local_map.insert(k.clone(), *v); > } > match n.name { > Some(ref name) => { > if local_map.contains_key(name) { > > } else { > local_map.insert(name.clone(), *n); > } > }, > None => (), > } > } > local_map > } > > this one works, but it's bloated and slow. Any hints on how to improve the > code? 
> > -- > Sincerely, > Vladimir "Farcaller" Pouzanov > http://farcaller.net/ > -- Sincerely, Vladimir "Farcaller" Pouzanov http://farcaller.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ml at isaac.cedarswampstudios.org Sun Jun 15 11:10:21 2014 From: ml at isaac.cedarswampstudios.org (Isaac Dupree) Date: Sun, 15 Jun 2014 14:10:21 -0400 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <5399EE4E.60808@mozilla.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> Message-ID: <539DE18D.5010705@isaac.cedarswampstudios.org> > On 6/12/14 11:15 AM, Tommi wrote: >> But I think it will be easy to make the error of writing the explicit >> .clone() in places where it's not needed. For example: >> [...] Would a compiler warning for unnecessary clones be feasible? useful? Would it have false positives -- situations where a clone would compile as a move but do the wrong thing? -Isaac From rusty.gates at icloud.com Mon Jun 16 05:41:18 2014 From: rusty.gates at icloud.com (Tommi) Date: Mon, 16 Jun 2014 15:41:18 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <539DE18D.5010705@isaac.cedarswampstudios.org> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> <539DE18D.5010705@isaac.cedarswampstudios.org> Message-ID: <11951789-9618-47E3-B408-2055C209EBB5@icloud.com> On 2014-06-15, at 21:10, Isaac Dupree wrote: >> On 6/12/14 11:15 AM, Tommi wrote: >>> But I think it will be easy to make the error of writing the explicit >>> .clone() in places where it's not needed. For example: >>> [...] > > Would a compiler warning for unnecessary clones be feasible? useful? Would it have false positives -- situations where a clone would compile as a move but do the wrong thing? I would just ignore the first 12 of my posts on this thread. From glaebhoerl at gmail.com Mon Jun 16 05:44:31 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Mon, 16 Jun 2014 14:44:31 +0200 Subject: [rust-dev] Clarification about RFCs Message-ID: Hi! There's a few things about the RFC process and the "meaning" of RFCs which aren't totally clear to me, and which I'd like to request some clarification about. 1) Which of the following does submitting an RFC imply? 1a) We should implement this right away. 1b) We should implement this before 1.0. 1c) We should implement this whenever we feel like it. 2) Some RFC PRs get closed and tagged with the "postponed" label. Does this mean: 2a) It's too early to implement this proposal, or 2b) it's too early to *evaluate* this proposal? 3) Are the designs outlined by RFCs supposed to be "incremental" or "final"? I.e., 3a) First of all we should make this change, without implying anything about what happens after that; or 3b) This is how it should look in the final version of the language? 4) If someone submits an RFC, does she imply that "I am planning to implement this", or, if an RFC is accepted, does that mean "anyone who wants to can feel free to implement this"? 5) The reviewing process is somewhat opaque to me. 5a) What determines which RFCs get reviewed in a given weekly meeting? 
5b) As an observer, how can I tell which RFCs are considered to be in a ready-for-review or will-be-reviewed-next state? 5c) What if the author of the reviewed RFC isn't a participant in the meetings? 5d) (I might also ask "what determines which RFC PRs get attention from the team?", but obviously the answer is "whatever they find interesting".) I think that's all... Thanks! G?bor -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.gesemann at gmail.com Mon Jun 16 07:32:12 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Mon, 16 Jun 2014 16:32:12 +0200 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: The following message got sent to Patrick instead to the list by mistake. Sorry, Patrick! ---------- Forwarded message ---------- From: s.gesemann at gmail.com Date: Mon, Jun 16, 2014 at 4:29 PM Subject: Re: [rust-dev] &self/&mut self in traits considered harmful(?) To: Patrick Walton On Sat, Jun 14, 2014 at 2:46 AM, Patrick Walton wrote: > I have filed RFC #118 for this: > https://github.com/rust-lang/rfcs/pull/118 > > Patrick Bold move. But I'm not convinced that this is a good idea. I may be missing something, but forcing a move as opposed to forcing an immutable reference seems just as bad as an approach. Also, I'm not sure why you mention C++ rvalue references there. References in C++ are not objects/values like C++ Pointers or References in Rust. They are auto-borrowing and auto-deref'ing non-values, so to speak. These different kinds of L- and Rvalue references combined with overloading is what makes C++ enable move semantics. I think one aspect of the issue is Rust's Trait system itself. It tries to kill two birds with one stone: (1) Having "Interfaces" with runtime dispatching where Traits are used as dynamically-sized types and (2) as type bound for generics. Initially, I found this to be a very cool Rust feature. But now, I'm not so sure about that anymore. Back in 2009 when "concepts" were considered for C++ standardization, I spent much time on understanding the intricacies of that C++ topic. This initial "concepts" design also tried to define some type requirements in terms of function signatures. But this actually interacted somewhat badly with rvalue references (among other things) and I think this is one of the reasons why "concepts lite" (a new and simplified incarnation of the concepts design, expected to augment C++1y standard in form of a technical report) replaced the function signatures with "usage patterns". As a user of some well-behaved type, I don't really care about what kind of optimizations it offers for + or * and how they work. I'm just glad that I can "use" the "pattern" x*y where x and y refer to instances of some type. Whether the implementer of that type distinguishes between lvalues and rvalues via overloading or not is kind of an implementation detail that does not affect how the type is being used syntactically. So, I expect "C++ concepts lite" to be able to specify type requirements in terms of "usage patters" in a way that it allows "models" of these "concepts" to satisfy the requirements in a number of ways (with move optimizations being optional but possible). Another thing I'm not 100% comfortable with (yet?) 
is the funky way references are used in Rust in combination with auto-borrowing (for operators and self at least) and auto-deref'ing while at the same time, they are used as "values" (like C++ pointers as opposed to C++ references). I've trouble putting this into words. But it feels to me like the lines are blurred which could cause some trouble or bad surprizes. Assuming this RFC is accepted: How would I have to implement Add for a custom type T where moving doesn't make much sense and I'd rather use immutable references to bind the operands? I could write impl Add<&T,T> for &T {...} but it seems to me that this requires explicit borrowing in the user code ? la let x: T = ...; let y: T = ...; let c = &x + &y; Or is this also handled via implicit borrowing for operators (making operators a special case)? Still, I find it very weird to impl Add for &T instead of T and have this asymmetry between &T and T for operands and return value. Can you shed some more light on your RFC? Maybe including examples? A discussion about the implications? How it would affect Trait-lookup, implicit borrowing etc? What did you mean by "The AutorefArgs stuff in typeck will be removed; all overloaded operators will typecheck as though they were DontAutorefArgs."? Many thanks in advance! Cheers sg From s.gesemann at gmail.com Mon Jun 16 07:47:32 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Mon, 16 Jun 2014 16:47:32 +0200 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: On Mon, Jun 16, 2014 at 4:32 PM, Sebastian Gesemann > [...] > Assuming this RFC is accepted: How would I have to implement Add for a > custom type T where moving doesn't make much sense and I'd rather use > immutable references to bind the operands? I could write > > impl Add<&T,T> for &T {...} > > but it seems to me that this requires explicit borrowing in the user code ? la > > let x: T = ...; > let y: T = ...; > let c = &x + &y; > > Or is this also handled via implicit borrowing for operators (making > operators a special case)? > > Still, I find it very weird to impl Add for &T instead of T and have > this asymmetry between &T and T for operands and return value. On top of that, it seems that this would make the writing of generics pretty messy. Suddenly, the following generic function won't work anymore for my T: fn foo>(x: &T, y: &T) -> R { *x + *y } because T is not an Add, but &T is an Add<&T,R>. Comments? From pnkfelix at mozilla.com Mon Jun 16 07:56:00 2014 From: pnkfelix at mozilla.com (Felix S. Klock II) Date: Mon, 16 Jun 2014 16:56:00 +0200 Subject: [rust-dev] Clarification about RFCs In-Reply-To: References: Message-ID: <5B38DDA5-3E81-4147-9447-70F5C2AF842D@mozilla.com> Gabor (cc?ing rust-dev)- I have filed an issue to track incorporating the answers to these questions into the RFC process documentation. https://github.com/rust-lang/rfcs/issues/121 Here is some of my opinionated answers to the questions: 1. When an RFC PR is merged as accepted into the repository, then that implies that we should implement it (or accept a community provided implementation) whenever we feel it best. This could be a matter of scratching an itch, or it could be to satisfy a 1.0 requirement; so there is no hard and fast rule about when the implementation for an RFC will actually land. 2. An RFC closed with ?postponed? 
is being marked as such because we do not want to think about the proposal further until post-1.0, and we believe that we can afford to wait until then to do so. ?Evaluate? is a funny word; usually something marked as ?postponed? has already passed an informal first round of evaluation, namely the round of ?do we think we would ever possibly consider making this change, as outlined here or some semi-obvious variation of it.? (When the answer to that question is ?no?, then the appropriate response is to close the RFC, not postpone it.) 3. We strive to write each RFC in a manner that it will reflect the final design of the feature; but the nature of the process means that we cannot expect every merged RFC to actually reflect what the end result will be when 1.0 is released. The intention, I believe, is to try to keep each RFC document somewhat in sync with the language feature as planned. But just because an RFC has been accepted does not mean that the design of the feature is set in stone; one can file pull-requests to change an RFC if there is some change to the feature that we want to make, or need to make, (or have already made, and are going to keep in place). 4. If an RFC is accepted, the RFC author is of course free to submit an implementation, but it is not a requirement that an RFC author drive the implementation of the change. Each time an RFC PR is accepted and merged into the repository, a corresponding tracking issue is supposed to be opened up on the rust repository. A large point of the RFC process is to help guide community members in selecting subtasks to work on that where each member can be reasonably confident that their efforts will not be totally misguided. So, it would probably be best if anyone who plans to work on implementing a feature actually write a comment *saying* that they are planning such implementation on the tracking issue on the rust github repository. Having said that, I do not think we have been strictly following the latter process; I think currently you would need to also review the meeting notes to determine if someone might have already claimed responsibility for implementation. 5. The choice of which RFC?s get reviewed is somewhat ad-hoc at the moment. We do try to post each agenda topic ahead of time in a bulleted list at the top of the shared etherpad ( https://etherpad.mozilla.org/Rust-meeting-weekly ) , and RFC?s are no different in this respect. But in terms of how they are selected, I think it is largely driven by an informal estimate of whether the comment thread has reached a steady state (i.e. either died out or not showing any sign of providing further insight or improvement feedback to the RFC itself). Other than that, we basically try to make sure that any RFC that we accept is accepted at the Tuesday meeting, so that there is a formal record of discussion regarding acceptance. So we do not accept RFC?s at the Thursday meeting. We may reject RFC?s at either meeting; in other words, the only RFC activity on Thursdays is closing the ones that have reached a steady state and that the team agrees we will not be adopting. I want to call special attention to the question of "What if the author of the reviewed RFC isn't a participant in the meetings?? This is an important issue, since one might worry that the viewpoint of the author will not be represented at the meeting itself. 
In general, we try to only review RFC?s that at least a few people have taken the time to read the corresponding discussion thread and are prepared to represent the viewpoints presented there. Ideally at least one meeting participant would act as a champion for the feature (and hopefully also have read the discussion thread). Such a person need not *personally* desire the feature; they just need to act to represent its virtues and the community?s desire for it. (I think of it like a criminal defense attorney; they may not approve of their client?s actions, but they want to ensure their client gets proper legal representation.) But I did have the qualifier ?Ideally? there, since our current process does not guarantee that such a champion exists. If no champion exists, it is either because not enough people have read the RFC (and thus we usually try to postpone making a decision for a later meeting), or because no one present is willing to champion it (in which case it seems like the best option is to close the RFC, though I am open to hearing alternative actions for this scenario). ---- Did the above help answer your questions? Let me know if I missed the point of one or more of your questions. Cheers, -Felix On 16 Jun 2014, at 14:44, G?bor Lehel wrote: > Hi! > > There's a few things about the RFC process and the "meaning" of RFCs which aren't totally clear to me, and which I'd like to request some clarification about. > > > 1) Which of the following does submitting an RFC imply? > > 1a) We should implement this right away. > > 1b) We should implement this before 1.0. > > 1c) We should implement this whenever we feel like it. > > > 2) Some RFC PRs get closed and tagged with the "postponed" label. Does this mean: > > 2a) It's too early to implement this proposal, or > > 2b) it's too early to *evaluate* this proposal? > > > 3) Are the designs outlined by RFCs supposed to be "incremental" or "final"? I.e., > > 3a) First of all we should make this change, without implying anything about what happens after that; or > > 3b) This is how it should look in the final version of the language? > > > 4) If someone submits an RFC, does she imply that "I am planning to implement this", or, if an RFC is accepted, does that mean "anyone who wants to can feel free to implement this"? > > > 5) The reviewing process is somewhat opaque to me. > > 5a) What determines which RFCs get reviewed in a given weekly meeting? > > 5b) As an observer, how can I tell which RFCs are considered to be in a ready-for-review or will-be-reviewed-next state? > > 5c) What if the author of the reviewed RFC isn't a participant in the meetings? > > 5d) (I might also ask "what determines which RFC PRs get attention from the team?", but obviously the answer is "whatever they find interesting".) > > > I think that's all... Thanks! > > G?bor > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From valerii.hiora at gmail.com Mon Jun 16 09:11:55 2014 From: valerii.hiora at gmail.com (Valerii Hiora) Date: Mon, 16 Jun 2014 19:11:55 +0300 Subject: [rust-dev] Nightly docs for Dash In-Reply-To: References: <539BAB92.1090304@gmail.com> Message-ID: <539F174B.2020104@gmail.com> Hi Vladimir, > It removes the left side navigation panel from docs and adds TOC generation. Looks nice, I've added TOC generation too (so far I'm not using dash-rust and plan to publish the code after cleaning up a bit). 
One of the reason is that dash-rust actually shows much more information than actually available, for example, if you open Fields section there is alloc::rc::Rc::_noshare, alloc::rc::Rc::_nosend, alloc::rc::Rc::_ptr . All of them are actually private fields and aren't visible in the struct doc. It looks more like rustdoc problem, but still. My method of generating is more fragile to changes (as it actually processes html) but funnily enough it is faster than using precompiled JS indexes. Although it might be misconfiguration on my side if there is requirement to install some additional libraries. -- Valerii From valerii.hiora at gmail.com Mon Jun 16 09:19:22 2014 From: valerii.hiora at gmail.com (Valerii Hiora) Date: Mon, 16 Jun 2014 19:19:22 +0300 Subject: [rust-dev] iOS cross compilation Message-ID: <539F190A.4090302@gmail.com> Hi, So finally Rust can cross-compile for iOS (armv7 only for now). BTW, it also means that Rust now can be used both for iOS and Android low-level development. Short instructions are available here: https://github.com/mozilla/rust/wiki/Doc-building-for-ios Unfortunately LLVM patch for supporting segmented stacks on armv7 was declined by Apple (it used kind of private API) and therefore there is no stack protection at all. It still could be enabled by compiling with a patched LLVM (I can provide a patch and instructions if needed). Everything else should "just work" but let me know if you have any problem. -- Valerii From pcwalton at mozilla.com Mon Jun 16 10:32:18 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Mon, 16 Jun 2014 10:32:18 -0700 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: <539F2A22.408@mozilla.com> On 6/16/14 7:32 AM, Sebastian Gesemann wrote: > I think one aspect of the issue is Rust's Trait system itself. It > tries to kill two birds with one stone: (1) Having "Interfaces" with > runtime dispatching where Traits are used as dynamically-sized types > and (2) as type bound for generics. Initially, I found this to be a > very cool Rust feature. But now, I'm not so sure about that anymore. > Back in 2009 when "concepts" were considered for C++ standardization, > I spent much time on understanding the intricacies of that C++ topic. > This initial "concepts" design also tried to define some type > requirements in terms of function signatures. But this actually > interacted somewhat badly with rvalue references (among other things) > and I think this is one of the reasons why "concepts lite" (a new and > simplified incarnation of the concepts design, expected to augment > C++1y standard in form of a technical report) replaced the function > signatures with "usage patterns". As a user of some well-behaved type, > I don't really care about what kind of optimizations it offers for + > or * and how they work. I'm just glad that I can "use" the "pattern" > x*y where x and y refer to instances of some type. Whether the > implementer of that type distinguishes between lvalues and rvalues via > overloading or not is kind of an implementation detail that does not > affect how the type is being used syntactically. So, I expect "C++ > concepts lite" to be able to specify type requirements in terms of > "usage patters" in a way that it allows "models" of these "concepts" > to satisfy the requirements in a number of ways (with move > optimizations being optional but possible). > > Another thing I'm not 100% comfortable with (yet?) 
is the funky way > references are used in Rust in combination with auto-borrowing (for > operators and self at least) and auto-deref'ing while at the same > time, they are used as "values" (like C++ pointers as opposed to C++ > references). I've trouble putting this into words. But it feels to me > like the lines are blurred which could cause some trouble or bad > surprizes. I don't really want to debate the entire Rust generics system here. Patrick From pcwalton at mozilla.com Mon Jun 16 10:36:22 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Mon, 16 Jun 2014 10:36:22 -0700 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: <539F2B16.60608@mozilla.com> On 6/16/14 7:32 AM, Sebastian Gesemann wrote: > Assuming this RFC is accepted: How would I have to implement Add for a > custom type T where moving doesn't make much sense and I'd rather use > immutable references to bind the operands? You don't implement Add for those types. The purpose of strongly-typed (as opposed to ad-hoc, like C++) traits is that you can actually tell what the type signature is. Patrick From alex at crichton.co Mon Jun 16 11:04:56 2014 From: alex at crichton.co (Alex Crichton) Date: Mon, 16 Jun 2014 11:04:56 -0700 Subject: [rust-dev] iOS cross compilation In-Reply-To: <539F190A.4090302@gmail.com> References: <539F190A.4090302@gmail.com> Message-ID: Nice job Valerii! This is all thanks to the awesome work you've been doing wrangling compiler-rt and the standard libraries. I'm excited to see what new applications Rust can serve on iOS! On Mon, Jun 16, 2014 at 9:19 AM, Valerii Hiora wrote: > Hi, > > So finally Rust can cross-compile for iOS (armv7 only for now). BTW, > it also means that Rust now can be used both for iOS and Android > low-level development. > > Short instructions are available here: > https://github.com/mozilla/rust/wiki/Doc-building-for-ios > > Unfortunately LLVM patch for supporting segmented stacks on armv7 was > declined by Apple (it used kind of private API) and therefore there is > no stack protection at all. > > It still could be enabled by compiling with a patched LLVM (I can > provide a patch and instructions if needed). > > Everything else should "just work" but let me know if you have any > problem. > > -- > > Valerii > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From banderson at mozilla.com Mon Jun 16 11:39:49 2014 From: banderson at mozilla.com (Brian Anderson) Date: Mon, 16 Jun 2014 11:39:49 -0700 Subject: [rust-dev] Clarification about RFCs In-Reply-To: <5B38DDA5-3E81-4147-9447-70F5C2AF842D@mozilla.com> References: <5B38DDA5-3E81-4147-9447-70F5C2AF842D@mozilla.com> Message-ID: <539F39F5.1050803@mozilla.com> Thanks, Felix. I agree with your interpretation, and hope this gives some clarity on why decisions are being made as they are. That so many RFC's don't make it through the process is disappointing I imagine, but is a reality of where we are in Rust's lifecycle. At this point the fundamental design has largely been done, and our goal must be to drive wider adoption while the opportunity is ripe. We are focused on making the core technology we've built over 4 years completely solid, fixing the rough edges; any changes that touch the language itself and that can be postponed almost certainly will be; same for other high-risk changes. On 06/16/2014 07:56 AM, Felix S. 
Klock II wrote: > Gabor (cc?ing rust-dev)- > > I have filed an issue to track incorporating the answers to these questions into the RFC process documentation. > > https://github.com/rust-lang/rfcs/issues/121 > > Here is some of my opinionated answers to the questions: > > 1. When an RFC PR is merged as accepted into the repository, then that implies that we should implement it (or accept a community provided implementation) whenever we feel it best. This could be a matter of scratching an itch, or it could be to satisfy a 1.0 requirement; so there is no hard and fast rule about when the implementation for an RFC will actually land. > > 2. An RFC closed with ?postponed? is being marked as such because we do not want to think about the proposal further until post-1.0, and we believe that we can afford to wait until then to do so. ?Evaluate? is a funny word; usually something marked as ?postponed? has already passed an informal first round of evaluation, namely the round of ?do we think we would ever possibly consider making this change, as outlined here or some semi-obvious variation of it.? (When the answer to that question is ?no?, then the appropriate response is to close the RFC, not postpone it.) > > 3. We strive to write each RFC in a manner that it will reflect the final design of the feature; but the nature of the process means that we cannot expect every merged RFC to actually reflect what the end result will be when 1.0 is released. The intention, I believe, is to try to keep each RFC document somewhat in sync with the language feature as planned. But just because an RFC has been accepted does not mean that the design of the feature is set in stone; one can file pull-requests to change an RFC if there is some change to the feature that we want to make, or need to make, (or have already made, and are going to keep in place). > > 4. If an RFC is accepted, the RFC author is of course free to submit an implementation, but it is not a requirement that an RFC author drive the implementation of the change. Each time an RFC PR is accepted and merged into the repository, a corresponding tracking issue is supposed to be opened up on the rust repository. A large point of the RFC process is to help guide community members in selecting subtasks to work on that where each member can be reasonably confident that their efforts will not be totally misguided. So, it would probably be best if anyone who plans to work on implementing a feature actually write a comment *saying* that they are planning such implementation on the tracking issue on the rust github repository. Having said that, I do not think we have been strictly following the latter process; I think currently you would need to also review the meeting notes to determine if someone might have already claimed responsibility for implementation. > > 5. The choice of which RFC?s get reviewed is somewhat ad-hoc at the moment. We do try to post each agenda topic ahead of time in a bulleted list at the top of the shared etherpad ( https://etherpad.mozilla.org/Rust-meeting-weekly ) , and RFC?s are no different in this respect. But in terms of how they are selected, I think it is largely driven by an informal estimate of whether the comment thread has reached a steady state (i.e. either died out or not showing any sign of providing further insight or improvement feedback to the RFC itself). 
Other than that, we basically try to make sure that any RFC that we accept is accepted at the Tuesday meeting, so that there is a formal record of discussion regarding acceptance. So we do not accept RFC?s at the Thursday meeting. We may reject RFC?s at either meeting; in other words, the only RFC activity on Thursdays is closing the ones that have reached a steady state and that the team agrees we will not be adopting. > > I want to call special attention to the question of "What if the author of the reviewed RFC isn't a participant in the meetings?? This is an important issue, since one might worry that the viewpoint of the author will not be represented at the meeting itself. In general, we try to only review RFC?s that at least a few people have taken the time to read the corresponding discussion thread and are prepared to represent the viewpoints presented there. > > Ideally at least one meeting participant would act as a champion for the feature (and hopefully also have read the discussion thread). Such a person need not *personally* desire the feature; they just need to act to represent its virtues and the community?s desire for it. (I think of it like a criminal defense attorney; they may not approve of their client?s actions, but they want to ensure their client gets proper legal representation.) > > But I did have the qualifier ?Ideally? there, since our current process does not guarantee that such a champion exists. If no champion exists, it is either because not enough people have read the RFC (and thus we usually try to postpone making a decision for a later meeting), or because no one present is willing to champion it (in which case it seems like the best option is to close the RFC, though I am open to hearing alternative actions for this scenario). > > ---- > > Did the above help answer your questions? Let me know if I missed the point of one or more of your questions. > > Cheers, > -Felix > > > On 16 Jun 2014, at 14:44, G?bor Lehel wrote: > >> Hi! >> >> There's a few things about the RFC process and the "meaning" of RFCs which aren't totally clear to me, and which I'd like to request some clarification about. >> >> >> 1) Which of the following does submitting an RFC imply? >> >> 1a) We should implement this right away. >> >> 1b) We should implement this before 1.0. >> >> 1c) We should implement this whenever we feel like it. >> >> >> 2) Some RFC PRs get closed and tagged with the "postponed" label. Does this mean: >> >> 2a) It's too early to implement this proposal, or >> >> 2b) it's too early to *evaluate* this proposal? >> >> >> 3) Are the designs outlined by RFCs supposed to be "incremental" or "final"? I.e., >> >> 3a) First of all we should make this change, without implying anything about what happens after that; or >> >> 3b) This is how it should look in the final version of the language? >> >> >> 4) If someone submits an RFC, does she imply that "I am planning to implement this", or, if an RFC is accepted, does that mean "anyone who wants to can feel free to implement this"? >> >> >> 5) The reviewing process is somewhat opaque to me. >> >> 5a) What determines which RFCs get reviewed in a given weekly meeting? >> >> 5b) As an observer, how can I tell which RFCs are considered to be in a ready-for-review or will-be-reviewed-next state? >> >> 5c) What if the author of the reviewed RFC isn't a participant in the meetings? 
>> >> 5d) (I might also ask "what determines which RFC PRs get attention from the team?", but obviously the answer is "whatever they find interesting".) >> >> >> I think that's all... Thanks! >> >> G?bor >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From s.gesemann at gmail.com Mon Jun 16 13:04:14 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Mon, 16 Jun 2014 22:04:14 +0200 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539F2B16.60608@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> Message-ID: <539F4DBE.2010308@gmail.com> Am 16.06.2014 19:36, schrieb Patrick Walton: > On 6/16/14 7:32 AM, Sebastian Gesemann wrote: >> Assuming this RFC is accepted: How would I have to implement Add for a >> custom type T where moving doesn't make much sense and I'd rather use >> immutable references to bind the operands? > > You don't implement Add for those types. As far as I'm concerned that's anything but a satisfactory answer. I really don't see the point of your RFC. If anything, it seems to make things worse from my perspective. That's why I asked you for some clarifications. Of course, you don't owe me anything. And I hope you don't take this personally. > The purpose of strongly-typed (as opposed to ad-hoc, like C++) traits is > that you can actually tell what the type signature is. I think you misunderstood. Nobody is arguing for "ad-hoc generics" where there is no or just very limited type checking before monomorphization. At least I'm not. I like the fact that Rust checks generics before instantiation. But to be honest, I do find it somewhat annoying that Add<> either forces immutable references or move semantics (depending on whether your RFC is accepted) on to everybody who wants to build new arithmetic types. It's fine with me that you don't want to debate "the entire generic system". I didn't expect you to. I just wanted to share my thoughts on this hoping it to be of some kind of value since I spent much time on how the old C++ concepts design (where structural requirements were also defined in terms of function signatures) interacts with the rest of the language including rvalue references. Just in case people following this thread don't know what I'm talking about: "concepts" is what C++ people call the "type system for types" with which templates are supposed to be constrained (whenever they arrive) -- just like Rust's traits can be used as type bounds in generics. Of course, I realize that Rust is different enough that it can't go down that exact road. From farcaller at gmail.com Mon Jun 16 13:59:45 2014 From: farcaller at gmail.com (Vladimir Pouzanov) Date: Mon, 16 Jun 2014 21:59:45 +0100 Subject: [rust-dev] What do I do if I need several &muts into a struct? Message-ID: I have a problem figuring the reasonable data access pattern for my code, here's a brief description. I have a tree structure where each node has a path and an optional name. Path is used to locate the node in tree hierarchy, names are stored in a flat namespace as node aliases: lpx17xx at mcu { clock { attr = "value"; } } os { thread { entry = "start"; ref = &lpx17xx; } } this tree has 4 nodes, /mcu, /mcu/clock, /os and /os/thread. 
/mcu is also known as "lpx17xx". I use the following structure to hold root nodes: #[deriving(Show)] pub struct PlatformTree { nodes: HashMap>, named: HashMap>, } Everything works fine up to the point of references. References allow some nodes to modify other nodes. So, the first pass ? I parse that snippet into PlatformTree/Nodes struct. Second pass ? I walk over the tree and for each reference I invoke some handling code, e.g. in the handler of thread node it might add some attribute to lpx17xx node. Which is not really possible as it's in immutable Gc box. Also, such modifications are performed through `named` hashmap, so I can't even store &muts in `nodes`, as I still can store only immutable pointers in `named`. How would you solve this problem? -- Sincerely, Vladimir "Farcaller" Pouzanov http://farcaller.net/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Mon Jun 16 14:24:06 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Mon, 16 Jun 2014 14:24:06 -0700 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539F4DBE.2010308@gmail.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> Message-ID: <539F6076.5090507@mozilla.com> On 6/16/14 1:04 PM, Sebastian Gesemann wrote: > Am 16.06.2014 19:36, schrieb Patrick Walton: >> On 6/16/14 7:32 AM, Sebastian Gesemann wrote: >>> Assuming this RFC is accepted: How would I have to implement Add for a >>> custom type T where moving doesn't make much sense and I'd rather use >>> immutable references to bind the operands? >> >> You don't implement Add for those types. > > As far as I'm concerned that's anything but a satisfactory answer. I > really don't see the point of your RFC. If anything, it seems to make > things worse from my perspective. That's why I asked you for some > clarifications. Of course, you don't owe me anything. And I hope you > don't take this personally. I don't see much of a use case for `Add` without move semantics. Does anyone have any? Strings and vectors, perhaps, but I would argue that having to call `.clone()` on the LHS or RHS (as appropriate) is an improvement, because cloning strings and vectors can be very expensive. Patrick From zwarich at mozilla.com Mon Jun 16 15:03:25 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Mon, 16 Jun 2014 15:03:25 -0700 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539F6076.5090507@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> Message-ID: On Jun 16, 2014, at 2:24 PM, Patrick Walton wrote: > On 6/16/14 1:04 PM, Sebastian Gesemann wrote: >> Am 16.06.2014 19:36, schrieb Patrick Walton: >>> On 6/16/14 7:32 AM, Sebastian Gesemann wrote: >>>> Assuming this RFC is accepted: How would I have to implement Add for a >>>> custom type T where moving doesn't make much sense and I'd rather use >>>> immutable references to bind the operands? >>> >>> You don't implement Add for those types. >> >> As far as I'm concerned that's anything but a satisfactory answer. I >> really don't see the point of your RFC. If anything, it seems to make >> things worse from my perspective. That's why I asked you for some >> clarifications. Of course, you don't owe me anything. And I hope you >> don't take this personally. 
> > I don't see much of a use case for `Add` without move semantics. Does anyone have any? > > Strings and vectors, perhaps, but I would argue that having to call `.clone()` on the LHS or RHS (as appropriate) is an improvement, because cloning strings and vectors can be very expensive. This applies to Mul rather than Add, but if you are multiplying matrices then you want the destination to not alias either of the sources for vectorization purposes, so passing by reference is preferred. Cameron From kimhyunkang at gmail.com Mon Jun 16 15:04:13 2014 From: kimhyunkang at gmail.com (kimhyunkang at gmail.com) Date: Tue, 17 Jun 2014 07:04:13 +0900 Subject: [rust-dev] Request for review: libsql & libsql_macro Message-ID: Hi, list. I'm trying to write libsql and libsql_macro, a library to handle SQL in a type-safe manner. The goal this library is trying to achieve is: - A unified interface to access various RDBMS systems - SQL-like DSL to generate SQL queries - Type-safe database manipulation I want to have comments on the initial design. http://kimhyunkang.github.io/blog/2014/06/15/rust-rfc-libsql/ Please feel free to leave comment on the blog post, or through this mailing list. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Mon Jun 16 15:07:52 2014 From: banderson at mozilla.com (Brian Anderson) Date: Mon, 16 Jun 2014 15:07:52 -0700 Subject: [rust-dev] The 'rust' repo has moved to the 'rust-lang' organization on GitHub Message-ID: <539F6AB8.4060707@mozilla.com> Hi, folks. I've just moved the main repo from the 'mozilla' organization to 'rust-lang': https://github.com/rust-lang/rust. This has been a while coming and reflects that Rust is a major project with its own community and culture, and not simply another project under the Mozilla umbrella. Since GitHub sets up redirects, for the most part this should just work and not affect anybody. I'll spend some time updating links in documentation, but please let me know if you see breakage. I've also taken this opportunity to do some house cleaning on the GitHub teams that have access to the repo: our dev process has changed over the years to not require that many people actually have write access, but during that time we've accumulated a long list of people who do have such access. I believe the only need for this now is to push to the `try` branch, and to tag and close issues. Accordingly, I've gone through the list of collaborators from the old organization and used my judgement to put currently-active contributors who appear to use this sort of access into a new 'Rust-Push' team. If you find that suddenly you can't do something you need to be able to do, let me know in private: it's not an intentional slight. Regards, Brian From steve at steveklabnik.com Mon Jun 16 15:08:51 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 16 Jun 2014 18:08:51 -0400 Subject: [rust-dev] The 'rust' repo has moved to the 'rust-lang' organization on GitHub In-Reply-To: <539F6AB8.4060707@mozilla.com> References: <539F6AB8.4060707@mozilla.com> Message-ID: Wonderful! From steve at steveklabnik.com Mon Jun 16 15:10:11 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 16 Jun 2014 18:10:11 -0400 Subject: [rust-dev] Rust's documentation is about to drastically improve Message-ID: Hey all! 
I wrote up a blog post that you all should know about: http://words.steveklabnik.com/rusts-documentation-is-about-to-drastically-improve Here's the text, in Markdown: Historically, [Rust](http://rust-lang.org/) has had a tough time with documentation. Such a young programming language changes a lot, which means that documentation can quickly be out of date. However, Rust is nearing a 1.0 release, and so that's about to change. I've just signed a contract with Mozilla, and starting Monday, June 23rd, I will be devoting forty hours per week of my time to ensuring that Rust has wonderful documentation. A year and a half ago, I was in my hometown for Christmas. I happened upon a link: [Rust 0.5 released](https://mail.mozilla.org/pipermail/rust-dev/2012-December/002787.html). I've always enjoyed learning new programming languages, and I had vaguely heard of Rust before, but didn't really remember what it was all about. So I dug in. I loved systems programming in college, but had done web-based stuff my entire professional life, and hadn't seriously thought about pointers as part of my day-to-day in quite some time. There was just one problem: Rust was really difficult to learn. I liked what I saw, but there was very little of it. At the same time, I had been working on some ideas for a new toolchain for my [book on hypermedia APIs](http://www.designinghypermediaapis.com/), but wanted to try it out on something else before I took the time to port the content. And so, [Rust for Rubyists](http://www.rustforrubyists.com/) was born. I decided that the best way to teach people Rust was to mimic how I learned Rust. And so, as I learned, I wrote. It ended up at about fifty pages of two weeks of work. I never contributed it to the main repository, because for me, it was really about learning, both about my ebook toolchain as well as Rust itself. I didn't want the burden that came with writing an official tutorial, making sure that you cover every single feature, pleasing every single Github contributor... After learning Rust, I decided that I really liked it. No other language provided such a promising successor to C++. And I really missed low-level programming. So I kept evangelizing Rust, and every so often, contributing official documentation. I figured that even if my low-level chops weren't up to the task of writing actual patches, I could at least help with my writing skills. I'd previously contributed lots of documentation to Ruby and Rails, so it was something that was very familiar to me. I've often found that I start with documentation and then move into contributing code, once I get my head wrapped around everything. Writing is part of my own process of understanding. Rust for Rubyists was a great hit, even amongst non-Rubyists (damn my love of alliteration!). Six months ago, on the eve of the first anniversary of the initial release of Rust for Rubyists, I [gave a talk](https://air.mozilla.org/rust-meetup-december-2013/) at the Bay Area Rust meetup, specifically on the state of Rust's documentation. In it, I laid out a plan for how I envisioned docs looking in the future. In the last six months, a lot has improved, but a lot hasn't. But no more! I'm now going to be able to allocate a significant amount of my time on getting documentation done. I'm also pleased in a meta-sense. You see, by contracting someone to work on documentation full-time, Mozilla is indicating that they take Rust and its future very seriously. 
You can (and I do) talk a lot of trash on Microsoft, but one of the reasons that the Microsoft platform is used by so many people around the world is that Microsoft products often have excellent documentation. I often find that open source 'products' are technically superior, but are often much harder to use, because they're built by a community, for free, and very few people want to write documentation for free. Combined with the work that Tilde is doing on [Cargo](https://github.com/carlhuda/cargo), Mozilla is investing a significant amount of effort and dollars into ensuring that Rust will be a fantastic platform for those developing on it. Since I love Rust, this makes me very, very happy. Forty hours a week is a significant amount of documentation, and I have a lot of work in front of me. But my first area of focus will be on the area of Rust's documentation that's the most weak, and simultaneously the most important: the tutorial. I tackled the first tip of that iceberg with [my 30 minute introduction](http://doc.rust-lang.org/master/intro.html), and I'd like to tweak it too. The main tutorial, however, is the first place where people go to _really_ learn about Rust and how it works. And the current tutorial is largely the same as the 0.5 days, back when I first learned Rust. It suffers from receiving patchwork contributions from a variety of people, rather than having one unifying vision. It's also much more of a list of features and how to use them than a coherent tutorial. I'm really excited to get to work. Let's all make Rust 1.0 a fantastic release. From ronald.dahlgren at gmail.com Mon Jun 16 15:12:14 2014 From: ronald.dahlgren at gmail.com (Ron Dahlgren) Date: Mon, 16 Jun 2014 18:12:14 -0400 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: <539F6BBE.5030802@gmail.com> That's great news Steve, looking forward to updated and shiny docs! On 06/16/2014 06:10 PM, Steve Klabnik wrote: > Hey all! I wrote up a blog post that you all should know about: > http://words.steveklabnik.com/rusts-documentation-is-about-to-drastically-improve > > Here's the text, in Markdown: > > Historically, [Rust](http://rust-lang.org/) has had a tough time with > documentation. Such a young programming language changes a lot, which > means that documentation can quickly be out of date. However, Rust is > nearing a 1.0 release, and so that's about to change. I've just signed > a contract with Mozilla, and starting Monday, June 23rd, I will be > devoting forty hours per week of my time to ensuring that Rust has > wonderful documentation. > > A year and a half ago, I was in my hometown for Christmas. I happened > upon a link: [Rust 0.5 > released](https://mail.mozilla.org/pipermail/rust-dev/2012-December/002787.html). > I've always enjoyed learning new programming languages, and I had > vaguely heard of Rust before, but didn't really remember what it was > all about. So I dug in. I loved systems programming in college, but > had done web-based stuff my entire professional life, and hadn't > seriously thought about pointers as part of my day-to-day in quite > some time. > > There was just one problem: Rust was really difficult to learn. I > liked what I saw, but there was very little of it. At the same time, I > had been working on some ideas for a new toolchain for my [book on > hypermedia APIs](http://www.designinghypermediaapis.com/), but wanted > to try it out on something else before I took the time to port the > content. 
And so, [Rust for Rubyists](http://www.rustforrubyists.com/) > was born. I decided that the best way to teach people Rust was to > mimic how I learned Rust. And so, as I learned, I wrote. It ended up > at about fifty pages of two weeks of work. I never contributed it to > the main repository, because for me, it was really about learning, > both about my ebook toolchain as well as Rust itself. I didn't want > the burden that came with writing an official tutorial, making sure > that you cover every single feature, pleasing every single Github > contributor... > > After learning Rust, I decided that I really liked it. No other > language provided such a promising successor to C++. And I really > missed low-level programming. So I kept evangelizing Rust, and every > so often, contributing official documentation. I figured that even if > my low-level chops weren't up to the task of writing actual patches, I > could at least help with my writing skills. I'd previously contributed > lots of documentation to Ruby and Rails, so it was something that was > very familiar to me. I've often found that I start with documentation > and then move into contributing code, once I get my head wrapped > around everything. Writing is part of my own process of understanding. > > Rust for Rubyists was a great hit, even amongst non-Rubyists (damn my > love of alliteration!). Six months ago, on the eve of the first > anniversary of the initial release of Rust for Rubyists, I [gave a > talk](https://air.mozilla.org/rust-meetup-december-2013/) at the Bay > Area Rust meetup, specifically on the state of Rust's documentation. > In it, I laid out a plan for how I envisioned docs looking in the > future. In the last six months, a lot has improved, but a lot hasn't. > But no more! I'm now going to be able to allocate a significant amount > of my time on getting documentation done. > > I'm also pleased in a meta-sense. You see, by contracting someone to > work on documentation full-time, Mozilla is indicating that they take > Rust and its future very seriously. You can (and I do) talk a lot of > trash on Microsoft, but one of the reasons that the Microsoft platform > is used by so many people around the world is that Microsoft products > often have excellent documentation. I often find that open source > 'products' are technically superior, but are often much harder to use, > because they're built by a community, for free, and very few people > want to write documentation for free. Combined with the work that > Tilde is doing on [Cargo](https://github.com/carlhuda/cargo), Mozilla > is investing a significant amount of effort and dollars into ensuring > that Rust will be a fantastic platform for those developing on it. > Since I love Rust, this makes me very, very happy. > > Forty hours a week is a significant amount of documentation, and I > have a lot of work in front of me. But my first area of focus will be > on the area of Rust's documentation that's the most weak, and > simultaneously the most important: the tutorial. I tackled the first > tip of that iceberg with [my 30 minute > introduction](http://doc.rust-lang.org/master/intro.html), and I'd > like to tweak it too. The main tutorial, however, is the first place > where people go to _really_ learn about Rust and how it works. And the > current tutorial is largely the same as the 0.5 days, back when I > first learned Rust. It suffers from receiving patchwork contributions > from a variety of people, rather than having one unifying vision. 
It's > also much more of a list of features and how to use them than a > coherent tutorial. > > I'm really excited to get to work. Let's all make Rust 1.0 a fantastic release. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- Ron Dahlgren http://dahlgren.so | @ScaleItRon From zwarich at mozilla.com Mon Jun 16 15:17:16 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Mon, 16 Jun 2014 15:17:16 -0700 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> Message-ID: <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> On Jun 16, 2014, at 3:03 PM, Cameron Zwarich wrote: > On Jun 16, 2014, at 2:24 PM, Patrick Walton wrote: > >> On 6/16/14 1:04 PM, Sebastian Gesemann wrote: >>> Am 16.06.2014 19:36, schrieb Patrick Walton: >>>> On 6/16/14 7:32 AM, Sebastian Gesemann wrote: >>>>> Assuming this RFC is accepted: How would I have to implement Add for a >>>>> custom type T where moving doesn't make much sense and I'd rather use >>>>> immutable references to bind the operands? >>>> >>>> You don't implement Add for those types. >>> >>> As far as I'm concerned that's anything but a satisfactory answer. I >>> really don't see the point of your RFC. If anything, it seems to make >>> things worse from my perspective. That's why I asked you for some >>> clarifications. Of course, you don't owe me anything. And I hope you >>> don't take this personally. >> >> I don't see much of a use case for `Add` without move semantics. Does anyone have any? >> >> Strings and vectors, perhaps, but I would argue that having to call `.clone()` on the LHS or RHS (as appropriate) is an improvement, because cloning strings and vectors can be very expensive. > > This applies to Mul rather than Add, but if you are multiplying matrices then you want the destination to not alias either of the sources for vectorization purposes, so passing by reference is preferred. I stated the right case, but the wrong reason. It?s not for vectorization, it?s because it?s not easy to reuse the storage of a matrix while multiplying into it. Cameron From banderson at mozilla.com Mon Jun 16 15:17:44 2014 From: banderson at mozilla.com (Brian Anderson) Date: Mon, 16 Jun 2014 15:17:44 -0700 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: <539F6D08.1020502@mozilla.com> Thanks, Steve! This is going to do wonders for our fit and finish going into the home stretch for 1.0. On 06/16/2014 03:10 PM, Steve Klabnik wrote: > Hey all! I wrote up a blog post that you all should know about: > http://words.steveklabnik.com/rusts-documentation-is-about-to-drastically-improve > > Here's the text, in Markdown: > > Historically, [Rust](http://rust-lang.org/) has had a tough time with > documentation. Such a young programming language changes a lot, which > means that documentation can quickly be out of date. However, Rust is > nearing a 1.0 release, and so that's about to change. I've just signed > a contract with Mozilla, and starting Monday, June 23rd, I will be > devoting forty hours per week of my time to ensuring that Rust has > wonderful documentation. > > A year and a half ago, I was in my hometown for Christmas. 
I happened > upon a link: [Rust 0.5 > released](https://mail.mozilla.org/pipermail/rust-dev/2012-December/002787.html). > I've always enjoyed learning new programming languages, and I had > vaguely heard of Rust before, but didn't really remember what it was > all about. So I dug in. I loved systems programming in college, but > had done web-based stuff my entire professional life, and hadn't > seriously thought about pointers as part of my day-to-day in quite > some time. > > There was just one problem: Rust was really difficult to learn. I > liked what I saw, but there was very little of it. At the same time, I > had been working on some ideas for a new toolchain for my [book on > hypermedia APIs](http://www.designinghypermediaapis.com/), but wanted > to try it out on something else before I took the time to port the > content. And so, [Rust for Rubyists](http://www.rustforrubyists.com/) > was born. I decided that the best way to teach people Rust was to > mimic how I learned Rust. And so, as I learned, I wrote. It ended up > at about fifty pages of two weeks of work. I never contributed it to > the main repository, because for me, it was really about learning, > both about my ebook toolchain as well as Rust itself. I didn't want > the burden that came with writing an official tutorial, making sure > that you cover every single feature, pleasing every single Github > contributor... > > After learning Rust, I decided that I really liked it. No other > language provided such a promising successor to C++. And I really > missed low-level programming. So I kept evangelizing Rust, and every > so often, contributing official documentation. I figured that even if > my low-level chops weren't up to the task of writing actual patches, I > could at least help with my writing skills. I'd previously contributed > lots of documentation to Ruby and Rails, so it was something that was > very familiar to me. I've often found that I start with documentation > and then move into contributing code, once I get my head wrapped > around everything. Writing is part of my own process of understanding. > > Rust for Rubyists was a great hit, even amongst non-Rubyists (damn my > love of alliteration!). Six months ago, on the eve of the first > anniversary of the initial release of Rust for Rubyists, I [gave a > talk](https://air.mozilla.org/rust-meetup-december-2013/) at the Bay > Area Rust meetup, specifically on the state of Rust's documentation. > In it, I laid out a plan for how I envisioned docs looking in the > future. In the last six months, a lot has improved, but a lot hasn't. > But no more! I'm now going to be able to allocate a significant amount > of my time on getting documentation done. > > I'm also pleased in a meta-sense. You see, by contracting someone to > work on documentation full-time, Mozilla is indicating that they take > Rust and its future very seriously. You can (and I do) talk a lot of > trash on Microsoft, but one of the reasons that the Microsoft platform > is used by so many people around the world is that Microsoft products > often have excellent documentation. I often find that open source > 'products' are technically superior, but are often much harder to use, > because they're built by a community, for free, and very few people > want to write documentation for free. 
Combined with the work that > Tilde is doing on [Cargo](https://github.com/carlhuda/cargo), Mozilla > is investing a significant amount of effort and dollars into ensuring > that Rust will be a fantastic platform for those developing on it. > Since I love Rust, this makes me very, very happy. > > Forty hours a week is a significant amount of documentation, and I > have a lot of work in front of me. But my first area of focus will be > on the area of Rust's documentation that's the most weak, and > simultaneously the most important: the tutorial. I tackled the first > tip of that iceberg with [my 30 minute > introduction](http://doc.rust-lang.org/master/intro.html), and I'd > like to tweak it too. The main tutorial, however, is the first place > where people go to _really_ learn about Rust and how it works. And the > current tutorial is largely the same as the 0.5 days, back when I > first learned Rust. It suffers from receiving patchwork contributions > from a variety of people, rather than having one unifying vision. It's > also much more of a list of features and how to use them than a > coherent tutorial. > > I'm really excited to get to work. Let's all make Rust 1.0 a fantastic release. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From pcwalton at mozilla.com Mon Jun 16 15:19:45 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Mon, 16 Jun 2014 15:19:45 -0700 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> Message-ID: <539F6D81.30708@mozilla.com> On 6/16/14 3:17 PM, Cameron Zwarich wrote: > I stated the right case, but the wrong reason. It?s not for > vectorization, it?s because it?s not easy to reuse the storage of a > matrix while multiplying into it. Wouldn't most matrices be implicitly copyable (and thus optimized--or at least optimizable--into by-ref at the ABI level)? Patrick From rusty.gates at icloud.com Mon Jun 16 15:36:44 2014 From: rusty.gates at icloud.com (Tommi) Date: Tue, 17 Jun 2014 01:36:44 +0300 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539F6D81.30708@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F6D81.30708@mozilla.com> Message-ID: <273879C6-41AF-4A1D-BCE8-D5E2C99CBC17@icloud.com> I wrote my suggestion as RFC #124 https://github.com/rust-lang/rfcs/pull/124 From richo at psych0tik.net Mon Jun 16 16:01:27 2014 From: richo at psych0tik.net (richo) Date: Mon, 16 Jun 2014 16:01:27 -0700 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: <20140616230127.GA62375@xenia.local> On 16/06/14 18:10 -0400, Steve Klabnik wrote: >Hey all! I wrote up a blog post that you all should know about: >http://words.steveklabnik.com/rusts-documentation-is-about-to-drastically-improve > >Here's the text, in Markdown: > > >> Snip << I'm super excited to see this happen. Looking forward to it! 
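
To make the trade-off in the operator-overloading thread above concrete (Patrick's point that small, implicitly copyable operand types lose nothing under by-value `self`), here is a minimal sketch of a by-value `Add` impl for a tiny fixed-size vector. It is written against today's `std::ops::Add` with an associated `Output` type rather than the two-parameter trait being discussed in 2014, and the `Vec2` type and its fields are hypothetical; treat it as an illustration of the proposed move/copy semantics, not of the API as it existed at the time.

```rust
use std::ops::Add;

// A small, fixed-size operand type (hypothetical, for illustration only).
// Because it derives Copy, taking `self` and `rhs` by value does not force
// callers to clone: `a + b` leaves both `a` and `b` usable afterwards.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec2 {
    x: f64,
    y: f64,
}

impl Add for Vec2 {
    type Output = Vec2;

    // By-value `self`, in the spirit of the RFC #118 proposal.
    fn add(self, rhs: Vec2) -> Vec2 {
        Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
    }
}

fn main() {
    let a = Vec2 { x: 1.0, y: 2.0 };
    let b = Vec2 { x: 3.0, y: 4.0 };
    let c = a + b; // both operands are Copy, so...
    let d = a + c; // ...`a` is still usable here.
    assert_eq!(d, Vec2 { x: 5.0, y: 8.0 });
    println!("{:?}", d);
}
```

For a heap-allocated operand such as a large matrix or bignum, the same by-value signature would move the operands, which is exactly the point of contention in the thread: callers who still need the value afterwards must clone it explicitly.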
From s.gesemann at gmail.com Mon Jun 16 16:03:15 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Tue, 17 Jun 2014 01:03:15 +0200 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> Message-ID: <539F77B3.4040000@gmail.com> Am 17.06.2014 00:17, schrieb Cameron Zwarich: > On Jun 16, 2014, at 3:03 PM, Cameron Zwarich wrote: > >> On Jun 16, 2014, at 2:24 PM, Patrick Walton wrote: >> >>> On 6/16/14 1:04 PM, Sebastian Gesemann wrote: >>>> Am 16.06.2014 19:36, schrieb Patrick Walton: >>>>> On 6/16/14 7:32 AM, Sebastian Gesemann wrote: >>>>>> Assuming this RFC is accepted: How would I have to implement Add for a >>>>>> custom type T where moving doesn't make much sense and I'd rather use >>>>>> immutable references to bind the operands? >>>>> >>>>> You don't implement Add for those types. >>>> >>>> As far as I'm concerned that's anything but a satisfactory answer. I >>>> really don't see the point of your RFC. If anything, it seems to make >>>> things worse from my perspective. That's why I asked you for some >>>> clarifications. Of course, you don't owe me anything. And I hope you >>>> don't take this personally. >>> >>> I don't see much of a use case for `Add` without move semantics. Does anyone have any? >>> >>> Strings and vectors, perhaps, but I would argue that having to call `.clone()` on the LHS or RHS (as appropriate) is an improvement, because cloning strings and vectors can be very expensive. >> >> This applies to Mul rather than Add, but if you are multiplying matrices then you want the destination to not alias either of the sources for vectorization purposes, so passing by reference is preferred. > > I stated the right case, but the wrong reason. It?s not for vectorization, it?s because it?s not easy to reuse the storage of a matrix while multiplying into it. Good example! I think even with scalar multiplication/division for bignum it's hard to do the calculation in-place of one operand. In those cases this RFC would cost us a couple of unnecessary clones (assuming one wants to keep using the variables as opposed to having them moved from). Suppose I want to evaluate a polynomial over "BigRationals" using Horner's method: a + x * (b + x * (c + x * d)) What I DON'T want to type is a + x.clone() * (b + x.clone() * (c + x.clone() * d)) and I hope that I'm not the only one who would dislike having to write this. So, this would not only be ugly to write down, it would also add unnecessary clones in case Mul impl for that type is not able to calculate the result in-place but allocates new heap storage for the result anyways. I'm leaning towards keeping things as they are and telling people who want to benefit from move semantics to provide additional functions like x.inplace_add(y) with fn inplace_add(self, rhs: &SomeBigType) -> SomeBigType {...} or something like this. - esgeh From ben.striegel at gmail.com Mon Jun 16 20:06:52 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Mon, 16 Jun 2014 23:06:52 -0400 Subject: [rust-dev] iOS cross compilation In-Reply-To: References: <539F190A.4090302@gmail.com> Message-ID: This is great! But how are we testing this? Do we have an iOS buildbot? Or is it liable to break at any moment? 
On Mon, Jun 16, 2014 at 2:04 PM, Alex Crichton wrote: > Nice job Valerii! This is all thanks to the awesome work you've been > doing wrangling compiler-rt and the standard libraries. I'm excited to > see what new applications Rust can serve on iOS! > > On Mon, Jun 16, 2014 at 9:19 AM, Valerii Hiora > wrote: > > Hi, > > > > So finally Rust can cross-compile for iOS (armv7 only for now). BTW, > > it also means that Rust now can be used both for iOS and Android > > low-level development. > > > > Short instructions are available here: > > https://github.com/mozilla/rust/wiki/Doc-building-for-ios > > > > Unfortunately LLVM patch for supporting segmented stacks on armv7 was > > declined by Apple (it used kind of private API) and therefore there is > > no stack protection at all. > > > > It still could be enabled by compiling with a patched LLVM (I can > > provide a patch and instructions if needed). > > > > Everything else should "just work" but let me know if you have any > > problem. > > > > -- > > > > Valerii > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkuper at cs.indiana.edu Mon Jun 16 20:17:29 2014 From: lkuper at cs.indiana.edu (Lindsey Kuper) Date: Mon, 16 Jun 2014 23:17:29 -0400 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: On Mon, Jun 16, 2014 at 6:10 PM, Steve Klabnik wrote: > I've just signed > a contract with Mozilla, and starting Monday, June 23rd, I will be > devoting forty hours per week of my time to ensuring that Rust has > wonderful documentation. This is great news for Rust! I've been hoping this would happen for some time. Steve, let us know how we can help. Lindsey From steve at steveklabnik.com Mon Jun 16 22:43:32 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Tue, 17 Jun 2014 01:43:32 -0400 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: Thanks everyone! :D > Steve, let us know how we can help. I think the best thing that the community can do is go through and add examples in the API docs. I want to have 100% of the standard library having examples by 1.0, but it's last on my list. The reason is that they're nice, small chunks that can easily be tackled by others, whereas things like writing a whole tutorial are large, and take a while. I'd rather spend my time on those things first, and then get to the API stuff. But I can't tell you how valuable it is to look something up, and then cut and paste from the examples. From comexk at gmail.com Mon Jun 16 23:24:08 2014 From: comexk at gmail.com (comex) Date: Tue, 17 Jun 2014 02:24:08 -0400 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539F6076.5090507@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> Message-ID: On Mon, Jun 16, 2014 at 5:24 PM, Patrick Walton wrote: > I don't see much of a use case for `Add` without move semantics. Does anyone > have any? 
> > Strings and vectors, perhaps, but I would argue that having to call > `.clone()` on the LHS or RHS (as appropriate) is an improvement, because > cloning strings and vectors can be very expensive. For a vector, calling .clone() on either side is guaranteed to make a duplicate copy, since a cloned LHS will have capacity = size and will have to be thrown away in favor of a larger combined buffer. From klesnil at centrum.cz Tue Jun 17 01:04:21 2014 From: klesnil at centrum.cz (Jan Klesnil) Date: Tue, 17 Jun 2014 10:04:21 +0200 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> Message-ID: <539FF685.60900@centrum.cz> Hi, Maybe we should look at the problem from other perspective? Currently the x += y expression is sugar for x = x + y. Can we flip it over and make x = x + y expression sugar for x = { let tmp = x; tmp += y }? What to do for non-Copy types then? It can be fixed by having traits for both + and += operators with default implementation of Add for Copy types. trait Add { fn add(&mut self, &rhs : RHS); } trait BinaryAdd { fn add(&self, &rhs : RHS) -> RES; } impl> BinaryAdd for T { fn add(&self, &rhs : RHS) -> T { let mut tmp = self; tmp += rhs; tmp } } For any addable type you either implement Add and Copy, or Add and BinaryAdd if the type is too expensive to be implicitly copyable, or just the BinaryAdd if the result's type is different from Self. Then the rustc may be allowed to use Add instead of BinaryAdd for temporary values and for += expressions: let a = a + b; => a += b; let c = get_new_foo() + b; => let c=get_new_foo(); c+=b; c += a; It will be user's responsibility to provide compatible implementations for Add and BinaryAdd (results, side-effects, etc.). JK On 16.6.2014 16:32, Sebastian Gesemann wrote: > The following message got sent to Patrick instead to the list by mistake. > Sorry, Patrick! > > ---------- Forwarded message ---------- > From: s.gesemann at gmail.com > Date: Mon, Jun 16, 2014 at 4:29 PM > Subject: Re: [rust-dev] &self/&mut self in traits considered harmful(?) > To: Patrick Walton > > > On Sat, Jun 14, 2014 at 2:46 AM, Patrick Walton wrote: >> I have filed RFC #118 for this: >> https://github.com/rust-lang/rfcs/pull/118 >> >> Patrick > Bold move. But I'm not convinced that this is a good idea. I may be > missing something, but forcing a move as opposed to forcing an > immutable reference seems just as bad as an approach. Also, I'm not > sure why you mention C++ rvalue references there. References in C++ > are not objects/values like C++ Pointers or References in Rust. They > are auto-borrowing and auto-deref'ing non-values, so to speak. These > different kinds of L- and Rvalue references combined with overloading > is what makes C++ enable move semantics. > > I think one aspect of the issue is Rust's Trait system itself. It > tries to kill two birds with one stone: (1) Having "Interfaces" with > runtime dispatching where Traits are used as dynamically-sized types > and (2) as type bound for generics. Initially, I found this to be a > very cool Rust feature. But now, I'm not so sure about that anymore. > Back in 2009 when "concepts" were considered for C++ standardization, > I spent much time on understanding the intricacies of that C++ topic. > This initial "concepts" design also tried to define some type > requirements in terms of function signatures. 
But this actually > interacted somewhat badly with rvalue references (among other things) > and I think this is one of the reasons why "concepts lite" (a new and > simplified incarnation of the concepts design, expected to augment > C++1y standard in form of a technical report) replaced the function > signatures with "usage patterns". As a user of some well-behaved type, > I don't really care about what kind of optimizations it offers for + > or * and how they work. I'm just glad that I can "use" the "pattern" > x*y where x and y refer to instances of some type. Whether the > implementer of that type distinguishes between lvalues and rvalues via > overloading or not is kind of an implementation detail that does not > affect how the type is being used syntactically. So, I expect "C++ > concepts lite" to be able to specify type requirements in terms of > "usage patters" in a way that it allows "models" of these "concepts" > to satisfy the requirements in a number of ways (with move > optimizations being optional but possible). > > Another thing I'm not 100% comfortable with (yet?) is the funky way > references are used in Rust in combination with auto-borrowing (for > operators and self at least) and auto-deref'ing while at the same > time, they are used as "values" (like C++ pointers as opposed to C++ > references). I've trouble putting this into words. But it feels to me > like the lines are blurred which could cause some trouble or bad > surprizes. > > Assuming this RFC is accepted: How would I have to implement Add for a > custom type T where moving doesn't make much sense and I'd rather use > immutable references to bind the operands? I could write > > impl Add<&T,T> for &T {...} > > but it seems to me that this requires explicit borrowing in the user code ? la > > let x: T = ...; > let y: T = ...; > let c = &x + &y; > > Or is this also handled via implicit borrowing for operators (making > operators a special case)? > > Still, I find it very weird to impl Add for &T instead of T and have > this asymmetry between &T and T for operands and return value. > > Can you shed some more light on your RFC? Maybe including examples? A > discussion about the implications? How it would affect Trait-lookup, > implicit borrowing etc? What did you mean by "The AutorefArgs stuff in > typeck will be removed; all overloaded operators will typecheck as > though they were DontAutorefArgs."? Many thanks in advance! > > Cheers > sg > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From jcd at sdf.org Tue Jun 17 01:29:51 2014 From: jcd at sdf.org (J. Cliff Dyer) Date: Tue, 17 Jun 2014 10:29:51 +0200 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539F77B3.4040000@gmail.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F77B3.4040000@gmail.com> Message-ID: <20140617102951.4577dc4b@gdoba.domain.local> On Tue, 17 Jun 2014 01:03:15 +0200 Sebastian Gesemann wrote: > I'm leaning towards keeping things as they are and telling people who > want to benefit from move semantics to provide additional functions > like > > x.inplace_add(y) > > with > > fn inplace_add(self, rhs: &SomeBigType) -> SomeBigType {...} > > or something like this. This puts me in mind of the way python handles operator overloading. 
In python, there are several special methods that can be used to define addition: An object can have any or all of the following methods: * foo.__sub__(bar) -- defines the semantics for foo - bar * foo.__rsub__(bar), (for right-subtract) -- defines how to implement bar - foo if bar.__add__(foo) is not defined. * foo.__isub__(bar) -- defines foo -= bar. If __isub__ is not defined, it uses foo.__sub__(bar) instead, and re-assigns the resulting object to foo. Would it be possible to define multiple traits that can be used to implement different desired semantics for the arguments, with a well defined order of resolution? So there would be a Sub and a RefSub<&T, T> for instance, and RefSub gets used if it exists for the relevant types, otherwise it falls back to using Sub. That way, Sub could be kept simple, but by-reference semantics could be supported as well. Cheers, Cliff From rusty.gates at icloud.com Tue Jun 17 02:48:49 2014 From: rusty.gates at icloud.com (Tommi) Date: Tue, 17 Jun 2014 12:48:49 +0300 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539FF685.60900@centrum.cz> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539FF685.60900@centrum.cz> Message-ID: <642FBB78-052D-43E4-AB7C-D5C1079F4E18@icloud.com> Here's one use case your proposal doesn't cover: Subtracting two points returns a vector, which either has the same internal representation as a point or uses point as its internal representation, and thus, despite the return type of the subtraction operation being different from `Self`, it's still eligible for the implicit move-optimization. For example, assuming a `Sub` definition that moves `self`: struct Point { coordinates: Vec } struct Vector { end_point: Point } impl Sub for Point { fn sub(self, rhs: &Point) -> Vector { for (left, right) in self.coordinates.mut_iter().zip(rhs.coordinates.iter()) { *left -= *right; } Vector { end_point: self } } } On 2014-06-17, at 11:04, Jan Klesnil wrote: > Hi, > > Maybe we should look at the problem from other perspective? Currently the x += y expression is sugar for x = x + y. Can we flip it over and make x = x + y expression sugar for x = { let tmp = x; tmp += y }? What to do for non-Copy types then? > > It can be fixed by having traits for both + and += operators with default implementation of Add for Copy types. > > trait Add { > fn add(&mut self, &rhs : RHS); > } > > trait BinaryAdd { > fn add(&self, &rhs : RHS) -> RES; > } > > impl> BinaryAdd for T { > fn add(&self, &rhs : RHS) -> T { > let mut tmp = self; > tmp += rhs; > tmp > } > } > > For any addable type you either implement Add and Copy, or Add and BinaryAdd if the type is too expensive to be implicitly copyable, or just the BinaryAdd if the result's type is different from Self. > > Then the rustc may be allowed to use Add instead of BinaryAdd for temporary values and for += expressions: > let a = a + b; => a += b; > let c = get_new_foo() + b; => let c=get_new_foo(); c+=b; > c += a; > > It will be user's responsibility to provide compatible implementations for Add and BinaryAdd (results, side-effects, etc.). > > JK > > On 16.6.2014 16:32, Sebastian Gesemann wrote: >> The following message got sent to Patrick instead to the list by mistake. >> Sorry, Patrick! >> >> ---------- Forwarded message ---------- >> From: s.gesemann at gmail.com >> Date: Mon, Jun 16, 2014 at 4:29 PM >> Subject: Re: [rust-dev] &self/&mut self in traits considered harmful(?) 
>> To: Patrick Walton >> >> >> On Sat, Jun 14, 2014 at 2:46 AM, Patrick Walton wrote: >>> I have filed RFC #118 for this: >>> https://github.com/rust-lang/rfcs/pull/118 >>> >>> Patrick >> Bold move. But I'm not convinced that this is a good idea. I may be >> missing something, but forcing a move as opposed to forcing an >> immutable reference seems just as bad as an approach. Also, I'm not >> sure why you mention C++ rvalue references there. References in C++ >> are not objects/values like C++ Pointers or References in Rust. They >> are auto-borrowing and auto-deref'ing non-values, so to speak. These >> different kinds of L- and Rvalue references combined with overloading >> is what makes C++ enable move semantics. >> >> I think one aspect of the issue is Rust's Trait system itself. It >> tries to kill two birds with one stone: (1) Having "Interfaces" with >> runtime dispatching where Traits are used as dynamically-sized types >> and (2) as type bound for generics. Initially, I found this to be a >> very cool Rust feature. But now, I'm not so sure about that anymore. >> Back in 2009 when "concepts" were considered for C++ standardization, >> I spent much time on understanding the intricacies of that C++ topic. >> This initial "concepts" design also tried to define some type >> requirements in terms of function signatures. But this actually >> interacted somewhat badly with rvalue references (among other things) >> and I think this is one of the reasons why "concepts lite" (a new and >> simplified incarnation of the concepts design, expected to augment >> C++1y standard in form of a technical report) replaced the function >> signatures with "usage patterns". As a user of some well-behaved type, >> I don't really care about what kind of optimizations it offers for + >> or * and how they work. I'm just glad that I can "use" the "pattern" >> x*y where x and y refer to instances of some type. Whether the >> implementer of that type distinguishes between lvalues and rvalues via >> overloading or not is kind of an implementation detail that does not >> affect how the type is being used syntactically. So, I expect "C++ >> concepts lite" to be able to specify type requirements in terms of >> "usage patters" in a way that it allows "models" of these "concepts" >> to satisfy the requirements in a number of ways (with move >> optimizations being optional but possible). >> >> Another thing I'm not 100% comfortable with (yet?) is the funky way >> references are used in Rust in combination with auto-borrowing (for >> operators and self at least) and auto-deref'ing while at the same >> time, they are used as "values" (like C++ pointers as opposed to C++ >> references). I've trouble putting this into words. But it feels to me >> like the lines are blurred which could cause some trouble or bad >> surprizes. >> >> Assuming this RFC is accepted: How would I have to implement Add for a >> custom type T where moving doesn't make much sense and I'd rather use >> immutable references to bind the operands? I could write >> >> impl Add<&T,T> for &T {...} >> >> but it seems to me that this requires explicit borrowing in the user code ? la >> >> let x: T = ...; >> let y: T = ...; >> let c = &x + &y; >> >> Or is this also handled via implicit borrowing for operators (making >> operators a special case)? >> >> Still, I find it very weird to impl Add for &T instead of T and have >> this asymmetry between &T and T for operands and return value. >> >> Can you shed some more light on your RFC? Maybe including examples? 
A >> discussion about the implications? How it would affect Trait-lookup, >> implicit borrowing etc? What did you mean by "The AutorefArgs stuff in >> typeck will be removed; all overloaded operators will typecheck as >> though they were DontAutorefArgs."? Many thanks in advance! >> >> Cheers >> sg >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From rusty.gates at icloud.com Tue Jun 17 03:43:34 2014 From: rusty.gates at icloud.com (Tommi) Date: Tue, 17 Jun 2014 13:43:34 +0300 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <5399EE4E.60808@mozilla.com> References: <53985939.3010603@aim.com> <5399D078.2040508@mozilla.com> <024A211A-4F99-4FBB-9327-070CB9C0B952@icloud.com> <35FCD7CE-7DE0-4995-B988-D81A81E8AB44@icloud.com> <5399EE4E.60808@mozilla.com> Message-ID: <8C44BD17-D301-4F09-8103-BA67EC337E7A@icloud.com> Can you elaborate a bit more on why/how exactly did the earlier language design (implicitly clone by default if it's absolutely necessary and otherwise move, instead of move by default) result in much more cloning? Because intuitively it seems to me that if you try to use data that has been moved away, then it means that you didn't intend to move the data away in the first place, but to clone it. And vice versa. On 2014-06-12, at 21:15, Patrick Walton wrote: > On 6/12/14 11:15 AM, Tommi wrote: >> On 2014-06-12, at 20:59, Corey Richardson > > wrote: >> >>> Implicit cloning is a non-starter. Clones can be very expensive. >>> Hiding that cost is undesirable and would require adding Clone to the >>> language (it's currently a normal library feature). >> >> But I think it will be easy to make the error of writing the explicit >> .clone() in places where it's not needed. For example: >> >> fn foo(value: T) {} >> >> let x = box 123; >> x.clone().foo(); >> x.clone().foo(); >> >> ...given that `x` is not used after those lines, the last call to >> .clone() is unnecessary. Whereas, if the task of cloning (implicitly) is >> assigned to the compiler, then the compiler can use static analysis to >> make sure such programming errors never occur. The example above would >> become something like: >> >> fn foo(stable value: T) {} >> >> let x = box 123; >> x.foo(); // here `x` gets cloned here >> x.foo(); // here `x` doesn't get cloned because this is the last use of `x` > > We tried that in earlier versions of Rust. There were way too many clones. > > Patrick > From slabode at aim.com Tue Jun 17 04:26:09 2014 From: slabode at aim.com (SiegeLord) Date: Tue, 17 Jun 2014 07:26:09 -0400 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <539F77B3.4040000@gmail.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F77B3.4040000@gmail.com> Message-ID: <53A025D1.2080504@aim.com> On 06/16/2014 07:03 PM, Sebastian Gesemann wrote: > Good example! I think even with scalar multiplication/division for > bignum it's hard to do the calculation in-place of one operand. Each bignum can carry with it some extra space for this purpose. 
> > Suppose I want to evaluate a polynomial over "BigRationals" using > Horner's method: > > a + x * (b + x * (c + x * d)) > > What I DON'T want to type is > > a + x.clone() * (b + x.clone() * (c + x.clone() * d)) But apparently you want to type a.inplace_add(x.clone().inplace_mul(b.inplace_add(x.clone().inplace_mul(c.inplace_add(x.clone().inplace_mul(d)))))) Because that's the alternative you are suggesting to people who want to benefit from move semantics. Why would you deliberately set up a situation where less efficient code is much cleaner to write? This hasn't been the choice made by Rust in the past (consider the overflowing arithmetic getting sugar, but the non-overflowing one not). -SL From slabode at aim.com Tue Jun 17 04:34:24 2014 From: slabode at aim.com (SiegeLord) Date: Tue, 17 Jun 2014 07:34:24 -0400 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> Message-ID: <53A027C0.40206@aim.com> On 06/16/2014 06:17 PM, Cameron Zwarich wrote: > On Jun 16, 2014, at 3:03 PM, Cameron Zwarich wrote: > I stated the right case, but the wrong reason. It?s not for vectorization, it?s because it?s not easy to reuse the storage of a matrix while multiplying into it. Overloading Mul for matrix multiplication would be a mistake, since that operator does not act the same way multiplication acts for scalars. I.e. you'd overload it, but passing two matrices into a generic function could do very unexpected things if the code, e.g., relies on the operation being commutative. Additionally, if your matrices contain the size in their type, then some generic code wouldn't even accept them. Matrix scalar multiplication is a better candidate for the Mul overload. -SL From dpx.infinity at gmail.com Tue Jun 17 04:41:37 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Tue, 17 Jun 2014 15:41:37 +0400 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <53A027C0.40206@aim.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <53A027C0.40206@aim.com> Message-ID: <9B3A0092-0EEE-4099-8684-E2D014D0E5DE@gmail.com> > Overloading Mul for matrix multiplication would be a mistake, since that operator does not act the same way multiplication acts for scalars. I think that one of the main reasons for overloading operators is not their genericity but their usage in the code. let a = Matrix::new(?); let x = Vector::new(?); let b = Vector::new(?); let result = a * x + b; Looks much nicer than let result = a.times(x).plus(b); In mathematical computations you usually use concrete types, and having overloadable operators just makes your code nicer to read. That said, for mathematicians it is absolutely not a problem that multiplication for matrices works differently than for scalars. From rusty.gates at icloud.com Tue Jun 17 04:43:53 2014 From: rusty.gates at icloud.com (Tommi) Date: Tue, 17 Jun 2014 14:43:53 +0300 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) 
In-Reply-To: <53A027C0.40206@aim.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <53A027C0.40206@aim.com> Message-ID: <10473C76-9224-4906-87B9-CD5BEAF9BA77@icloud.com> As it stands, the documentation doesn't say that `Mul` means commutative multiplication. Therefore it would be a mistake to write generic code that assumes that a type which implements `Mul` has commutative multiplication. Implementing `Mul` for two matrices would not be a mistake. On 2014-06-17, at 14:34, SiegeLord wrote: > On 06/16/2014 06:17 PM, Cameron Zwarich wrote: >> On Jun 16, 2014, at 3:03 PM, Cameron Zwarich wrote: >> I stated the right case, but the wrong reason. It?s not for vectorization, it?s because it?s not easy to reuse the storage of a matrix while multiplying into it. > > Overloading Mul for matrix multiplication would be a mistake, since that operator does not act the same way multiplication acts for scalars. I.e. you'd overload it, but passing two matrices into a generic function could do very unexpected things if the code, e.g., relies on the operation being commutative. Additionally, if your matrices contain the size in their type, then some generic code wouldn't even accept them. Matrix scalar multiplication is a better candidate for the Mul overload. > > -SL > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From glaebhoerl at gmail.com Tue Jun 17 11:40:39 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Tue, 17 Jun 2014 20:40:39 +0200 Subject: [rust-dev] Clarification about RFCs In-Reply-To: <5B38DDA5-3E81-4147-9447-70F5C2AF842D@mozilla.com> References: <5B38DDA5-3E81-4147-9447-70F5C2AF842D@mozilla.com> Message-ID: Thanks for the extremely thorough response! I think I don't actually have any questions left; you answered them all. On Mon, Jun 16, 2014 at 4:56 PM, Felix S. Klock II wrote: > Gabor (cc?ing rust-dev)- > > I have filed an issue to track incorporating the answers to these > questions into the RFC process documentation. > > https://github.com/rust-lang/rfcs/issues/121 > > Here is some of my opinionated answers to the questions: > > 1. When an RFC PR is merged as accepted into the repository, then that > implies that we should implement it (or accept a community provided > implementation) whenever we feel it best. This could be a matter of > scratching an itch, or it could be to satisfy a 1.0 requirement; so there > is no hard and fast rule about when the implementation for an RFC will > actually land. > > 2. An RFC closed with ?postponed? is being marked as such because we do > not want to think about the proposal further until post-1.0, and we believe > that we can afford to wait until then to do so. ?Evaluate? is a funny > word; usually something marked as ?postponed? has already passed an > informal first round of evaluation, namely the round of ?do we think we > would ever possibly consider making this change, as outlined here or some > semi-obvious variation of it.? (When the answer to that question is ?no?, > then the appropriate response is to close the RFC, not postpone it.) > > 3. 
We strive to write each RFC in a manner that it will reflect the final > design of the feature; but the nature of the process means that we cannot > expect every merged RFC to actually reflect what the end result will be > when 1.0 is released. The intention, I believe, is to try to keep each RFC > document somewhat in sync with the language feature as planned. But just > because an RFC has been accepted does not mean that the design of the > feature is set in stone; one can file pull-requests to change an RFC if > there is some change to the feature that we want to make, or need to make, > (or have already made, and are going to keep in place). > > 4. If an RFC is accepted, the RFC author is of course free to submit an > implementation, but it is not a requirement that an RFC author drive the > implementation of the change. Each time an RFC PR is accepted and merged > into the repository, a corresponding tracking issue is supposed to be > opened up on the rust repository. A large point of the RFC process is to > help guide community members in selecting subtasks to work on that where > each member can be reasonably confident that their efforts will not be > totally misguided. So, it would probably be best if anyone who plans to > work on implementing a feature actually write a comment *saying* that they > are planning such implementation on the tracking issue on the rust github > repository. Having said that, I do not think we have been strictly > following the latter process; I think currently you would need to also > review the meeting notes to determine if someone might have already claimed > responsibility for implementation. > > 5. The choice of which RFC?s get reviewed is somewhat ad-hoc at the > moment. We do try to post each agenda topic ahead of time in a bulleted > list at the top of the shared etherpad ( > https://etherpad.mozilla.org/Rust-meeting-weekly ) , and RFC?s are no > different in this respect. But in terms of how they are selected, I think > it is largely driven by an informal estimate of whether the comment thread > has reached a steady state (i.e. either died out or not showing any sign of > providing further insight or improvement feedback to the RFC itself). > Other than that, we basically try to make sure that any RFC that we accept > is accepted at the Tuesday meeting, so that there is a formal record of > discussion regarding acceptance. So we do not accept RFC?s at the Thursday > meeting. We may reject RFC?s at either meeting; in other words, the only > RFC activity on Thursdays is closing the ones that have reached a steady > state and that the team agrees we will not be adopting. > > I want to call special attention to the question of "What if the author of > the reviewed RFC isn't a participant in the meetings?? This is an > important issue, since one might worry that the viewpoint of the author > will not be represented at the meeting itself. In general, we try to only > review RFC?s that at least a few people have taken the time to read the > corresponding discussion thread and are prepared to represent the > viewpoints presented there. > > Ideally at least one meeting participant would act as a champion for the > feature (and hopefully also have read the discussion thread). Such a > person need not *personally* desire the feature; they just need to act to > represent its virtues and the community?s desire for it. 
(I think of it > like a criminal defense attorney; they may not approve of their client?s > actions, but they want to ensure their client gets proper legal > representation.) > > But I did have the qualifier ?Ideally? there, since our current process > does not guarantee that such a champion exists. If no champion exists, it > is either because not enough people have read the RFC (and thus we usually > try to postpone making a decision for a later meeting), or because no one > present is willing to champion it (in which case it seems like the best > option is to close the RFC, though I am open to hearing alternative actions > for this scenario). > > ---- > > Did the above help answer your questions? Let me know if I missed the > point of one or more of your questions. > > Cheers, > -Felix > > > On 16 Jun 2014, at 14:44, G?bor Lehel wrote: > > > Hi! > > > > There's a few things about the RFC process and the "meaning" of RFCs > which aren't totally clear to me, and which I'd like to request some > clarification about. > > > > > > 1) Which of the following does submitting an RFC imply? > > > > 1a) We should implement this right away. > > > > 1b) We should implement this before 1.0. > > > > 1c) We should implement this whenever we feel like it. > > > > > > 2) Some RFC PRs get closed and tagged with the "postponed" label. Does > this mean: > > > > 2a) It's too early to implement this proposal, or > > > > 2b) it's too early to *evaluate* this proposal? > > > > > > 3) Are the designs outlined by RFCs supposed to be "incremental" or > "final"? I.e., > > > > 3a) First of all we should make this change, without implying anything > about what happens after that; or > > > > 3b) This is how it should look in the final version of the language? > > > > > > 4) If someone submits an RFC, does she imply that "I am planning to > implement this", or, if an RFC is accepted, does that mean "anyone who > wants to can feel free to implement this"? > > > > > > 5) The reviewing process is somewhat opaque to me. > > > > 5a) What determines which RFCs get reviewed in a given weekly meeting? > > > > 5b) As an observer, how can I tell which RFCs are considered to be in a > ready-for-review or will-be-reviewed-next state? > > > > 5c) What if the author of the reviewed RFC isn't a participant in the > meetings? > > > > 5d) (I might also ask "what determines which RFC PRs get attention from > the team?", but obviously the answer is "whatever they find interesting".) > > > > > > I think that's all... Thanks! > > > > G?bor > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hansjorg at gmail.com Wed Jun 18 01:11:23 2014 From: hansjorg at gmail.com (=?UTF-8?Q?Hans_J=C3=B8rgen_Hoel?=) Date: Wed, 18 Jun 2014 10:11:23 +0200 Subject: [rust-dev] Rust CI Message-ID: Hi, Rust Ci wasn't working for a period due to problems with building the nightly PPA for the platform used by Travis (required GCC version was bumped with no way to specify alternative to configure script). This has been fixed for a while, but it turns out that many Travis auth tokens has expired in the mean time. If you want fix this for your project to start triggering builds again, simply press the red padlock icon next to your project on the frontpage (http://rust-ci.org/) and you will be redirected to GitHub for authentication. 
If you've got any questions, ping me on irc (hansjorg). Regards, Hans J?rgen From zo1980 at gmail.com Wed Jun 18 04:10:59 2014 From: zo1980 at gmail.com (=?UTF-8?B?Wm9sdMOhbiBUw7N0aA==?=) Date: Wed, 18 Jun 2014 13:10:59 +0200 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: Please do not create examples for _all_ entities in the API doc. In case of trivial entities [like Iterator::filter] it would just unnecessarily decrease the conciseness of the doc. Also: think about the amount of update this may make necessary in case Rust language syntax changes. On Tue, Jun 17, 2014 at 7:43 AM, Steve Klabnik wrote: > Thanks everyone! :D > > > Steve, let us know how we can help. > > I think the best thing that the community can do is go through and add > examples in the API docs. I want to have 100% of the standard library > having examples by 1.0, but it's last on my list. The reason is that > they're nice, small chunks that can easily be tackled by others, > whereas things like writing a whole tutorial are large, and take a > while. I'd rather spend my time on those things first, and then get to > the API stuff. But I can't tell you how valuable it is to look > something up, and then cut and paste from the examples. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Wed Jun 18 09:22:05 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Wed, 18 Jun 2014 12:22:05 -0400 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: > In case of trivial entities The problem with this is what's trivial to you isn't trivial to someone else. > think about the amount of update this may make necessary in case Rust language syntax changes. Literally my job. ;) Luckily, the syntax has been pretty stable lately, and most changes have just been mechanical. From matthieu.monrocq at gmail.com Wed Jun 18 09:27:37 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Wed, 18 Jun 2014 18:27:37 +0200 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: On Wed, Jun 18, 2014 at 6:22 PM, Steve Klabnik wrote: > > In case of trivial entities > > The problem with this is what's trivial to you isn't trivial to someone > else. > > > think about the amount of update this may make necessary in case Rust > language syntax changes. > > Literally my job. ;) Luckily, the syntax has been pretty stable > lately, and most changes have just been mechanical. > If you could, it would be awesome to invest in a check that the provided examples compile with the current release of the compiler (possibly as part of the documentation generation). This not only guarantees that the examples are up-to-date, but also helps in locating out-dated examples. On the other hand, this may require more boilerplate to get self-contained examples (that can actually be compiled), so YMMV. -- Matthieu > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From corey at octayn.net Wed Jun 18 09:28:17 2014 From: corey at octayn.net (Corey Richardson) Date: Wed, 18 Jun 2014 09:28:17 -0700 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: We already compile and run all examples in the docs as part of the testsuite. On Wed, Jun 18, 2014 at 9:27 AM, Matthieu Monrocq wrote: > > > > On Wed, Jun 18, 2014 at 6:22 PM, Steve Klabnik > wrote: >> >> > In case of trivial entities >> >> The problem with this is what's trivial to you isn't trivial to someone >> else. >> >> > think about the amount of update this may make necessary in case Rust >> > language syntax changes. >> >> Literally my job. ;) Luckily, the syntax has been pretty stable >> lately, and most changes have just been mechanical. > > > If you could, it would be awesome to invest in a check that the provided > examples compile with the current release of the compiler (possibly as part > of the documentation generation). > > This not only guarantees that the examples are up-to-date, but also helps in > locating out-dated examples. > > On the other hand, this may require more boilerplate to get self-contained > examples (that can actually be compiled), so YMMV. > > -- Matthieu > >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- http://octayn.net/ From zo1980 at gmail.com Wed Jun 18 09:52:49 2014 From: zo1980 at gmail.com (=?UTF-8?B?Wm9sdMOhbiBUw7N0aA==?=) Date: Wed, 18 Jun 2014 18:52:49 +0200 Subject: [rust-dev] Rust's documentation is about to drastically improve Message-ID: ?One long-standing problem of mine with the docs is that they are split to multiple sources: tutorial, manual, RustByExample, RustForC++Programmers, ... . Would be nice to have one central starting location and a strict hierarchy of links from the center to a searched topic, and then being able to find every kind of info about the topic there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zo1980 at gmail.com Wed Jun 18 09:53:48 2014 From: zo1980 at gmail.com (=?UTF-8?B?Wm9sdMOhbiBUw7N0aA==?=) Date: Wed, 18 Jun 2014 18:53:48 +0200 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: A tip: Monitor StackOverflow for Rust tagged questions, and after each reasonable question modify the doc so that going to SO becomes unnecessary for those who sought answer in the doc first. -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Wed Jun 18 10:08:21 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Wed, 18 Jun 2014 19:08:21 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 Message-ID: # Exposition We've debated the subject of integer overflow quite a bit, without much apparent progress. Essentially, we've been running in circles around two core facts: wrapping is bad, but checking is slow. The current consensus seems to be to, albeit grudgingly, stick with the status quo. I think we've established that a perfect, one-size-fits-all solution is not possible. But I don't think this means that we're out of options, or have no room for improvement. 
I think there are several imperfect, partial solutions we could pursue, to address the various use cases in a divide-and-conquer fashion. This is not a formal RFC, more of a grab bag of thoughts and ideas. The central consideration has to be the observation that, while wrapping around on overflow is well-supported by hardware, for the large majority of programs, it's the wrong behavior. Basically, programs are just hoping that overflow won't happen. And if it ever does happen, it's going to result in unexpected behavior and bugs. (Including the possibility of security bugs: not all security bugs are memory safety bugs.) This is a step up from C's insanity of undefined behavior for signed overflow, where the compiler assumes that overflow *cannot* happen and even optimizes based on that assumption, but it's still a sad state of affairs. If we're clearing the bar, that's only because it's been buried underground. We can divide programs into three categories. (I'm using "program" in the general sense of "piece of code which does a thing".) 1) Programs where wrapping around on overflow is the desired semantics. 2) Programs where wrapping around on overflow is not the desired semantics, but performance is not critical. 3) Programs where wrapping around on overflow is not the desired semantics and performance is critical. Programs in (1) are well-served by the language and libraries as they are, and there's not much to do except to avoid regressing. Programs in (2) and (3) are not as well-served. # Checked math For (2), the standard library offers checked math in the `CheckedAdd`, `CheckedMul` etc. traits, as well as integer types of unbounded size: `BigInt` and `BigUint`. This is good, but it's not enough. The acid test has to be whether for non-performance-critical code, people are actually *using* checked math. If they're not, then we've failed. `CheckedAdd` and co. are important to have for flexibility, but they're far too unwieldy for general use. People aren't going to write `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate design might be something like this: * Have traits for all the arithmetic operations for both checking on overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as now), `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. * Offer convenience methods for the Checked traits which perform `unwrap()` automatically. * Have separate sets of integer types which check for overflow and which wrap around on overflow. Whatever they're called: `CheckedU8`, `checked::u8`, `uc8`, ... * Both sets of types implement all of the Checked* and Wrapping* traits. You can use explicit calls to get either behavior with either types. * The checked types use the Checked traits to implement the operator overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping traits to implement them. In other words, the difference between the types is (probably only) in the behavior of the operators. * `BigInt` also implements all of the Wrapping and Checked traits: because overflow never happens, it can claim to do anything if it "does happen". `BigUint` implements all of them except for the Wrapping traits which may underflow, such as `WrappingSub`, because it has nowhere to wrap around to. Another option would be to have just one set of types but two sets of operators, like Swift does. I think that would work as well, or even better, but we've been avoiding having any operators which aren't familiar from C. 
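To make the first option concrete — separate checked and wrapping types whose operators differ — here is a rough sketch of the kind of thing I mean. The names `CheckedU8` and `WrappingU8` are hypothetical, only addition is shown, and the exact trait plumbing and method spellings are placeholders rather than anything final:

    use std::ops::Add;

    #[derive(Copy, Clone, Debug, PartialEq)]
    struct CheckedU8(u8);

    #[derive(Copy, Clone, Debug, PartialEq)]
    struct WrappingU8(u8);

    impl Add for CheckedU8 {
        type Output = CheckedU8;
        fn add(self, rhs: CheckedU8) -> CheckedU8 {
            // `+` fails loudly on overflow instead of silently wrapping.
            CheckedU8(self.0.checked_add(rhs.0).expect("u8 addition overflowed"))
        }
    }

    impl Add for WrappingU8 {
        type Output = WrappingU8;
        fn add(self, rhs: WrappingU8) -> WrappingU8 {
            // `+` wraps around on overflow, explicitly and by design.
            WrappingU8(self.0.wrapping_add(rhs.0))
        }
    }

    fn main() {
        assert_eq!(WrappingU8(255) + WrappingU8(1), WrappingU8(0)); // wraps quietly
        let _ = CheckedU8(255) + CheckedU8(1); // the overflow is caught at runtime
    }

The point is only that `+ 2` stays ergonomic with either set of types, while the choice between checking and wrapping is made once, at the type level, instead of at every call site.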
# Unbounded integers While checked math helps catch instances of overflow and prevent misbehaviors and bugs, many programs would prefer integer types which do the right thing and don't overflow in the first place. For this, again, we currently have `BigInt` and `BigUint`. There's one problem with these: because they may allocate, they no longer `Copy`, which means that they can't be just drop-in replacements for the fixed-size types. To partially address this, once we have tracing GC, and if we manage to make `Gc: Copy`, we should add unbounded `Integer` (as in Haskell) and `Natural` types which internally use `Gc`, and so are also `Copy`. (In exchange, they wouldn't be `Send`, but that's far less pervasive.) These would (again, asides from `Send`) look and feel just like the built-in fixed-size types, while having the semantics of actual mathematical integers, resp. naturals (up to resource exhaustion of course). They would be ideal for code which is not performance critical and doesn't mind incurring, or already uses, garbage collection. For those cases, you wouldn't have to think about the tradeoffs, or make difficult choices: `Integer` is what you use. One concern with this would be the possibility of programs incurring GC accidentally by using these types. There's several ways to deal with this: * Note the fact that they use GC prominently in the documentation. * Make sure the No-GC lint catches any use of them. * Offer a "no GC in this task" mode which fails the task if GC allocation is invoked, to catch mistakes at runtime. I think these would be more than adequate to address the concern. # Between a rock and a hard place Having dispatched the "easy" cases above, for category #3 we're left between the rock (wraparound on overflow is wrong) and the hard place (checking for overflow is slow). Even here, we may have options. An observation: * We are adamantly opposed to compiler switches to turn off array bounds checking, because we are unwilling to compromise memory safety. * We are relatively unbothered by unchecked arithmetic, because it *doesn't* compromise memory safety. Taking these together, I think we should at least be less than adamantly opposed to compiler switches for enabling or disabling checked arithmetic. Consider the status quo. People use integer types which wrap on overflow. If they ever do overflow, it means misbehaviors and bugs. If we had a compiler flag to turn on checked arithmetic, even if it were only used a few times in testing, it would be a strict improvement: more bugs would be caught with less effort. But we clearly can't just add this flag for existing types, because they're well-defined to wrap around on overflow, and some programs (category #1) rely on this. So we need to have separate types. One option is therefore to just define this set of types as failing the task on overflow if checked arithmetic is enabled, and to wrap around if it's disabled. But it doesn't necessarily make sense to specify wraparound in the latter case, given that programs are not supposed to depend on it: they may be compiled with either flag, and should avoid overflow. Another observation: * Undefined behavior is anathema to the safe subset of the language. That would mean that it's not safe. * But *unspecified results* are maybe not so bad. We might already have them for bit-shifts. (Question for the audience: do we?) 
If unspecified results are acceptable, then we could instead say that these types fail on overflow if checked arithmetic is enabled, and have unspecified results if it isn't. But saying they wrap around is fine as well. This way, we can put off the choice between the rock and the hard place from program-writing time to compile time, at least. # Defaults Even if we provide the various options from above, from the perspective of what types people end up using, defaults are very important. There's two kinds of defaults: * The de jure default, inferred by the type system in the absence of other information, which used to be `int`. Thankfully, we're removing this. * The de facto, cultural default. For instance, if there is a type called "int", most people will use it without thinking. The latter question is still something we need to think about. Should we have a clear cultural default? Or should we force people to explicitly choose between checked and wrapping arithmetic? For the most part, this boils down to: * If `int` is checked, the default is slow * If `int` wraps, the default is wrong * If there is no `int`, people are confused Regarding the first, we seem to be living in deathly fear of someone naively writing an arithmetic benchmark in Rust, putting it up on the internet, and saying, "Look: Rust is slow". This is not an unrealistic scenario, sadly. The question is whether we'd rather have more programs be accidentally incorrect in order to avoid bad publicity from benchmarks being accidentally slow. Regarding the third, even if the only types we have are `intc`, `intw`, `ic8`, `iw8`, and so on, we *still* can't entirely escape creating a cultural default, because we still need to choose types for functions in the standard library and for built-in operations like array indexing. Then the path of least resistance for any given program will be to use the same types. There's one thing that might help resolve this conundrum, which is if we consider the previously-described scheme with compiler flags to control checked arithmetic to be acceptable. In that case, I think those types would be the clear choice to be the de facto defaults. Then we would have: * `i8`, `u8` .. `i64`, `u64`, `int`, and `uint` types, which fail the task on overflow if checked arithmetic is turned on, and either wrap around or have an unspecified result if it's off * a corresponding set of types somewhere in the standard library, which wrap around no matter what * and another set of corresponding types, which are checked no matter what -G?bor -------------- next part -------------- An HTML attachment was scrubbed... URL: From uther.ii at gmail.com Wed Jun 18 11:04:07 2014 From: uther.ii at gmail.com (Uther) Date: Wed, 18 Jun 2014 20:04:07 +0200 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: <53A1D497.5000300@gmail.com> I agree both points of view. Having examples for everything is great but displaying examples is bad for the conciseness of the doc. IMHO I think that code blocs are already too much eye-candy. I think that code blocks should be hidden and we should click on a button/link to display them. Le 18/06/2014 18:22, Steve Klabnik a ?crit : >> In case of trivial entities > The problem with this is what's trivial to you isn't trivial to > someone else. > >> think about the amount of update this may make necessary in case Rust >> language syntax changes. > Literally my job. 
Luckily, the syntax has been pretty stable > lately, and most changes have just been mechanical. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From dnfagnan at gmail.com Wed Jun 18 11:05:28 2014 From: dnfagnan at gmail.com (Daniel Fagnan) Date: Wed, 18 Jun 2014 12:05:28 -0600 Subject: [rust-dev] Rust's documentation is about to drastically improve In-Reply-To: References: Message-ID: Oops, forgot to reply to all. RustByExample is not an official documentation source. The manual is severely outdated and should not be used. So there's the curated docs, and the generated ones. -- Daniel Fagnan @TheHydroImpulse http://hydrocodedesign.com M: (780) 983-4997 On Wed, Jun 18, 2014 at 10:52 AM, Zoltán Tóth wrote: > One long-standing problem of mine with the docs is that they are split into > multiple sources: tutorial, manual, RustByExample, RustForC++Programmers, > ... . It would be nice to have one central starting location and a strict > hierarchy of links from the center to a searched topic, and then being able > to find every kind of info about the topic there. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Wed Jun 18 11:07:42 2014 From: banderson at mozilla.com (Brian Anderson) Date: Wed, 18 Jun 2014 11:07:42 -0700 Subject: [rust-dev] Rust CI In-Reply-To: References: Message-ID: <53A1D56E.6010801@mozilla.com> Thanks, Hans! Rust CI is a fantastic resource and I'm glad it's running smoothly again. On 06/18/2014 01:11 AM, Hans Jørgen Hoel wrote: > Hi, > > Rust CI wasn't working for a period due to problems with building the > nightly PPA for the platform used by Travis (the required GCC version was > bumped with no way to specify an alternative to the configure script). > > This has been fixed for a while, but it turns out that many Travis > auth tokens have expired in the meantime. > > If you want to fix this for your project to start triggering builds > again, simply press the red padlock icon next to your project on the > frontpage (http://rust-ci.org/) and you will be redirected to GitHub > for authentication. > > If you've got any questions, ping me on irc (hansjorg). > > Regards, > > Hans Jørgen > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From bascule at gmail.com Wed Jun 18 11:15:31 2014 From: bascule at gmail.com (Tony Arcieri) Date: Wed, 18 Jun 2014 11:15:31 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: On Wed, Jun 18, 2014 at 10:08 AM, Gábor Lehel wrote: > > # Between a rock and a hard place > > Having dispatched the "easy" cases above, for category #3 we're left > between the rock (wraparound on overflow is wrong) and the hard place > (checking for overflow is slow). > > Even here, we may have options.
> I really like what Swift did: define two sets of operators, a default one which checks/errors on overflow, and a second set of "overflow operators" (which look like &+ &- etc) when you need the performance of unchecked operations or otherwise desire overflow behavior: https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/AdvancedOperators.html#//apple_ref/doc/uid/TP40014097-CH27-XID_37 -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Wed Jun 18 11:20:26 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Wed, 18 Jun 2014 14:20:26 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: <53A1D86A.9040208@gmail.com> On 18/06/14 01:08 PM, G?bor Lehel wrote: > # Exposition > > We've debated the subject of integer overflow quite a bit, without much > apparent progress. Essentially, we've been running in circles around two > core facts: wrapping is bad, but checking is slow. The current consensus > seems to be to, albeit grudgingly, stick with the status quo. > > I think we've established that a perfect, one-size-fits-all solution is > not possible. But I don't think this means that we're out of options, or > have no room for improvement. I think there are several imperfect, > partial solutions we could pursue, to address the various use cases in a > divide-and-conquer fashion. > > This is not a formal RFC, more of a grab bag of thoughts and ideas. > > The central consideration has to be the observation that, while wrapping > around on overflow is well-supported by hardware, for the large majority > of programs, it's the wrong behavior. > > Basically, programs are just hoping that overflow won't happen. And if > it ever does happen, it's going to result in unexpected behavior and > bugs. (Including the possibility of security bugs: not all security bugs > are memory safety bugs.) This is a step up from C's insanity of > undefined behavior for signed overflow, where the compiler assumes that > overflow *cannot* happen and even optimizes based on that assumption, > but it's still a sad state of affairs. If we're clearing the bar, that's > only because it's been buried underground. > > We can divide programs into three categories. (I'm using "program" in > the general sense of "piece of code which does a thing".) > > 1) Programs where wrapping around on overflow is the desired semantics. > > 2) Programs where wrapping around on overflow is not the desired > semantics, but performance is not critical. If performance wasn't critical, the program wouldn't be written in Rust. The language isn't aimed at use cases where performance isn't a bug deal, as it makes many sacrifices to provide the level of control that's available. > 3) Programs where wrapping around on overflow is not the desired > semantics and performance is critical. > > Programs in (1) are well-served by the language and libraries as they > are, and there's not much to do except to avoid regressing. > > Programs in (2) and (3) are not as well-served. > > > # Checked math > > For (2), the standard library offers checked math in the `CheckedAdd`, > `CheckedMul` etc. traits, as well as integer types of unbounded size: > `BigInt` and `BigUint`. This is good, but it's not enough. The acid test > has to be whether for non-performance-critical code, people are actually > *using* checked math. If they're not, then we've failed. > > `CheckedAdd` and co. 
are important to have for flexibility, but they're > far too unwieldy for general use. People aren't going to write > `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate > design might be something like this: > > * Have traits for all the arithmetic operations for both checking on > overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as > now), `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. > > * Offer convenience methods for the Checked traits which perform > `unwrap()` automatically. > > * Have separate sets of integer types which check for overflow and > which wrap around on overflow. Whatever they're called: `CheckedU8`, > `checked::u8`, `uc8`, ... > > * Both sets of types implement all of the Checked* and Wrapping* > traits. You can use explicit calls to get either behavior with either types. > > * The checked types use the Checked traits to implement the operator > overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping > traits to implement them. In other words, the difference between the > types is (probably only) in the behavior of the operators. > > * `BigInt` also implements all of the Wrapping and Checked traits: > because overflow never happens, it can claim to do anything if it "does > happen". `BigUint` implements all of them except for the Wrapping traits > which may underflow, such as `WrappingSub`, because it has nowhere to > wrap around to. > > Another option would be to have just one set of types but two sets of > operators, like Swift does. I think that would work as well, or even > better, but we've been avoiding having any operators which aren't > familiar from C. > > > # Unbounded integers > > While checked math helps catch instances of overflow and prevent > misbehaviors and bugs, many programs would prefer integer types which do > the right thing and don't overflow in the first place. For this, again, > we currently have `BigInt` and `BigUint`. There's one problem with > these: because they may allocate, they no longer `Copy`, which means > that they can't be just drop-in replacements for the fixed-size types. > > > To partially address this, once we have tracing GC, and if we manage to > make `Gc: Copy`, we should add unbounded `Integer` (as in Haskell) > and `Natural` types which internally use `Gc`, and so are also `Copy`. > (In exchange, they wouldn't be `Send`, but that's far less pervasive.) > These would (again, asides from `Send`) look and feel just like the > built-in fixed-size types, while having the semantics of actual > mathematical integers, resp. naturals (up to resource exhaustion of > course). They would be ideal for code which is not performance critical > and doesn't mind incurring, or already uses, garbage collection. For > those cases, you wouldn't have to think about the tradeoffs, or make > difficult choices: `Integer` is what you use. A tracing garbage collector for Rust is a possibility but not a certainty. I don't think it would make sense to have `Gc` support `Copy` but have it left out for `Rc`. The fact that an arbitrary compiler decision like that would determine the most convenient type is a great reason to avoid making that arbitrary choice. There's no opportunity for cycles in integers, and `Rc` will be faster along with using far less memory. It doesn't have the overhead associated with reference counting in other languages due to being task-local (not atomic) and much of the reference counting is elided by move semantics / borrows. 
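As a small illustration of that last point (a toy example, written against the current `Rc` API, nothing more): moving an `Rc` hands over ownership without touching the count at all, and only an explicit `clone` increments it, so the counter traffic tracks clones rather than every use:

    use std::rc::Rc;

    fn main() {
        let a: Rc<Vec<u64>> = Rc::new(vec![1, 2, 3]);
        let b = a.clone();   // one non-atomic increment
        let c = b;           // a plain move: no count traffic at all
        assert_eq!(Rc::strong_count(&a), 2);
        drop(c);             // the decrement happens only when the value is dropped
        assert_eq!(Rc::strong_count(&a), 1);
    }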
With the help of sized deallocation, Rust can have an incredibly fast allocator implementation. Since `Rc` is task-local, it also doesn't need to be using the same allocator entry point as sendable types. It can make use of a thread-local allocator with less complexity and overhead, although this could also be available on an opt-in basis for sendable types by changing the allocator parameter from the default. > One concern with this would be the possibility of programs incurring GC > accidentally by using these types. There's several ways to deal with this: > > * Note the fact that they use GC prominently in the documentation. > > * Make sure the No-GC lint catches any use of them. > > * Offer a "no GC in this task" mode which fails the task if GC > allocation is invoked, to catch mistakes at runtime. > > I think these would be more than adequate to address the concern. I don't think encouraging tracing garbage collection is appropriate for a language designed around avoiding it. It would be fine to have it as a choice if it never gets in the way, but it shouldn't be promoted as a default. > # Between a rock and a hard place > > Having dispatched the "easy" cases above, for category #3 we're left > between the rock (wraparound on overflow is wrong) and the hard place > (checking for overflow is slow). > > Even here, we may have options. > > An observation: > > * We are adamantly opposed to compiler switches to turn off array > bounds checking, because we are unwilling to compromise memory safety. > > * We are relatively unbothered by unchecked arithmetic, because it > *doesn't* compromise memory safety. > > Taking these together, I think we should at least be less than adamantly > opposed to compiler switches for enabling or disabling checked arithmetic. I think compiler switches or attributes enabling a different dialect of the language are a bad idea as a whole. Code from libraries is directly mixed into other crates, so changing the semantics of the language is inherently broken. > Consider the status quo. People use integer types which wrap on > overflow. If they ever do overflow, it means misbehaviors and bugs. If > we had a compiler flag to turn on checked arithmetic, even if it were > only used a few times in testing, it would be a strict improvement: more > bugs would be caught with less effort. > > But we clearly can't just add this flag for existing types, because > they're well-defined to wrap around on overflow, and some programs > (category #1) rely on this. So we need to have separate types. > > One option is therefore to just define this set of types as failing the > task on overflow if checked arithmetic is enabled, and to wrap around if > it's disabled. But it doesn't necessarily make sense to specify > wraparound in the latter case, given that programs are not supposed to > depend on it: they may be compiled with either flag, and should avoid > overflow. > > Another observation: > > * Undefined behavior is anathema to the safe subset of the language. > That would mean that it's not safe. > > * But *unspecified results* are maybe not so bad. We might already have > them for bit-shifts. (Question for the audience: do we?) > > If unspecified results are acceptable, then we could instead say that > these types fail on overflow if checked arithmetic is enabled, and have > unspecified results if it isn't. But saying they wrap around is fine as > well. 
> > This way, we can put off the choice between the rock and the hard place > from program-writing time to compile time, at least. > > > # Defaults > > Even if we provide the various options from above, from the perspective > of what types people end up using, defaults are very important. > > There's two kinds of defaults: > > * The de jure default, inferred by the type system in the absence of > other information, which used to be `int`. Thankfully, we're removing this. > > * The de facto, cultural default. For instance, if there is a type > called "int", most people will use it without thinking. > > The latter question is still something we need to think about. Should we > have a clear cultural default? Or should we force people to explicitly > choose between checked and wrapping arithmetic? > > For the most part, this boils down to: > > * If `int` is checked, the default is slow > > * If `int` wraps, the default is wrong > > * If there is no `int`, people are confused > > Regarding the first, we seem to be living in deathly fear of someone > naively writing an arithmetic benchmark in Rust, putting it up on the > internet, and saying, "Look: Rust is slow". This is not an unrealistic > scenario, sadly. The question is whether we'd rather have more programs > be accidentally incorrect in order to avoid bad publicity from > benchmarks being accidentally slow. > > Regarding the third, even if the only types we have are `intc`, `intw`, > `ic8`, `iw8`, and so on, we *still* can't entirely escape creating a > cultural default, because we still need to choose types for functions in > the standard library and for built-in operations like array indexing. > Then the path of least resistance for any given program will be to use > the same types. > > There's one thing that might help resolve this conundrum, which is if we > consider the previously-described scheme with compiler flags to control > checked arithmetic to be acceptable. In that case, I think those types > would be the clear choice to be the de facto defaults. Then we would have: > > * `i8`, `u8` .. `i64`, `u64`, `int`, and `uint` types, which fail the > task on overflow if checked arithmetic is turned on, and either wrap > around or have an unspecified result if it's off > > * a corresponding set of types somewhere in the standard library, which > wrap around no matter what > > * and another set of corresponding types, which are checked no matter what > > > -G?bor -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From banderson at mozilla.com Wed Jun 18 11:21:06 2014 From: banderson at mozilla.com (Brian Anderson) Date: Wed, 18 Jun 2014 11:21:06 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: <53A1D892.5080403@mozilla.com> On 06/18/2014 10:08 AM, G?bor Lehel wrote: > > # Checked math > > For (2), the standard library offers checked math in the `CheckedAdd`, > `CheckedMul` etc. traits, as well as integer types of unbounded size: > `BigInt` and `BigUint`. This is good, but it's not enough. The acid > test has to be whether for non-performance-critical code, people are > actually *using* checked math. If they're not, then we've failed. > > `CheckedAdd` and co. are important to have for flexibility, but > they're far too unwieldy for general use. People aren't going to write > `.checked_add(2).unwrap()` when they can write `+ 2`. 
A more adequate > design might be something like this: > > * Have traits for all the arithmetic operations for both checking on > overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as > now), `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. > > * Offer convenience methods for the Checked traits which perform > `unwrap()` automatically. > > * Have separate sets of integer types which check for overflow and > which wrap around on overflow. Whatever they're called: `CheckedU8`, > `checked::u8`, `uc8`, ... > > * Both sets of types implement all of the Checked* and Wrapping* > traits. You can use explicit calls to get either behavior with either > types. > > * The checked types use the Checked traits to implement the operator > overloads (`Add`, Mul`, etc.), while the wrapping types use the > Wrapping traits to implement them. In other words, the difference > between the types is (probably only) in the behavior of the operators. > > * `BigInt` also implements all of the Wrapping and Checked traits: > because overflow never happens, it can claim to do anything if it > "does happen". `BigUint` implements all of them except for the > Wrapping traits which may underflow, such as `WrappingSub`, because it > has nowhere to wrap around to. > > Another option would be to have just one set of types but two sets of > operators, like Swift does. I think that would work as well, or even > better, but we've been avoiding having any operators which aren't > familiar from C. The general flavor of this proposal w/r/t checked arithmetic sounds pretty reasonable to me, and we can probably make progress on this now. I particularly think that having checked types that use operator overloading is important for ergonomics. From comexk at gmail.com Wed Jun 18 12:40:41 2014 From: comexk at gmail.com (comex) Date: Wed, 18 Jun 2014 15:40:41 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: On Wed, Jun 18, 2014 at 1:08 PM, G?bor Lehel wrote: > To partially address this, once we have tracing GC, and if we manage to make > `Gc: Copy`, we should add unbounded `Integer` (as in Haskell) and > `Natural` types which internally use `Gc`, and so are also `Copy`. (In > exchange, they wouldn't be `Send`, but that's far less pervasive.) Wait, what? Since when is sharing data between threads an uncommon use case? (Personally I think this more points to the unwieldiness of typing .clone() for cheap and potentially frequent clones like Rc...) From gmaxwell at gmail.com Wed Jun 18 13:05:45 2014 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Wed, 18 Jun 2014 13:05:45 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: On Wed, Jun 18, 2014 at 10:08 AM, G?bor Lehel wrote: > memory safety bugs.) This is a step up from C's insanity of undefined > behavior for signed overflow, where the compiler assumes that overflow > *cannot* happen and even optimizes based on that assumption, but it's still > a sad state of affairs. C's behavior is not without an advantage. It means that every operation on signed values in C has an implicit latent assertion for analysis tools: If wrapping can happen in operation the program is wrong, end of story. This means you can use existing static and dynamic analysis tools while testing and have a zero false positive rate? not just on your own code but on any third party code you're depending on too. In languages like rust where signed overflow is defined, no such promises exists? 
signed overflow at runtime might be perfectly valid behavior, and so analysis and testing require more work to produce useful results. You might impose a standard on your own code that requires that all valid signed overflow must be annotated in some manner, but this does nothing for third party code (including the standard libraries). The value here persists even when there is normally no checking at runtime, because the tools can still be run sometimes? which is less of a promise than always on runtime checking but it also has no runtime cost. So I think there would be a value in rust of having types for which wrap is communicated by the developer as being invalid, even if it were not normally checked at runtime. Being able to switch between safety levels is not generally the rust way? or so it seems to me? and may not be justifiably in cases where the risk vs cost ratio is especially high (e.g. bounds checking on memory)... but I think it's better than not having the safety facility at all. The fact that C can optimize non-overflow is also fairly useful in proving loop bounds and allowing the autovectorization to work. I've certantly had signal processing codebases where this made a difference, but I'm not sure if the same communication to the optimizer might not be available in other ways in rust. > `CheckedAdd` and co. are important to have for flexibility, but they're far > too unwieldy for general use. People aren't going to write > `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate design > might be something like this: Not only does friction like that discourage use? it also gets in the way of people switching behaviors between testing and production when performance considerations really do preclude always on testing. From danielmicay at gmail.com Wed Jun 18 13:19:24 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Wed, 18 Jun 2014 16:19:24 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: <53A1F44C.6090404@gmail.com> On 18/06/14 03:40 PM, comex wrote: > On Wed, Jun 18, 2014 at 1:08 PM, G?bor Lehel wrote: >> To partially address this, once we have tracing GC, and if we manage to make >> `Gc: Copy`, we should add unbounded `Integer` (as in Haskell) and >> `Natural` types which internally use `Gc`, and so are also `Copy`. (In >> exchange, they wouldn't be `Send`, but that's far less pervasive.) > > Wait, what? Since when is sharing data between threads an uncommon use case? Data remaining local to the thread it was allocated in is the common case. That doesn't mean that sending dynamic allocations to other tasks or sharing dynamic allocations is bad. `Rc` is inherently local to a thread, so it might as well be using an allocator leveraging that. > (Personally I think this more points to the unwieldiness of typing > .clone() for cheap and potentially frequent clones like Rc...) Either way, it doesn't make sense to make a special case for `Gc`. If `copy_nonoverlapping_memory` isn't enough to move it somewhere, then it's not `Copy`. A special-case shouldn't be arbitrarily created for it without providing the same thing for user-defined types. That's exactly the kind of poor design that Rust has been fleeing from. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jpakkane at gmail.com Wed Jun 18 15:17:42 2014 From: jpakkane at gmail.com (Jussi Pakkanen) Date: Thu, 19 Jun 2014 01:17:42 +0300 Subject: [rust-dev] Compiling Rust programs with the Meson build system Message-ID: Hi I'm working on a build system called Meson ( https://jpakkane.github.io/meson/) and I figured I'd add native Rust support. Here's what a build definition for a simple Rust application ended up looking: ----- project('rustproject', 'rust') executable('rustprog', 'prog.rs') ---- This gives you all the features you'd expect such as different build types (debug/release/etc), accurate dependency tracking via --dep-info, unit tests, install targets and so on. Shared library support is there, but I need to first fix one issue before it will actually work. This has to do with the fact that you can't know beforehand what the output file name will be (if you set it manually with -o, rustc will not link against it). If you want to try it our yourself, here's the steps: - check out Meson's git trunk: https://github.com/jpakkane/meson - cd into it, mkdir buildtest - ./meson.py test\ cases/rust/1\ basic buildtest - cd buildtest - ninja (or ninja-build if you are on Fedora) Test 2 does not work because of the above mentioned issue, so you probably don't want to run it. Feel free to try it out. If you have any questions I'm happy to answer them. -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Wed Jun 18 15:21:10 2014 From: corey at octayn.net (Corey Richardson) Date: Wed, 18 Jun 2014 15:21:10 -0700 Subject: [rust-dev] Compiling Rust programs with the Meson build system In-Reply-To: References: Message-ID: Hey Jussi, Very cool! Always happy to see tools with Rust support :) As to file names, do you know about `rustc --crate-file-name`? On Wed, Jun 18, 2014 at 3:17 PM, Jussi Pakkanen wrote: > Hi > > I'm working on a build system called Meson > (https://jpakkane.github.io/meson/) and I figured I'd add native Rust > support. Here's what a build definition for a simple Rust application ended > up looking: > > ----- > > project('rustproject', 'rust') > executable('rustprog', 'prog.rs') > > ---- > > This gives you all the features you'd expect such as different build types > (debug/release/etc), accurate dependency tracking via --dep-info, unit > tests, install targets and so on. Shared library support is there, but I > need to first fix one issue before it will actually work. This has to do > with the fact that you can't know beforehand what the output file name will > be (if you set it manually with -o, rustc will not link against it). > > If you want to try it our yourself, here's the steps: > > - check out Meson's git trunk: https://github.com/jpakkane/meson > - cd into it, mkdir buildtest > - ./meson.py test\ cases/rust/1\ basic buildtest > - cd buildtest > - ninja (or ninja-build if you are on Fedora) > > Test 2 does not work because of the above mentioned issue, so you probably > don't want to run it. > > Feel free to try it out. If you have any questions I'm happy to answer them. 
> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- http://octayn.net/ From hayakawa at valinux.co.jp Wed Jun 18 16:31:19 2014 From: hayakawa at valinux.co.jp (Akira Hayakawa) Date: Thu, 19 Jun 2014 08:31:19 +0900 Subject: [rust-dev] Compiling Rust programs with the Meson build system In-Reply-To: References: Message-ID: <20140619083119.b37dc1cc37c57316fac5fb94@valinux.co.jp> Jussi, Is this a program to generate ninja script? I have once prototyped such system (but far imperfect from yours) that generates ninja script through Rake. https://github.com/akiradeveloper/ninja-rake As a great fun of ninja build system, I am really looking forward to the Rust support. - Akira From jhm456 at gmail.com Thu Jun 19 00:05:00 2014 From: jhm456 at gmail.com (Jerry Morrison) Date: Thu, 19 Jun 2014 00:05:00 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A1D892.5080403@mozilla.com> References: <53A1D892.5080403@mozilla.com> Message-ID: Nice analysis! Over what scope should programmers pick between G?bor's 3 categories? The "wraparound is desired" category should only occur in narrow parts of code, like computing a hash. That suits a wraparound-operator better than a wraparound-type, and certainly better than a compiler switch. And it doesn't make sense for a type like 'int' that doesn't have a fixed size. The "wraparound is undesired but performance is critical" category occurs in the most performance critical bits of code [I'm doubting that all parts of all Rust programs are performance critical], and programmers need to make the trade-off over that limited scope. Maybe via operators or types, but not by a compiler switch over whole compilation units. That leaves "wraparound is undesired and performance is not critical" category for everything else. The choice between checked vs. unbounded sounds like typing. BigUint is weird: It can underflow but not overflow. When you use its value in a more bounded way you'll need to bounds-check it then, whether it can go negative or not. Wouldn't it be easier to discard it than squeeze it into the wraparound or checked models? On Wed, Jun 18, 2014 at 11:21 AM, Brian Anderson wrote: > > On 06/18/2014 10:08 AM, G?bor Lehel wrote: > >> >> # Checked math >> >> For (2), the standard library offers checked math in the `CheckedAdd`, >> `CheckedMul` etc. traits, as well as integer types of unbounded size: >> `BigInt` and `BigUint`. This is good, but it's not enough. The acid test >> has to be whether for non-performance-critical code, people are actually >> *using* checked math. If they're not, then we've failed. >> >> `CheckedAdd` and co. are important to have for flexibility, but they're >> far too unwieldy for general use. People aren't going to write >> `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate >> design might be something like this: >> >> * Have traits for all the arithmetic operations for both checking on >> overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as now), >> `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. >> >> * Offer convenience methods for the Checked traits which perform >> `unwrap()` automatically. >> >> * Have separate sets of integer types which check for overflow and which >> wrap around on overflow. Whatever they're called: `CheckedU8`, >> `checked::u8`, `uc8`, ... >> >> * Both sets of types implement all of the Checked* and Wrapping* traits. 
>> You can use explicit calls to get either behavior with either types. >> >> * The checked types use the Checked traits to implement the operator >> overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping >> traits to implement them. In other words, the difference between the types >> is (probably only) in the behavior of the operators. >> >> * `BigInt` also implements all of the Wrapping and Checked traits: >> because overflow never happens, it can claim to do anything if it "does >> happen". `BigUint` implements all of them except for the Wrapping traits >> which may underflow, such as `WrappingSub`, because it has nowhere to wrap >> around to. >> >> Another option would be to have just one set of types but two sets of >> operators, like Swift does. I think that would work as well, or even >> better, but we've been avoiding having any operators which aren't familiar >> from C. >> > > The general flavor of this proposal w/r/t checked arithmetic sounds pretty > reasonable to me, and we can probably make progress on this now. I > particularly think that having checked types that use operator overloading > is important for ergonomics. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpakkane at gmail.com Thu Jun 19 00:17:19 2014 From: jpakkane at gmail.com (Jussi Pakkanen) Date: Thu, 19 Jun 2014 10:17:19 +0300 Subject: [rust-dev] Compiling Rust programs with the Meson build system In-Reply-To: References: Message-ID: On Thu, Jun 19, 2014 at 1:21 AM, Corey Richardson wrote: > Very cool! Always happy to see tools with Rust support :) As to file > names, do you know about `rustc --crate-file-name`? I do but that is not the issue. The output file name can change (as far as I understood from the docs) whenever you change one of the files that make up the crate as the file name contains a hash of the API. This does not tie nicely with Ninja that Meson uses to actually build things as it deals with explicit file names. Changing an output target file name would mean, in the naive case, that the Ninja file must be regenerated on every compile, which is way too slow. Meson is designed to be fast, e.g. no-op builds of 10 000 files must take less than one second. This is a fixable problem, it just needs some wrappers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpakkane at gmail.com Thu Jun 19 00:34:46 2014 From: jpakkane at gmail.com (Jussi Pakkanen) Date: Thu, 19 Jun 2014 10:34:46 +0300 Subject: [rust-dev] Compiling Rust programs with the Meson build system In-Reply-To: <20140619083119.b37dc1cc37c57316fac5fb94@valinux.co.jp> References: <20140619083119.b37dc1cc37c57316fac5fb94@valinux.co.jp> Message-ID: On Thu, Jun 19, 2014 at 2:31 AM, Akira Hayakawa wrote: > Is this a program to generate ninja script? > It does generate Ninja scripts but, like CMake, also VS2010 and XCode projects, though Rust only works with the Ninja backend ATM. > I have once prototyped such system (but far imperfect from yours) > that generates ninja script through Rake. > https://github.com/akiradeveloper/ninja-rake > > As a great fun of ninja build system, I am really looking forward > to the Rust support. > Yeah, Ninja is really cool. 
The main philosophical difference between Meson and your code is that Meson does not provide a Turing-complete language, but rather a mostly declarative DSL. This makes it easier to write and make robust. -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.gesemann at gmail.com Thu Jun 19 04:08:44 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Thu, 19 Jun 2014 13:08:44 +0200 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <53A025D1.2080504@aim.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F77B3.4040000@gmail.com> <53A025D1.2080504@aim.com> Message-ID: On Tue, Jun 17, 2014 at 1:26 PM, SiegeLord wrote: > On 06/16/2014 07:03 PM, Sebastian Gesemann wrote: >> >> Good example! I think even with scalar multiplication/division for >> bignum it's hard to do the calculation in-place of one operand. > > Each bignum can carry with it some extra space for this purpose. > >> Suppose I want to evaluate a polynomial over "BigRationals" using >> Horner's method: >> >> a + x * (b + x * (c + x * d)) >> >> What I DON'T want to type is >> >> a + x.clone() * (b + x.clone() * (c + x.clone() * d)) > > But apparently you want to type > a.inplace_add(x.clone().inplace_mul(b.inplace_add(x.clone().inplace_mul(c.inplace_add(x.clone().inplace_mul(d)))))) No, it's more like a + x * (b + x * (c + x * d))) At least when a,b,c,d are parameters I continue to use and want not moved from. And even if I want to use a,b,c,d only once, I don't need to clone x all the time. No idea what made you write this. > Because that's the alternative you are suggesting to people who want to > benefit from move semantics. The difference is that the short operator-based expression *works* right now whereas forcing a "moving self type" on to people makes this expression not compile for types that are not Copy. So, given the two options, I actually prefer the way things are right now. Of course, I'm not really 100% happy with it. But it's simple. And there is only so much you can do without overloading. I think, to convince people (either way) a deeper analysis is needed. What implementation techniques are possible? (e.g. extra buffers) What's the distribution of the different use cases? (How often does one need a single value more than once?). If we can believe the guys who invented the Mill CPU architechture, about 90% of the values computed are only consumed once (I think this was mentioned in the "Belt" talk). This, for example, does not make the "moving self parameter" look that bad. > Why would you deliberately set up a situation > where less efficient code is much cleaner to write? Because it's simple and allows the short expression to compile in *every* use case (even for non-Copy types). > This hasn't been the > choice made by Rust in the past (consider the overflowing arithmetic getting > sugar, but the non-overflowing one not). Rust also tries to stay simple and does not try to support every feature C++ has (e.g. overloading on different kinds of references). Cheers! On Tue, Jun 17, 2014 at 1:26 PM, SiegeLord wrote: > On 06/16/2014 07:03 PM, Sebastian Gesemann wrote: >> >> Good example! I think even with scalar multiplication/division for >> bignum it's hard to do the calculation in-place of one operand. > > > Each bignum can carry with it some extra space for this purpose. 
> > >> >> Suppose I want to evaluate a polynomial over "BigRationals" using >> Horner's method: >> >> a + x * (b + x * (c + x * d)) >> >> What I DON'T want to type is >> >> a + x.clone() * (b + x.clone() * (c + x.clone() * d)) > > > But apparently you want to type > > a.inplace_add(x.clone().inplace_mul(b.inplace_add(x.clone().inplace_mul(c.inplace_add(x.clone().inplace_mul(d)))))) > > Because that's the alternative you are suggesting to people who want to > benefit from move semantics. Why would you deliberately set up a situation > where less efficient code is much cleaner to write? This hasn't been the > choice made by Rust in the past (consider the overflowing arithmetic getting > sugar, but the non-overflowing one not). > > -SL > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From slabode at aim.com Thu Jun 19 04:24:16 2014 From: slabode at aim.com (SiegeLord) Date: Thu, 19 Jun 2014 07:24:16 -0400 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <9B3A0092-0EEE-4099-8684-E2D014D0E5DE@gmail.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <53A027C0.40206@aim.com> <9B3A0092-0EEE-4099-8684-E2D014D0E5DE@gmail.com> Message-ID: <53A2C860.8050303@aim.com> On 06/17/2014 07:41 AM, Vladimir Matveev wrote: >> Overloading Mul for matrix multiplication would be a mistake, since that operator does not act the same way multiplication acts for scalars. > > I think that one of the main reasons for overloading operators is not their genericity but their usage in the code. > > let a = Matrix::new(?); > let x = Vector::new(?); > let b = Vector::new(?); > let result = a * x + b; > > Looks much nicer than > > let result = a.times(x).plus(b); > > In mathematical computations you usually use concrete types, and having overloadable operators just makes your code nicer to read. Fair enough (indeed I don't think the current operator overloading is usable for generics), but I still stand by my point. It is just more useful in practice to sugar the element-wise multiplication than matrix multiplication (i.e. I find that I use the elementwise multiplication a lot more often than matrix multiplication). This isn't unprecedented, as Python's Numpy library does this too for its multi-dimensional array class (notably it doesn't do this for the dedicated matrix class... but I found it to be very limited and not useful in generic code). -SL From slabode at aim.com Thu Jun 19 04:59:19 2014 From: slabode at aim.com (SiegeLord) Date: Thu, 19 Jun 2014 07:59:19 -0400 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F77B3.4040000@gmail.com> <53A025D1.2080504@aim.com> Message-ID: <53A2D097.8010007@aim.com> On 06/19/2014 07:08 AM, Sebastian Gesemann wrote: > No, it's more like > > a + x * (b + x * (c + x * d))) It can't be that and be efficient as it is right now (returning a clone for each binop). I am not willing to trade efficiency for sugar, especially not when trade is this bad (in fact that's the entire point of this thread). 
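For concreteness, here is a minimal sketch of the by-value style I'm describing, with a made-up `BigVec` type standing in for a bignum or matrix and only addition shown. Because `add` takes `self` by value, the temporary from one `+` is moved into the next one and its buffer is reused in place, instead of a fresh allocation per binop:

    use std::ops::Add;

    #[derive(Debug, PartialEq)]
    struct BigVec(Vec<u64>);

    // By-value `self`: the operator consumes the left operand and reuses its storage.
    impl<'a> Add<&'a BigVec> for BigVec {
        type Output = BigVec;
        fn add(mut self, rhs: &'a BigVec) -> BigVec {
            // Element-wise addition in place (equal lengths assumed for brevity).
            for (x, y) in self.0.iter_mut().zip(&rhs.0) {
                *x += *y;
            }
            self
        }
    }

    fn main() {
        let a = BigVec(vec![1, 2, 3]);
        let b = BigVec(vec![10, 20, 30]);
        let c = BigVec(vec![100, 200, 300]);
        // The temporary produced by `a + &b` is moved into the second `+`, not cloned.
        assert_eq!((a + &b) + &c, BigVec(vec![111, 222, 333]));
    }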
In my matrix library through the use of lazy evaluation I can make my code run as fast as a manual loop implementation would. This would not be the case here. I will note that you could very well implement a by-value self operator overload trait > The difference is that the short operator-based expression *works* > right now whereas forcing a "moving self type" on to people makes this > expression not compile for types that are not Copy. So, given the two > options, I actually prefer the way things are right now. It's inefficient, so it doesn't 'work'. > I think, to convince people (either way) a deeper analysis is needed. I don't think merely changing the type of self/arg to be by move is the only solution, or the best solution. It does seem to be the case that different operators benefit from reusing temporaries differently, but it's clear that *some* operators definitely do (e.g. thing/scalar operators, addition/subtraction, element-wise matrix-operators) and the current operator overloading traits prevent that optimization without the use of RefCell or lazy evaluation. Both of those have non-trivial semantic costs. In the case of operators that do benefit from reusing temporaries, doing so via moving seems very natural and relatively clean. Regardless, bignum and linear-algebra are prime candidates for operator overloading, but the way it is done now by most implementations is just unsatisfactory. Maybe bignums in std::num and all the linear algebra libraries could be converted to use a lazy evaluation, thus avoiding this issue of changing the operator overload traits. Alternatively, maybe the whole operator-overloading idea-via traits is broken anyway since operators do have such different semantics for different types (in particular, efficient semantics of operators differ from built-in numeric types and complicated library numeric types). Maybe adding attributes to methods (e.g. #[operator_add] fn my_add(self, whatever) {} ) is more flexible since it is becoming clear that generic numeric code (generic to bignums and normal integers and matrices) might not be a possibility. I should note that it is already an impossibility in many cases (I guarantee that my lazy matrix library won't fulfill the type bounds of any function due to how its return types differ from the argument types). -SL From slabode at aim.com Thu Jun 19 05:14:00 2014 From: slabode at aim.com (SiegeLord) Date: Thu, 19 Jun 2014 08:14:00 -0400 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <53A2D097.8010007@aim.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F77B3.4040000@gmail.com> <53A025D1.2080504@aim.com> <53A2D097.8010007@aim.com> Message-ID: <53A2D408.90206@aim.com> On 06/19/2014 07:59 AM, SiegeLord wrote: > I will note that you could very well implement a by-value self operator > overload trait Forgot to finish this one. I was going to go into how you could implement it for a &Foo to get your 'by-ref' behavior back. Of course this ruins generics, but they are tricky regardless (see the rest of that email). -SL From s.gesemann at gmail.com Thu Jun 19 09:13:07 2014 From: s.gesemann at gmail.com (Sebastian Gesemann) Date: Thu, 19 Jun 2014 18:13:07 +0200 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) 
In-Reply-To: <53A2D097.8010007@aim.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F77B3.4040000@gmail.com> <53A025D1.2080504@aim.com> <53A2D097.8010007@aim.com> Message-ID: On Thu, Jun 19, 2014 at 1:59 PM, SiegeLord wrote: > [...] > I don't think merely changing the type of self/arg to be by move is the only > solution, or the best solution. It does seem to be the case that different > operators benefit from reusing temporaries differently, but it's clear that > *some* operators definitely do (e.g. thing/scalar operators, > addition/subtraction, element-wise matrix-operators) Sure. I'm not denying this. > and the current > operator overloading traits prevent that optimization without the use of > RefCell or lazy evaluation. Both of those have non-trivial semantic costs. > In the case of operators that do benefit from reusing temporaries, doing so > via moving seems very natural and relatively clean. Here's another idea: Introduce a new uniform parameter type that supports both: acting like an immutable reference to lvalues (to keep them as they are) OR doing a move for rvalues (because almost nobody would care): trait Mul { fn mul<'a,'b>(self: Arg<'a,Self>, rhs: Arg<'b,RHS>) ->RES; } where Arg is something like this: pub enum Arg<'b,T> { keep(&'b T), move(T) } and the compiler coerses Lvalues into keep(&mylvalue) and and Rvalues into move(myrvalue) automatically if the user was not explicit. This way one can still benefit from move semantics (sometimes with an explicit move) and it does not require clone()-calls in the user code. See here for an example on how Arg could be used: http://is.gd/etvAu7 (in this example, I used the keep/move variant constructors explicitly but the idea is to let the compiler automatically pick the right, see previous paragraph). It's like C++ overload resolution deferred to runtime matching. Maybe this could be optimized in some clever way. For example, if the function is inlined, only one match arm remains ("static matching"). And maybe it would even benefit from some core language sugar. I hear @ is not used anymore ;) Some sugar would remove the need to specify lifetimes (in almost all cases) and make self work again without explicit types: trait Mul { fn mul(@self, rhs: @RHS) ->RES; } We have something similar already (MaybeOwned) but the "constructor magic" (inferring the right variant based on the value category) hasn't been done yet as far as I know. Cheers! sg From pcwalton at mozilla.com Thu Jun 19 10:04:55 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Thu, 19 Jun 2014 10:04:55 -0700 Subject: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?) In-Reply-To: <53A2D097.8010007@aim.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F77B3.4040000@gmail.com> <53A025D1.2080504@aim.com> <53A2D097.8010007@aim.com> Message-ID: <53A31837.8020505@mozilla.com> On 6/19/14 4:59 AM, SiegeLord wrote: > Regardless, bignum and linear-algebra are prime candidates for operator > overloading, but the way it is done now by most implementations is just > unsatisfactory. 
Maybe bignums in std::num and all the linear algebra > libraries could be converted to use a lazy evaluation, thus avoiding > this issue of changing the operator overload traits. Alternatively, > maybe the whole operator-overloading idea-via traits is broken anyway > since operators do have such different semantics for different types (in > particular, efficient semantics of operators differ from built-in > numeric types and complicated library numeric types). Maybe adding > attributes to methods (e.g. #[operator_add] fn my_add(self, whatever) {} > ) is more flexible since it is becoming clear that generic numeric code > (generic to bignums and normal integers and matrices) might not be a > possibility. That's introducing ad-hoc overloading and does not interact well with non-ad-hoc generics. I always felt that Haskell's approach was better for code readability: allowing user-defined symbolic operators. That way if `+`'s signature is not to your liking, you can define `+.` or whatever and get most of the benefits of overloading, while maintaining the invariant that the readers of the code always know what the signature of `+` is. Patrick From danielmicay at gmail.com Thu Jun 19 12:03:49 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Thu, 19 Jun 2014 15:03:49 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <53A1D892.5080403@mozilla.com> Message-ID: <53A33415.2000303@gmail.com> On 19/06/14 03:05 AM, Jerry Morrison wrote: > > BigUint is weird: It can underflow but not overflow. When you use its > value in a more bounded way you'll need to bounds-check it then, whether > it can go negative or not. Wouldn't it be easier to discard it than > squeeze it into the wraparound or checked models? I don't think we should have a big unsigned integer. It's not something I've seen other big integer libraries do. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From matt at mcpherrin.ca Thu Jun 19 16:02:51 2014 From: matt at mcpherrin.ca (Matthew McPherrin) Date: Thu, 19 Jun 2014 16:02:51 -0700 Subject: [rust-dev] Why are generic integers not usable as floats? Message-ID: This came up on IRC today, and it was something I've wondered in the past but nobody had an immediately good answer either way. I think it's fairly inconsistent that these two code samples aren't equivalent: let a = 1f32; let b: f32 = 1; It's fairly annoying in my opinion to have to occasionally add a .0 after floating point literals. Especially since we're getting rid of integer fallback in RFC 30, I think this issue ought to be thought about. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Thu Jun 19 16:06:59 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Thu, 19 Jun 2014 16:06:59 -0700 Subject: [rust-dev] Why are generic integers not usable as floats? In-Reply-To: References: Message-ID: Not all integer constants can be perfectly represented as floating-point values. What do you propose in that case, just a hard error? Cameron > On Jun 19, 2014, at 4:02 PM, Matthew McPherrin wrote: > > This came up on IRC today, and it was something I've wondered in the past but nobody had an immediately good answer either way. 
> > I think it's fairly inconsistent that these two code samples aren't equivalent: > > let a = 1f32; > let b: f32 = 1; > > It's fairly annoying in my opinion to have to occasionally add a .0 after floating point literals. > > Especially since we're getting rid of integer fallback in RFC 30, I think this issue ought to be thought about. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mozilla at mcpherrin.ca Thu Jun 19 16:16:43 2014 From: mozilla at mcpherrin.ca (Matthew McPherrin) Date: Thu, 19 Jun 2014 16:16:43 -0700 Subject: [rust-dev] Why are generic integers not usable as floats? In-Reply-To: References: Message-ID: fn main() { println!("{}", 16777217f32) } This program prints 16777216. So I think allowing integer literals doesn't really change anything, since you can already type unrepresentable float literals. That said, this ought to at least trigger a warning like type_overflow does for integers. On Thu, Jun 19, 2014 at 4:06 PM, Cameron Zwarich wrote: > Not all integer constants can be perfectly represented as floating-point > values. What do you propose in that case, just a hard error? > > Cameron > > On Jun 19, 2014, at 4:02 PM, Matthew McPherrin wrote: > > This came up on IRC today, and it was something I've wondered in the past > but nobody had an immediately good answer either way. > > I think it's fairly inconsistent that these two code samples aren't > equivalent: > > let a = 1f32; > let b: f32 = 1; > > It's fairly annoying in my opinion to have to occasionally add a .0 after > floating point literals. > > Especially since we're getting rid of integer fallback in RFC 30, I think > this issue ought to be thought about. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Thu Jun 19 16:23:40 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Thu, 19 Jun 2014 19:23:40 -0400 Subject: [rust-dev] Why are generic integers not usable as floats? In-Reply-To: References: Message-ID: I'm actually very pleased that floating point literals are entirely separate from integer literals, but I can't quite explain why. A matter of taste, I suppose. Perhaps it stems from symmetry with the fact that I wouldn't want `let x: int = 1.0;` to be valid. On Thu, Jun 19, 2014 at 7:02 PM, Matthew McPherrin wrote: > This came up on IRC today, and it was something I've wondered in the past > but nobody had an immediately good answer either way. > > I think it's fairly inconsistent that these two code samples aren't > equivalent: > > let a = 1f32; > let b: f32 = 1; > > It's fairly annoying in my opinion to have to occasionally add a .0 after > floating point literals. > > Especially since we're getting rid of integer fallback in RFC 30, I think > this issue ought to be thought about. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igor at mir2.org Thu Jun 19 22:04:38 2014 From: igor at mir2.org (Igor Bukanov) Date: Fri, 20 Jun 2014 07:04:38 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A33415.2000303@gmail.com> References: <53A1D892.5080403@mozilla.com> <53A33415.2000303@gmail.com> Message-ID: On 19 June 2014 21:03, Daniel Micay wrote: > I don't think we should have a big unsigned integer. It's not something > I've seen other big integer libraries do. I once spent some time figuring out a bug in a crypto library. It was caused by writing in a corner case b - a, not a - b. unsigned BigNum library that faults on a - b when a < b would have trivially caught that. In addition unsigned BigNum could be more efficient (important for crypto) as extra sign checks that signed BigNum often use may bear non-trivial cost. From pnathan.software at gmail.com Thu Jun 19 23:17:09 2014 From: pnathan.software at gmail.com (Paul Nathan) Date: Thu, 19 Jun 2014 23:17:09 -0700 Subject: [rust-dev] Seattle Meetup in July Message-ID: Good evening Rust colleagues; As we discussed at the May Seattle meetup (A success! about 6 people from larger, smaller, and academic environments visited), we'd like to meetup in July and talk about our current projects in a quasi-presentation format at one of the local companies or the University of Washington. I'd like to propose a day in the July 7-11 range with the following agenda: *7pm Start* - 20 minutes Meet & Greet - 2-4 10-minute long presentations of different projects (ideas, scraps of code, working projects). - 1-2 20-minute presentations - I think Eli? from UW volunteered to talk about the borrow checker. - 20 minutes Socialize More *9pm end* If I remember right, several different venues with projectors/large TVs were an option... I know I can lay my hands on at least one venue. :-) Best Regards, Paul Nathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko at alum.mit.edu Fri Jun 20 02:37:09 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Fri, 20 Jun 2014 05:37:09 -0400 Subject: [rust-dev] What do I do if I need several &muts into a struct? In-Reply-To: References: Message-ID: <20140620093709.GC2963@Mr-Bennet> On Mon, Jun 16, 2014 at 09:59:45PM +0100, Vladimir Pouzanov wrote: > Everything works fine up to the point of references. References allow some > nodes to modify other nodes. So, the first pass ? I parse that snippet into > PlatformTree/Nodes struct. Second pass ? I walk over the tree and for each > reference I invoke some handling code, e.g. in the handler of thread node > it might add some attribute to lpx17xx node. Which is not really possible > as it's in immutable Gc box. Also, such modifications are performed through > `named` hashmap, so I can't even store &muts in `nodes`, as I still can > store only immutable pointers in `named`. > > How would you solve this problem? I would either: 1. Use RefCell or Cell for the fields that need to be modified. No shame in that. 2. Not use GC/RC etc but instead use a Vector that holds all the nodes. Use newtyped indices to represent pointers. Convert your nodes vec, therefore, to `HashMap` and so on. This allows you to freeze and unfreeze the whole graph at once. The second option is appropriate if your set of nodes only grows. Once you start removing nodes it becomes less attractive, because your index set becomes non-dense and you start writing free lists (though that might be a useful library at some point). 
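A minimal sketch of the second option, a vector that owns all the nodes plus newtyped indices standing in for pointers. The names are hypothetical and the syntax may differ a little from the Rust of this thread:

    #[derive(Clone, Copy)]
    struct NodeIndex(usize);              // newtype standing in for a pointer

    struct Node {
        data: String,
        edges: Vec<NodeIndex>,            // "references" to other nodes
    }

    struct Graph {
        nodes: Vec<Node>,                 // owns every node
    }

    impl Graph {
        fn add_node(&mut self, data: String) -> NodeIndex {
            self.nodes.push(Node { data: data, edges: Vec::new() });
            NodeIndex(self.nodes.len() - 1)
        }

        fn add_edge(&mut self, from: NodeIndex, to: NodeIndex) {
            // one &mut borrow of the whole graph, no Gc/Rc/RefCell needed
            self.nodes[from.0].edges.push(to);
        }

        fn node(&self, i: NodeIndex) -> &Node {
            &self.nodes[i.0]
        }
    }

Because all mutation goes through &mut Graph, the borrow checker sees a single owner; the cost is that indices can go stale if nodes are ever removed, which is the free-list caveat above.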
Certainly, due to Rust's emphasis on ownership, graphs can be the hardest thing to represent. The best answer will depend on the particulars of your situation: what access patterns you need, what lifetimes the nodes have, and so forth. Let me know if you want more details. Niko From ntypanski at gmail.com Fri Jun 20 04:36:46 2014 From: ntypanski at gmail.com (Nathan Typanski) Date: Fri, 20 Jun 2014 07:36:46 -0400 Subject: [rust-dev] Why are generic integers not usable as floats? In-Reply-To: References: Message-ID: On 06/19, Benjamin Striegel wrote: > I'm actually very pleased that floating point literals are entirely > separate from integer literals, but I can't quite explain why. A matter of > taste, I suppose. Perhaps it stems from symmetry with the fact that I > wouldn't want `let x: int = 1.0;` to be valid. I agree that `let x: int = 1.0` should not be valid. But that is type *demotion*, and with `let x: f32 = 1` we are doing type *promotion*. Demotion is not exactly popular among languages, but promotion has some arguments going for it. The literal is an integer type (at least by how my brain parses it), and it is being implicitly promoted to a float. Now, in the following instance, I have to explicitly convert `y` to a `f32` type before it compiles. There's no implicit promotion when performing addition. let x: f32 = 1.0; let y: int = 1; print!("{}", x + y as f32); So the question is: do we want to make a special case where we do implicit type promotion at assignment, and nowhere else? I say no. Either you are picky about your numeric types, or you do type promotion everywhere, but not both. Personally I would sooner not think about edge cases here, and just say that all numeric types should be explicit. Nathan From slabode at aim.com Fri Jun 20 05:13:03 2014 From: slabode at aim.com (SiegeLord) Date: Fri, 20 Jun 2014 08:13:03 -0400 Subject: [rust-dev] Why are generic integers not usable as floats? In-Reply-To: References: Message-ID: <53A4254F.8080909@aim.com> On 06/20/2014 07:36 AM, Nathan Typanski wrote: > On 06/19, Benjamin Striegel wrote: >> I'm actually very pleased that floating point literals are entirely >> separate from integer literals, but I can't quite explain why. A matter of >> taste, I suppose. Perhaps it stems from symmetry with the fact that I >> wouldn't want `let x: int = 1.0;` to be valid. > > I agree that `let x: int = 1.0` should not be valid. But that is type > *demotion*, and with `let x: f32 = 1` we are doing type *promotion*. This isn't promotion because 1.0 does not have a concrete type. E.g. consider this code: let a: f32 = 1.0; let mut b: f64 = 1.0; b = a as f64; // Cast has to be here Even though we used 1.0 to initialize both f32 and f64 variables, we can't assign f32 to f64 without a cast. I'm in favor of this unification and I think somebody should write an RFC for this. -SL From depp at zdome.net Fri Jun 20 08:20:58 2014 From: depp at zdome.net (Dietrich Epp) Date: Fri, 20 Jun 2014 08:20:58 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <53A1D892.5080403@mozilla.com> <53A33415.2000303@gmail.com> Message-ID: <5F1CB886-E19B-4951-BA67-582D245DEA1C@zdome.net> It?s a mistake to write crypto using general-purpose big number libraries. You usually want crypto code to protect against timing attacks, for example, and your average big number library aims for performance; the two goals are at odds. 
On Jun 19, 2014, at 10:04 PM, Igor Bukanov wrote: > On 19 June 2014 21:03, Daniel Micay wrote: >> I don't think we should have a big unsigned integer. It's not something >> I've seen other big integer libraries do. > > I once spent some time figuring out a bug in a crypto library. It was > caused by writing in a corner case b - a, not a - b. unsigned BigNum > library that faults on a - b when a < b would have trivially caught > that. In addition unsigned BigNum could be more efficient (important > for crypto) as extra sign checks that signed BigNum often use may bear > non-trivial cost. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From danielmicay at gmail.com Fri Jun 20 10:25:51 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Fri, 20 Jun 2014 13:25:51 -0400 Subject: [rust-dev] Why are generic integers not usable as floats? In-Reply-To: References: Message-ID: <53A46E9F.5010506@gmail.com> On 20/06/14 07:36 AM, Nathan Typanski wrote: > On 06/19, Benjamin Striegel wrote: >> I'm actually very pleased that floating point literals are entirely >> separate from integer literals, but I can't quite explain why. A matter of >> taste, I suppose. Perhaps it stems from symmetry with the fact that I >> wouldn't want `let x: int = 1.0;` to be valid. > > I agree that `let x: int = 1.0` should not be valid. But that is type > *demotion*, and with `let x: f32 = 1` we are doing type *promotion*. > Demotion is not exactly popular among languages, but promotion has > some arguments going for it. > > The literal is an integer type (at least by how my brain parses it), > and it is being implicitly promoted to a float. > > Now, in the following instance, I have to explicitly convert `y` to a > `f32` type before it compiles. There's no implicit promotion when > performing addition. > > let x: f32 = 1.0; > let y: int = 1; > print!("{}", x + y as f32); > > So the question is: do we want to make a special case where we do > implicit type promotion at assignment, and nowhere else? > > I say no. Either you are picky about your numeric types, or you do > type promotion everywhere, but not both. Personally I would sooner not > think about edge cases here, and just say that all numeric types > should be explicit. > > Nathan It wouldn't be a type conversion at all. The literal `1` does not have the type `int`, it's a generic integer literal with an inferred type. In Haskell, `1` is a generic *number* literal and can be inferred as any kind of integer, floating point, fixed point or rational type. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jhaberman at gmail.com Fri Jun 20 11:02:54 2014 From: jhaberman at gmail.com (Josh Haberman) Date: Fri, 20 Jun 2014 11:02:54 -0700 Subject: [rust-dev] optimizing away const/pure function calls? Message-ID: Does Rust have any way of optimizing away repeated calls to the same function where possible? Like GCC's "pure" function attribute? To get a little more crazy, say you're working with a Map. Sometimes it's convenient to write code like this: if (map.contains_key(foo)) { let val = map.get(foo); // ... } This code, naively compiled, would perform two lookups. But only one is logically required, and caching the lookup would only require a single pointer. 
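For what it's worth, the single-lookup version is already expressible at the source level by branching on an Option-returning lookup instead of calling contains_key first. A minimal sketch, written against a map whose lookup returns an Option (spelled get on today's std HashMap; the method name in the Rust of this thread may differ):

    use std::collections::HashMap;

    fn demo(map: &HashMap<String, i32>, foo: &str) {
        // One lookup: branch on the Option the lookup itself returns,
        // rather than asking contains_key and then looking up again.
        match map.get(foo) {
            Some(val) => {
                // ... work with val ...
                println!("found {}", val);
            }
            None => println!("not found"),
        }
    }

Still, the question stands for code written the convenient way: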
Is there any reasonable scenario under which the compiler could decide to allocate stack space to cache that lookup, so that the code above would be optimized to only perform one lookup? Josh -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Fri Jun 20 11:09:15 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Fri, 20 Jun 2014 11:09:15 -0700 Subject: [rust-dev] optimizing away const/pure function calls? In-Reply-To: References: Message-ID: <53A478CB.3060601@mozilla.com> On 6/20/14 11:02 AM, Josh Haberman wrote: > Is there any reasonable scenario under which the compiler could decide > to allocate stack space to cache that lookup, so that the code above > would be optimized to only perform one lookup? LLVM will do this if it can see the definition of `contains_key` (which it will, if it's generic) and can tell that it's pure. We don't have any way at the moment to force LLVM to decide that a function is pure if it can't work it out for itself, though. Patrick From danielmicay at gmail.com Fri Jun 20 11:14:13 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Fri, 20 Jun 2014 14:14:13 -0400 Subject: [rust-dev] optimizing away const/pure function calls? In-Reply-To: References: Message-ID: <53A479F5.2010800@gmail.com> On 20/06/14 02:02 PM, Josh Haberman wrote: > Does Rust have any way of optimizing away repeated calls to the same > function where possible? Like GCC's "pure" function attribute? > > To get a little more crazy, say you're working with a Map. Sometimes > it's convenient to write code like this: > > if (map.contains_key(foo)) { > let val = map.get(foo); > // ... > } > > This code, naively compiled, would perform two lookups. But only one is > logically required, and caching the lookup would only require a single > pointer. > > Is there any reasonable scenario under which the compiler could decide > to allocate stack space to cache that lookup, so that the code above > would be optimized to only perform one lookup? > > Josh Rust has no way to mark effects. LLVM is able to infer the readonly, readnone and nounwind attributes in some cases, but not most. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From pssalmeida at gmail.com Fri Jun 20 12:07:49 2014 From: pssalmeida at gmail.com (=?UTF-8?Q?Paulo_S=C3=A9rgio_Almeida?=) Date: Fri, 20 Jun 2014 20:07:49 +0100 Subject: [rust-dev] On Copy = POD Message-ID: Hi all, Currently being Copy equates with being Pod. The more time passes and the more code examples I see, it is amazing the amount of ugliness that it causes. I wonder if there is a way out. There are two aspects regarding Copy: the semantic side and the implementation side. On the more abstract semantic side, copy is the ability to remain using a variable after assignment by value (from the current version of the manual: "Types that do not move ownership when used by-value"). If x:X is copy then making y = x, or f(x) allows keeping using x afterwards. On the implementation side, Copy is defined as POD (it was even named Pod previously), so that copies can be made by simple memcpy and no surprising and possibly expensive operation is performed, which would occur if arbitrary user-defined copy constructors were allowed. 
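The semantic side, in a few lines of toy code (not taken from any real program, and with the integer type spelled i32 here rather than int): with a Copy type the source of an assignment stays usable, with a non-Copy type it is moved.

    fn demo() {
        let a: i32 = 1;            // i32 is Copy
        let b = a;
        println!("{} {}", a, b);   // fine: `a` is still usable after `let b = a`

        let v = vec![1i32, 2, 3];  // Vec is not Copy
        let w = v;
        // println!("{}", v[0]);   // error: use of moved value `v`
        println!("{}", w[0]);
    }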
Regarding the semantic level, Copyness is too much an important part of the interface of a type to be left to be inferred from the type being POD, and change more or less randomly and break client-code, e.g., if some field is added. This concern is already addressed by Niko's proposal of opt-in-builtin traits ( https://github.com/rust-lang/rfcs/blob/master/active/0003-opt-in-builtin-traits.md). I hope it gets adopted for 1.0. Regarding the implementation side, Copy is currently restricted to being POD, being memcpy copied, a simple rule, but which forbids some cases fitting the spirit of "type is small and cheap to copy", and which would benefit from being Copy, leading to ugliness and lack of uniformity, namely regarding smart-pointer types. Maybe the best example is the Rc and Gc types. From the semantic point of view, these aim to share ownership of immutable values, and both should offer the same interface and both be Copy, to make usage more transparent, and avoid excessive cloning. But while Gc is Copy, Rc cannot be, even though it is "small and cheap to copy" even if not by a memcpy. This low-level definition of Copy = POD is resulting in code with smart-pointers that has a number of clones which are actually misleading, because they are not cloning the referent but the pointer, which is something that will sound artificial, to say the least, for people coming to rust. I have been thinking if there would be any way to have a more encompassing Copy, allowing, e.g. Rc to be copy, fitting the spirit "of small and cheap to copy", while forbidding general user-defined copy constructors. If Rc needs a copy-constructor to update the reference count (the current clone), and Rc is implemented as a normal library type, in Rust, allowing it would mean allowing general user defined copy-constructors, which is ruled out. So, is there no way out? Imagine that all the essential pointer types for sharing ownership: Rc, Gc, and even Arc, were all built-in. We could decide to say they were Copy, to infer PODness for types, as now, use memcpy for those that are POD, as now, use the built-in copy-constructors for types Copy but not POD, and allow deriving Copy for user defined-types only if they are POD, ruling out user-defined copy-constructors. But Rc, Gc, Arc are implemented in Rust. Does this mean that to prevent user-defined copy-constructors we must give up all hope of having these essential pointer-types Copy? I.e. must orthogonality rule at all costs? I wonder whether it would be possible to keep the essential spirit of Copyness, while allowing special cases for a small number of "blessed" library types, something like: "Implicit copy under assignment or by value parameter passing cannot be arbitrarily user-defined, to rule out expensive implicit copies; only POD user-defined types can derive Copy. However, each version of the language will define a small approved list of types (essentially the pointer-types for shared ownership), for which the cost of copy has been deemed small, and which are defined as Copy." Even considering Arc, where copy (the current clone()) would be more expensive, having auto-borrowing (which should be made uniform for all pointer types) means that functions which take a reference to the referent won't involve copying the Arc itself, which together with a last-use move optimisation will make programs have basically the same run-time cost as now, where the implicit copies will happen where we now have an explicit clone(), while making them more elegant. 
E.g., instead of writing: fn main() { let numbers = Vec::from_fn(100, |i| i as f32); let shared_numbers = Arc::new(numbers); for _ in range(0, 10) { let child_numbers = shared_numbers.clone(); spawn(proc() { let local_numbers = child_numbers.as_slice(); // Work with the local numbers }); } } which may be misleading to people coming to Rust, as the numbers are not being cloned, and there are not several Vecs around (parent ones, child ones), but a single Vec, what I would like to write is: fn main() { let numbers = Vec::from_fn(100, |i| i as f32); let shared_numbers = Arc::new(numbers); for _ in range(0, 10) { spawn(proc() { let slice = shared_numbers.as_slice(); // Work with the numbers }); } } Which would "just work", while being more clear, as the shared_numbers Arc, being Copy, would be copied to the proc, as happens now for POD types. I have seen many other examples, where the code could mislead the reader into thinking there are several, e.g., Mutexes: let mutex = Arc::new(Mutex::new(1)); let mutex2 = mutex.clone(); or several Barriers, or other similar cases. What will happen when we need say, two real different Mutexes, or Vecs, to be shared by 2 tasks? We will have 4 different variables, more trouble choosing names, and greater cognitive burden in seeing which one refers to which. E.g., does mutex_1_2 mean the first mutex to be used in task 2 or vice-versa? I dream of not having these ugly things in Rust. The advanced Rust type system, namely having borrowing, allows avoiding what would be otherwise many copies, when we only need to pass a reference to be used in some function, while having implicit copies mostly when we now do many misleading explicit clones. As of now we are not exploiting it as much as we could to get more beautiful programs with the same performance. Regards, Paulo -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Fri Jun 20 12:10:20 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Fri, 20 Jun 2014 12:10:20 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: Message-ID: <53A4871C.20901@mozilla.com> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] > Currently being Copy equates with being Pod. The more time passes and > the more code examples I see, it is amazing the amount of ugliness that > it causes. I wonder if there is a way out. Part of the problem is that a lot of library code assumes that Copy types can be copied by just moving bytes around. Having copy constructors would mean that this simplifying assumption would have to change. It's doable, I suppose, but having copy constructors would have a significant downside. Patrick From ben.striegel at gmail.com Fri Jun 20 13:00:12 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Fri, 20 Jun 2014 16:00:12 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: <53A4871C.20901@mozilla.com> References: <53A4871C.20901@mozilla.com> Message-ID: I'm not a fan of the idea of blessing certain types with a compiler-defined whitelist. And if the choice is then between ugly code and copy constructors, I'll take ugly code over surprising code. On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton wrote: > On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] > > Currently being Copy equates with being Pod. The more time passes and >> the more code examples I see, it is amazing the amount of ugliness that >> it causes. I wonder if there is a way out. 
>> > > Part of the problem is that a lot of library code assumes that Copy types > can be copied by just moving bytes around. Having copy constructors would > mean that this simplifying assumption would have to change. It's doable, I > suppose, but having copy constructors would have a significant downside. > > Patrick > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Fri Jun 20 13:51:49 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Fri, 20 Jun 2014 22:51:49 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A1D86A.9040208@gmail.com> References: <53A1D86A.9040208@gmail.com> Message-ID: On Wed, Jun 18, 2014 at 8:20 PM, Daniel Micay wrote: > On 18/06/14 01:08 PM, G?bor Lehel wrote: > > # Exposition > > > > We've debated the subject of integer overflow quite a bit, without much > > apparent progress. Essentially, we've been running in circles around two > > core facts: wrapping is bad, but checking is slow. The current consensus > > seems to be to, albeit grudgingly, stick with the status quo. > > > > I think we've established that a perfect, one-size-fits-all solution is > > not possible. But I don't think this means that we're out of options, or > > have no room for improvement. I think there are several imperfect, > > partial solutions we could pursue, to address the various use cases in a > > divide-and-conquer fashion. > > > > This is not a formal RFC, more of a grab bag of thoughts and ideas. > > > > The central consideration has to be the observation that, while wrapping > > around on overflow is well-supported by hardware, for the large majority > > of programs, it's the wrong behavior. > > > > Basically, programs are just hoping that overflow won't happen. And if > > it ever does happen, it's going to result in unexpected behavior and > > bugs. (Including the possibility of security bugs: not all security bugs > > are memory safety bugs.) This is a step up from C's insanity of > > undefined behavior for signed overflow, where the compiler assumes that > > overflow *cannot* happen and even optimizes based on that assumption, > > but it's still a sad state of affairs. If we're clearing the bar, that's > > only because it's been buried underground. > > > > We can divide programs into three categories. (I'm using "program" in > > the general sense of "piece of code which does a thing".) > > > > 1) Programs where wrapping around on overflow is the desired semantics. > > > > 2) Programs where wrapping around on overflow is not the desired > > semantics, but performance is not critical. > > If performance wasn't critical, the program wouldn't be written in Rust. > The language isn't aimed at use cases where performance isn't a bug > deal, as it makes many sacrifices to provide the level of control that's > available. > People write GUI frameworks and applications in C++ and even in C. Just because a language is appropriate for low-level and performance-critical code doesn't mean it needs to be inappropriate for anything else. I think Rust is far superior as a general-purpose language to most of today's mainstream languages. And even in applications where some parts are performance-critical, many parts may not be. I expect the ratios may be tilted differently for Rust code, but not the fundamental pattern to be different. 
> > > 3) Programs where wrapping around on overflow is not the desired > > semantics and performance is critical. > > > > Programs in (1) are well-served by the language and libraries as they > > are, and there's not much to do except to avoid regressing. > > > > Programs in (2) and (3) are not as well-served. > > > > > > # Checked math > > > > For (2), the standard library offers checked math in the `CheckedAdd`, > > `CheckedMul` etc. traits, as well as integer types of unbounded size: > > `BigInt` and `BigUint`. This is good, but it's not enough. The acid test > > has to be whether for non-performance-critical code, people are actually > > *using* checked math. If they're not, then we've failed. > > > > `CheckedAdd` and co. are important to have for flexibility, but they're > > far too unwieldy for general use. People aren't going to write > > `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate > > design might be something like this: > > > > * Have traits for all the arithmetic operations for both checking on > > overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as > > now), `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. > > > > * Offer convenience methods for the Checked traits which perform > > `unwrap()` automatically. > > > > * Have separate sets of integer types which check for overflow and > > which wrap around on overflow. Whatever they're called: `CheckedU8`, > > `checked::u8`, `uc8`, ... > > > > * Both sets of types implement all of the Checked* and Wrapping* > > traits. You can use explicit calls to get either behavior with either > types. > > > > * The checked types use the Checked traits to implement the operator > > overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping > > traits to implement them. In other words, the difference between the > > types is (probably only) in the behavior of the operators. > > > > * `BigInt` also implements all of the Wrapping and Checked traits: > > because overflow never happens, it can claim to do anything if it "does > > happen". `BigUint` implements all of them except for the Wrapping traits > > which may underflow, such as `WrappingSub`, because it has nowhere to > > wrap around to. > > > > Another option would be to have just one set of types but two sets of > > operators, like Swift does. I think that would work as well, or even > > better, but we've been avoiding having any operators which aren't > > familiar from C. > > > > > > # Unbounded integers > > > > While checked math helps catch instances of overflow and prevent > > misbehaviors and bugs, many programs would prefer integer types which do > > the right thing and don't overflow in the first place. For this, again, > > we currently have `BigInt` and `BigUint`. There's one problem with > > these: because they may allocate, they no longer `Copy`, which means > > that they can't be just drop-in replacements for the fixed-size types. > > > > > > To partially address this, once we have tracing GC, and if we manage to > > make `Gc: Copy`, we should add unbounded `Integer` (as in Haskell) > > and `Natural` types which internally use `Gc`, and so are also `Copy`. > > (In exchange, they wouldn't be `Send`, but that's far less pervasive.) > > These would (again, asides from `Send`) look and feel just like the > > built-in fixed-size types, while having the semantics of actual > > mathematical integers, resp. naturals (up to resource exhaustion of > > course). 
They would be ideal for code which is not performance critical > > and doesn't mind incurring, or already uses, garbage collection. For > > those cases, you wouldn't have to think about the tradeoffs, or make > > difficult choices: `Integer` is what you use. > > A tracing garbage collector for Rust is a possibility but not a > certainty. I don't think it would make sense to have `Gc` support > `Copy` but have it left out for `Rc`. The fact that an arbitrary > compiler decision like that would determine the most convenient type is > a great reason to avoid making that arbitrary choice. > > There's no opportunity for cycles in integers, and `Rc` will be > faster along with using far less memory. It doesn't have the overhead > associated with reference counting in other languages due to being > task-local (not atomic) and much of the reference counting is elided by > move semantics / borrows. > > With the help of sized deallocation, Rust can have an incredibly fast > allocator implementation. Since `Rc` is task-local, it also doesn't > need to be using the same allocator entry point as sendable types. It > can make use of a thread-local allocator with less complexity and > overhead, although this could also be available on an opt-in basis for > sendable types by changing the allocator parameter from the default. > > > One concern with this would be the possibility of programs incurring GC > > accidentally by using these types. There's several ways to deal with > this: > > > > * Note the fact that they use GC prominently in the documentation. > > > > * Make sure the No-GC lint catches any use of them. > > > > * Offer a "no GC in this task" mode which fails the task if GC > > allocation is invoked, to catch mistakes at runtime. > > > > I think these would be more than adequate to address the concern. > > I don't think encouraging tracing garbage collection is appropriate for > a language designed around avoiding it. It would be fine to have it as a > choice if it never gets in the way, but it shouldn't be promoted as a > default. > The idea is to pick the low-hanging fruit. For programs that use garbage collection, we can offer an integer type that requires neither ergonomic nor semantic compromises. So let's. > > > # Between a rock and a hard place > > > > Having dispatched the "easy" cases above, for category #3 we're left > > between the rock (wraparound on overflow is wrong) and the hard place > > (checking for overflow is slow). > > > > Even here, we may have options. > > > > An observation: > > > > * We are adamantly opposed to compiler switches to turn off array > > bounds checking, because we are unwilling to compromise memory safety. > > > > * We are relatively unbothered by unchecked arithmetic, because it > > *doesn't* compromise memory safety. > > > > Taking these together, I think we should at least be less than adamantly > > opposed to compiler switches for enabling or disabling checked > arithmetic. > > I think compiler switches or attributes enabling a different dialect of > the language are a bad idea as a whole. Code from libraries is directly > mixed into other crates, so changing the semantics of the language is > inherently broken. > Even if checked arithmetic is only turned on for testing/debugging and not in production code, it's still a strict improvement over the status quo. Under the status quo, except where wraparound is the intended semantics, overflow is silently wrong 100% of the time. With the alternative, that percentage is smaller. > > > Consider the status quo. 
People use integer types which wrap on > > overflow. If they ever do overflow, it means misbehaviors and bugs. If > > we had a compiler flag to turn on checked arithmetic, even if it were > > only used a few times in testing, it would be a strict improvement: more > > bugs would be caught with less effort. > > > > But we clearly can't just add this flag for existing types, because > > they're well-defined to wrap around on overflow, and some programs > > (category #1) rely on this. So we need to have separate types. > > > > One option is therefore to just define this set of types as failing the > > task on overflow if checked arithmetic is enabled, and to wrap around if > > it's disabled. But it doesn't necessarily make sense to specify > > wraparound in the latter case, given that programs are not supposed to > > depend on it: they may be compiled with either flag, and should avoid > > overflow. > > > > Another observation: > > > > * Undefined behavior is anathema to the safe subset of the language. > > That would mean that it's not safe. > > > > * But *unspecified results* are maybe not so bad. We might already have > > them for bit-shifts. (Question for the audience: do we?) > > > > If unspecified results are acceptable, then we could instead say that > > these types fail on overflow if checked arithmetic is enabled, and have > > unspecified results if it isn't. But saying they wrap around is fine as > > well. > > > > This way, we can put off the choice between the rock and the hard place > > from program-writing time to compile time, at least. > > > > > > # Defaults > > > > Even if we provide the various options from above, from the perspective > > of what types people end up using, defaults are very important. > > > > There's two kinds of defaults: > > > > * The de jure default, inferred by the type system in the absence of > > other information, which used to be `int`. Thankfully, we're removing > this. > > > > * The de facto, cultural default. For instance, if there is a type > > called "int", most people will use it without thinking. > > > > The latter question is still something we need to think about. Should we > > have a clear cultural default? Or should we force people to explicitly > > choose between checked and wrapping arithmetic? > > > > For the most part, this boils down to: > > > > * If `int` is checked, the default is slow > > > > * If `int` wraps, the default is wrong > > > > * If there is no `int`, people are confused > > > > Regarding the first, we seem to be living in deathly fear of someone > > naively writing an arithmetic benchmark in Rust, putting it up on the > > internet, and saying, "Look: Rust is slow". This is not an unrealistic > > scenario, sadly. The question is whether we'd rather have more programs > > be accidentally incorrect in order to avoid bad publicity from > > benchmarks being accidentally slow. > > > > Regarding the third, even if the only types we have are `intc`, `intw`, > > `ic8`, `iw8`, and so on, we *still* can't entirely escape creating a > > cultural default, because we still need to choose types for functions in > > the standard library and for built-in operations like array indexing. > > Then the path of least resistance for any given program will be to use > > the same types. > > > > There's one thing that might help resolve this conundrum, which is if we > > consider the previously-described scheme with compiler flags to control > > checked arithmetic to be acceptable. 
In that case, I think those types > > would be the clear choice to be the de facto defaults. Then we would > have: > > > > * `i8`, `u8` .. `i64`, `u64`, `int`, and `uint` types, which fail the > > task on overflow if checked arithmetic is turned on, and either wrap > > around or have an unspecified result if it's off > > > > * a corresponding set of types somewhere in the standard library, which > > wrap around no matter what > > > > * and another set of corresponding types, which are checked no matter > what > > > > > > -G?bor > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Fri Jun 20 13:55:03 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Fri, 20 Jun 2014 22:55:03 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: On Wed, Jun 18, 2014 at 10:05 PM, Gregory Maxwell wrote: > On Wed, Jun 18, 2014 at 10:08 AM, G?bor Lehel > wrote: > > memory safety bugs.) This is a step up from C's insanity of undefined > > behavior for signed overflow, where the compiler assumes that overflow > > *cannot* happen and even optimizes based on that assumption, but it's > still > > a sad state of affairs. > > C's behavior is not without an advantage. It means that every > operation on signed values in C has an implicit latent assertion for > analysis tools: If wrapping can happen in operation the program is > wrong, end of story. This means you can use existing static and > dynamic analysis tools while testing and have a zero false positive > rate? not just on your own code but on any third party code you're > depending on too. > > In languages like rust where signed overflow is defined, no such > promises exists? signed overflow at runtime might be perfectly valid > behavior, and so analysis and testing require more work to produce > useful results. You might impose a standard on your own code that > requires that all valid signed overflow must be annotated in some > manner, but this does nothing for third party code (including the > standard libraries). > > The value here persists even when there is normally no checking at > runtime, because the tools can still be run sometimes? which is less > of a promise than always on runtime checking but it also has no > runtime cost. > > So I think there would be a value in rust of having types for which > wrap is communicated by the developer as being invalid, even if it > were not normally checked at runtime. Being able to switch between > safety levels is not generally the rust way? or so it seems to me? and > may not be justifiably in cases where the risk vs cost ratio is > especially high (e.g. bounds checking on memory)... but I think it's > better than not having the safety facility at all. > > The fact that C can optimize non-overflow is also fairly useful in > proving loop bounds and allowing the autovectorization to work. I've > certantly had signal processing codebases where this made a > difference, but I'm not sure if the same communication to the > optimizer might not be available in other ways in rust. > You seem to be making the same arguments that I did in the "Between a rock and a hard place" section. Is that intentional? > > > `CheckedAdd` and co. are important to have for flexibility, but they're > far > > too unwieldy for general use. 
People aren't going to write > > `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate > design > > might be something like this: > > Not only does friction like that discourage use? it also gets in the > way of people switching behaviors between testing and production when > performance considerations really do preclude always on testing. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Fri Jun 20 13:56:17 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Fri, 20 Jun 2014 22:56:17 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A1F44C.6090404@gmail.com> References: <53A1F44C.6090404@gmail.com> Message-ID: On Wed, Jun 18, 2014 at 10:19 PM, Daniel Micay wrote: > On 18/06/14 03:40 PM, comex wrote: > > On Wed, Jun 18, 2014 at 1:08 PM, G?bor Lehel > wrote: > >> To partially address this, once we have tracing GC, and if we manage to > make > >> `Gc: Copy`, we should add unbounded `Integer` (as in Haskell) and > >> `Natural` types which internally use `Gc`, and so are also `Copy`. (In > >> exchange, they wouldn't be `Send`, but that's far less pervasive.) > > > > Wait, what? Since when is sharing data between threads an uncommon use > case? > > Data remaining local to the thread it was allocated in is the common > case. That doesn't mean that sending dynamic allocations to other tasks > or sharing dynamic allocations is bad. `Rc` is inherently local to a > thread, so it might as well be using an allocator leveraging that. > > > (Personally I think this more points to the unwieldiness of typing > > .clone() for cheap and potentially frequent clones like Rc...) > > Either way, it doesn't make sense to make a special case for `Gc`. > > If `copy_nonoverlapping_memory` isn't enough to move it somewhere, then > it's not `Copy`. A special-case shouldn't be arbitrarily created for it > without providing the same thing for user-defined types. That's exactly > the kind of poor design that Rust has been fleeing from. > I agree. Sorry if I wasn't clear: I wasn't supposing we might bend the rules for `Gc`, but that `Gc` might fit them. > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Fri Jun 20 14:07:58 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Fri, 20 Jun 2014 23:07:58 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <53A1D892.5080403@mozilla.com> Message-ID: On Thu, Jun 19, 2014 at 9:05 AM, Jerry Morrison wrote: > Nice analysis! > > Over what scope should programmers pick between G?bor's 3 categories? > > The "wraparound is desired" category should only occur in narrow parts of > code, like computing a hash. That suits a wraparound-operator better than a > wraparound-type, and certainly better than a compiler switch. And it > doesn't make sense for a type like 'int' that doesn't have a fixed size. > I thought hash algorithms were precisely the kind of case where you might opt for types which were clearly defined as wrapping. Why do you think using different operators instead would be preferred? 
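For reference, the wraparound-operator style from the quoted paragraph might look like this FNV-1a sketch, with a wrapping_mul method standing in for whatever spelling is eventually chosen (a dedicated operator, a method, or the WrappingMul trait from earlier in the thread):

    // Only the accumulate step wants wraparound; the inputs and the returned
    // hash are ordinary non-wrapping u64 values.
    fn fnv1a(bytes: &[u8]) -> u64 {
        let mut hash: u64 = 14695981039346656037;        // FNV-1a offset basis
        for &b in bytes.iter() {
            hash ^= b as u64;                            // XOR cannot overflow
            hash = hash.wrapping_mul(1099511628211);     // FNV prime; wraps by design
        }
        hash
    }

With a wrapping type instead, the same function would either convert at the boundaries or return a value whose inferred type keeps wrapping downstream, which is the failure mode Jerry describes below.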
> > The "wraparound is undesired but performance is critical" category occurs > in the most performance critical bits of code [I'm doubting that all parts > of all Rust programs are performance critical], and programmers need to > make the trade-off over that limited scope. Maybe via operators or types, > but not by a compiler switch over whole compilation units. > > That leaves "wraparound is undesired and performance is not critical" > category for everything else. The choice between checked vs. unbounded > sounds like typing. > > BigUint is weird: It can underflow but not overflow. When you use its > value in a more bounded way you'll need to bounds-check it then, whether it > can go negative or not. Wouldn't it be easier to discard it than squeeze it > into the wraparound or checked models? > Making the unbounded integer types implement the Checking/Wrapping traits is more for completeness than anything else, I'm not sure whether it has practical value. A BigUint/Natural type is not as important as BigInt/Integer, but it can be nice to have. Haskell only has Integer in the Prelude, but an external package provides Natural, and there've been proposals to mainline it. It's useful for function inputs where only nonnegative values make sense. You could write asserts manually, but you might as well factor them out. And types are documentation. The Haskell implementation of Natural is just a newtype over Integer with added checks, and the same thing might make sense for Rust. On Wed, Jun 18, 2014 at 11:21 AM, Brian Anderson wrote: > > On 06/18/2014 10:08 AM, G?bor Lehel wrote: > >> >> # Checked math >> >> For (2), the standard library offers checked math in the `CheckedAdd`, >> `CheckedMul` etc. traits, as well as integer types of unbounded size: >> `BigInt` and `BigUint`. This is good, but it's not enough. The acid test >> has to be whether for non-performance-critical code, people are actually >> *using* checked math. If they're not, then we've failed. >> >> `CheckedAdd` and co. are important to have for flexibility, but they're >> far too unwieldy for general use. People aren't going to write >> `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate >> design might be something like this: >> >> * Have traits for all the arithmetic operations for both checking on >> overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as now), >> `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. >> >> * Offer convenience methods for the Checked traits which perform >> `unwrap()` automatically. >> >> * Have separate sets of integer types which check for overflow and which >> wrap around on overflow. Whatever they're called: `CheckedU8`, >> `checked::u8`, `uc8`, ... >> >> * Both sets of types implement all of the Checked* and Wrapping* traits. >> You can use explicit calls to get either behavior with either types. >> >> * The checked types use the Checked traits to implement the operator >> overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping >> traits to implement them. In other words, the difference between the types >> is (probably only) in the behavior of the operators. >> >> * `BigInt` also implements all of the Wrapping and Checked traits: >> because overflow never happens, it can claim to do anything if it "does >> happen". `BigUint` implements all of them except for the Wrapping traits >> which may underflow, such as `WrappingSub`, because it has nowhere to wrap >> around to. 
>> >> Another option would be to have just one set of types but two sets of >> operators, like Swift does. I think that would work as well, or even >> better, but we've been avoiding having any operators which aren't familiar >> from C. >> > > The general flavor of this proposal w/r/t checked arithmetic sounds pretty > reasonable to me, and we can probably make progress on this now. I > particularly think that having checked types that use operator overloading > is important for ergonomics. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- Jerry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Fri Jun 20 14:29:50 2014 From: banderson at mozilla.com (Brian Anderson) Date: Fri, 20 Jun 2014 14:29:50 -0700 Subject: [rust-dev] Weekend buildbot maintenance Message-ID: <53A4A7CE.6050605@mozilla.com> This weekend I'll be moving the Rust buildbot master and bors over to a new machine. There will be some downtime but otherwise no new changes. That is all. From jhm456 at gmail.com Fri Jun 20 16:37:31 2014 From: jhm456 at gmail.com (Jerry Morrison) Date: Fri, 20 Jun 2014 16:37:31 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <53A1D892.5080403@mozilla.com> Message-ID: On Fri, Jun 20, 2014 at 2:07 PM, G?bor Lehel wrote: > > > > On Thu, Jun 19, 2014 at 9:05 AM, Jerry Morrison wrote: > >> Nice analysis! >> >> Over what scope should programmers pick between G?bor's 3 categories? >> >> The "wraparound is desired" category should only occur in narrow parts of >> code, like computing a hash. That suits a wraparound-operator better than a >> wraparound-type, and certainly better than a compiler switch. And it >> doesn't make sense for a type like 'int' that doesn't have a fixed size. >> > > I thought hash algorithms were precisely the kind of case where you might > opt for types which were clearly defined as wrapping. Why do you think > using different operators instead would be preferred? > Considering a hashing or CRC example, the code reads a bunch of non-wraparound values, mashes them together using wraparound arithmetic, then uses the result in a way that does not mean to wrap around at the integer size. It's workable to convert inputs to wraparound types and use wraparound accumulators, then convert the result to a non-wraparound type. But using wraparound operators seems simpler, more visible, and less error-prone. E.g. it'd be a mistake if the hash function returned a wraparound type, which gets assigned with type inference, and so downstream operations wrap around. > >> >> The "wraparound is undesired but performance is critical" category occurs >> in the most performance critical bits of code [I'm doubting that all parts >> of all Rust programs are performance critical], and programmers need to >> make the trade-off over that limited scope. Maybe via operators or types, >> but not by a compiler switch over whole compilation units. >> >> That leaves "wraparound is undesired and performance is not critical" >> category for everything else. The choice between checked vs. unbounded >> sounds like typing. >> >> BigUint is weird: It can underflow but not overflow. When you use its >> value in a more bounded way you'll need to bounds-check it then, whether it >> can go negative or not. Wouldn't it be easier to discard it than squeeze it >> into the wraparound or checked models? 
>> > > Making the unbounded integer types implement the Checking/Wrapping traits > is more for completeness than anything else, I'm not sure whether it has > practical value. > > A BigUint/Natural type is not as important as BigInt/Integer, but it can > be nice to have. Haskell only has Integer in the Prelude, but an external > package provides Natural, and there've been proposals to mainline it. It's > useful for function inputs where only nonnegative values make sense. You > could write asserts manually, but you might as well factor them out. And > types are documentation. > > The Haskell implementation of Natural is just a newtype over Integer with > added checks, and the same thing might make sense for Rust. > I see. Good points. > > On Wed, Jun 18, 2014 at 11:21 AM, Brian Anderson > wrote: > >> >> On 06/18/2014 10:08 AM, G?bor Lehel wrote: >> >>> >>> # Checked math >>> >>> For (2), the standard library offers checked math in the `CheckedAdd`, >>> `CheckedMul` etc. traits, as well as integer types of unbounded size: >>> `BigInt` and `BigUint`. This is good, but it's not enough. The acid test >>> has to be whether for non-performance-critical code, people are actually >>> *using* checked math. If they're not, then we've failed. >>> >>> `CheckedAdd` and co. are important to have for flexibility, but they're >>> far too unwieldy for general use. People aren't going to write >>> `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate >>> design might be something like this: >>> >>> * Have traits for all the arithmetic operations for both checking on >>> overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as now), >>> `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. >>> >>> * Offer convenience methods for the Checked traits which perform >>> `unwrap()` automatically. >>> >>> * Have separate sets of integer types which check for overflow and >>> which wrap around on overflow. Whatever they're called: `CheckedU8`, >>> `checked::u8`, `uc8`, ... >>> >>> * Both sets of types implement all of the Checked* and Wrapping* >>> traits. You can use explicit calls to get either behavior with either types. >>> >>> * The checked types use the Checked traits to implement the operator >>> overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping >>> traits to implement them. In other words, the difference between the types >>> is (probably only) in the behavior of the operators. >>> >>> * `BigInt` also implements all of the Wrapping and Checked traits: >>> because overflow never happens, it can claim to do anything if it "does >>> happen". `BigUint` implements all of them except for the Wrapping traits >>> which may underflow, such as `WrappingSub`, because it has nowhere to wrap >>> around to. >>> >>> Another option would be to have just one set of types but two sets of >>> operators, like Swift does. I think that would work as well, or even >>> better, but we've been avoiding having any operators which aren't familiar >>> from C. >>> >> >> The general flavor of this proposal w/r/t checked arithmetic sounds >> pretty reasonable to me, and we can probably make progress on this now. I >> particularly think that having checked types that use operator overloading >> is important for ergonomics. >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > > -- > Jerry > >> > -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From glaebhoerl at gmail.com Fri Jun 20 17:36:43 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Sat, 21 Jun 2014 02:36:43 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <53A1D892.5080403@mozilla.com> Message-ID: On Sat, Jun 21, 2014 at 1:37 AM, Jerry Morrison wrote: > > On Fri, Jun 20, 2014 at 2:07 PM, G?bor Lehel wrote: > >> >> >> >> On Thu, Jun 19, 2014 at 9:05 AM, Jerry Morrison wrote: >> >>> Nice analysis! >>> >>> Over what scope should programmers pick between G?bor's 3 categories? >>> >>> The "wraparound is desired" category should only occur in narrow parts >>> of code, like computing a hash. That suits a wraparound-operator better >>> than a wraparound-type, and certainly better than a compiler switch. And it >>> doesn't make sense for a type like 'int' that doesn't have a fixed size. >>> >> >> I thought hash algorithms were precisely the kind of case where you might >> opt for types which were clearly defined as wrapping. Why do you think >> using different operators instead would be preferred? >> > > Considering a hashing or CRC example, the code reads a bunch of > non-wraparound values, mashes them together using wraparound arithmetic, > then uses the result in a way that does not mean to wrap around at the > integer size. > > It's workable to convert inputs to wraparound types and use > wraparound accumulators, then convert the result to a non-wraparound type. > But using wraparound operators seems simpler, more visible, and less > error-prone. E.g. it'd be a mistake if the hash function returned a > wraparound type, which gets assigned with type inference, and so downstream > operations wrap around. > Yes, the signature of the hash function shouldn't necessarily expose the implementation's use of wraparound types... though it's not completely obvious to me. What kind of downstream operations would it make sense to perform on a hash value anyway? Anything besides further hashing? I'm only minimally knowledgeable about hashing algorithms, but I would've thought that casting the inputs to wraparound types at the outset and then casting the result back at the end would be *less* error prone than making sure to use the wraparound version for every operation in the function. Is that wrong? Are there any operations within the body of the hash function where overflow should be caught? And if we'd be going with separate operators instead of separate types, hash functions are a niche enough use case that, in themselves, I don't think they *warrant* having distinct symbolic operators for the wraparound operations; they could just use named methods instead. Hashing is the one that always comes up, but are there any other instances where wraparound is the desired semantics? > > >> >>> >>> The "wraparound is undesired but performance is critical" category >>> occurs in the most performance critical bits of code [I'm doubting that all >>> parts of all Rust programs are performance critical], and programmers need >>> to make the trade-off over that limited scope. Maybe via operators or >>> types, but not by a compiler switch over whole compilation units. >>> >>> That leaves "wraparound is undesired and performance is not critical" >>> category for everything else. The choice between checked vs. unbounded >>> sounds like typing. >>> >>> BigUint is weird: It can underflow but not overflow. When you use its >>> value in a more bounded way you'll need to bounds-check it then, whether it >>> can go negative or not. 
Wouldn't it be easier to discard it than squeeze it >>> into the wraparound or checked models? >>> >> >> Making the unbounded integer types implement the Checking/Wrapping traits >> is more for completeness than anything else, I'm not sure whether it has >> practical value. >> >> A BigUint/Natural type is not as important as BigInt/Integer, but it can >> be nice to have. Haskell only has Integer in the Prelude, but an external >> package provides Natural, and there've been proposals to mainline it. It's >> useful for function inputs where only nonnegative values make sense. You >> could write asserts manually, but you might as well factor them out. And >> types are documentation. >> >> The Haskell implementation of Natural is just a newtype over Integer with >> added checks, and the same thing might make sense for Rust. >> > > I see. Good points. > > >> >> On Wed, Jun 18, 2014 at 11:21 AM, Brian Anderson >> wrote: >> >>> >>> On 06/18/2014 10:08 AM, G?bor Lehel wrote: >>> >>>> >>>> # Checked math >>>> >>>> For (2), the standard library offers checked math in the `CheckedAdd`, >>>> `CheckedMul` etc. traits, as well as integer types of unbounded size: >>>> `BigInt` and `BigUint`. This is good, but it's not enough. The acid test >>>> has to be whether for non-performance-critical code, people are actually >>>> *using* checked math. If they're not, then we've failed. >>>> >>>> `CheckedAdd` and co. are important to have for flexibility, but they're >>>> far too unwieldy for general use. People aren't going to write >>>> `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate >>>> design might be something like this: >>>> >>>> * Have traits for all the arithmetic operations for both checking on >>>> overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as now), >>>> `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. >>>> >>>> * Offer convenience methods for the Checked traits which perform >>>> `unwrap()` automatically. >>>> >>>> * Have separate sets of integer types which check for overflow and >>>> which wrap around on overflow. Whatever they're called: `CheckedU8`, >>>> `checked::u8`, `uc8`, ... >>>> >>>> * Both sets of types implement all of the Checked* and Wrapping* >>>> traits. You can use explicit calls to get either behavior with either types. >>>> >>>> * The checked types use the Checked traits to implement the operator >>>> overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping >>>> traits to implement them. In other words, the difference between the types >>>> is (probably only) in the behavior of the operators. >>>> >>>> * `BigInt` also implements all of the Wrapping and Checked traits: >>>> because overflow never happens, it can claim to do anything if it "does >>>> happen". `BigUint` implements all of them except for the Wrapping traits >>>> which may underflow, such as `WrappingSub`, because it has nowhere to wrap >>>> around to. >>>> >>>> Another option would be to have just one set of types but two sets of >>>> operators, like Swift does. I think that would work as well, or even >>>> better, but we've been avoiding having any operators which aren't familiar >>>> from C. >>>> >>> >>> The general flavor of this proposal w/r/t checked arithmetic sounds >>> pretty reasonable to me, and we can probably make progress on this now. I >>> particularly think that having checked types that use operator overloading >>> is important for ergonomics. 
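A minimal sketch of such a checked type, in current Rust syntax; the `Checked` name and this exact API are illustrative rather than an existing library type:

    use std::ops::Add;

    #[derive(Clone, Copy, Debug, PartialEq)]
    struct Checked(u8);

    impl Add for Checked {
        type Output = Checked;
        fn add(self, rhs: Checked) -> Checked {
            // `+` on the checked type is the convenience form of
            // `.checked_add(..).unwrap()`: it fails loudly instead of
            // silently wrapping.
            Checked(self.0.checked_add(rhs.0).expect("u8 addition overflowed"))
        }
    }

    fn main() {
        assert_eq!(Checked(250) + Checked(2), Checked(252));
        // Checked(250) + Checked(10) would panic rather than wrap to 4.
    }

A wrapping counterpart would implement `Add` via `wrapping_add` instead, which is essentially what `std::num::Wrapping` provides today.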
>>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> >> -- >> Jerry >> >>> >> > > > -- > Jerry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhm456 at gmail.com Fri Jun 20 18:31:57 2014 From: jhm456 at gmail.com (Jerry Morrison) Date: Fri, 20 Jun 2014 18:31:57 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <53A1D892.5080403@mozilla.com> Message-ID: On Fri, Jun 20, 2014 at 5:36 PM, G?bor Lehel wrote: > > > > On Sat, Jun 21, 2014 at 1:37 AM, Jerry Morrison wrote: > >> >> On Fri, Jun 20, 2014 at 2:07 PM, G?bor Lehel >> wrote: >> >>> >>> >>> >>> On Thu, Jun 19, 2014 at 9:05 AM, Jerry Morrison >>> wrote: >>> >>>> Nice analysis! >>>> >>>> Over what scope should programmers pick between G?bor's 3 categories? >>>> >>>> The "wraparound is desired" category should only occur in narrow parts >>>> of code, like computing a hash. That suits a wraparound-operator better >>>> than a wraparound-type, and certainly better than a compiler switch. And it >>>> doesn't make sense for a type like 'int' that doesn't have a fixed size. >>>> >>> >>> I thought hash algorithms were precisely the kind of case where you >>> might opt for types which were clearly defined as wrapping. Why do you >>> think using different operators instead would be preferred? >>> >> >> Considering a hashing or CRC example, the code reads a bunch of >> non-wraparound values, mashes them together using wraparound arithmetic, >> then uses the result in a way that does not mean to wrap around at the >> integer size. >> >> It's workable to convert inputs to wraparound types and use >> wraparound accumulators, then convert the result to a non-wraparound type. >> But using wraparound operators seems simpler, more visible, and less >> error-prone. E.g. it'd be a mistake if the hash function returned a >> wraparound type, which gets assigned with type inference, and so downstream >> operations wrap around. >> > > Yes, the signature of the hash function shouldn't necessarily expose the > implementation's use of wraparound types... though it's not completely > obvious to me. What kind of downstream operations would it make sense to > perform on a hash value anyway? Anything besides further hashing? > > I'm only minimally knowledgeable about hashing algorithms, but I would've > thought that casting the inputs to wraparound types at the outset and then > casting the result back at the end would be *less* error prone than making > sure to use the wraparound version for every operation in the function. Is > that wrong? Are there any operations within the body of the hash function > where overflow should be caught? > > And if we'd be going with separate operators instead of separate types, > hash functions are a niche enough use case that, in themselves, I don't > think they *warrant* having distinct symbolic operators for the wraparound > operations; they could just use named methods instead. > > Hashing is the one that always comes up, but are there any other instances > where wraparound is the desired semantics? 
> Here's an example hash function from *Effective Java * (page 48) following its recipe for writing hash functions by combining the object's significant fields: @Override public int hashCode() { int result = 17; result = 31 * result + areaCode; result = 31 * result + prefix; result = 31 * result + lineNumber; return result; } So using Swift's wraparound operators in Java looks like: @Override public int hashCode() { int result = 17; result = 31 &* result &+ areaCode; result = 31 &* result &+ prefix; result = 31 &* result &+ lineNumber; return result; } Alternatively, with a wraparound integer type wint (note that int is defined to be 32 bits in Java): @Override public int hashCode() { wint result = 17; result = (wint) 31 * result + (wint) areaCode; result = (wint) 31 * result + (wint) prefix; result = (wint) 31 * result + (wint) lineNumber; return (int) result; } In this example, it's easier to get the first one right than the second one. The prototypical use for a hash code is to index into a hash table modulo the table's current size. It can also be used for debugging, e.g. Java's default toString() method uses the object's class name and hash, returning something like "PhoneNumber at 163b91". Another example of wraparound math is computing a checksum like CRC32. The checksum value is typically sent over a wire or stored in a storage medium to cross-check data integrity at the receiving end. After computing the checksum, you only want to pass it around and compare it. The only other example that comes to mind is emulating the arithmetic operations of a target CPU or other hardware. In other cases of bounded numbers, like ARGB color components, one wants to deal with overflow, not silently wraparound. Implementing BigInt can use wraparound math if it can also get the carry bit. Yes, these cases are so few that named operators may suffice. That's a bit less convenient but linguistically simpler than Swift's 5 wraparound arithmetic operators. >> >>> >>>> >>>> The "wraparound is undesired but performance is critical" category >>>> occurs in the most performance critical bits of code [I'm doubting that all >>>> parts of all Rust programs are performance critical], and programmers need >>>> to make the trade-off over that limited scope. Maybe via operators or >>>> types, but not by a compiler switch over whole compilation units. >>>> >>>> That leaves "wraparound is undesired and performance is not critical" >>>> category for everything else. The choice between checked vs. unbounded >>>> sounds like typing. >>>> >>>> BigUint is weird: It can underflow but not overflow. When you use its >>>> value in a more bounded way you'll need to bounds-check it then, whether it >>>> can go negative or not. Wouldn't it be easier to discard it than squeeze it >>>> into the wraparound or checked models? >>>> >>> >>> Making the unbounded integer types implement the Checking/Wrapping >>> traits is more for completeness than anything else, I'm not sure whether it >>> has practical value. >>> >>> A BigUint/Natural type is not as important as BigInt/Integer, but it >>> can be nice to have. Haskell only has Integer in the Prelude, but an >>> external package provides Natural, and there've been proposals to mainline >>> it. It's useful for function inputs where only nonnegative values make >>> sense. You could write asserts manually, but you might as well factor them >>> out. And types are documentation. 
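Transliterating the hashCode recipe quoted earlier in this message into Rust shows the two styles side by side. The function names are made up, and the method and type names (`wrapping_mul`, `wrapping_add`, `std::num::Wrapping`) are the ones in today's standard library:

    use std::num::Wrapping;

    // Operator-per-call style: every operation names the wrapping behaviour.
    fn hash_methods(area_code: u32, prefix: u32, line_number: u32) -> u32 {
        let mut result: u32 = 17;
        result = result.wrapping_mul(31).wrapping_add(area_code);
        result = result.wrapping_mul(31).wrapping_add(prefix);
        result = result.wrapping_mul(31).wrapping_add(line_number);
        result
    }

    // Wrapping-type style: convert at the boundary, compute with plain `+`
    // and `*`, and convert back so the wrapping behaviour doesn't leak out.
    fn hash_type(area_code: u32, prefix: u32, line_number: u32) -> u32 {
        let mut result = Wrapping(17u32);
        result = result * Wrapping(31) + Wrapping(area_code);
        result = result * Wrapping(31) + Wrapping(prefix);
        result = result * Wrapping(31) + Wrapping(line_number);
        result.0
    }

    fn main() {
        assert_eq!(hash_methods(650, 555, 1234), hash_type(650, 555, 1234));
    }

Returning a plain `u32` from both keeps the wrapping behaviour from propagating into downstream code, which is the concern raised above about a wraparound return type and type inference.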
>>> >>> The Haskell implementation of Natural is just a newtype over Integer >>> with added checks, and the same thing might make sense for Rust. >>> >> >> I see. Good points. >> >> >>> >>> On Wed, Jun 18, 2014 at 11:21 AM, Brian Anderson >>> wrote: >>> >>>> >>>> On 06/18/2014 10:08 AM, G?bor Lehel wrote: >>>> >>>>> >>>>> # Checked math >>>>> >>>>> For (2), the standard library offers checked math in the `CheckedAdd`, >>>>> `CheckedMul` etc. traits, as well as integer types of unbounded size: >>>>> `BigInt` and `BigUint`. This is good, but it's not enough. The acid test >>>>> has to be whether for non-performance-critical code, people are actually >>>>> *using* checked math. If they're not, then we've failed. >>>>> >>>>> `CheckedAdd` and co. are important to have for flexibility, but >>>>> they're far too unwieldy for general use. People aren't going to write >>>>> `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate >>>>> design might be something like this: >>>>> >>>>> * Have traits for all the arithmetic operations for both checking on >>>>> overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as now), >>>>> `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. >>>>> >>>>> * Offer convenience methods for the Checked traits which perform >>>>> `unwrap()` automatically. >>>>> >>>>> * Have separate sets of integer types which check for overflow and >>>>> which wrap around on overflow. Whatever they're called: `CheckedU8`, >>>>> `checked::u8`, `uc8`, ... >>>>> >>>>> * Both sets of types implement all of the Checked* and Wrapping* >>>>> traits. You can use explicit calls to get either behavior with either types. >>>>> >>>>> * The checked types use the Checked traits to implement the operator >>>>> overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping >>>>> traits to implement them. In other words, the difference between the types >>>>> is (probably only) in the behavior of the operators. >>>>> >>>>> * `BigInt` also implements all of the Wrapping and Checked traits: >>>>> because overflow never happens, it can claim to do anything if it "does >>>>> happen". `BigUint` implements all of them except for the Wrapping traits >>>>> which may underflow, such as `WrappingSub`, because it has nowhere to wrap >>>>> around to. >>>>> >>>>> Another option would be to have just one set of types but two sets of >>>>> operators, like Swift does. I think that would work as well, or even >>>>> better, but we've been avoiding having any operators which aren't familiar >>>>> from C. >>>>> >>>> >>>> The general flavor of this proposal w/r/t checked arithmetic sounds >>>> pretty reasonable to me, and we can probably make progress on this now. I >>>> particularly think that having checked types that use operator overloading >>>> is important for ergonomics. >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>> >>> >>> >>> -- >>> Jerry >>> >>>> >>> >> >> >> -- >> Jerry >> > > -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmaxwell at gmail.com Fri Jun 20 19:20:58 2014 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Fri, 20 Jun 2014 19:20:58 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: On Wed, Jun 18, 2014 at 10:08 AM, G?bor Lehel wrote: > core facts: wrapping is bad, but checking is slow. 
The current consensus On this point, has anyone tried changing the emitted code for all i32 operations to add trivial checks, hopefully in a way that llvm can optimize out when value analysis proves them redundant, which do something trivial update a per task counter when hit and benchmarked servo / language benchmark game programs to try to get a sane bound on how bad the hit is even when the programmers aren't making any effort to avoid the overhead? From lists at ncameron.org Fri Jun 20 20:49:27 2014 From: lists at ncameron.org (Nick Cameron) Date: Sat, 21 Jun 2014 15:49:27 +1200 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> Message-ID: I think having copy constructors is the only way to get rid of `.clone()` all over the place when using` Rc`. That, to me, seems very important (in making smart pointers first class citizens of Rust, without this, I would rather go back to having @-pointers). The trouble is, I see incrementing a ref count as the upper bound on the work that should be done in a copy constructor and I see no way to enforce that. So, I guess +1 to spirit of the OP, but no solid proposal for how to do it. On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel wrote: > I'm not a fan of the idea of blessing certain types with a > compiler-defined whitelist. And if the choice is then between ugly code and > copy constructors, I'll take ugly code over surprising code. > > > On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton > wrote: > >> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] >> >> Currently being Copy equates with being Pod. The more time passes and >>> the more code examples I see, it is amazing the amount of ugliness that >>> it causes. I wonder if there is a way out. >>> >> >> Part of the problem is that a lot of library code assumes that Copy types >> can be copied by just moving bytes around. Having copy constructors would >> mean that this simplifying assumption would have to change. It's doable, I >> suppose, but having copy constructors would have a significant downside. >> >> Patrick >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Fri Jun 20 21:04:23 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sat, 21 Jun 2014 00:04:23 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> Message-ID: > I think having copy constructors is the only way to get rid of `.clone()` all over the place when using` Rc`. I find making `.clone()` explicit to be a valuable feature, and I bet I'm not the only one. Its absence guarantees that the pointer is being moved and no refcount is being bumped. It's one thing if the inclusion of copy constructors allows code to be more generic. But muddying program behavior just to make the source code prettier is a poor tradeoff for a systems language. On Fri, Jun 20, 2014 at 11:49 PM, Nick Cameron wrote: > I think having copy constructors is the only way to get rid of `.clone()` > all over the place when using` Rc`. That, to me, seems very important (in > making smart pointers first class citizens of Rust, without this, I would > rather go back to having @-pointers). 
The trouble is, I see incrementing a > ref count as the upper bound on the work that should be done in a copy > constructor and I see no way to enforce that. > > So, I guess +1 to spirit of the OP, but no solid proposal for how to do it. > > > On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel > wrote: > >> I'm not a fan of the idea of blessing certain types with a >> compiler-defined whitelist. And if the choice is then between ugly code and >> copy constructors, I'll take ugly code over surprising code. >> >> >> On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton >> wrote: >> >>> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] >>> >>> Currently being Copy equates with being Pod. The more time passes and >>>> the more code examples I see, it is amazing the amount of ugliness that >>>> it causes. I wonder if there is a way out. >>>> >>> >>> Part of the problem is that a lot of library code assumes that Copy >>> types can be copied by just moving bytes around. Having copy constructors >>> would mean that this simplifying assumption would have to change. It's >>> doable, I suppose, but having copy constructors would have a significant >>> downside. >>> >>> Patrick >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Fri Jun 20 21:05:53 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Fri, 20 Jun 2014 21:05:53 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> Message-ID: <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> I sort of like being forced to use .clone() to clone a ref-counted value, since it makes the memory accesses and increment more explicit and forces you to think which functions actually need to take an Rc and which functions can simply take an &. Also, if Rc becomes implicitly copyable, then would it be copied rather than moved on every use, or would you move it on the last use? The former seems untenable for performance reasons, since removing unnecessary ref-count operations is important for performance. The latter seems unpredictable, since adding a second use of a value in a function would mean that new code is implicitly executed wherever the first use is. Cameron On Jun 20, 2014, at 8:49 PM, Nick Cameron wrote: > I think having copy constructors is the only way to get rid of `.clone()` all over the place when using` Rc`. That, to me, seems very important (in making smart pointers first class citizens of Rust, without this, I would rather go back to having @-pointers). The trouble is, I see incrementing a ref count as the upper bound on the work that should be done in a copy constructor and I see no way to enforce that. > > So, I guess +1 to spirit of the OP, but no solid proposal for how to do it. > > > On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel wrote: > I'm not a fan of the idea of blessing certain types with a compiler-defined whitelist. And if the choice is then between ugly code and copy constructors, I'll take ugly code over surprising code. > > > On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton wrote: > On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] > > Currently being Copy equates with being Pod. 
The more time passes and > the more code examples I see, it is amazing the amount of ugliness that > it causes. I wonder if there is a way out. > > Part of the problem is that a lot of library code assumes that Copy types can be copied by just moving bytes around. Having copy constructors would mean that this simplifying assumption would have to change. It's doable, I suppose, but having copy constructors would have a significant downside. > > Patrick > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ncameron.org Fri Jun 20 23:06:22 2014 From: lists at ncameron.org (Nick Cameron) Date: Sat, 21 Jun 2014 18:06:22 +1200 Subject: [rust-dev] On Copy = POD In-Reply-To: <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: bstrie: you're right it is a trade off, but I don't agree that its not worth it. We're talking about non-atomic incrementing of an integer - that is pretty much the cheapest thing you can do on a processor (not free of course, since caching, etc., but still very cheap). I've programmed a lot in C++ with ref counted pointers and never had a problem remembering that there is a cost, and it makes using them pleasant. I found all the clone()s in Rust unpleasant, it really put me off using ref counting. The transition from using references to using Rc was particularly awful. Given that this is something C++ programmers coming to Rust will be used to using, I believe ergonomics is especially important. In this case I don't think we need to aim to be more 'bare metal' than C++. Transparent, ref counted pointers in C++ are popular and seem to work pretty well, although obviously not perfectly. zwarich: I haven't thought this through to a great extent, and I don't think here is the right place to plan the API. But, you ought to still have control over whether an Rc pointer is copied or referenced. If you have an Rc object and pass it to a function which takes an Rc, it is copied, if it takes a &Rc or a &T then it references (in the latter case with an autoderef-ref). If the function is parametric over U and takes a &U, then we instantiate U with either Rc or T (in either case it would be passed by ref without an increment, deciding which is not changed by having a copy constructor). If the function takes a U literal, then U must be instantiated with Rc. So, you still get to control whether you reference with an increment or not. I think if Rc is copy, then it is always copied. I would not expect it to ever move. I don't think that is untenable, performance wise, after all it is what everyone is currently doing in C++. I agree the second option seems unpredictable and thus less pleasant. Cheers, Nick On Sat, Jun 21, 2014 at 4:05 PM, Cameron Zwarich wrote: > I sort of like being forced to use .clone() to clone a ref-counted value, > since it makes the memory accesses and increment more explicit and forces > you to think which functions actually need to take an Rc and which > functions can simply take an &. 
> > Also, if Rc becomes implicitly copyable, then would it be copied rather > than moved on every use, or would you move it on the last use? The former > seems untenable for performance reasons, since removing unnecessary > ref-count operations is important for performance. The latter seems > unpredictable, since adding a second use of a value in a function would > mean that new code is implicitly executed wherever the first use is. > > Cameron > > On Jun 20, 2014, at 8:49 PM, Nick Cameron wrote: > > I think having copy constructors is the only way to get rid of `.clone()` > all over the place when using` Rc`. That, to me, seems very important (in > making smart pointers first class citizens of Rust, without this, I would > rather go back to having @-pointers). The trouble is, I see incrementing a > ref count as the upper bound on the work that should be done in a copy > constructor and I see no way to enforce that. > > So, I guess +1 to spirit of the OP, but no solid proposal for how to do it. > > > On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel > wrote: > >> I'm not a fan of the idea of blessing certain types with a >> compiler-defined whitelist. And if the choice is then between ugly code and >> copy constructors, I'll take ugly code over surprising code. >> >> >> On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton >> wrote: >> >>> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] >>> >>> Currently being Copy equates with being Pod. The more time passes and >>>> the more code examples I see, it is amazing the amount of ugliness that >>>> it causes. I wonder if there is a way out. >>>> >>> >>> Part of the problem is that a lot of library code assumes that Copy >>> types can be copied by just moving bytes around. Having copy constructors >>> would mean that this simplifying assumption would have to change. It's >>> doable, I suppose, but having copy constructors would have a significant >>> downside. >>> >>> Patrick >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Fri Jun 20 23:14:23 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 02:14:23 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: <53A522BF.20002@gmail.com> On 21/06/14 02:06 AM, Nick Cameron wrote: > bstrie: you're right it is a trade off, but I don't agree that its not > worth it. We're talking about non-atomic incrementing of an integer - > that is pretty much the cheapest thing you can do on a processor (not > free of course, since caching, etc., but still very cheap). I've > programmed a lot in C++ with ref counted pointers and never had a > problem remembering that there is a cost, and it makes using them > pleasant. I found all the clone()s in Rust unpleasant, it really put me > off using ref counting. The transition from using references to using Rc > was particularly awful. 
Given that this is something C++ programmers > coming to Rust will be used to using, I believe ergonomics is especially > important. > > In this case I don't think we need to aim to be more 'bare metal' than > C++. Transparent, ref counted pointers in C++ are popular and seem to > work pretty well, although obviously not perfectly. > > zwarich: I haven't thought this through to a great extent, and I don't > think here is the right place to plan the API. But, you ought to still > have control over whether an Rc pointer is copied or referenced. If you > have an Rc object and pass it to a function which takes an Rc, it > is copied, if it takes a &Rc or a &T then it references (in the > latter case with an autoderef-ref). If the function is parametric over U > and takes a &U, then we instantiate U with either Rc or T (in either > case it would be passed by ref without an increment, deciding which is > not changed by having a copy constructor). If the function takes a U > literal, then U must be instantiated with Rc. So, you still get to > control whether you reference with an increment or not. > > I think if Rc is copy, then it is always copied. I would not expect it > to ever move. I don't think that is untenable, performance wise, after > all it is what everyone is currently doing in C++. I agree the second > option seems unpredictable and thus less pleasant. It's a severe performance issue in C++11 with `std::shared_ptr` because it uses atomic reference counting. Even for `Rc`, these writes cause significant issues for alias analysis and end up causing many missed optimizations. Rust needs a way to elide them, and with the `move` keyword gone that means last use analysis or maintaining the current situation where Rust always performs the same operation as C for passing, returning and assignment (a shallow copy). It will be much harder to write failure-safe code if basic operations like assignment can fail. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From steve at steveklabnik.com Fri Jun 20 23:19:59 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sat, 21 Jun 2014 02:19:59 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: <53A522BF.20002@gmail.com> References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <53A522BF.20002@gmail.com> Message-ID: > I found all the clone()s in Rust unpleasant, it really put me off using ref counting. Excellent. ;) From zwarich at mozilla.com Fri Jun 20 23:21:44 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Fri, 20 Jun 2014 23:21:44 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> On Jun 20, 2014, at 11:06 PM, Nick Cameron wrote: > zwarich: I haven't thought this through to a great extent, and I don't think here is the right place to plan the API. But, you ought to still have control over whether an Rc pointer is copied or referenced. If you have an Rc object and pass it to a function which takes an Rc, it is copied, if it takes a &Rc or a &T then it references (in the latter case with an autoderef-ref). 
If the function is parametric over U and takes a &U, then we instantiate U with either Rc or T (in either case it would be passed by ref without an increment, deciding which is not changed by having a copy constructor). If the function takes a U literal, then U must be instantiated with Rc. So, you still get to control whether you reference with an increment or not. > > I think if Rc is copy, then it is always copied. I would not expect it to ever move. I don't think that is untenable, performance wise, after all it is what everyone is currently doing in C++. I agree the second option seems unpredictable and thus less pleasant. Copying on every single transfer of a ref-counted smart pointer is definitely *not* what everyone is doing in C++. In C++11, move constructors were added, partially to enable smart pointers to behave sanely and eliminate extra copies in this fashion (albeit in some cases requiring explicit moves rather than implicit ones like in Rust). Before that, it was possible to encode this idiom using a separate smart pointer for the expiring value. WebKit relies on (or relied on, before C++11) a scheme like this for adequate performance: https://www.webkit.org/coding/RefPtr.html In theory, you could encode such a scheme into this ?always copy on clone" version of Rust, where Rc would always copy, and RcTemp wouldn?t even implement clone, and would only be moveable and convertible back to an Rc. However, it seems strange to go out of your way to encode a bad version of move semantics back into a language that has native move semantics. Cameron From artella.coding at googlemail.com Sat Jun 21 01:23:38 2014 From: artella.coding at googlemail.com (Artella Coding) Date: Sat, 21 Jun 2014 09:23:38 +0100 Subject: [rust-dev] Proposal to rename &mut Message-ID: In : http://www.reddit.com/r/rust/comments/2581s5/informal_survey_which_is_clearer_mutability_or/ http://smallcultfollowing.com/babysteps/blog/2014/05/13/focusing-on-ownership/ https://air.mozilla.org/guaranteeing-memory-safety-in-rust/ there was talk of renaming &mut. What is the current status of this? Is it being worked on or has it been abandoned in favour of something else? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ncameron.org Sat Jun 21 02:10:35 2014 From: lists at ncameron.org (Nick Cameron) Date: Sat, 21 Jun 2014 21:10:35 +1200 Subject: [rust-dev] On Copy = POD In-Reply-To: <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: I guess I forgot that C++ ref counted pointers (pre-11) generally have a move version of the type. Thanks for pointing that out. I agree it would be odd to copy that design (Rc/RcTemp) in a language which has move semantics by default. I wonder if we could come up with _some_ design that would be better than the current one. My reasoning is that copy-with-increment is the (overwhelmingly) common case for ref-counted pointers and so should be easier/prettier than the less common case (moving). One could argue that the more efficient case (moving) should be prettier and I think that is valid. I'm not sure how to square the two arguments. I do think this deserves more thought than just accepting the current (`.clone()`) situation - I think it is very un-ergonimic. Having two types rather than two copying mechanisms seems more preferable to me, but I hope there is a better solution. 
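To make the trade-off concrete in Rust terms: with `Rc` as it stands, a plain by-value pass is a move and touches no reference count, and the count only changes where a `.clone()` is written. A small sketch, using today's `Rc::strong_count` to make the count visible; the function and variable names are only illustrative:

    use std::rc::Rc;

    fn consume(v: Rc<Vec<u32>>) -> usize {
        v.len() // takes ownership; dropping `v` here decrements the count
    }

    fn main() {
        let data = Rc::new(vec![1, 2, 3]);
        assert_eq!(Rc::strong_count(&data), 1);

        // Explicit clone: the refcount bump is visible at the call site.
        let shared = data.clone();
        assert_eq!(Rc::strong_count(&data), 2);

        // Plain pass-by-value is a move: no refcount traffic at all, but
        // `shared` cannot be used afterwards.
        let n = consume(shared);
        assert_eq!(n, 3);
        assert_eq!(Rc::strong_count(&data), 1);
    }

The question in this thread is whether that bump should instead happen implicitly on every copy, as it does with C++'s `shared_ptr`.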
On Sat, Jun 21, 2014 at 6:21 PM, Cameron Zwarich wrote: > On Jun 20, 2014, at 11:06 PM, Nick Cameron wrote: > > > zwarich: I haven't thought this through to a great extent, and I don't > think here is the right place to plan the API. But, you ought to still have > control over whether an Rc pointer is copied or referenced. If you have an > Rc object and pass it to a function which takes an Rc, it is copied, > if it takes a &Rc or a &T then it references (in the latter case with an > autoderef-ref). If the function is parametric over U and takes a &U, then > we instantiate U with either Rc or T (in either case it would be passed > by ref without an increment, deciding which is not changed by having a copy > constructor). If the function takes a U literal, then U must be > instantiated with Rc. So, you still get to control whether you reference > with an increment or not. > > > > I think if Rc is copy, then it is always copied. I would not expect it > to ever move. I don't think that is untenable, performance wise, after all > it is what everyone is currently doing in C++. I agree the second option > seems unpredictable and thus less pleasant. > > Copying on every single transfer of a ref-counted smart pointer is > definitely *not* what everyone is doing in C++. In C++11, move constructors > were added, partially to enable smart pointers to behave sanely and > eliminate extra copies in this fashion (albeit in some cases requiring > explicit moves rather than implicit ones like in Rust). > > Before that, it was possible to encode this idiom using a separate smart > pointer for the expiring value. WebKit relies on (or relied on, before > C++11) a scheme like this for adequate performance: > > https://www.webkit.org/coding/RefPtr.html > > In theory, you could encode such a scheme into this ?always copy on clone" > version of Rust, where Rc would always copy, and RcTemp wouldn?t even > implement clone, and would only be moveable and convertible back to an Rc. > However, it seems strange to go out of your way to encode a bad version of > move semantics back into a language that has native move semantics. > > Cameron -------------- next part -------------- An HTML attachment was scrubbed... URL: From val at markovic.io Sat Jun 21 03:29:32 2014 From: val at markovic.io (Val Markovic) Date: Sat, 21 Jun 2014 03:29:32 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: On Fri, Jun 20, 2014 at 11:06 PM, Nick Cameron wrote: > I found all the clone()s in Rust unpleasant, it really put me off using > ref counting. > I consider that to be a feature, not a bug. > Given that this is something C++ programmers coming to Rust will be used > to using, I believe ergonomics is especially important. > I write C++ for a living in a massive codebase and shared_ptrs are used extremely rarely, and it's *not *because of the perf overhead of the atomic increment/decrement, but because using shared_ptrs obscures ownership. People tend to just put some memory in a shared_ptr and not care which part of the system owns what and that ends up producing code that's very hard to reason about and maintain. unique_ptrs have made the transfer of ownership of heap-allocated memory super-easy. Damn-nigh every design can be expressed with unique_ptrs owned by the logical owners of that memory passing refs or const refs to other parts of the system. So please don't represent that shared_ptrs are commonly used in all good C++ code. 
Experience has thought me and others to look at shared_ptrs as a code smell and something to be flagged for extra clarification by the author in code review. I hate to quote the Google C++ style guide since it has many flaws, but this is one of the things it gets completely right : "Do not design your code to use shared ownership without a very good reason. " Rust has unique_ptrs in the form of ~ and they're doing their job just fine. Rust needs special support for Rc ergonomics as much as it needs such support for Gc, which is none at all. In fact, making Rc and Gc pointers more difficult to use should steer people away from such poor design crutches. > > In this case I don't think we need to aim to be more 'bare metal' than > C++. Transparent, ref counted pointers in C++ are popular and seem to work > pretty well, although obviously not perfectly. > > zwarich: I haven't thought this through to a great extent, and I don't > think here is the right place to plan the API. But, you ought to still have > control over whether an Rc pointer is copied or referenced. If you have an > Rc object and pass it to a function which takes an Rc, it is copied, > if it takes a &Rc or a &T then it references (in the latter case with an > autoderef-ref). If the function is parametric over U and takes a &U, then > we instantiate U with either Rc or T (in either case it would be passed > by ref without an increment, deciding which is not changed by having a copy > constructor). If the function takes a U literal, then U must be > instantiated with Rc. So, you still get to control whether you reference > with an increment or not. > > I think if Rc is copy, then it is always copied. I would not expect it to > ever move. I don't think that is untenable, performance wise, after all it > is what everyone is currently doing in C++. I agree the second option seems > unpredictable and thus less pleasant. > > Cheers, Nick > > > On Sat, Jun 21, 2014 at 4:05 PM, Cameron Zwarich > wrote: > >> I sort of like being forced to use .clone() to clone a ref-counted value, >> since it makes the memory accesses and increment more explicit and forces >> you to think which functions actually need to take an Rc and which >> functions can simply take an &. >> >> Also, if Rc becomes implicitly copyable, then would it be copied rather >> than moved on every use, or would you move it on the last use? The former >> seems untenable for performance reasons, since removing unnecessary >> ref-count operations is important for performance. The latter seems >> unpredictable, since adding a second use of a value in a function would >> mean that new code is implicitly executed wherever the first use is. >> >> Cameron >> >> On Jun 20, 2014, at 8:49 PM, Nick Cameron wrote: >> >> I think having copy constructors is the only way to get rid of `.clone()` >> all over the place when using` Rc`. That, to me, seems very important (in >> making smart pointers first class citizens of Rust, without this, I would >> rather go back to having @-pointers). The trouble is, I see incrementing a >> ref count as the upper bound on the work that should be done in a copy >> constructor and I see no way to enforce that. >> >> So, I guess +1 to spirit of the OP, but no solid proposal for how to do >> it. >> >> >> On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel < >> ben.striegel at gmail.com> wrote: >> >>> I'm not a fan of the idea of blessing certain types with a >>> compiler-defined whitelist. 
And if the choice is then between ugly code and >>> copy constructors, I'll take ugly code over surprising code. >>> >>> >>> On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton >>> wrote: >>> >>>> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] >>>> >>>> Currently being Copy equates with being Pod. The more time passes and >>>>> the more code examples I see, it is amazing the amount of ugliness that >>>>> it causes. I wonder if there is a way out. >>>>> >>>> >>>> Part of the problem is that a lot of library code assumes that Copy >>>> types can be copied by just moving bytes around. Having copy constructors >>>> would mean that this simplifying assumption would have to change. It's >>>> doable, I suppose, but having copy constructors would have a significant >>>> downside. >>>> >>>> Patrick >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pssalmeida at gmail.com Sat Jun 21 04:42:56 2014 From: pssalmeida at gmail.com (=?UTF-8?Q?Paulo_S=C3=A9rgio_Almeida?=) Date: Sat, 21 Jun 2014 12:42:56 +0100 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: Regarding the white-listing, I also find it weird, but should the user experience be worse just because Rc and Arc are implemented in Rust and so we should do nothing in the name of orthogonality? A user won't care if Arc and Rc are built-in or not. I may have passed the message that its only ugliness involved, and being lazy to type .clone(). But the point is uniformity and coherence of the pointers API, clarity, and ease of refactoring code. I am thinking of writing an RFC about cleaning up pointers. As of now there are some, in my opinion, needless non-uniformities in the use of pointers. I would like to have some properties: 1) Two programs that use pointers, identical except for pointer types and that both compile should produce semantically equivalent result (i.e., only differ in performance). The idea is that different pointer types would be chosen according to capabilities (is move enough or do I want to mutate something I own, pick Box; do I want to share intra-task, pick Rc or Gc; inter-task, pick Arc). If a program fragment is written using, say Gc, and later I decide to switch to Rc, I should need only change the declaration site(s) and not have to go all-over and add .clone(). (There are other things than Copy that need to be improved, like uniformity of auto-dereferencing and auto-borrowing. Fortunately I hope those to be not as controverse.) 2) Pointers should be transparent, and avoid confusion between methods of the pointer and methods of the referent. 
In particular, having non Box pointers Copy and avoiding pointer cloning, all .clone() in code would mean cloning the referent, being uniform with what happens with Box pointers. A clone should be something more rare and conscious. As of now, having mechanical .clone() in many places makes the "real" refent clones less obvious. An unintended referent clone may go more easily unnoticed. (Other aspects involve switching to UFCS style for pointer methods.) 3) Last use move optimisation should be applied for pointer types. This is not as essential, but as now, the compiler will know the last use place of a variable and use move instead of copy. All white-listed for Copy pointer-types must allow this optimisation. As we are talking about a controlled, to be approved set of types (i.e. Rc and Arc), and not general user-defined types, we can be sure that for all white-listed types this is so. Last use move optimisation would result in the same performance of code as now. Again, this is coherent with Box types, where the first use (by value) must be also the last use. Anyway, I would like to stress that much fewer pointer copies will exist in Rust given (auto-)borrowing. It is important to educate programers to always start by considering &T in function arguments, if they only need to use the T, and add more capabilities if needed. Something like &Rc if the function needs to use the T and occasionally copy the pointer, and only Rc if the normal case is copying the pointer. This is why I even argue that Arc should be Copy even considering the much more expensive copy. The important thing is programers knowing the cost, which will happen for the very few white-listed types, as opposed to allowing general user-defined copies for which the cost is not clear for the reader. A program using Arc should not Use Arc all-over, but only in a few places; after spawning, the proc will typically pass not the Arc but only &T to functions performing the work; this is unless those functions need to spawn further tasks but in this case, the Arc copies are need and in those places we would use .clone() anyway, resulting in the same performance. On 21 June 2014 10:10, Nick Cameron wrote: > I guess I forgot that C++ ref counted pointers (pre-11) generally have a > move version of the type. Thanks for pointing that out. > > I agree it would be odd to copy that design (Rc/RcTemp) in a language > which has move semantics by default. I wonder if we could come up with > _some_ design that would be better than the current one. My reasoning is > that copy-with-increment is the (overwhelmingly) common case for > ref-counted pointers and so should be easier/prettier than the less common > case (moving). One could argue that the more efficient case (moving) should > be prettier and I think that is valid. I'm not sure how to square the two > arguments. I do think this deserves more thought than just accepting the > current (`.clone()`) situation - I think it is very un-ergonimic. Having > two types rather than two copying mechanisms seems more preferable to me, > but I hope there is a better solution. > > > On Sat, Jun 21, 2014 at 6:21 PM, Cameron Zwarich > wrote: > >> On Jun 20, 2014, at 11:06 PM, Nick Cameron wrote: >> >> > zwarich: I haven't thought this through to a great extent, and I don't >> think here is the right place to plan the API. But, you ought to still have >> control over whether an Rc pointer is copied or referenced. 
If you have an >> Rc object and pass it to a function which takes an Rc, it is copied, >> if it takes a &Rc or a &T then it references (in the latter case with an >> autoderef-ref). If the function is parametric over U and takes a &U, then >> we instantiate U with either Rc or T (in either case it would be passed >> by ref without an increment, deciding which is not changed by having a copy >> constructor). If the function takes a U literal, then U must be >> instantiated with Rc. So, you still get to control whether you reference >> with an increment or not. >> > >> > I think if Rc is copy, then it is always copied. I would not expect it >> to ever move. I don't think that is untenable, performance wise, after all >> it is what everyone is currently doing in C++. I agree the second option >> seems unpredictable and thus less pleasant. >> >> Copying on every single transfer of a ref-counted smart pointer is >> definitely *not* what everyone is doing in C++. In C++11, move constructors >> were added, partially to enable smart pointers to behave sanely and >> eliminate extra copies in this fashion (albeit in some cases requiring >> explicit moves rather than implicit ones like in Rust). >> >> Before that, it was possible to encode this idiom using a separate smart >> pointer for the expiring value. WebKit relies on (or relied on, before >> C++11) a scheme like this for adequate performance: >> >> https://www.webkit.org/coding/RefPtr.html >> >> In theory, you could encode such a scheme into this ?always copy on >> clone" version of Rust, where Rc would always copy, and RcTemp wouldn?t >> even implement clone, and would only be moveable and convertible back to an >> Rc. However, it seems strange to go out of your way to encode a bad version >> of move semantics back into a language that has native move semantics. >> >> Cameron > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pssalmeida at gmail.com Sat Jun 21 05:18:38 2014 From: pssalmeida at gmail.com (=?UTF-8?Q?Paulo_S=C3=A9rgio_Almeida?=) Date: Sat, 21 Jun 2014 13:18:38 +0100 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: It should not need to be said, but I am not advocating (over)using Rc, Gc, or Arc. One should start by Box and &T. But when this is not enough, using the other pointer types should not be a pain. More than a pain, I would not like to have unpleasant surprises when I find out that Box is not enough, start refactoring code changing Box to, say, Rc, and then finding out that referents that were cloned when Box was used are now not cloned anymore, and only the pointer is now "cloned". On 21 June 2014 11:29, Val Markovic wrote: > On Fri, Jun 20, 2014 at 11:06 PM, Nick Cameron wrote: > >> I found all the clone()s in Rust unpleasant, it really put me off using >> ref counting. >> > > I consider that to be a feature, not a bug. > > >> Given that this is something C++ programmers coming to Rust will be used >> to using, I believe ergonomics is especially important. >> > > I write C++ for a living in a massive codebase and shared_ptrs are used > extremely rarely, and it's *not *because of the perf overhead of the > atomic increment/decrement, but because using shared_ptrs obscures > ownership. 
People tend to just put some memory in a shared_ptr and not care > which part of the system owns what and that ends up producing code that's > very hard to reason about and maintain. > > unique_ptrs have made the transfer of ownership of heap-allocated memory > super-easy. Damn-nigh every design can be expressed with unique_ptrs owned > by the logical owners of that memory passing refs or const refs to other > parts of the system. > > So please don't represent that shared_ptrs are commonly used in all good > C++ code. Experience has thought me and others to look at shared_ptrs as a > code smell and something to be flagged for extra clarification by the > author in code review. I hate to quote the Google C++ style guide since it > has many flaws, but this is one of the things it gets completely right > : > "Do not design your code to use shared ownership without a very good > reason." > > Rust has unique_ptrs in the form of ~ and they're doing their job just > fine. Rust needs special support for Rc ergonomics as much as it needs such > support for Gc, which is none at all. In fact, making Rc and Gc pointers > more difficult to use should steer people away from such poor design > crutches. > > >> >> In this case I don't think we need to aim to be more 'bare metal' than >> C++. Transparent, ref counted pointers in C++ are popular and seem to work >> pretty well, although obviously not perfectly. >> >> zwarich: I haven't thought this through to a great extent, and I don't >> think here is the right place to plan the API. But, you ought to still have >> control over whether an Rc pointer is copied or referenced. If you have an >> Rc object and pass it to a function which takes an Rc, it is copied, >> if it takes a &Rc or a &T then it references (in the latter case with an >> autoderef-ref). If the function is parametric over U and takes a &U, then >> we instantiate U with either Rc or T (in either case it would be passed >> by ref without an increment, deciding which is not changed by having a copy >> constructor). If the function takes a U literal, then U must be >> instantiated with Rc. So, you still get to control whether you reference >> with an increment or not. >> >> I think if Rc is copy, then it is always copied. I would not expect it to >> ever move. I don't think that is untenable, performance wise, after all it >> is what everyone is currently doing in C++. I agree the second option seems >> unpredictable and thus less pleasant. >> >> Cheers, Nick >> >> >> On Sat, Jun 21, 2014 at 4:05 PM, Cameron Zwarich >> wrote: >> >>> I sort of like being forced to use .clone() to clone a ref-counted >>> value, since it makes the memory accesses and increment more explicit and >>> forces you to think which functions actually need to take an Rc and which >>> functions can simply take an &. >>> >>> Also, if Rc becomes implicitly copyable, then would it be copied rather >>> than moved on every use, or would you move it on the last use? The former >>> seems untenable for performance reasons, since removing unnecessary >>> ref-count operations is important for performance. The latter seems >>> unpredictable, since adding a second use of a value in a function would >>> mean that new code is implicitly executed wherever the first use is. >>> >>> Cameron >>> >>> On Jun 20, 2014, at 8:49 PM, Nick Cameron wrote: >>> >>> I think having copy constructors is the only way to get rid of >>> `.clone()` all over the place when using` Rc`. 
That, to me, seems very >>> important (in making smart pointers first class citizens of Rust, without >>> this, I would rather go back to having @-pointers). The trouble is, I see >>> incrementing a ref count as the upper bound on the work that should be done >>> in a copy constructor and I see no way to enforce that. >>> >>> So, I guess +1 to spirit of the OP, but no solid proposal for how to do >>> it. >>> >>> >>> On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel < >>> ben.striegel at gmail.com> wrote: >>> >>>> I'm not a fan of the idea of blessing certain types with a >>>> compiler-defined whitelist. And if the choice is then between ugly code and >>>> copy constructors, I'll take ugly code over surprising code. >>>> >>>> >>>> On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton >>>> wrote: >>>> >>>>> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] >>>>> >>>>> Currently being Copy equates with being Pod. The more time passes and >>>>>> the more code examples I see, it is amazing the amount of ugliness >>>>>> that >>>>>> it causes. I wonder if there is a way out. >>>>>> >>>>> >>>>> Part of the problem is that a lot of library code assumes that Copy >>>>> types can be copied by just moving bytes around. Having copy constructors >>>>> would mean that this simplifying assumption would have to change. It's >>>>> doable, I suppose, but having copy constructors would have a significant >>>>> downside. >>>>> >>>>> Patrick >>>>> >>>>> _______________________________________________ >>>>> Rust-dev mailing list >>>>> Rust-dev at mozilla.org >>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>> >>>> >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at dhardy.name Sat Jun 21 05:27:42 2014 From: lists at dhardy.name (Diggory Hardy) Date: Sat, 21 Jun 2014 08:27:42 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: <1653164.tiqDWJOpWC@tph-l13071> As far as I am aware, using theorem proving tools[1] to provide limits on value ranges is pretty hard and often computationally intensive to do in /simple/ code. I've only seen prototype systems where the user is expected to write full contracts on exactly how every function may modify every value it could, as well as often providing hints to the prover (especially for loops). So I really don't think this is going to help much. [1]: https://en.wikipedia.org/wiki/Interactive_theorem_proving On Friday 20 Jun 2014 19:20:58 Gregory Maxwell wrote: > On Wed, Jun 18, 2014 at 10:08 AM, G?bor Lehel wrote: > > core facts: wrapping is bad, but checking is slow. 
The current consensus > > On this point, has anyone tried changing the emitted code for all i32 > operations to add trivial checks, hopefully in a way that llvm can > optimize out when value analysis proves them redundant, which do > something trivial update a per task counter when hit and benchmarked > servo / language benchmark game programs to try to get a sane bound on > how bad the hit is even when the programmers aren't making any effort > to avoid the overhead? > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmaxwell at gmail.com Sat Jun 21 05:50:18 2014 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Sat, 21 Jun 2014 05:50:18 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <3978007.QzGOc367vL@tph-l13071> References: <3978007.QzGOc367vL@tph-l13071> Message-ID: On Sat, Jun 21, 2014 at 5:18 AM, Diggory Hardy wrote: > As far as I am aware, using theorem proving tools[1] to provide limits on > value ranges is pretty hard and often computationally intensive to do in > simple code. I've only seen prototype systems where the user is expected to > write full contracts on exactly how every function may modify every value it > could, as well as often providing hints to the prover (especially for > loops). So I really don't think this is going to help much. To be sound is hard, to catch lots of cases less so; existing C compilers manage to prove enough about ranges to eliminate redundant tests pretty often, e.g.

#include <stdio.h>
int main(int argc, char **argv){
(void)argv;
if(argc<16)return 1;
argc+=1000;
if(argc<8)printf("This code is not emitted by an optimizing compiler.\n");
return 0;
}

GCC 4.8.2 -O2 manages this example fine and I expect this is true for other modern optimizing C compilers. This is part of the reason that C's undefinedness has real performance implications, as sometimes it can only prove loop iteration counts or pointer non-aliasing (both useful for vectorization) when it knows that indexes cannot overflow. From comexk at gmail.com Sat Jun 21 08:08:29 2014 From: comexk at gmail.com (comex) Date: Sat, 21 Jun 2014 11:08:29 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: On Sat, Jun 21, 2014 at 5:10 AM, Nick Cameron wrote: > I wonder if we could come up with _some_ > design that would be better than the current one. Could always pave the road to hell and have a short postfix operator. It would have to be only for cheap clones, but frankly I think those should have a different method name in any case - it's bad for readability that '.clone()' could mean either copying an entire vector or incrementing an integer. While you're at it, copy Swift's ! operator... From ben.striegel at gmail.com Sat Jun 21 09:00:35 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sat, 21 Jun 2014 12:00:35 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: > I don't think that is untenable, performance wise, after all it is what everyone is currently doing in C++. We have already made several decisions that will disadvantage us with regard to C++.
When we have the opportunity to do better, performance-wise, than C++, we must seize it in order to maintain performance parity overall. On Sat, Jun 21, 2014 at 2:06 AM, Nick Cameron wrote: > bstrie: you're right it is a trade off, but I don't agree that its not > worth it. We're talking about non-atomic incrementing of an integer - that > is pretty much the cheapest thing you can do on a processor (not free of > course, since caching, etc., but still very cheap). I've programmed a lot > in C++ with ref counted pointers and never had a problem remembering that > there is a cost, and it makes using them pleasant. I found all the clone()s > in Rust unpleasant, it really put me off using ref counting. The transition > from using references to using Rc was particularly awful. Given that this > is something C++ programmers coming to Rust will be used to using, I > believe ergonomics is especially important. > > In this case I don't think we need to aim to be more 'bare metal' than > C++. Transparent, ref counted pointers in C++ are popular and seem to work > pretty well, although obviously not perfectly. > > zwarich: I haven't thought this through to a great extent, and I don't > think here is the right place to plan the API. But, you ought to still have > control over whether an Rc pointer is copied or referenced. If you have an > Rc object and pass it to a function which takes an Rc, it is copied, > if it takes a &Rc or a &T then it references (in the latter case with an > autoderef-ref). If the function is parametric over U and takes a &U, then > we instantiate U with either Rc or T (in either case it would be passed > by ref without an increment, deciding which is not changed by having a copy > constructor). If the function takes a U literal, then U must be > instantiated with Rc. So, you still get to control whether you reference > with an increment or not. > > I think if Rc is copy, then it is always copied. I would not expect it to > ever move. I don't think that is untenable, performance wise, after all it > is what everyone is currently doing in C++. I agree the second option seems > unpredictable and thus less pleasant. > > Cheers, Nick > > > On Sat, Jun 21, 2014 at 4:05 PM, Cameron Zwarich > wrote: > >> I sort of like being forced to use .clone() to clone a ref-counted value, >> since it makes the memory accesses and increment more explicit and forces >> you to think which functions actually need to take an Rc and which >> functions can simply take an &. >> >> Also, if Rc becomes implicitly copyable, then would it be copied rather >> than moved on every use, or would you move it on the last use? The former >> seems untenable for performance reasons, since removing unnecessary >> ref-count operations is important for performance. The latter seems >> unpredictable, since adding a second use of a value in a function would >> mean that new code is implicitly executed wherever the first use is. >> >> Cameron >> >> On Jun 20, 2014, at 8:49 PM, Nick Cameron wrote: >> >> I think having copy constructors is the only way to get rid of `.clone()` >> all over the place when using` Rc`. That, to me, seems very important (in >> making smart pointers first class citizens of Rust, without this, I would >> rather go back to having @-pointers). The trouble is, I see incrementing a >> ref count as the upper bound on the work that should be done in a copy >> constructor and I see no way to enforce that. >> >> So, I guess +1 to spirit of the OP, but no solid proposal for how to do >> it. 
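(As a concrete illustration of the pattern being debated, and not part of any message in the thread: the sketch below uses present-day Rust syntax, which differs from the pre-1.0 code quoted here, and the names `keep` and `total` are invented. A function that retains its own handle takes Rc by value, so the caller writes an explicit .clone(); a function that only reads the data takes a plain borrow and causes no reference-count traffic.)

    use std::rc::Rc;

    // Retains its own handle: the caller must clone (bump the count)
    // if it still wants a handle of its own afterwards.
    fn keep(handle: Rc<Vec<u8>>) -> Rc<Vec<u8>> {
        handle
    }

    // Only reads the data: a plain borrow, no reference-count traffic.
    fn total(data: &[u8]) -> u64 {
        data.iter().map(|&b| b as u64).sum()
    }

    fn main() {
        let buf = Rc::new(vec![1u8, 2, 3]);
        let kept = keep(buf.clone()); // the explicit .clone() under discussion
        let sum = total(&buf);        // no clone needed just to read
        println!("sum = {}, handles = {}", sum, Rc::strong_count(&kept));
    }

(Whether that first call should require the .clone() at all is exactly the disagreement in this part of the thread.)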
>> >> >> On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel < >> ben.striegel at gmail.com> wrote: >> >>> I'm not a fan of the idea of blessing certain types with a >>> compiler-defined whitelist. And if the choice is then between ugly code and >>> copy constructors, I'll take ugly code over surprising code. >>> >>> >>> On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton >>> wrote: >>> >>>> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] >>>> >>>> Currently being Copy equates with being Pod. The more time passes and >>>>> the more code examples I see, it is amazing the amount of ugliness that >>>>> it causes. I wonder if there is a way out. >>>>> >>>> >>>> Part of the problem is that a lot of library code assumes that Copy >>>> types can be copied by just moving bytes around. Having copy constructors >>>> would mean that this simplifying assumption would have to change. It's >>>> doable, I suppose, but having copy constructors would have a significant >>>> downside. >>>> >>>> Patrick >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Sat Jun 21 09:05:55 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sat, 21 Jun 2014 12:05:55 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: > A user won't care if Arc and Rc are built-in or not. They will definitely care, once they attempt to write their own pointer types and find that they're second-class citizens compared to the types that have been blessed by the compiler. There's little point in having a powerful and extensible language if even simple types need hardcoded compiler magic to function. On Sat, Jun 21, 2014 at 7:42 AM, Paulo S?rgio Almeida wrote: > Regarding the white-listing, I also find it weird, but should the user > experience be worse just because Rc and Arc are implemented in Rust and so > we should do nothing in the name of orthogonality? A user won't care if Arc > and Rc are built-in or not. > > I may have passed the message that its only ugliness involved, and being > lazy to type .clone(). But the point is uniformity and coherence of the > pointers API, clarity, and ease of refactoring code. I am thinking of > writing an RFC about cleaning up pointers. As of now there are some, in my > opinion, needless non-uniformities in the use of pointers. I would like to > have some properties: > > 1) Two programs that use pointers, identical except for pointer types and > that both compile should produce semantically equivalent result (i.e., only > differ in performance). > > The idea is that different pointer types would be chosen according to > capabilities (is move enough or do I want to mutate something I own, pick > Box; do I want to share intra-task, pick Rc or Gc; inter-task, pick Arc). 
> If a program fragment is written using, say Gc, and later I decide to > switch to Rc, I should need only change the declaration site(s) and not > have to go all-over and add .clone(). > > (There are other things than Copy that need to be improved, like > uniformity of auto-dereferencing and auto-borrowing. Fortunately I hope > those to be not as controverse.) > > 2) Pointers should be transparent, and avoid confusion between methods of > the pointer and methods of the referent. > > In particular, having non Box pointers Copy and avoiding pointer cloning, > all .clone() in code would mean cloning the referent, being uniform with > what happens with Box pointers. A clone should be something more rare and > conscious. As of now, having mechanical .clone() in many places makes the > "real" refent clones less obvious. An unintended referent clone may go more > easily unnoticed. > > (Other aspects involve switching to UFCS style for pointer methods.) > > 3) Last use move optimisation should be applied for pointer types. > > This is not as essential, but as now, the compiler will know the last use > place of a variable and use move instead of copy. All white-listed for Copy > pointer-types must allow this optimisation. As we are talking about a > controlled, to be approved set of types (i.e. Rc and Arc), and not general > user-defined types, we can be sure that for all white-listed types this is > so. Last use move optimisation would result in the same performance of code > as now. Again, this is coherent with Box types, where the first use (by > value) must be also the last use. > > Anyway, I would like to stress that much fewer pointer copies will exist > in Rust given (auto-)borrowing. It is important to educate programers to > always start by considering &T in function arguments, if they only need to > use the T, and add more capabilities if needed. Something like &Rc if > the function needs to use the T and occasionally copy the pointer, and only > Rc if the normal case is copying the pointer. > > This is why I even argue that Arc should be Copy even considering the much > more expensive copy. The important thing is programers knowing the cost, > which will happen for the very few white-listed types, as opposed to > allowing general user-defined copies for which the cost is not clear for > the reader. A program using Arc should not Use Arc all-over, but only in a > few places; after spawning, the proc will typically pass not the Arc but > only &T to functions performing the work; this is unless those functions > need to spawn further tasks but in this case, the Arc copies are need and > in those places we would use .clone() anyway, resulting in the same > performance. > > > > On 21 June 2014 10:10, Nick Cameron wrote: > >> I guess I forgot that C++ ref counted pointers (pre-11) generally have a >> move version of the type. Thanks for pointing that out. >> >> I agree it would be odd to copy that design (Rc/RcTemp) in a language >> which has move semantics by default. I wonder if we could come up with >> _some_ design that would be better than the current one. My reasoning is >> that copy-with-increment is the (overwhelmingly) common case for >> ref-counted pointers and so should be easier/prettier than the less common >> case (moving). One could argue that the more efficient case (moving) should >> be prettier and I think that is valid. I'm not sure how to square the two >> arguments. 
I do think this deserves more thought than just accepting the >> current (`.clone()`) situation - I think it is very un-ergonimic. Having >> two types rather than two copying mechanisms seems more preferable to me, >> but I hope there is a better solution. >> >> >> On Sat, Jun 21, 2014 at 6:21 PM, Cameron Zwarich >> wrote: >> >>> On Jun 20, 2014, at 11:06 PM, Nick Cameron wrote: >>> >>> > zwarich: I haven't thought this through to a great extent, and I don't >>> think here is the right place to plan the API. But, you ought to still have >>> control over whether an Rc pointer is copied or referenced. If you have an >>> Rc object and pass it to a function which takes an Rc, it is copied, >>> if it takes a &Rc or a &T then it references (in the latter case with an >>> autoderef-ref). If the function is parametric over U and takes a &U, then >>> we instantiate U with either Rc or T (in either case it would be passed >>> by ref without an increment, deciding which is not changed by having a copy >>> constructor). If the function takes a U literal, then U must be >>> instantiated with Rc. So, you still get to control whether you reference >>> with an increment or not. >>> > >>> > I think if Rc is copy, then it is always copied. I would not expect it >>> to ever move. I don't think that is untenable, performance wise, after all >>> it is what everyone is currently doing in C++. I agree the second option >>> seems unpredictable and thus less pleasant. >>> >>> Copying on every single transfer of a ref-counted smart pointer is >>> definitely *not* what everyone is doing in C++. In C++11, move constructors >>> were added, partially to enable smart pointers to behave sanely and >>> eliminate extra copies in this fashion (albeit in some cases requiring >>> explicit moves rather than implicit ones like in Rust). >>> >>> Before that, it was possible to encode this idiom using a separate smart >>> pointer for the expiring value. WebKit relies on (or relied on, before >>> C++11) a scheme like this for adequate performance: >>> >>> https://www.webkit.org/coding/RefPtr.html >>> >>> In theory, you could encode such a scheme into this ?always copy on >>> clone" version of Rust, where Rc would always copy, and RcTemp wouldn?t >>> even implement clone, and would only be moveable and convertible back to an >>> Rc. However, it seems strange to go out of your way to encode a bad version >>> of move semantics back into a language that has native move semantics. >>> >>> Cameron >> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Sat Jun 21 09:46:38 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sat, 21 Jun 2014 12:46:38 -0400 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: References: Message-ID: For those of you coming today, my train has been delayed multiple times, so I will be a few minutes late. I'll be wearing a bright red Ruby shirt, because that's funny and also more noticeable. See you all soon! 
From rick.richardson at gmail.com Sat Jun 21 09:49:05 2014 From: rick.richardson at gmail.com (Rick Richardson) Date: Sat, 21 Jun 2014 12:49:05 -0400 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: References: Message-ID: Wish I could join you guys. Hack beautifully. Be safe from evil. On Jun 21, 2014 12:46 PM, "Steve Klabnik" wrote: > For those of you coming today, my train has been delayed multiple times, > so I will be a few minutes late. I'll be wearing a bright red Ruby shirt, > because that's funny and also more noticeable. > > See you all soon! > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flaper87 at gmail.com Sat Jun 21 10:08:57 2014 From: flaper87 at gmail.com (Flaper87) Date: Sat, 21 Jun 2014 19:08:57 +0200 Subject: [rust-dev] [ANN] Brooklyn.rs In-Reply-To: References: Message-ID: On Jun 21, 2014 12:46 PM, "Steve Klabnik" wrote: > > For those of you coming today, my train has been delayed multiple times, so I will be a few minutes late. I'll be wearing a bright red Ruby shirt, because that's funny and also more noticeable. > > See you all soon! Already here. I'm not wearing a red T-shirt but I've Rust stickers to give away. :) > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at mir2.org Sat Jun 21 10:52:53 2014 From: igor at mir2.org (Igor Bukanov) Date: Sat, 21 Jun 2014 19:52:53 +0200 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: On 21 June 2014 11:10, Nick Cameron wrote: > I wonder if we could come up with _some_ > design that would be better than the current one. The reason the ugliness is the repeated clone calls: let x = Rc::::new(1); ... foo(x.clone()); bar(x.clone()); last_x_use(x); In this pattern the x is repeatedly cloned to pass it as argument that will be moved. The ugliness can be eliminated if x as the local variable can be annotated to tell the compiler to clone it before passing to functions. I.e. something like: let autoclone x = Rc::::new(1); ... foo(x); bar(x); last_x_use(x); Another possibility is to allow for move-in-move-out params that moves the the value back to the caller when the function returns forcing the callee to use the clone call if it wants to store the argument for a later use. From diggory.hardy at unibas.ch Sat Jun 21 05:18:44 2014 From: diggory.hardy at unibas.ch (Diggory Hardy) Date: Sat, 21 Jun 2014 08:18:44 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: Message-ID: <3978007.QzGOc367vL@tph-l13071> As far as I am aware, using theorem proving tools[1] to provide limits on value ranges is pretty hard and often computationally intensive to do in /simple/ code. I've only seen prototype systems where the user is expected to write full contracts on exactly how every function may modify every value it could, as well as often providing hints to the prover (especially for loops). So I really don't think this is going to help much. 
[1]: https://en.wikipedia.org/wiki/Interactive_theorem_proving On Friday 20 Jun 2014 19:20:58 Gregory Maxwell wrote: > On Wed, Jun 18, 2014 at 10:08 AM, G?bor Lehel wrote: > > core facts: wrapping is bad, but checking is slow. The current consensus > > On this point, has anyone tried changing the emitted code for all i32 > operations to add trivial checks, hopefully in a way that llvm can > optimize out when value analysis proves them redundant, which do > something trivial update a per task counter when hit and benchmarked > servo / language benchmark game programs to try to get a sane bound on > how bad the hit is even when the programmers aren't making any effort > to avoid the overhead? > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From comexk at gmail.com Sat Jun 21 12:24:35 2014 From: comexk at gmail.com (comex) Date: Sat, 21 Jun 2014 15:24:35 -0400 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: On Sat, Jun 21, 2014 at 1:52 PM, Igor Bukanov wrote: > Another possibility is to allow for move-in-move-out params that moves > the the value back to the caller when the function returns forcing the > callee to use the clone call if it wants to store the argument for a > later use. It should be possible to do that already with a type like &Rc... From igor at mir2.org Sat Jun 21 13:32:25 2014 From: igor at mir2.org (Igor Bukanov) Date: Sat, 21 Jun 2014 22:32:25 +0200 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: &Rc introduces useless indirection that not only clatters the code with &x but also harms performance as the compiler cannot eliminate that when calling across crates. On 21 June 2014 21:24, comex wrote: > On Sat, Jun 21, 2014 at 1:52 PM, Igor Bukanov wrote: >> Another possibility is to allow for move-in-move-out params that moves >> the the value back to the caller when the function returns forcing the >> callee to use the clone call if it wants to store the argument for a >> later use. > > It should be possible to do that already with a type like &Rc... From vadimcn at gmail.com Sat Jun 21 13:34:59 2014 From: vadimcn at gmail.com (Vadim Chugunov) Date: Sat, 21 Jun 2014 13:34:59 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: This is assuming that foo and bar are fn(RC), right? In normal use I would expect them to be fn(int) ot fn(&int), unless they need to retain a reference. And in the latter case I would make them fn(&mut RC) and clone() internally. On Sat, Jun 21, 2014 at 10:52 AM, Igor Bukanov wrote: > On 21 June 2014 11:10, Nick Cameron wrote: > > I wonder if we could come up with _some_ > > design that would be better than the current one. > > The reason the ugliness is the repeated clone calls: > > let x = Rc::::new(1); > ... > foo(x.clone()); > bar(x.clone()); > last_x_use(x); > > In this pattern the x is repeatedly cloned to pass it as argument that > will be moved. 
The ugliness can be eliminated if x as the local > variable can be annotated to tell the compiler to clone it before > passing to functions. I.e. something like: > > let autoclone x = Rc::::new(1); > ... > foo(x); > bar(x); > last_x_use(x); > > Another possibility is to allow for move-in-move-out params that moves > the the value back to the caller when the function returns forcing the > callee to use the clone call if it wants to store the argument for a > later use. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at mir2.org Sat Jun 21 14:03:24 2014 From: igor at mir2.org (Igor Bukanov) Date: Sat, 21 Jun 2014 23:03:24 +0200 Subject: [rust-dev] On Copy = POD In-Reply-To: References: Message-ID: On 20 June 2014 21:07, Paulo S?rgio Almeida wrote: > I have seen many other examples, where the code could mislead the reader into > thinking there are several, e.g., Mutexes: > > let mutex = Arc::new(Mutex::new(1)); > let mutex2 = mutex.clone(); Does this experience exist outside multithreaded code? I am asking because if the need to use extra temporary to create clones is limited mostly to cases involving implementations of Send, then this is rather different case than the issue of avoiding explicit clone() in general. From lists at ncameron.org Sat Jun 21 14:15:20 2014 From: lists at ncameron.org (Nick Cameron) Date: Sun, 22 Jun 2014 09:15:20 +1200 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: I am assuming that people will use Rc not out of choice, but when they have to. We can trust our users to use ownership and references where possible, after all these are the easiest things to use, and privileged in all the Rust docs and tutorials. We will still need style guides in Rust and we will still tell people to prefer ownership to ref counting. We should assume our users are smart, if they don't understand the trade-offs around ref counting, then they are going to have much bigger problems writing systems code than a few extra counter increments. Ownership does not always work, graphs often appear in programming. When they do, you have to use Rc or Gc to cope. We shouldn't punish programmers who have these problems to deal with. Telling them to use ownership is pointless if your data is not hierarchical. In this situation, we are not encouraging users to use ownership instead of ref counting, we are encouraging them to use garbage collection, even when that is not the optimal solution for their problem. On Sat, Jun 21, 2014 at 10:29 PM, Val Markovic wrote: > On Fri, Jun 20, 2014 at 11:06 PM, Nick Cameron wrote: > >> I found all the clone()s in Rust unpleasant, it really put me off using >> ref counting. >> > > I consider that to be a feature, not a bug. > > >> Given that this is something C++ programmers coming to Rust will be used >> to using, I believe ergonomics is especially important. >> > > I write C++ for a living in a massive codebase and shared_ptrs are used > extremely rarely, and it's *not *because of the perf overhead of the > atomic increment/decrement, but because using shared_ptrs obscures > ownership. People tend to just put some memory in a shared_ptr and not care > which part of the system owns what and that ends up producing code that's > very hard to reason about and maintain. 
> > unique_ptrs have made the transfer of ownership of heap-allocated memory > super-easy. Damn-nigh every design can be expressed with unique_ptrs owned > by the logical owners of that memory passing refs or const refs to other > parts of the system. > > So please don't represent that shared_ptrs are commonly used in all good > C++ code. Experience has thought me and others to look at shared_ptrs as a > code smell and something to be flagged for extra clarification by the > author in code review. I hate to quote the Google C++ style guide since it > has many flaws, but this is one of the things it gets completely right > : > "Do not design your code to use shared ownership without a very good > reason." > > Rust has unique_ptrs in the form of ~ and they're doing their job just > fine. Rust needs special support for Rc ergonomics as much as it needs such > support for Gc, which is none at all. In fact, making Rc and Gc pointers > more difficult to use should steer people away from such poor design > crutches. > > >> >> In this case I don't think we need to aim to be more 'bare metal' than >> C++. Transparent, ref counted pointers in C++ are popular and seem to work >> pretty well, although obviously not perfectly. >> >> zwarich: I haven't thought this through to a great extent, and I don't >> think here is the right place to plan the API. But, you ought to still have >> control over whether an Rc pointer is copied or referenced. If you have an >> Rc object and pass it to a function which takes an Rc, it is copied, >> if it takes a &Rc or a &T then it references (in the latter case with an >> autoderef-ref). If the function is parametric over U and takes a &U, then >> we instantiate U with either Rc or T (in either case it would be passed >> by ref without an increment, deciding which is not changed by having a copy >> constructor). If the function takes a U literal, then U must be >> instantiated with Rc. So, you still get to control whether you reference >> with an increment or not. >> >> I think if Rc is copy, then it is always copied. I would not expect it to >> ever move. I don't think that is untenable, performance wise, after all it >> is what everyone is currently doing in C++. I agree the second option seems >> unpredictable and thus less pleasant. >> >> Cheers, Nick >> >> >> On Sat, Jun 21, 2014 at 4:05 PM, Cameron Zwarich >> wrote: >> >>> I sort of like being forced to use .clone() to clone a ref-counted >>> value, since it makes the memory accesses and increment more explicit and >>> forces you to think which functions actually need to take an Rc and which >>> functions can simply take an &. >>> >>> Also, if Rc becomes implicitly copyable, then would it be copied rather >>> than moved on every use, or would you move it on the last use? The former >>> seems untenable for performance reasons, since removing unnecessary >>> ref-count operations is important for performance. The latter seems >>> unpredictable, since adding a second use of a value in a function would >>> mean that new code is implicitly executed wherever the first use is. >>> >>> Cameron >>> >>> On Jun 20, 2014, at 8:49 PM, Nick Cameron wrote: >>> >>> I think having copy constructors is the only way to get rid of >>> `.clone()` all over the place when using` Rc`. That, to me, seems very >>> important (in making smart pointers first class citizens of Rust, without >>> this, I would rather go back to having @-pointers). 
The trouble is, I see >>> incrementing a ref count as the upper bound on the work that should be done >>> in a copy constructor and I see no way to enforce that. >>> >>> So, I guess +1 to spirit of the OP, but no solid proposal for how to do >>> it. >>> >>> >>> On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel < >>> ben.striegel at gmail.com> wrote: >>> >>>> I'm not a fan of the idea of blessing certain types with a >>>> compiler-defined whitelist. And if the choice is then between ugly code and >>>> copy constructors, I'll take ugly code over surprising code. >>>> >>>> >>>> On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton >>>> wrote: >>>> >>>>> On 6/20/14 12:07 PM, Paulo S?rgio Almeida wrote:] >>>>> >>>>> Currently being Copy equates with being Pod. The more time passes and >>>>>> the more code examples I see, it is amazing the amount of ugliness >>>>>> that >>>>>> it causes. I wonder if there is a way out. >>>>>> >>>>> >>>>> Part of the problem is that a lot of library code assumes that Copy >>>>> types can be copied by just moving bytes around. Having copy constructors >>>>> would mean that this simplifying assumption would have to change. It's >>>>> doable, I suppose, but having copy constructors would have a significant >>>>> downside. >>>>> >>>>> Patrick >>>>> >>>>> _______________________________________________ >>>>> Rust-dev mailing list >>>>> Rust-dev at mozilla.org >>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>> >>>> >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadimcn at gmail.com Sat Jun 21 14:21:14 2014 From: vadimcn at gmail.com (Vadim Chugunov) Date: Sat, 21 Jun 2014 14:21:14 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> Message-ID: My 2c: The world is finally becoming security-conscious, so I think it is a only matter of time before architectures that implement zero-cost integer overflow checking appear. I think we should be ready for it when this happens. So I would propose the following practical solution (I think Gabor is also leaning in favor of something like this): 1. Declare that regular int types (i8, u8, i32, u32, ...) are non-wrapping. Check them for overflow in debug builds, maybe even in optimized builds on platforms where the overhead is not too egregious. There should probably be a per-module performance escape hatch that disables overflow checks in optimized builds on all platforms. On zero-cost overflow checking platforms, the checks would of course always be on. Also, since we are saving LLVM IR in rlibs for LTO, it may even be possible to make this a global (i.e. not just for the current crate) compile-time decision. 2. Introduce new wrapping counterparts of the above for cases when wrapping is actually desired. If we don't do this now, it will be much more painful later, when large body of Rust code will have been written that does not make the distinction between wrapping and non-wrapping ints. 
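(A rough sketch of the split proposed above, not part of the original message: Rust later settled on something close to it, with plain arithmetic panicking on overflow in debug builds and explicit wrapping/checked operations for the cases where wraparound or detection is wanted. The example uses present-day syntax and illustrative values.)

    fn main() {
        let a: u8 = 250;

        // Explicitly wrapping arithmetic, for when wraparound is intended.
        let wrapped = a.wrapping_add(10);   // 4

        // Explicitly checked arithmetic, reporting overflow instead of wrapping.
        let checked = a.checked_add(10);    // None

        // Plain `+` panics on overflow in debug builds and wraps in release
        // builds, roughly the checked/unchecked build split described above.
        // let boom = a + 10;

        println!("wrapped = {}, checked = {:?}", wrapped, checked);
    }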
Vadim -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Sat Jun 21 14:29:31 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sat, 21 Jun 2014 14:29:31 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> Message-ID: <1CCC50D8-D5AA-4D0F-9773-F80193CC06DB@mozilla.com> On Jun 21, 2014, at 5:50 AM, Gregory Maxwell wrote: > On Sat, Jun 21, 2014 at 5:18 AM, Diggory Hardy wrote: >> As far as I am aware, using theorem proving tools[1] to provide limits on >> value ranges is pretty hard and often computationally intensive to do in >> simple code. I've only seen prototype systems where the user is expected to >> write full contracts on exactly how every function may modify every value it >> could, as well as often providing hints to the prover (especially for >> loops). So I really don't think this is going to help much. > > To be sound is hard to catch lots of cases less so? existing C > compilers manage to prove enough about ranges to eliminate redundant > tests pretty often? e.g. > > #include > int main(int argc, char **argv){ > (void)argv; > if(argc<16)return 1; > argc+=1000; > if(argc<8)printf("This code is not emitted by an optimizing compiler.\n"); > return 0; > } > > GCC 4.8.2 -O2 manages this example fine and I expect is true for other > modern optimizing C compilers. LLVM doesn?t actually optimize this, although it does if you remove the `argc+=1000`. LLVM?s range analysis is quite a bit worse than GCC?s. This is a fixable problem (the algorithm that GCC uses is pretty simple), but it would negatively impact compile time by a small amount. > This is part of the reason that C's undefinedness has real performance > implications, as sometimes it can only prove loop iteration counts or > pointer non-aliasing (both useful for vectorization) when it knows > that indexes cannot overflow. You only get a benefit from the assumption that indices can?t overflow when you are using signed indices. If you have the loop int len = ?; for (int i = 0; i < len; i++) { ? } then there are two possible trip counts for the loop depending on whether `len` is negative (i.e. whether i wraps or not). I don?t think there is any situation where it helps signed arithmetic be more efficient than unsigned arithmetic. This is arguably bad code in C/C++ anyways, since the language strongly encourages you to use unsigned indices. Rust doesn?t have any analogue of this problem, because `uint` is used for indexing and coercions are explicit. Cameron From danielmicay at gmail.com Sat Jun 21 14:42:05 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 17:42:05 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> Message-ID: <53A5FC2D.7080106@gmail.com> On 21/06/14 05:21 PM, Vadim Chugunov wrote: > My 2c: > > The world is finally becoming security-conscious, so I think it is a > only matter of time before architectures that implement zero-cost > integer overflow checking appear. I think we should be ready for it > when this happens. So I would propose the following practical solution > (I think Gabor is also leaning in favor of something like this): ARM and x86_64 aren't going anywhere and it's too late for trap on overflow to be part of the baseline instruction set. It's far from a sure thing that it would even be added. 
The small x86_64 instructions are all used up so it's impossible to add trap on overflow without more expensive instruction decoding and bloated code size. Anyway, trapping would *not* map to how Rust currently deals with logic errors. It would need to give up on unwinding for logic errors in order to leverage these kinds of instructions. The alternative is depending on asynchronous unwinding support and OS-specific handling for the trapping instructions (SIGFPE like division by zero?). Processors already implement a trap on division by zero but Rust is currently not able to take advantage of it... until we're doing it for division, there's no indication that we'll be able to do it for other operations. > 1. Declare that regular int types (i8, u8, i32, u32, ...) are > non-wrapping. > Check them for overflow in debug builds, maybe even in optimized builds > on platforms where the overhead is not too egregious. There should > probably be a per-module performance escape hatch that disables overflow > checks in optimized builds on all platforms. On zero-cost overflow > checking platforms, the checks would of course always be on. > Also, since we are saving LLVM IR in rlibs for LTO, it may even be > possible to make this a global (i.e. not just for the current crate) > compile-time decision. If they're not defined as wrapping on overflow, how *are* they defined? It does have to be defined as something, even if that means an arbitrary result left up to the implementation. The result must be consistent between reads to maintain memory safety. > 2. Introduce new wrapping counterparts of the above for cases when > wrapping is actually desired. > > If we don't do this now, it will be much more painful later, when large > body of Rust code will have been written that does not make the > distinction between wrapping and non-wrapping ints. > > Vadim -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From bascule at gmail.com Sat Jun 21 14:47:51 2014 From: bascule at gmail.com (Tony Arcieri) Date: Sat, 21 Jun 2014 14:47:51 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A5FC2D.7080106@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> Message-ID: On Sat, Jun 21, 2014 at 2:42 PM, Daniel Micay wrote: > ARM and x86_64 aren't going anywhere and it's too late for trap on > overflow to be part of the baseline instruction set. It's far from a > sure thing that it would even be added. Having watched the debacle that was Azul trying to get features added to Intel CPUs, like hardware transactional memory or realtime zeroing into L1 cache, I strongly agree. We can't assume anything about the hardware manufacturers will do, they just don't care about this stuff, and their roadmaps for adding anything like this are terrible at best. But here's a hypothetical situation: it's 202X and after much ado Intel, ARM, AMD, and others have just rolled out some new CPU instructions and fancy new ALU with fast overflow detection in hardware. Overflow detection is fast now! If that ever happened, what Rust provides as a baseline today would be obsolete and broken. In the distant future. But still... -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From danielmicay at gmail.com Sat Jun 21 15:02:05 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 18:02:05 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> Message-ID: <53A600DD.1030008@gmail.com> On 21/06/14 05:47 PM, Tony Arcieri wrote: > On Sat, Jun 21, 2014 at 2:42 PM, Daniel Micay > wrote: > > ARM and x86_64 aren't going anywhere and it's too late for trap on > overflow to be part of the baseline instruction set. It's far from a > sure thing that it would even be added. > > > Having watched the debacle that was Azul trying to get features added to > Intel CPUs, like hardware transactional memory or realtime zeroing into > L1 cache, I strongly agree. We can't assume anything about the hardware > manufacturers will do, they just don't care about this stuff, and their > roadmaps for adding anything like this are terrible at best. > > But here's a hypothetical situation: it's 202X and after much ado Intel, > ARM, AMD, and others have just rolled out some new CPU instructions and > fancy new ALU with fast overflow detection in hardware. Overflow > detection is fast now! > > If that ever happened, what Rust provides as a baseline today would be > obsolete and broken. In the distant future. But still... It's not possible to add new instructions to x86_64 that are not large and hard to decode. It's too late, nothing short of breaking backwards compatibility by introducing a new architecture will provide trapping on overflow without a performance hit. To repeat what I said elsewhere, Rust's baseline would still be obsolete if it failed on overflow because there's no indication that we can sanely / portably implement failure on overflow via trapping. It's certainly not possible in LLVM right now. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From comexk at gmail.com Sat Jun 21 15:26:45 2014 From: comexk at gmail.com (comex) Date: Sat, 21 Jun 2014 18:26:45 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A600DD.1030008@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> Message-ID: On Sat, Jun 21, 2014 at 6:02 PM, Daniel Micay wrote: > It's not possible to add new instructions to x86_64 that are not large > and hard to decode. It's too late, nothing short of breaking backwards > compatibility by introducing a new architecture will provide trapping on > overflow without a performance hit. To repeat what I said elsewhere, > Rust's baseline would still be obsolete if it failed on overflow because > there's no indication that we can sanely / portably implement failure on > overflow via trapping. It's certainly not possible in LLVM right now. Er... since when? Many single-byte opcodes in x86-64 corresponding to deprecated x86 instructions are currently undefined. 
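(To make the cost question above more concrete, here is a small sketch, not from the thread, of what a per-operation check looks like in source form, in present-day Rust syntax; `checked_sum` is an invented name. `overflowing_add` returns the wrapped result plus an overflow flag, which corresponds closely to the add-then-branch-on-overflow sequence the instruction-set discussion is about.)

    fn checked_sum(values: &[i32]) -> Option<i32> {
        let mut total: i32 = 0;
        for &v in values {
            // One extra branch per addition: this is the overhead being debated.
            let (next, overflowed) = total.overflowing_add(v);
            if overflowed {
                return None; // or trap / bump a per-task counter, as proposed earlier
            }
            total = next;
        }
        Some(total)
    }

    fn main() {
        println!("{:?}", checked_sum(&[1, 2, 3]));     // Some(6)
        println!("{:?}", checked_sum(&[i32::MAX, 1])); // None
    }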
From vadimcn at gmail.com Sat Jun 21 15:43:18 2014 From: vadimcn at gmail.com (Vadim Chugunov) Date: Sat, 21 Jun 2014 15:43:18 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A5FC2D.7080106@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> Message-ID: On Sat, Jun 21, 2014 at 2:42 PM, Daniel Micay wrote: > On 21/06/14 05:21 PM, Vadim Chugunov wrote: > > My 2c: > > > > The world is finally becoming security-conscious, so I think it is a > > only matter of time before architectures that implement zero-cost > > integer overflow checking appear. I think we should be ready for it > > when this happens. So I would propose the following practical solution > > (I think Gabor is also leaning in favor of something like this): > > ARM and x86_64 aren't going anywhere and it's too late for trap on > overflow to be part of the baseline instruction set. It's far from a > sure thing that it would even be added. The small x86_64 instructions > are all used up so it's impossible to add trap on overflow without more > expensive instruction decoding and bloated code size. > I am sure they will figure out a way if this feature becomes a competitive advantage. Whether this will be by adding a prefix to existing instructions, or by creating a new CPU mode I don't know. One thing Intel could do rather easily, would be to spruce up the INTO instruction, which is currently neglected and slow because noone's using it. It's a chicken and egg problem: why would they invest time into improving a feature that nobody can take advantage of? If Wikipedia is to be believed, MIPS already has ints with overflow checking. Some brand-new architectures are heading this way too. And, as I said, if it's too slow, overflow check will be elided in optimized builds. We should, however, invent a way to harass developers who overflow their ints,- to provide an incentive to think about which kind of int is needed. Anyway, trapping would *not* map to how Rust currently deals with logic > errors. It would need to give up on unwinding for logic errors in order > to leverage these kinds of instructions. The alternative is depending on > asynchronous unwinding support and OS-specific handling for the trapping > instructions (SIGFPE like division by zero?). > > Processors already implement a trap on division by zero but Rust is > currently not able to take advantage of it... until we're doing it for > division, there's no indication that we'll be able to do it for other > operations. > Since division overflow already needs to be dealt with, I don't see a problem with making addition overflow do the same thing. Eventually it might be useful to have a third kind of ints, that return Option, however until Rust has support for monads, I don't think these would be very useful. Intrinsic functions would probably be enough for now. > > 1. Declare that regular int types (i8, u8, i32, u32, ...) are > > non-wrapping. > > Check them for overflow in debug builds, maybe even in optimized builds > > on platforms where the overhead is not too egregious. There should > > probably be a per-module performance escape hatch that disables overflow > > checks in optimized builds on all platforms. On zero-cost overflow > > checking platforms, the checks would of course always be on. > > Also, since we are saving LLVM IR in rlibs for LTO, it may even be > > possible to make this a global (i.e. not just for the current crate) > > compile-time decision. 
> > If they're not defined as wrapping on overflow, how *are* they defined? > > It does have to be defined as something, even if that means an arbitrary > result left up to the implementation. The result must be consistent > between reads to maintain memory safety. > In checked builds they'll trap, just like division by zero. In unchecked builds, the result is "undefined". But, of course, in practice, they will just wrap around. > > 2. Introduce new wrapping counterparts of the above for cases when > > wrapping is actually desired. > > > > If we don't do this now, it will be much more painful later, when large > > body of Rust code will have been written that does not make the > > distinction between wrapping and non-wrapping ints. > > > > Vadim > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhm456 at gmail.com Sat Jun 21 15:54:31 2014 From: jhm456 at gmail.com (Jerry Morrison) Date: Sat, 21 Jun 2014 15:54:31 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> Message-ID: I agree with Vadim that the world will inevitably become security-conscious -- also safety-conscious. We will live to see it unless such a bug causes nuclear war or power grid meltdown. When the sea change happens, Rust will either be (A)* the attractive choice for systems programming* or (B) *obsolete*. Rust already has the leading position in memory safety for systems programming, so it's lined up to go. The world desperately needs a C++ replacement for real-time, safety-critical software before that ever-more-complicated language causes big disasters. Rust is the only candidate around for that. (Or maybe D, if its real-time threads can avoid GC pauses.) CACM's recent *Mars Code* article shows the extremes that JPL has to do to program reliable space probes. Smaller companies writing automobile engine control systems and such will soon be looking for a more cost effective approach. Companies like Intel see so much existing C/C++ software getting by without overflow safety and conclude it doesn't matter. Let's not let their rear-view mirror thinking keep us stuck. Eventually customers will demand better security whether there's a speed penalty or not. CPU designers could say they've given us so much instruction speed that we can afford to spend some of it on overflow checking. Fair point. When the software folks have demonstrated the broad need, Intel can speed it up, whether by optimizing certain instruction sequences or adding new instructions. The Mill CPU architecture handles overflow nicely and promises much higher performance, like extending DSP abilities into ordinary software loops like strncpy(). Whether this one takes off or not is hard to say. That little company could use a big partner. On Sat, Jun 21, 2014 at 3:43 PM, Vadim Chugunov wrote: > > > On Sat, Jun 21, 2014 at 2:42 PM, Daniel Micay > wrote: > >> On 21/06/14 05:21 PM, Vadim Chugunov wrote: >> > My 2c: >> > >> > The world is finally becoming security-conscious, so I think it is a >> > only matter of time before architectures that implement zero-cost >> > integer overflow checking appear. I think we should be ready for it >> > when this happens. 
So I would propose the following practical solution >> > (I think Gabor is also leaning in favor of something like this): >> >> ARM and x86_64 aren't going anywhere and it's too late for trap on >> overflow to be part of the baseline instruction set. It's far from a >> sure thing that it would even be added. The small x86_64 instructions >> are all used up so it's impossible to add trap on overflow without more >> expensive instruction decoding and bloated code size. >> > > I am sure they will figure out a way if this feature becomes a competitive > advantage. Whether this will be by adding a prefix to existing > instructions, or by creating a new CPU mode I don't know. One thing Intel > could do rather easily, would be to spruce up the INTO instruction, which > is currently neglected and slow because noone's using it. It's a chicken > and egg problem: why would they invest time into improving a feature that > nobody can take advantage of? > > If Wikipedia is to be believed, MIPS already has ints with overflow > checking. Some brand-new architectures > are heading this way too. > > And, as I said, if it's too slow, overflow check will be elided in > optimized builds. > We should, however, invent a way to harass developers who overflow their > ints,- to provide an incentive to think about which kind of int is needed. > > Anyway, trapping would *not* map to how Rust currently deals with logic >> errors. It would need to give up on unwinding for logic errors in order >> to leverage these kinds of instructions. The alternative is depending on >> asynchronous unwinding support and OS-specific handling for the trapping >> instructions (SIGFPE like division by zero?). >> >> Processors already implement a trap on division by zero but Rust is >> currently not able to take advantage of it... until we're doing it for >> division, there's no indication that we'll be able to do it for other >> operations. >> > > Since division overflow already needs to be dealt with, I don't see a > problem with making addition overflow do the same thing. > > Eventually it might be useful to have a third kind of ints, that return > Option, however until Rust has support for monads, I don't think > these would be very useful. Intrinsic functions would probably be enough > for now. > > > >> > 1. Declare that regular int types (i8, u8, i32, u32, ...) are >> > non-wrapping. >> > Check them for overflow in debug builds, maybe even in optimized builds >> > on platforms where the overhead is not too egregious. There should >> > probably be a per-module performance escape hatch that disables overflow >> > checks in optimized builds on all platforms. On zero-cost overflow >> > checking platforms, the checks would of course always be on. >> > Also, since we are saving LLVM IR in rlibs for LTO, it may even be >> > possible to make this a global (i.e. not just for the current crate) >> > compile-time decision. >> >> If they're not defined as wrapping on overflow, how *are* they defined? >> >> It does have to be defined as something, even if that means an arbitrary >> result left up to the implementation. The result must be consistent >> between reads to maintain memory safety. >> > > In checked builds they'll trap, just like division by zero. In unchecked > builds, the result is "undefined". But, of course, in practice, they will > just wrap around. > > > >> > 2. Introduce new wrapping counterparts of the above for cases when >> > wrapping is actually desired. 
>> > >> > If we don't do this now, it will be much more painful later, when large >> > body of Rust code will have been written that does not make the >> > distinction between wrapping and non-wrapping ints. >> > >> > Vadim >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Sat Jun 21 16:02:11 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sat, 21 Jun 2014 16:02:11 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: On Jun 21, 2014, at 2:15 PM, Nick Cameron wrote: > Ownership does not always work, graphs often appear in programming. When they do, you have to use Rc or Gc to cope. We shouldn't punish programmers who have these problems to deal with. Telling them to use ownership is pointless if your data is not hierarchical. In this situation, we are not encouraging users to use ownership instead of ref counting, we are encouraging them to use garbage collection, even when that is not the optimal solution for their problem. Rust doesn't actually have a solution for general graph data structures besides "store the vertex data in arrays and pass around indices". In many cases you can't use Rc/Weak. I think this will be a significant limitation of Rust going forward. Cameron From zwarich at mozilla.com Sat Jun 21 16:05:12 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sat, 21 Jun 2014 16:05:12 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> Message-ID: Another big problem with implicit copy constructors is that they make it very difficult to write correct unsafe code. When each use of a variable can call arbitrary code, each use of a variable can trigger unwinding. You then basically require people to write the equivalent of exception-safe C++ in unsafe code to preserve memory safety guarantees, and it's notoriously difficult to do that. Cameron On Jun 20, 2014, at 8:49 PM, Nick Cameron wrote: > I think having copy constructors is the only way to get rid of `.clone()` all over the place when using `Rc`. That, to me, seems very important (in making smart pointers first class citizens of Rust, without this, I would rather go back to having @-pointers). The trouble is, I see incrementing a ref count as the upper bound on the work that should be done in a copy constructor and I see no way to enforce that. > > So, I guess +1 to spirit of the OP, but no solid proposal for how to do it. > > > On Sat, Jun 21, 2014 at 8:00 AM, Benjamin Striegel wrote: > I'm not a fan of the idea of blessing certain types with a compiler-defined whitelist. And if the choice is then between ugly code and copy constructors, I'll take ugly code over surprising code. > > > On Fri, Jun 20, 2014 at 3:10 PM, Patrick Walton wrote: > On 6/20/14 12:07 PM, Paulo Sérgio Almeida wrote: > > Currently being Copy equates with being Pod. The more time passes and > the more code examples I see, it is amazing the amount of ugliness that > it causes. I wonder if there is a way out.
> > Part of the problem is that a lot of library code assumes that Copy types can be copied by just moving bytes around. Having copy constructors would mean that this simplifying assumption would have to change. It's doable, I suppose, but having copy constructors would have a significant downside. > > Patrick > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Sat Jun 21 16:07:38 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 19:07:38 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> Message-ID: <53A6103A.1010400@gmail.com> On 21/06/14 06:54 PM, Jerry Morrison wrote: > I agree with Vadim that the world will inevitably become > security-conscious -- also safety-conscious. We will live to see it > unless such a bug causes nuclear war or power grid meltdown. > > When the sea change happens, Rust will either be (A)/ the attractive > choice for systems programming/ or (B) /obsolete/. Rust already has the > leading position in memory safety for systems programming, so it's lined > up to go. No one will use Rust if it's slow. If it uses checked arithmetic, it will be slow. There's nothing subjective about that. > The world desperately needs a C++ replacement for real-time, > safety-critical software before that ever-more-complicated language > causes big disasters. Rust is the only candidate around for that. (Or > maybe D, if its real-time threads can avoid GC pauses.) CACM's recent > /Mars Code/ article > shows > the extremes that JPL has to do to program reliable space probes. > Smaller companies writing automobile engine control systems > > and such will soon be looking for a more cost effective approach. Trapping on overflow doesn't turn the overflow into a non-bug. It prevents it from being exploited as a security vulnerability, but it would bring down a safety critical system. > Companies like Intel see so much existing C/C++ software getting by > without overflow safety and conclude it doesn't matter. Let's not let > their rear-view mirror thinking keep us stuck. Eventually customers will > demand better security whether there's a speed penalty or not. > > CPU designers could say they've given us so much instruction speed that > we can afford to spend some of it on overflow checking. Fair point. When > the software folks have demonstrated the broad need, Intel can speed it > up, whether by optimizing certain instruction sequences or adding new > instructions. Overflow checking means a branch on every integer arithmetic operation. It means every arithmetic operation is impure (unwinding) so LLVM won't be able to hoist stuff out of loops unless it proves that there's no overflow, which is rare. For example, this prevents it from hoisting a bounds check out of a loop by introducing a second kind of impure failure condition. It also prevents auto-vectorization, which is increasingly important. 
A language without good auto-vectorization is not going to be an interesting systems language down the road. > The Mill CPU architecture handles overflow nicely and promises much > higher performance, like extending DSP abilities into ordinary software > loops like strncpy(). Whether this one takes off or not is hard to say. > That little company could use a big partner. It has to exist before it can succeed or fail. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From lists at ncameron.org Sat Jun 21 16:09:47 2014 From: lists at ncameron.org (Nick Cameron) Date: Sun, 22 Jun 2014 11:09:47 +1200 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: Agreed. Post-1.0, I hope to put some effort into addressing this. For now, the best solution I've found is to use borrowed references, arena allocation and unsafe initialisation. This is far from perfect, and you end up with _a lot_ of lifetime parameters, but it works if the problem fits the constraints. On Sun, Jun 22, 2014 at 11:02 AM, Cameron Zwarich wrote: > On Jun 21, 2014, at 2:15 PM, Nick Cameron wrote: > > > Ownership does not always work, graphs often appear in programming. When > they do, you have to use Rc or Gc to cope. We shouldn't punish programmers > who have these problems to deal with. Telling them to use ownership is > pointless if your data is not hierarchical. In this situation, we are not > encouraging users to use ownership instead of ref counting, we are > encouraging them to use garbage collection, even when that is not the > optimal solution for their problem. > > Rust doesn't actually have a solution for general graph data structures > besides "store the vertex data in arrays and pass around indices". In many > cases you can't use Rc/Weak. I think this will be a significant limitation > of Rust going forward. > > Cameron -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Sat Jun 21 16:10:57 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 19:10:57 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> Message-ID: <53A61101.1020103@gmail.com> On 21/06/14 06:26 PM, comex wrote: > On Sat, Jun 21, 2014 at 6:02 PM, Daniel Micay wrote: >> It's not possible to add new instructions to x86_64 that are not large >> and hard to decode. It's too late, nothing short of breaking backwards >> compatibility by introducing a new architecture will provide trapping on >> overflow without a performance hit. To repeat what I said elsewhere, >> Rust's baseline would still be obsolete if it failed on overflow because >> there's no indication that we can sanely / portably implement failure on >> overflow via trapping. It's certainly not possible in LLVM right now. > > Er... since when? Many single-byte opcodes in x86-64 corresponding to > deprecated x86 instructions are currently undefined. http://ref.x86asm.net/coder64.html I don't see enough gaps here for the necessary instructions. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jhm456 at gmail.com Sat Jun 21 16:27:11 2014 From: jhm456 at gmail.com (Jerry Morrison) Date: Sat, 21 Jun 2014 16:27:11 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A6103A.1010400@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A6103A.1010400@gmail.com> Message-ID: On Sat, Jun 21, 2014 at 4:07 PM, Daniel Micay wrote: > On 21/06/14 06:54 PM, Jerry Morrison wrote: > > I agree with Vadim that the world will inevitably become > > security-conscious -- also safety-conscious. We will live to see it > > unless such a bug causes nuclear war or power grid meltdown. > > > > When the sea change happens, Rust will either be (A)/ the attractive > > choice for systems programming/ or (B) /obsolete/. Rust already has the > > leading position in memory safety for systems programming, so it's lined > > up to go. > > No one will use Rust if it's slow. If it uses checked arithmetic, it > will be slow. There's nothing subjective about that. > Surely there's a way to make the language and libraries ready for overflow safety while able to perform without it in the short term. > > > The world desperately needs a C++ replacement for real-time, > > safety-critical software before that ever-more-complicated language > > causes big disasters. Rust is the only candidate around for that. (Or > > maybe D, if its real-time threads can avoid GC pauses.) CACM's recent > > /Mars Code/ article > > shows > > the extremes that JPL has to do to program reliable space probes. > > Smaller companies writing automobile engine control systems > > < > http://www.edn.com/design/automotive/4423428/Toyota-s-killer-firmware--Bad-design-and-its-consequences > > > > and such will soon be looking for a more cost effective approach. > > Trapping on overflow doesn't turn the overflow into a non-bug. It > prevents it from being exploited as a security vulnerability, but it > would bring down a safety critical system. > A safety critical system needs to catch and recover from thread failures, e.g. by restarting it, or failing over, or gracefully shutting down. The first requirement is to keep the problem from causing collateral damage and opening exploitable holes. Systems like phone switches written in Erlang are good at the recovery part. (And by "smaller companies" I meant "smaller, less funded teams.") > Companies like Intel see so much existing C/C++ software getting by > > without overflow safety and conclude it doesn't matter. Let's not let > > their rear-view mirror thinking keep us stuck. Eventually customers will > > demand better security whether there's a speed penalty or not. > > > > CPU designers could say they've given us so much instruction speed that > > we can afford to spend some of it on overflow checking. Fair point. When > > the software folks have demonstrated the broad need, Intel can speed it > > up, whether by optimizing certain instruction sequences or adding new > > instructions. > > Overflow checking means a branch on every integer arithmetic operation. > > It means every arithmetic operation is impure (unwinding) so LLVM won't > be able to hoist stuff out of loops unless it proves that there's no > overflow, which is rare. For example, this prevents it from hoisting a > bounds check out of a loop by introducing a second kind of impure > failure condition. > > It also prevents auto-vectorization, which is increasingly important. 
A > language without good auto-vectorization is not going to be an > interesting systems language down the road. > How about propagating the overflow info downstream like a NaN by a limited distance, rather than throwing an immediate exception? > The Mill CPU architecture handles overflow nicely and promises much > > higher performance, like extending DSP abilities into ordinary software > > loops like strncpy(). Whether this one takes off or not is hard to say. > > That little company could use a big partner. > > It has to exist before it can succeed or fail. > > Yea. -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Sat Jun 21 16:55:31 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sat, 21 Jun 2014 19:55:31 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A6103A.1010400@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A6103A.1010400@gmail.com> Message-ID: > No one will use Rust if it's slow. If it uses checked arithmetic, it > will be slow. There's nothing subjective about that. This is the only argument that matters. If we are slower than C++, Rust will not replace C++ and will have failed at its goal of making the world a safer place. The world already has a glut of safe and slow languages; if inefficiency were acceptable, then C++ would have been replaced long ago. In addition, bringing up hypothetical CPU architectures with support for checked arithmetic is not relevant. Rust is a language designed for 2014, not for 2024. And if in 2024 we are all suddenly gifted with CPUs where checked arithmetic is literally free and if this somehow causes Rust to be "obsolete" (which seems unlikely in any case), then so be it. Rust is not the last systems programming language that will ever be written. On Sat, Jun 21, 2014 at 7:07 PM, Daniel Micay wrote: > On 21/06/14 06:54 PM, Jerry Morrison wrote: > > I agree with Vadim that the world will inevitably become > > security-conscious -- also safety-conscious. We will live to see it > > unless such a bug causes nuclear war or power grid meltdown. > > > > When the sea change happens, Rust will either be (A)/ the attractive > > choice for systems programming/ or (B) /obsolete/. Rust already has the > > leading position in memory safety for systems programming, so it's lined > > up to go. > > No one will use Rust if it's slow. If it uses checked arithmetic, it > will be slow. There's nothing subjective about that. > > > The world desperately needs a C++ replacement for real-time, > > safety-critical software before that ever-more-complicated language > > causes big disasters. Rust is the only candidate around for that. (Or > > maybe D, if its real-time threads can avoid GC pauses.) CACM's recent > > /Mars Code/ article > > shows > > the extremes that JPL has to do to program reliable space probes. > > Smaller companies writing automobile engine control systems > > < > http://www.edn.com/design/automotive/4423428/Toyota-s-killer-firmware--Bad-design-and-its-consequences > > > > and such will soon be looking for a more cost effective approach. > > Trapping on overflow doesn't turn the overflow into a non-bug. It > prevents it from being exploited as a security vulnerability, but it > would bring down a safety critical system. > > > Companies like Intel see so much existing C/C++ software getting by > > without overflow safety and conclude it doesn't matter. 
Let's not let > > their rear-view mirror thinking keep us stuck. Eventually customers will > > demand better security whether there's a speed penalty or not. > > > > CPU designers could say they've given us so much instruction speed that > > we can afford to spend some of it on overflow checking. Fair point. When > > the software folks have demonstrated the broad need, Intel can speed it > > up, whether by optimizing certain instruction sequences or adding new > > instructions. > > Overflow checking means a branch on every integer arithmetic operation. > > It means every arithmetic operation is impure (unwinding) so LLVM won't > be able to hoist stuff out of loops unless it proves that there's no > overflow, which is rare. For example, this prevents it from hoisting a > bounds check out of a loop by introducing a second kind of impure > failure condition. > > It also prevents auto-vectorization, which is increasingly important. A > language without good auto-vectorization is not going to be an > interesting systems language down the road. > > > The Mill CPU architecture handles overflow nicely and promises much > > higher performance, like extending DSP abilities into ordinary software > > loops like strncpy(). Whether this one takes off or not is hard to say. > > That little company could use a big partner. > > It has to exist before it can succeed or fail. > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Sat Jun 21 16:58:05 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 19:58:05 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A6103A.1010400@gmail.com> Message-ID: <53A61C0D.7010708@gmail.com> On 21/06/14 07:27 PM, Jerry Morrison wrote: > > On Sat, Jun 21, 2014 at 4:07 PM, Daniel Micay > wrote: > > On 21/06/14 06:54 PM, Jerry Morrison wrote: > > I agree with Vadim that the world will inevitably become > > security-conscious -- also safety-conscious. We will live to see it > > unless such a bug causes nuclear war or power grid meltdown. > > > > When the sea change happens, Rust will either be (A)/ the attractive > > choice for systems programming/ or (B) /obsolete/. Rust already > has the > > leading position in memory safety for systems programming, so it's > lined > > up to go. > > No one will use Rust if it's slow. If it uses checked arithmetic, it > will be slow. There's nothing subjective about that. > > > Surely there's a way to make the language and libraries ready for > overflow safety while able to perform without it in the short term. I'm not sure what you mean by this. If one day ARM gets instructions for trapping on overflow, that's all well and good, but Rust can't use it to implement fail-on-overflow. If the proposal was based around aborting on overflow rather than failing, it would be able to use those instructions. > > The world desperately needs a C++ replacement for real-time, > > safety-critical software before that ever-more-complicated language > > causes big disasters. Rust is the only candidate around for that. (Or > > maybe D, if its real-time threads can avoid GC pauses.) CACM's recent > > /Mars Code/ article > > shows > > the extremes that JPL has to do to program reliable space probes. 
> > Smaller companies writing automobile engine control systems > > > > > and such will soon be looking for a more cost effective approach. > > Trapping on overflow doesn't turn the overflow into a non-bug. It > prevents it from being exploited as a security vulnerability, but it > would bring down a safety critical system. > > > A safety critical system needs to catch and recover from thread > failures, e.g. by restarting it, or failing over, or gracefully shutting > down. The first requirement is to keep the problem from causing > collateral damage and opening exploitable holes. Systems like phone > switches written in Erlang are good at the recovery part. > > (And by "smaller companies" I meant "smaller, less funded teams.") Rust's task failure isn't very isolated or robust. A failure in a destructor called during failure will abort the process. A failure in a destructor when not already failing will not call the inner destructors as it would be memory unsafe. A failure also has to poison RWLock / Mutex so that all other threads with handles to the same shared data will fail too. I don't think these issues are going to be fixed, unwinding in a language with destructors is just inherently broken. I think process separation is a far better option for robust systems. > > Companies like Intel see so much existing C/C++ software getting by > > without overflow safety and conclude it doesn't matter. Let's not let > > their rear-view mirror thinking keep us stuck. Eventually > customers will > > demand better security whether there's a speed penalty or not. > > > > CPU designers could say they've given us so much instruction speed > that > > we can afford to spend some of it on overflow checking. Fair > point. When > > the software folks have demonstrated the broad need, Intel can > speed it > > up, whether by optimizing certain instruction sequences or adding new > > instructions. > > Overflow checking means a branch on every integer arithmetic operation. > > It means every arithmetic operation is impure (unwinding) so LLVM won't > be able to hoist stuff out of loops unless it proves that there's no > overflow, which is rare. For example, this prevents it from hoisting a > bounds check out of a loop by introducing a second kind of impure > failure condition. > > It also prevents auto-vectorization, which is increasingly important. A > language without good auto-vectorization is not going to be an > interesting systems language down the road. > > > How about propagating the overflow info downstream like a NaN by a > limited distance, rather than throwing an immediate exception? The hardware doesn't support this. Any kind of checked overflow means no auto-vectorization. You only get wrapping arithmetic and in some cases saturing arithmetic. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From danielmicay at gmail.com Sat Jun 21 17:02:31 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 20:02:31 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A6103A.1010400@gmail.com> Message-ID: <53A61D17.6000605@gmail.com> On 21/06/14 07:55 PM, Benjamin Striegel wrote: >> No one will use Rust if it's slow. If it uses checked arithmetic, it >> will be slow. There's nothing subjective about that. > > This is the only argument that matters. 
> > If we are slower than C++, Rust will not replace C++ and will have > failed at its goal of making the world a safer place. The world already > has a glut of safe and slow languages; if inefficiency were acceptable, > then C++ would have been replaced long ago. > > In addition, bringing up hypothetical CPU architectures with support for > checked arithmetic is not relevant. Rust is a language designed for > 2014, not for 2024. > > And if in 2024 we are all suddenly gifted with CPUs where checked > arithmetic is literally free and if this somehow causes Rust to be > "obsolete" (which seems unlikely in any case), then so be it. Rust is > not the last systems programming language that will ever be written. Not only does the hardware have to provide it, but each OS also has to expose it in a way that Rust could use to throw an exception, unless the proposal is to simply abort on overflow. LLVM would also have to gain support for unwinding from arithmetic operations, as it can't currently do that. Even with hardware support for the operation itself, giving every integer operation a side effect would still cripple performance by wiping out optimizations. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jhm456 at gmail.com Sat Jun 21 17:10:59 2014 From: jhm456 at gmail.com (Jerry Morrison) Date: Sat, 21 Jun 2014 17:10:59 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A61D17.6000605@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A6103A.1010400@gmail.com> <53A61D17.6000605@gmail.com> Message-ID: OK. You folks made good points. How about if I retract "or obsolete" in favor of "or up for big change or not the tool for those purposes"? Maybe abort-on-overflow in suitable cases... On Sat, Jun 21, 2014 at 5:02 PM, Daniel Micay wrote: > On 21/06/14 07:55 PM, Benjamin Striegel wrote: > >> No one will use Rust if it's slow. If it uses checked arithmetic, it > >> will be slow. There's nothing subjective about that. > > > > This is the only argument that matters. > > > > If we are slower than C++, Rust will not replace C++ and will have > > failed at its goal of making the world a safer place. The world already > > has a glut of safe and slow languages; if inefficiency were acceptable, > > then C++ would have been replaced long ago. > > > > In addition, bringing up hypothetical CPU architectures with support for > > checked arithmetic is not relevant. Rust is a language designed for > > 2014, not for 2024. > > > > And if in 2024 we are all suddenly gifted with CPUs where checked > > arithmetic is literally free and if this somehow causes Rust to be > > "obsolete" (which seems unlikely in any case), then so be it. Rust is > > not the last systems programming language that will ever be written. > > Not only does the hardware have to provide it, but each OS also has to > expose it in a way that Rust could use to throw an exception, unless the > proposal is to simply abort on overflow. LLVM would also have to gain > support for unwinding from arithmetic operations, as it can't currently > do that. Even with hardware support for the operation itself, giving > every integer operation a side effect would still cripple performance by > wiping out optimizations. 
> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -- Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From bascule at gmail.com Sat Jun 21 17:16:27 2014 From: bascule at gmail.com (Tony Arcieri) Date: Sat, 21 Jun 2014 17:16:27 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A6103A.1010400@gmail.com> Message-ID: On Sat, Jun 21, 2014 at 4:55 PM, Benjamin Striegel wrote: > In addition, bringing up hypothetical CPU architectures with support for > checked arithmetic is not relevant. Rust is a language designed for 2014, > not for 2024. > So why not do the safe thing by default (which future CPUs may make fast), and provide a secondary mechanism to get the "unsafe" fast path? -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Sat Jun 21 17:25:59 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 21 Jun 2014 20:25:59 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A6103A.1010400@gmail.com> Message-ID: <53A62297.5020303@gmail.com> On 21/06/14 08:16 PM, Tony Arcieri wrote: > On Sat, Jun 21, 2014 at 4:55 PM, Benjamin Striegel > > wrote: > > In addition, bringing up hypothetical CPU architectures with support > for checked arithmetic is not relevant. Rust is a language designed > for 2014, not for 2024. > > > So why not do the safe thing by default (which future CPUs may make > fast), and provide a secondary mechanism to get the "unsafe" fast path? CPU support for trapping on overflow will not make it fast. Either way, it makes every integer arithmetic operation impure and will wipe out many optimizations. The claim that a CPU can make this faster is conjecture until someone explains how we can actually leverage it. Turning trapping on overflow into unwinding on overflow is not a trivial issue and would involve changes to LLVM's design along with potentially non-portable platform support for handling the trapping via a signal handler and then throwing an asynchronous exception. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From comexk at gmail.com Sat Jun 21 20:45:43 2014 From: comexk at gmail.com (comex) Date: Sat, 21 Jun 2014 23:45:43 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A61101.1020103@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: On Sat, Jun 21, 2014 at 7:10 PM, Daniel Micay wrote: >> Er... since when? Many single-byte opcodes in x86-64 corresponding to >> deprecated x86 instructions are currently undefined. > > http://ref.x86asm.net/coder64.html > > I don't see enough gaps here for the necessary instructions. You can see a significant number of invalid one-byte entries, 06, 07, 0e, 1e, 1f, etc. The simplest addition would just be to resurrect INTO and make it efficient - assuming signed 64 and 32 bit integers are good enough for most use cases. 
Alternatively, it could be two one-byte instructions to add an unsigned version (perhaps a waste of precious slots) or a two-byte instruction which could perhaps allow trapping on any condition. Am I missing something? From matthieu.monrocq at gmail.com Sun Jun 22 03:37:38 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Sun, 22 Jun 2014 12:37:38 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: I am not a fan of having wrap-around and non-wrap-around types, because whether you use wrap-around arithmetic or not is, in the end, an implementation detail, and having to switch types left and right whenever going from one mode to the other is going to be a lot of boilerplate. Instead, why not take the same road as Swift and map +, -, * and / to non-wrap-around operators and declare new (more verbose) operators for the rare case where performance matters or wrap-around is the right semantics? Even though Rust is a performance conscious language (since it aims at displacing C and C++), the 80/20 rule still applies and most of Rust code should not require absolute speed; so let's make it convenient to write safe code and prevent newcomers from shooting themselves in the foot by providing safety by default, and for those who profiled their applications or are writing hashing algorithms *also* provide the necessary escape hatches. This way we can have our cake and eat it too... or am I missing something? -- Matthieu On Sun, Jun 22, 2014 at 5:45 AM, comex wrote: > On Sat, Jun 21, 2014 at 7:10 PM, Daniel Micay > wrote: > >> Er... since when? Many single-byte opcodes in x86-64 corresponding to > >> deprecated x86 instructions are currently undefined. > > > > http://ref.x86asm.net/coder64.html > > > > I don't see enough gaps here for the necessary instructions. > > You can see a significant number of invalid one-byte entries, 06, 07, > 0e, 1e, 1f, etc. The simplest addition would just be to resurrect > INTO and make it efficient - assuming signed 64 and 32 bit integers > are good enough for most use cases. Alternatively, it could be two > one-byte instructions to add an unsigned version (perhaps a waste of > precious slots) or a two-byte instruction which could perhaps allow > trapping on any condition. Am I missing something? > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Sun Jun 22 04:59:44 2014 From: glaebhoerl at gmail.com (Gábor Lehel) Date: Sun, 22 Jun 2014 13:59:44 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <53A1D892.5080403@mozilla.com> Message-ID: On Sat, Jun 21, 2014 at 3:31 AM, Jerry Morrison wrote: > > On Fri, Jun 20, 2014 at 5:36 PM, Gábor Lehel wrote: > >> >> >> >> On Sat, Jun 21, 2014 at 1:37 AM, Jerry Morrison wrote: >> >>> >>> On Fri, Jun 20, 2014 at 2:07 PM, Gábor Lehel >>> wrote: >>> >>>> >>>> >>>> >>>> On Thu, Jun 19, 2014 at 9:05 AM, Jerry Morrison >>>> wrote: >>>> >>>>> Nice analysis! >>>>> >>>>> Over what scope should programmers pick between Gábor's 3 categories? >>>>> >>>>> The "wraparound is desired" category should only occur in narrow parts >>>>> of code, like computing a hash.
That suits a wraparound-operator better >>>>> than a wraparound-type, and certainly better than a compiler switch. And it >>>>> doesn't make sense for a type like 'int' that doesn't have a fixed size. >>>>> >>>> >>>> I thought hash algorithms were precisely the kind of case where you >>>> might opt for types which were clearly defined as wrapping. Why do you >>>> think using different operators instead would be preferred? >>>> >>> >>> Considering a hashing or CRC example, the code reads a bunch of >>> non-wraparound values, mashes them together using wraparound arithmetic, >>> then uses the result in a way that does not mean to wrap around at the >>> integer size. >>> >>> It's workable to convert inputs to wraparound types and use >>> wraparound accumulators, then convert the result to a non-wraparound type. >>> But using wraparound operators seems simpler, more visible, and less >>> error-prone. E.g. it'd be a mistake if the hash function returned a >>> wraparound type, which gets assigned with type inference, and so downstream >>> operations wrap around. >>> >> >> Yes, the signature of the hash function shouldn't necessarily expose the >> implementation's use of wraparound types... though it's not completely >> obvious to me. What kind of downstream operations would it make sense to >> perform on a hash value anyway? Anything besides further hashing? >> >> I'm only minimally knowledgeable about hashing algorithms, but I would've >> thought that casting the inputs to wraparound types at the outset and then >> casting the result back at the end would be *less* error prone than making >> sure to use the wraparound version for every operation in the function. Is >> that wrong? Are there any operations within the body of the hash function >> where overflow should be caught? >> >> And if we'd be going with separate operators instead of separate types, >> hash functions are a niche enough use case that, in themselves, I don't >> think they *warrant* having distinct symbolic operators for the wraparound >> operations; they could just use named methods instead. >> >> Hashing is the one that always comes up, but are there any other >> instances where wraparound is the desired semantics? >> > > Here's an example hash function from *Effective Java > * (page > 48) following its recipe for writing hash functions by combining the > object's significant fields: > > @Override public int hashCode() { > > int result = 17; > > result = 31 * result + areaCode; > > result = 31 * result + prefix; > > result = 31 * result + lineNumber; > > return result; > > } > > > So using Swift's wraparound operators in Java looks like: > > @Override public int hashCode() { > > int result = 17; > > result = 31 &* result &+ areaCode; > > result = 31 &* result &+ prefix; > > result = 31 &* result &+ lineNumber; > > return result; > > } > > > Alternatively, with a wraparound integer type wint (note that int is > defined to be 32 bits in Java): > > @Override public int hashCode() { > > wint result = 17; > > result = (wint) 31 * result + (wint) areaCode; > > result = (wint) 31 * result + (wint) prefix; > > result = (wint) 31 * result + (wint) lineNumber; > > return (int) result; > > } > > > In this example, it's easier to get the first one right than the second > one. > Thanks. I think the first `(wint)` cast in the middle three lines might be avoidable in Rust. And once you've written `int hashCode()` and `wint result`, the typechecker should complain about any casts that you accidentally forget or get wrong. 
Given these, the winner is no longer so clear to me. But yeah, the operator-based version isn't as obviously worse as I had assumed. > > The prototypical use for a hash code is to index into a hash table modulo > the table's current size. It can also be used for debugging, e.g. Java's > default toString() method uses the object's class name and hash, returning > something like "PhoneNumber at 163b91". > > Another example of wraparound math is computing a checksum like CRC32. > The checksum value is typically sent over a wire or stored in a storage > medium to cross-check data integrity at the receiving end. After computing > the checksum, you only want to pass it around and compare it. > > The only other example that comes to mind is emulating the arithmetic > operations of a target CPU or other hardware. > > In other cases of bounded numbers, like ARGB color components, one wants > to deal with overflow, not silently wraparound. > > Implementing BigInt can use wraparound math if it can also get the carry > bit. > > Yes, these cases are so few that named operators may suffice. That's a bit > less convenient but linguistically simpler than Swift's 5 wraparound > arithmetic operators. > > > >>> >>>> >>>>> >>>>> The "wraparound is undesired but performance is critical" category >>>>> occurs in the most performance critical bits of code [I'm doubting that all >>>>> parts of all Rust programs are performance critical], and programmers need >>>>> to make the trade-off over that limited scope. Maybe via operators or >>>>> types, but not by a compiler switch over whole compilation units. >>>>> >>>>> That leaves "wraparound is undesired and performance is not critical" >>>>> category for everything else. The choice between checked vs. unbounded >>>>> sounds like typing. >>>>> >>>>> BigUint is weird: It can underflow but not overflow. When you use its >>>>> value in a more bounded way you'll need to bounds-check it then, whether it >>>>> can go negative or not. Wouldn't it be easier to discard it than squeeze it >>>>> into the wraparound or checked models? >>>>> >>>> >>>> Making the unbounded integer types implement the Checking/Wrapping >>>> traits is more for completeness than anything else, I'm not sure whether it >>>> has practical value. >>>> >>>> A BigUint/Natural type is not as important as BigInt/Integer, but it >>>> can be nice to have. Haskell only has Integer in the Prelude, but an >>>> external package provides Natural, and there've been proposals to mainline >>>> it. It's useful for function inputs where only nonnegative values make >>>> sense. You could write asserts manually, but you might as well factor them >>>> out. And types are documentation. >>>> >>>> The Haskell implementation of Natural is just a newtype over Integer >>>> with added checks, and the same thing might make sense for Rust. >>>> >>> >>> I see. Good points. >>> >>> >>>> >>>> On Wed, Jun 18, 2014 at 11:21 AM, Brian Anderson >>> > wrote: >>>> >>>>> >>>>> On 06/18/2014 10:08 AM, G?bor Lehel wrote: >>>>> >>>>>> >>>>>> # Checked math >>>>>> >>>>>> For (2), the standard library offers checked math in the >>>>>> `CheckedAdd`, `CheckedMul` etc. traits, as well as integer types of >>>>>> unbounded size: `BigInt` and `BigUint`. This is good, but it's not enough. >>>>>> The acid test has to be whether for non-performance-critical code, people >>>>>> are actually *using* checked math. If they're not, then we've failed. >>>>>> >>>>>> `CheckedAdd` and co. 
are important to have for flexibility, but >>>>>> they're far too unwieldy for general use. People aren't going to write >>>>>> `.checked_add(2).unwrap()` when they can write `+ 2`. A more adequate >>>>>> design might be something like this: >>>>>> >>>>>> * Have traits for all the arithmetic operations for both checking on >>>>>> overflow and for wrapping around on overflow, e.g. `CheckedAdd` (as now), >>>>>> `WrappingAdd`, `CheckedSub`, `WrappingSub`, and so on. >>>>>> >>>>>> * Offer convenience methods for the Checked traits which perform >>>>>> `unwrap()` automatically. >>>>>> >>>>>> * Have separate sets of integer types which check for overflow and >>>>>> which wrap around on overflow. Whatever they're called: `CheckedU8`, >>>>>> `checked::u8`, `uc8`, ... >>>>>> >>>>>> * Both sets of types implement all of the Checked* and Wrapping* >>>>>> traits. You can use explicit calls to get either behavior with either types. >>>>>> >>>>>> * The checked types use the Checked traits to implement the operator >>>>>> overloads (`Add`, Mul`, etc.), while the wrapping types use the Wrapping >>>>>> traits to implement them. In other words, the difference between the types >>>>>> is (probably only) in the behavior of the operators. >>>>>> >>>>>> * `BigInt` also implements all of the Wrapping and Checked traits: >>>>>> because overflow never happens, it can claim to do anything if it "does >>>>>> happen". `BigUint` implements all of them except for the Wrapping traits >>>>>> which may underflow, such as `WrappingSub`, because it has nowhere to wrap >>>>>> around to. >>>>>> >>>>>> Another option would be to have just one set of types but two sets of >>>>>> operators, like Swift does. I think that would work as well, or even >>>>>> better, but we've been avoiding having any operators which aren't familiar >>>>>> from C. >>>>>> >>>>> >>>>> The general flavor of this proposal w/r/t checked arithmetic sounds >>>>> pretty reasonable to me, and we can probably make progress on this now. I >>>>> particularly think that having checked types that use operator overloading >>>>> is important for ergonomics. >>>>> _______________________________________________ >>>>> Rust-dev mailing list >>>>> Rust-dev at mozilla.org >>>>> https://mail.mozilla.org/listinfo/rust-dev >>>>> >>>> >>>> >>>> >>>> -- >>>> Jerry >>>> >>>>> >>>> >>> >>> >>> -- >>> Jerry >>> >> >> > > > -- > Jerry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Sun Jun 22 06:31:41 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Sun, 22 Jun 2014 15:31:41 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> Message-ID: On Sat, Jun 21, 2014 at 11:21 PM, Vadim Chugunov wrote: > My 2c: > > The world is finally becoming security-conscious, so I think it is a only > matter of time before architectures that implement zero-cost integer > overflow checking appear. I think we should be ready for it when this > happens. So I would propose the following practical solution (I think > Gabor is also leaning in favor of something like this): > > 1. Declare that regular int types (i8, u8, i32, u32, ...) are > non-wrapping. > Check them for overflow in debug builds, maybe even in optimized builds on > platforms where the overhead is not too egregious. There should probably > be a per-module performance escape hatch that disables overflow checks in > optimized builds on all platforms. 
On zero-cost overflow checking > platforms, the checks would of course always be on. > Also, since we are saving LLVM IR in rlibs for LTO, it may even be > possible to make this a global (i.e. not just for the current crate) > compile-time decision. > > 2. Introduce new wrapping counterparts of the above for cases when > wrapping is actually desired. > > If we don't do this now, it will be much more painful later, when large > body of Rust code will have been written that does not make the distinction > between wrapping and non-wrapping ints. > The prospect of future architectures with cheaper (free) overflow checking isn't my primary motivation, though if we also end up better prepared for them as a side effect, that's icing on the cake. My primary motivation is that, outside of a couple of specialized cases like hashing and checksums, wraparound semantics on overflow is **wrong**. It may be well-defined behavior, and it may be fast, but it's **wrong**. What's the value of a well-defined, performant semantics which does the wrong thing? I also agree that performance is non-negotiable in this case, however. The only good thing about always wrong is that it's not that hard to do better. Given the circumstances, I think the least bad outcome we could achieve, and which we *should* aim to achieve, would be this: * Where performance is known to not be a requirement, Rust code in the wild uses either overflow-checked arithmetic or unbounded integer types, with the choice between them depending on ergonomic and semantic considerations. * When the performance requirement can't be ruled out, Rust code in the wild uses arithmetic for which overflow checking can be turned on or off with a compiler flag. For testing and debugging, it is turned on. For production and benchmarks, it is turned off. * For code where wraparound semantics is desired, the appropriate facilities are also available. Given the discussion so far, the design I'd be leaning toward to accomplish the above might be something like this: * Two sets of fixed-sized integer types are available in the `prelude`. * `u8`..`u64`, `i8`..`i64`, `int`, and `uint` have unspecified results on overflow (**not** undefined behavior). A compiler flag turns overflow checks on or off. Essentially, the checks are `debug_assert`s, though whether they should be controlled by the same flag is open to debate. * `uc8`..`uc64`, `ic8`..`ic64`, `intc`, and `uintc` are *always* checked for overflow, regardless of flags. (Names are of course open to bikeshedding.) * Given that these are not really different semantically, automatic coercions between corresponding types can be considered. (Even then, for `a + b` where `a: int` and `b: intc`, explicit disambiguation would presumably still be required.) * Unbounded integer types using owned memory allocation are available in the `prelude`. I might prefer to call them `Integer` and `Natural` instead of `BigInt` and `BigUint`. * Types and/or operations which wrap around on overflow are available in the standard library. Given how specialized the use cases for these seem to be, perhaps they could even go directly in the `hash` module. It's not clear to me yet whether a separate set of types (`uw8`..`uw64`, `iw8`..`iw64`) or just a separate set of operations on the `prelude` types (e.g. `trait WrappingAdd`) would be preferable. * Unbounded integer types which use garbage collected allocation are available in the `gc` module. 
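A minimal sketch of the Checked/Wrapping trait split named in the proposals above, in current Rust syntax. The trait and method names (CheckedAdd, WrappingAdd) come from the thread; the u8 impls and the widen-to-u16 trick are illustrative assumptions only:

    // Sketch only: the two traits, implemented for u8 by hand so the
    // example is self-contained.
    trait CheckedAdd: Sized {
        fn checked_add(self, rhs: Self) -> Option<Self>;
    }

    trait WrappingAdd {
        fn wrapping_add(self, rhs: Self) -> Self;
    }

    impl CheckedAdd for u8 {
        fn checked_add(self, rhs: u8) -> Option<u8> {
            let wide = self as u16 + rhs as u16;   // cannot overflow in u16
            if wide > 0xFF { None } else { Some(wide as u8) }
        }
    }

    impl WrappingAdd for u8 {
        fn wrapping_add(self, rhs: u8) -> u8 {
            (self as u16 + rhs as u16) as u8       // truncation == wraparound
        }
    }

    fn main() {
        assert_eq!(CheckedAdd::checked_add(250u8, 10), None);
        assert_eq!(WrappingAdd::wrapping_add(250u8, 10), 4);
    }

The same split would extend to the other operations (CheckedSub, WrappingMul, and so on), and per the proposal the checked integer types would route their operator overloads through the Checked traits while the wrapping types route theirs through the Wrapping traits.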
> > Vadim > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glaebhoerl at gmail.com Sun Jun 22 07:29:45 2014 From: glaebhoerl at gmail.com (=?UTF-8?B?R8OhYm9yIExlaGVs?=) Date: Sun, 22 Jun 2014 16:29:45 +0200 Subject: [rust-dev] Recoverable and unrecoverable errors (Was: Re: Integer overflow, round -2147483648) Message-ID: On Sun, Jun 22, 2014 at 2:02 AM, Daniel Micay wrote: > On 21/06/14 07:55 PM, Benjamin Striegel wrote: > >> No one will use Rust if it's slow. If it uses checked arithmetic, it > >> will be slow. There's nothing subjective about that. > > > > This is the only argument that matters. > > > > If we are slower than C++, Rust will not replace C++ and will have > > failed at its goal of making the world a safer place. The world already > > has a glut of safe and slow languages; if inefficiency were acceptable, > > then C++ would have been replaced long ago. > > > > In addition, bringing up hypothetical CPU architectures with support for > > checked arithmetic is not relevant. Rust is a language designed for > > 2014, not for 2024. > > > > And if in 2024 we are all suddenly gifted with CPUs where checked > > arithmetic is literally free and if this somehow causes Rust to be > > "obsolete" (which seems unlikely in any case), then so be it. Rust is > > not the last systems programming language that will ever be written. > > Not only does the hardware have to provide it, but each OS also has to > expose it in a way that Rust could use to throw an exception, unless the > proposal is to simply abort on overflow. LLVM would also have to gain > support for unwinding from arithmetic operations, as it can't currently > do that. Even with hardware support for the operation itself, giving > every integer operation a side effect would still cripple performance by > wiping out optimizations. > This is going off on a tangent (and I hope editing the subject will make gmail consider it a separate conversation), but I've been bothered for a while by our apparent lack of distinction between, as I've heard them referred to, recoverable and unrecoverable errors. Unrecoverable errors are programmer errors. Per their name, they are not recoverable. They don't "happen" at runtime, or even compile time, but at program-writing time, and the only way to recover from them is to go back to program-writing time and fix the program. Things like out-of-bounds array accesses, accidental overflow, `assert`s and `debug_assert`s would belong in this category. Meanwhile, recoverable errors are merely problems which occur at runtime, and it should be possible to recover from them. Task failure, recoverable from without the task but not within it, is kind of neither here nor there. What if we *did* move to program abort for unrecoverable errors? And what if we did (eventually) grow an actual exception handling system for (some) recoverable errors? We'd enforce the same isolation guarantees for `try { }` blocks as we do for tasks - in other words, a `try` block couldn't share state with the rest of the function - so exception safety would continue to not be a concern. (I know we have `try()`, but it's kind of coarse and has an up-front performance cost.) 
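Read as code, the distinction above might look something like the following sketch, in current Rust syntax; the function names, the assert for the precondition, and the use of Result with a String error for the recoverable case are this sketch's assumptions, not anything settled in the thread:

    // Unrecoverable: a violated precondition is a programmer error; the assert
    // documents it, and there is nothing sensible for a caller to catch.
    fn halve_even(n: u32) -> u32 {
        assert!(n % 2 == 0, "halve_even called with an odd number");
        n / 2
    }

    // Recoverable: bad input at runtime is reported to the caller, which can
    // handle it without taking the whole task or program down.
    fn parse_port(s: &str) -> Result<u16, String> {
        s.parse::<u16>().map_err(|e| format!("not a port number: {}", e))
    }

    fn main() {
        assert_eq!(halve_even(8), 4);
        assert_eq!(parse_port("8080"), Ok(8080));
        assert!(parse_port("eighty").is_err());
    }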
To me, the hard part doesn't seem to be exception safety, but figuring out what things can be thrown and how they are caught, whether they're part of function signatures or exist outside them, and so on. The catches-based-on-type-of-thrown-object mechanism used by existing languages doesn't appeal to me, but I don't have many better ideas either. > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Sun Jun 22 08:32:13 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sun, 22 Jun 2014 11:32:13 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: > Even though Rust is a performance conscious language (since it aims at displacing C and C++), the 80/20 rule still applies and most of Rust code should not require absolute speed This is a mistaken assumption. Systems programming exists on the extreme end of the programming spectrum where edge cases are the norm, not the exception, and where 80/20 does not apply. If you don't require absolute speed, why are you using Rust? On Sun, Jun 22, 2014 at 6:37 AM, Matthieu Monrocq < matthieu.monrocq at gmail.com> wrote: > I am not a fan of having wrap-around and non-wrap-around types, because > whether you use wrap-around arithmetic or not is, in the end, an > implementation detail, and having to switch types left and right whenever > going from one mode to the other is going to be a lot of boilerplate. > > Instead, why not take the same road than swift and map +, -, * and / to > non-wrap-around operators and declare new (more verbose) operators for the > rare case where performance matters or wrap-around is the right semantics ? > > Even though Rust is a performance conscious language (since it aims at > displacing C and C++), the 80/20 rule still applies and most of Rust code > should not require absolute speed; so let's make it convenient to write > safe code and prevent newcomers from shooting themselves in the foot by > providing safety by default, and for those who profiled their applications > or are writing hashing algorithms *also* provide the necessary escape > hatches. > > This way we can have our cake and eat it too... or am I missing something ? > > -- Matthieu > > > > On Sun, Jun 22, 2014 at 5:45 AM, comex wrote: > >> On Sat, Jun 21, 2014 at 7:10 PM, Daniel Micay >> wrote: >> >> Er... since when? Many single-byte opcodes in x86-64 corresponding to >> >> deprecated x86 instructions are currently undefined. >> > >> > http://ref.x86asm.net/coder64.html >> > >> > I don't see enough gaps here for the necessary instructions. >> >> You can see a significant number of invalid one-byte entries, 06, 07, >> 0e, 1e, 1f, etc. The simplest addition would just be to resurrect >> INTO and make it efficient - assuming signed 64 and 32 bit integers >> are good enough for most use cases. Alternatively, it could be two >> one-byte instructions to add an unsigned version (perhaps a waste of >> precious slots) or a two-byte instruction which could perhaps allow >> trapping on any condition. Am I missing something? 
>> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eg1290 at gmail.com Sun Jun 22 08:39:47 2014 From: eg1290 at gmail.com (Evan G) Date: Sun, 22 Jun 2014 10:39:47 -0500 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: Because of memory safety? Because you want low-level control without absolute speed? Because of a small memory footprint? Because of having a good async story without giving up a lot of speed? There are plenty of other features to Rust then "absolute speed". Just because that's *your* usecase for it doesn't mean you should force it on others. On Sun, Jun 22, 2014 at 10:32 AM, Benjamin Striegel wrote: > > Even though Rust is a performance conscious language (since it aims at > displacing C and C++), the 80/20 rule still applies and most of Rust code > should not require absolute speed > > This is a mistaken assumption. Systems programming exists on the extreme > end of the programming spectrum where edge cases are the norm, not the > exception, and where 80/20 does not apply. If you don't require absolute > speed, why are you using Rust? > > > On Sun, Jun 22, 2014 at 6:37 AM, Matthieu Monrocq < > matthieu.monrocq at gmail.com> wrote: > >> I am not a fan of having wrap-around and non-wrap-around types, because >> whether you use wrap-around arithmetic or not is, in the end, an >> implementation detail, and having to switch types left and right whenever >> going from one mode to the other is going to be a lot of boilerplate. >> >> Instead, why not take the same road than swift and map +, -, * and / to >> non-wrap-around operators and declare new (more verbose) operators for the >> rare case where performance matters or wrap-around is the right semantics ? >> >> Even though Rust is a performance conscious language (since it aims at >> displacing C and C++), the 80/20 rule still applies and most of Rust code >> should not require absolute speed; so let's make it convenient to write >> safe code and prevent newcomers from shooting themselves in the foot by >> providing safety by default, and for those who profiled their applications >> or are writing hashing algorithms *also* provide the necessary escape >> hatches. >> >> This way we can have our cake and eat it too... or am I missing something >> ? >> >> -- Matthieu >> >> >> >> On Sun, Jun 22, 2014 at 5:45 AM, comex wrote: >> >>> On Sat, Jun 21, 2014 at 7:10 PM, Daniel Micay >>> wrote: >>> >> Er... since when? Many single-byte opcodes in x86-64 corresponding to >>> >> deprecated x86 instructions are currently undefined. >>> > >>> > http://ref.x86asm.net/coder64.html >>> > >>> > I don't see enough gaps here for the necessary instructions. >>> >>> You can see a significant number of invalid one-byte entries, 06, 07, >>> 0e, 1e, 1f, etc. The simplest addition would just be to resurrect >>> INTO and make it efficient - assuming signed 64 and 32 bit integers >>> are good enough for most use cases. 
Alternatively, it could be two >>> one-byte instructions to add an unsigned version (perhaps a waste of >>> precious slots) or a two-byte instruction which could perhaps allow >>> trapping on any condition. Am I missing something? >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Sun Jun 22 09:00:34 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sun, 22 Jun 2014 12:00:34 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: > There are plenty of other features to Rust then "absolute speed". You're right. Here are the three primary features of Rust, in decreasing order of importance: 1. Memory safety 2. C++ performance 3. Safe concurrency Notably, correctness in the face of integer overflow is merely a nice-to-have, and if it would compromise the second principle then we cannot accept it. The only features that are allowed to compromise performance are those that are required to satisfy memory safety, and the developers have already concluded that defined integer overflow does not jeopardize memory safety. Your only recourse is to convince the developers that they are incorrect and that overflow semantics are in fact a memory safety hazard. Railing against the incorrectness of overflow semantics alone isn't going to sway anyone. On Sun, Jun 22, 2014 at 11:39 AM, Evan G wrote: > Because of memory safety? Because you want low-level control without > absolute speed? Because of a small memory footprint? Because of having a > good async story without giving up a lot of speed? > > There are plenty of other features to Rust then "absolute speed". Just > because that's *your* usecase for it doesn't mean you should force it on > others. > > > On Sun, Jun 22, 2014 at 10:32 AM, Benjamin Striegel < > ben.striegel at gmail.com> wrote: > >> > Even though Rust is a performance conscious language (since it aims at >> displacing C and C++), the 80/20 rule still applies and most of Rust code >> should not require absolute speed >> >> This is a mistaken assumption. Systems programming exists on the extreme >> end of the programming spectrum where edge cases are the norm, not the >> exception, and where 80/20 does not apply. If you don't require absolute >> speed, why are you using Rust? >> >> >> On Sun, Jun 22, 2014 at 6:37 AM, Matthieu Monrocq < >> matthieu.monrocq at gmail.com> wrote: >> >>> I am not a fan of having wrap-around and non-wrap-around types, because >>> whether you use wrap-around arithmetic or not is, in the end, an >>> implementation detail, and having to switch types left and right whenever >>> going from one mode to the other is going to be a lot of boilerplate. 
>>> >>> Instead, why not take the same road than swift and map +, -, * and / to >>> non-wrap-around operators and declare new (more verbose) operators for the >>> rare case where performance matters or wrap-around is the right semantics ? >>> >>> Even though Rust is a performance conscious language (since it aims at >>> displacing C and C++), the 80/20 rule still applies and most of Rust code >>> should not require absolute speed; so let's make it convenient to write >>> safe code and prevent newcomers from shooting themselves in the foot by >>> providing safety by default, and for those who profiled their applications >>> or are writing hashing algorithms *also* provide the necessary escape >>> hatches. >>> >>> This way we can have our cake and eat it too... or am I missing >>> something ? >>> >>> -- Matthieu >>> >>> >>> >>> On Sun, Jun 22, 2014 at 5:45 AM, comex wrote: >>> >>>> On Sat, Jun 21, 2014 at 7:10 PM, Daniel Micay >>>> wrote: >>>> >> Er... since when? Many single-byte opcodes in x86-64 corresponding >>>> to >>>> >> deprecated x86 instructions are currently undefined. >>>> > >>>> > http://ref.x86asm.net/coder64.html >>>> > >>>> > I don't see enough gaps here for the necessary instructions. >>>> >>>> You can see a significant number of invalid one-byte entries, 06, 07, >>>> 0e, 1e, 1f, etc. The simplest addition would just be to resurrect >>>> INTO and make it efficient - assuming signed 64 and 32 bit integers >>>> are good enough for most use cases. Alternatively, it could be two >>>> one-byte instructions to add an unsigned version (perhaps a waste of >>>> precious slots) or a two-byte instruction which could perhaps allow >>>> trapping on any condition. Am I missing something? >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.sapin at exyr.org Sun Jun 22 09:02:48 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Sun, 22 Jun 2014 18:02:48 +0200 Subject: [rust-dev] Rust CI In-Reply-To: References: Message-ID: <53A6FE28.4090300@exyr.org> On 18/06/14 10:11, Hans J?rgen Hoel wrote: > Rust Ci wasn't working for a period due to problems with building the > nightly PPA for the platform used by Travis (required GCC version was > bumped with no way to specify alternative to configure script). > > This has been fixed for a while, but it turns out that many Travis > auth tokens has expired in the mean time. Are there advantages or disadvantages with using nightlies from the PPA rather than those from rust-lang.org? 
-- Simon Sapin From florob at babelmonkeys.de Sun Jun 22 09:06:37 2014 From: florob at babelmonkeys.de (Florian Zeitz) Date: Sun, 22 Jun 2014 18:06:37 +0200 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: <53A6FF0D.4010801@babelmonkeys.de> On 22.06.2014 17:32, Benjamin Striegel wrote: >> Even though Rust is a performance conscious language (since it aims at > displacing C and C++), the 80/20 rule still applies and most of Rust > code should not require absolute speed > > This is a mistaken assumption. Systems programming exists on the extreme > end of the programming spectrum where edge cases are the norm, not the > exception, and where 80/20 does not apply. If you don't require absolute > speed, why are you using Rust? > This is such a terrible straw-man argument, and I hate hearing it over and over again. Most of all because it sounds really hostile to me: "Go away, your problem doesn't require ultimate performance, this is not the language you should be using". And in the spirit of sending I-messages: I feel like I'm being told to stop using the language. That feels a bit unproductive to me. It also fuels the flame of people insisting bound checks should be optional. To me the point of this discussion boils down to this: I think we can all agree that having checked arithmetic is worthwhile. Rust already has it as e.g. `.checked_add()'. I think it might even be non-controversial that it is worthwhile to make using them more ergonomic. Either by providing a separate operator, or a separate type (I personally think the later option fits the language better, but YMMV). What is apparently reason for a heated debate is, whether this should be the default. That certainly is a safety vs. performance debate. It has been pointed out that checked arithmetic actually impacts performance beyond introducing a jump instruction, after each add. It causes a lot of optimization to be disabled/ineffective. I have however not seen strong arguments that the issues imposed by not always using checked arithmetic are generally security critical. I'd welcome a civil, rational discussion of the costs and benefits of each approach, instead of whatever this is currently starting to turn into. Regards, Florian From eg1290 at gmail.com Sun Jun 22 09:10:35 2014 From: eg1290 at gmail.com (Evan G) Date: Sun, 22 Jun 2014 11:10:35 -0500 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: I don't think I was ever "Railing against the incorrectness of overflow semantics"? I was just pointing out that your (imo pretty hostile?) message about "If you don't require absolute speed, why are you using Rust?" doesn't really ring true. Most C++ programmers don't even require absolute speed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Sun Jun 22 09:11:50 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sun, 22 Jun 2014 12:11:50 -0400 Subject: [rust-dev] Recoverable and unrecoverable errors (Was: Re: Integer overflow, round -2147483648) In-Reply-To: References: Message-ID: I agree that we need to clarify our error-handling story. 
Specifically I would like Daniel to elaborate on this quote of his from the previous thread, with potential solutions at the language level: > Rust's task failure isn't very isolated or robust. A failure in a > destructor called during failure will abort the process. A failure in a > destructor when not already failing will not call the inner destructors > as it would be memory unsafe. > > A failure also has to poison RWLock / Mutex so that all other threads > with handles to the same shared data will fail too. I don't think these > issues are going to be fixed, unwinding in a language with destructors > is just inherently broken. > > I think process separation is a far better option for robust systems. On Sun, Jun 22, 2014 at 10:29 AM, G?bor Lehel wrote: > > On Sun, Jun 22, 2014 at 2:02 AM, Daniel Micay > wrote: > >> On 21/06/14 07:55 PM, Benjamin Striegel wrote: >> >> No one will use Rust if it's slow. If it uses checked arithmetic, it >> >> will be slow. There's nothing subjective about that. >> > >> > This is the only argument that matters. >> > >> > If we are slower than C++, Rust will not replace C++ and will have >> > failed at its goal of making the world a safer place. The world already >> > has a glut of safe and slow languages; if inefficiency were acceptable, >> > then C++ would have been replaced long ago. >> > >> > In addition, bringing up hypothetical CPU architectures with support for >> > checked arithmetic is not relevant. Rust is a language designed for >> > 2014, not for 2024. >> > >> > And if in 2024 we are all suddenly gifted with CPUs where checked >> > arithmetic is literally free and if this somehow causes Rust to be >> > "obsolete" (which seems unlikely in any case), then so be it. Rust is >> > not the last systems programming language that will ever be written. >> >> Not only does the hardware have to provide it, but each OS also has to >> expose it in a way that Rust could use to throw an exception, unless the >> proposal is to simply abort on overflow. LLVM would also have to gain >> support for unwinding from arithmetic operations, as it can't currently >> do that. Even with hardware support for the operation itself, giving >> every integer operation a side effect would still cripple performance by >> wiping out optimizations. >> > > This is going off on a tangent (and I hope editing the subject will make > gmail consider it a separate conversation), but I've been bothered for a > while by our apparent lack of distinction between, as I've heard them > referred to, recoverable and unrecoverable errors. Unrecoverable errors are > programmer errors. Per their name, they are not recoverable. They don't > "happen" at runtime, or even compile time, but at program-writing time, and > the only way to recover from them is to go back to program-writing time and > fix the program. Things like out-of-bounds array accesses, accidental > overflow, `assert`s and `debug_assert`s would belong in this category. > Meanwhile, recoverable errors are merely problems which occur at runtime, > and it should be possible to recover from them. > > Task failure, recoverable from without the task but not within it, is kind > of neither here nor there. > > What if we *did* move to program abort for unrecoverable errors? > > And what if we did (eventually) grow an actual exception handling system > for (some) recoverable errors? 
We'd enforce the same isolation guarantees > for `try { }` blocks as we do for tasks - in other words, a `try` block > couldn't share state with the rest of the function - so exception safety > would continue to not be a concern. (I know we have `try()`, but it's kind > of coarse and has an up-front performance cost.) To me, the hard part > doesn't seem to be exception safety, but figuring out what things can be > thrown and how they are caught, whether they're part of function signatures > or exist outside them, and so on. The > catches-based-on-type-of-thrown-object mechanism used by existing languages > doesn't appeal to me, but I don't have many better ideas either. > > > > >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slabode at aim.com Sun Jun 22 09:16:45 2014 From: slabode at aim.com (SiegeLord) Date: Sun, 22 Jun 2014 12:16:45 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: <53A7016D.5030707@aim.com> On 06/22/2014 11:32 AM, Benjamin Striegel wrote: > This is a mistaken assumption. Systems programming exists on the extreme > end of the programming spectrum where edge cases are the norm, not the > exception, and where 80/20 does not apply. Even in systems programming not every line is going to be critical for performance. There is still going to be a distribution of some lines just taking more time than others. Additionally, in a single project, there's a nontrivial cost in using Rust for the 20% of code that's fast and using some other language for the remaining 80%. How are you going to transfer Rust's trait abstractions to, e.g., Python? > If you don't require absolute speed, why are you using Rust? Because it's a nice, general purpose language? Systems programming language is a statement about capability, not a statement about the sole type of programming the language supports. C++ can be and is used effectively in applications where speed is of the essence and in applications where speed doesn't matter. Is Rust going to be purposefully less generally useful than C++? There's always this talk of "C++ programmers won't use Rust because of reason X". Which C++ programmers? In my experience the vast majority of C++ programmers don't push C++ to its performance limits. Are they using the wrong language for the job? I don't think so as there are many reasons to use C++ beside its speed potential. Rust will never become popular if it caters to the tiny percentage of C++ users who care about the last few percent of speed while alienating everybody else (via language features or statements like yours). The better goal is a) enable both styles of programming b) make the super-fast style easy enough so that everybody uses it. 
-SL From asb at asbradbury.org Sun Jun 22 09:17:38 2014 From: asb at asbradbury.org (Alex Bradbury) Date: Sun, 22 Jun 2014 17:17:38 +0100 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A6FF0D.4010801@babelmonkeys.de> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> <53A6FF0D.4010801@babelmonkeys.de> Message-ID: On 22 June 2014 17:06, Florian Zeitz wrote: > To me the point of this discussion boils down to this: > I think we can all agree that having checked arithmetic is worthwhile. > Rust already has it as e.g. `.checked_add()'. > I think it might even be non-controversial that it is worthwhile to make > using them more ergonomic. Either by providing a separate operator, or a > separate type (I personally think the later option fits the language > better, but YMMV). > > What is apparently reason for a heated debate is, whether this should be > the default. That certainly is a safety vs. performance debate. > > It has been pointed out that checked arithmetic actually impacts > performance beyond introducing a jump instruction, after each add. > It causes a lot of optimization to be disabled/ineffective. > > I have however not seen strong arguments that the issues imposed by not > always using checked arithmetic are generally security critical. > > I'd welcome a civil, rational discussion of the costs and benefits of > each approach, instead of whatever this is currently starting to turn into. I can't help but feel this discussion has been going round in circles. There seems general agreement that checked arithmetic can be useful and there is a community interested in it. Surely the obvious next step is for that community to work to make checked arithmetic easier to use in Rust and analyse the performance impact of replacing instances of unchecked arithmetic with it. Then we can have this performance debate with some actual results to back up people's statements. Alex From ben.striegel at gmail.com Sun Jun 22 09:21:16 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sun, 22 Jun 2014 12:21:16 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: I apologize for being hostile. As Florian has noted, we're just arguing about the default behavior here. It is my opinion that checked behavior by default will make Rust unsuitable for filling C++'s niche, and send the message that we are not serious about performance. On Sun, Jun 22, 2014 at 12:10 PM, Evan G wrote: > I don't think I was ever "Railing against the incorrectness of overflow > semantics"? I was just pointing out that your (imo pretty hostile?) message > about "If you don't require absolute speed, why are you using Rust?" > doesn't really ring true. Most C++ programmers don't even require absolute > speed. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.fath7 at gmail.com Sun Jun 22 10:24:55 2014 From: daniel.fath7 at gmail.com (Daniel Fath) Date: Sun, 22 Jun 2014 19:24:55 +0200 Subject: [rust-dev] Rust CI In-Reply-To: <53A6FE28.4090300@exyr.org> References: <53A6FE28.4090300@exyr.org> Message-ID: > Are there advantages or disadvantages with using nightlies from the PPA rather than those from rust-lang.org? 
PPA nightlies are woefully out of date but they are easier to reinstall - they automatically notify you when you're out of date, which I don't think rust-lang nightlies do. Also having nightlies will do wonders for acceptance amongst the Ubuntu/Debian derivatives programmer. Rust Lang are bleeding edge but are a bit more harder to maintain - you need to remember to run update script On Sun, Jun 22, 2014 at 6:02 PM, Simon Sapin wrote: > On 18/06/14 10:11, Hans J?rgen Hoel wrote: > >> Rust Ci wasn't working for a period due to problems with building the >> nightly PPA for the platform used by Travis (required GCC version was >> bumped with no way to specify alternative to configure script). >> >> This has been fixed for a while, but it turns out that many Travis >> auth tokens has expired in the mean time. >> > > Are there advantages or disadvantages with using nightlies from the PPA > rather than those from rust-lang.org? > > -- > Simon Sapin > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pssalmeida at gmail.com Sun Jun 22 13:21:51 2014 From: pssalmeida at gmail.com (=?UTF-8?Q?Paulo_S=C3=A9rgio_Almeida?=) Date: Sun, 22 Jun 2014 21:21:51 +0100 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> <5C9C01DB-D618-4A13-8E31-43A3F6393019@mozilla.com> Message-ID: Yes, but many users won't even attempt to write their own pointer types, and will reap benefits from having nice support for the essential pointers, that they can think of as built-in. Those that attempt to write their own will not be in a worse position than they are now. On 21 June 2014 17:05, Benjamin Striegel wrote: > > A user won't care if Arc and Rc are built-in or not. > > They will definitely care, once they attempt to write their own pointer > types and find that they're second-class citizens compared to the types > that have been blessed by the compiler. There's little point in having a > powerful and extensible language if even simple types need hardcoded > compiler magic to function. > > > On Sat, Jun 21, 2014 at 7:42 AM, Paulo S?rgio Almeida < > pssalmeida at gmail.com> wrote: > >> Regarding the white-listing, I also find it weird, but should the user >> experience be worse just because Rc and Arc are implemented in Rust and so >> we should do nothing in the name of orthogonality? A user won't care if Arc >> and Rc are built-in or not. >> >> I may have passed the message that its only ugliness involved, and being >> lazy to type .clone(). But the point is uniformity and coherence of the >> pointers API, clarity, and ease of refactoring code. I am thinking of >> writing an RFC about cleaning up pointers. As of now there are some, in my >> opinion, needless non-uniformities in the use of pointers. I would like to >> have some properties: >> >> 1) Two programs that use pointers, identical except for pointer types and >> that both compile should produce semantically equivalent result (i.e., only >> differ in performance). >> >> The idea is that different pointer types would be chosen according to >> capabilities (is move enough or do I want to mutate something I own, pick >> Box; do I want to share intra-task, pick Rc or Gc; inter-task, pick Arc). 
>> If a program fragment is written using, say Gc, and later I decide to >> switch to Rc, I should need only change the declaration site(s) and not >> have to go all-over and add .clone(). >> >> (There are other things than Copy that need to be improved, like >> uniformity of auto-dereferencing and auto-borrowing. Fortunately I hope >> those to be not as controverse.) >> >> 2) Pointers should be transparent, and avoid confusion between methods of >> the pointer and methods of the referent. >> >> In particular, having non Box pointers Copy and avoiding pointer cloning, >> all .clone() in code would mean cloning the referent, being uniform with >> what happens with Box pointers. A clone should be something more rare and >> conscious. As of now, having mechanical .clone() in many places makes the >> "real" refent clones less obvious. An unintended referent clone may go more >> easily unnoticed. >> >> (Other aspects involve switching to UFCS style for pointer methods.) >> >> 3) Last use move optimisation should be applied for pointer types. >> >> This is not as essential, but as now, the compiler will know the last use >> place of a variable and use move instead of copy. All white-listed for Copy >> pointer-types must allow this optimisation. As we are talking about a >> controlled, to be approved set of types (i.e. Rc and Arc), and not general >> user-defined types, we can be sure that for all white-listed types this is >> so. Last use move optimisation would result in the same performance of code >> as now. Again, this is coherent with Box types, where the first use (by >> value) must be also the last use. >> >> Anyway, I would like to stress that much fewer pointer copies will exist >> in Rust given (auto-)borrowing. It is important to educate programers to >> always start by considering &T in function arguments, if they only need to >> use the T, and add more capabilities if needed. Something like &Rc if >> the function needs to use the T and occasionally copy the pointer, and only >> Rc if the normal case is copying the pointer. >> >> This is why I even argue that Arc should be Copy even considering the >> much more expensive copy. The important thing is programers knowing the >> cost, which will happen for the very few white-listed types, as opposed to >> allowing general user-defined copies for which the cost is not clear for >> the reader. A program using Arc should not Use Arc all-over, but only in a >> few places; after spawning, the proc will typically pass not the Arc but >> only &T to functions performing the work; this is unless those functions >> need to spawn further tasks but in this case, the Arc copies are need and >> in those places we would use .clone() anyway, resulting in the same >> performance. >> >> >> >> On 21 June 2014 10:10, Nick Cameron wrote: >> >>> I guess I forgot that C++ ref counted pointers (pre-11) generally have a >>> move version of the type. Thanks for pointing that out. >>> >>> I agree it would be odd to copy that design (Rc/RcTemp) in a language >>> which has move semantics by default. I wonder if we could come up with >>> _some_ design that would be better than the current one. My reasoning is >>> that copy-with-increment is the (overwhelmingly) common case for >>> ref-counted pointers and so should be easier/prettier than the less common >>> case (moving). One could argue that the more efficient case (moving) should >>> be prettier and I think that is valid. I'm not sure how to square the two >>> arguments. 
I do think this deserves more thought than just accepting the >>> current (`.clone()`) situation - I think it is very un-ergonimic. Having >>> two types rather than two copying mechanisms seems more preferable to me, >>> but I hope there is a better solution. >>> >>> >>> On Sat, Jun 21, 2014 at 6:21 PM, Cameron Zwarich >>> wrote: >>> >>>> On Jun 20, 2014, at 11:06 PM, Nick Cameron wrote: >>>> >>>> > zwarich: I haven't thought this through to a great extent, and I >>>> don't think here is the right place to plan the API. But, you ought to >>>> still have control over whether an Rc pointer is copied or referenced. If >>>> you have an Rc object and pass it to a function which takes an Rc, it >>>> is copied, if it takes a &Rc or a &T then it references (in the latter >>>> case with an autoderef-ref). If the function is parametric over U and takes >>>> a &U, then we instantiate U with either Rc or T (in either case it would >>>> be passed by ref without an increment, deciding which is not changed by >>>> having a copy constructor). If the function takes a U literal, then U must >>>> be instantiated with Rc. So, you still get to control whether you >>>> reference with an increment or not. >>>> > >>>> > I think if Rc is copy, then it is always copied. I would not expect >>>> it to ever move. I don't think that is untenable, performance wise, after >>>> all it is what everyone is currently doing in C++. I agree the second >>>> option seems unpredictable and thus less pleasant. >>>> >>>> Copying on every single transfer of a ref-counted smart pointer is >>>> definitely *not* what everyone is doing in C++. In C++11, move constructors >>>> were added, partially to enable smart pointers to behave sanely and >>>> eliminate extra copies in this fashion (albeit in some cases requiring >>>> explicit moves rather than implicit ones like in Rust). >>>> >>>> Before that, it was possible to encode this idiom using a separate >>>> smart pointer for the expiring value. WebKit relies on (or relied on, >>>> before C++11) a scheme like this for adequate performance: >>>> >>>> https://www.webkit.org/coding/RefPtr.html >>>> >>>> In theory, you could encode such a scheme into this ?always copy on >>>> clone" version of Rust, where Rc would always copy, and RcTemp wouldn?t >>>> even implement clone, and would only be moveable and convertible back to an >>>> Rc. However, it seems strange to go out of your way to encode a bad version >>>> of move semantics back into a language that has native move semantics. >>>> >>>> Cameron >>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pssalmeida at gmail.com Sun Jun 22 13:26:33 2014 From: pssalmeida at gmail.com (=?UTF-8?Q?Paulo_S=C3=A9rgio_Almeida?=) Date: Sun, 22 Jun 2014 21:26:33 +0100 Subject: [rust-dev] On Copy = POD In-Reply-To: References: Message-ID: On 21 June 2014 22:03, Igor Bukanov wrote: > On 20 June 2014 21:07, Paulo S?rgio Almeida wrote: > > I have seen many other examples, where the code could mislead the reader > into > > thinking there are several, e.g., Mutexes: > > > > let mutex = Arc::new(Mutex::new(1)); > > let mutex2 = mutex.clone(); > > Does this experience exist outside multithreaded code? I am asking > because if the need to use extra temporary to create clones is limited > mostly to cases involving implementations of Send, then this is rather > different case than the issue of avoiding explicit clone() in general. > I don't know, I don't have real experience writing Rust. Have only been looking at examples, mostly from the docs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Sun Jun 22 13:49:24 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Sun, 22 Jun 2014 13:49:24 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: <53A74154.5090303@mozilla.com> Why can't you use Rc or Weak? That seems self-evidently false to me: there are many languages that *only* have reference counting, and they can represent graphs just fine. Patrick From pcwalton at mozilla.com Sun Jun 22 13:50:13 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Sun, 22 Jun 2014 13:50:13 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> <268B7EFF-16CF-464D-B1C4-A08C6B4177DD@mozilla.com> Message-ID: <53A74185.1090909@mozilla.com> On 6/21/14 9:00 AM, Benjamin Striegel wrote: > > I don't think that is untenable, performance wise, after all it is > what everyone is currently doing in C++. > > We have already made several decisions that will disadvantage us with > regard to C++. ...Like what? This thread has a lot of very surprising sweeping assertions without a lot of evidence. Patrick From pcwalton at mozilla.com Sun Jun 22 14:01:18 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Sun, 22 Jun 2014 14:01:18 -0700 Subject: [rust-dev] On Copy = POD In-Reply-To: References: <53A4871C.20901@mozilla.com> Message-ID: <53A7441E.7000700@mozilla.com> On 6/21/14 4:05 PM, Cameron Zwarich wrote: > Another big problem with implicit copy constructors is that they make it > very difficult to write correct unsafe code. When each use of a variable > can call arbitrary code, each use of a variable can trigger unwinding. > You then basically require people to write the equivalent of > exception-safe C++ in unsafe code to preserve memory safety guarantees, > and it?s notoriously difficult to do that. Yes, I kind of wonder whether it is better to do something more targeted to Rc (for example, making copy constructors always unsafe?they are for Rc anyhow?and saying that unwinding is UB, or adopting something more like Obj-C/Swift ARC than C++ copy constructors or D postblit). C++ has sometimes gotten into trouble offering large sweeping "metafeatures" (e.g. ADL, SFINAE) when small targeted features could suffice. But honestly, I'm pretty happy with the status quo, especially for 1.0. 
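(For concreteness, "the status quo" here is the explicit style sketched below, where duplicating an Rc handle is always a visible `.clone()` and plain use of the data goes through a borrow; a rough sketch in current syntax, not code from any real project.)

    use std::rc::Rc;

    // Reading the data only needs a borrow; no reference-count traffic.
    fn total(prices: &[u32]) -> u32 {
        prices.iter().sum()
    }

    fn main() {
        let shared = Rc::new(vec![3u32, 4, 5]);
        // Copying the handle is explicit: the increment is visible as `.clone()`.
        let second_handle = shared.clone();
        println!("{} {}", total(&shared), total(&second_handle));
    }
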
Not to imply that Servo is the only use case that matters, of course, but when it comes to RC, the status quo isn't hurting Servo in my experience (and, in fact, the explicit clones are quite important for layout); if anything, what hurts Servo is integration with the JavaScript *GC*, and fluent integration with an external garbage collector in a systems language is an unsolved research problem as far as I'm concerned. Patrick From pcwalton at mozilla.com Sun Jun 22 14:03:29 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Sun, 22 Jun 2014 14:03:29 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A61101.1020103@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: <53A744A1.1080103@mozilla.com> On 6/21/14 4:10 PM, Daniel Micay wrote: > http://ref.x86asm.net/coder64.html > > I don't see enough gaps here for the necessary instructions. I think all that Intel would have to do is to resurrect INTO (0xce) and optimize the case in which INTO immediately follows an overflowable arithmetic instruction. Patrick From danielmicay at gmail.com Sun Jun 22 14:05:03 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sun, 22 Jun 2014 17:05:03 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> Message-ID: <53A744FF.7070501@gmail.com> On 22/06/14 09:31 AM, G?bor Lehel wrote: > > The prospect of future architectures with cheaper (free) overflow > checking isn't my primary motivation, though if we also end up better > prepared for them as a side effect, that's icing on the cake. It's never going to be free or even cheap. Replacing very common pure operations with impure ones in most code will destroy performance. Rust relies heavily on compiler optimizations to even come close to C level performance. Anyway, no one has offered an explanation of how they're planning on integrating this into LLVM and how they propose turning a trapping operation into unwinding across various platforms. > My primary motivation is that, outside of a couple of specialized cases > like hashing and checksums, wraparound semantics on overflow is > **wrong**. It may be well-defined behavior, and it may be fast, but it's > **wrong**. What's the value of a well-defined, performant semantics > which does the wrong thing? Few functions check all of the necessary preconditions. For example, a binary search implementation doesn't check to see if the array is sorted. It's not incorrect to require a precondition from the caller and overflow is only one of countless cases of this. Choosing to enforce invariants in the type system or checking them at runtime is always a compromise, and Rust eschews runtime checks not strictly required for memory safety. In some cases, the type system has been leveraged to enforce invariants at compile-time (Ord vs. PartialOrd) but even though that's quite easy to sidestep, it's not without drawbacks. > I also agree that performance is non-negotiable in this case, however. > The only good thing about always wrong is that it's not that hard to do > better. 
> > Given the circumstances, I think the least bad outcome we could achieve, > and which we *should* aim to achieve, would be this: > > * Where performance is known to not be a requirement, Rust code in the > wild uses either overflow-checked arithmetic or unbounded integer types, > with the choice between them depending on ergonomic and semantic > considerations. > > * When the performance requirement can't be ruled out, Rust code in the > wild uses arithmetic for which overflow checking can be turned on or off > with a compiler flag. For testing and debugging, it is turned on. For > production and benchmarks, it is turned off. The Rust developers have been consistently opposed to introducing dialects of the language via compiler switches. I brought up the issue of macros and syntax extensions but you've chosen to ignore that. > * For code where wraparound semantics is desired, the appropriate > facilities are also available. > > Given the discussion so far, the design I'd be leaning toward to > accomplish the above might be something like this: > > * Two sets of fixed-sized integer types are available in the `prelude`. > > * `u8`..`u64`, `i8`..`i64`, `int`, and `uint` have unspecified results > on overflow (**not** undefined behavior). A compiler flag turns overflow > checks on or off. Essentially, the checks are `debug_assert`s, though > whether they should be controlled by the same flag is open to debate. > > * `uc8`..`uc64`, `ic8`..`ic64`, `intc`, and `uintc` are *always* > checked for overflow, regardless of flags. (Names are of course open to > bikeshedding.) > > * Given that these are not really different semantically, automatic > coercions between corresponding types can be considered. (Even then, for > `a + b` where `a: int` and `b: intc`, explicit disambiguation would > presumably still be required.) > > * Unbounded integer types using owned memory allocation are available > in the `prelude`. I might prefer to call them `Integer` and `Natural` > instead of `BigInt` and `BigUint`. > > * Types and/or operations which wrap around on overflow are available > in the standard library. Given how specialized the use cases for these > seem to be, perhaps they could even go directly in the `hash` module. > It's not clear to me yet whether a separate set of types (`uw8`..`uw64`, > `iw8`..`iw64`) or just a separate set of operations on the `prelude` > types (e.g. `trait WrappingAdd`) would be preferable. A `Vec` and `Vec` would be entirely distinct types. That alone is going to cause performance issues and will make the language more painful to use. It's already pushing the boundaries of what people will be willing to accept with features like strictly checked move semantics and reference lifetimes. > * Unbounded integer types which use garbage collected allocation are > available in the `gc` module. It doesn't sense to have 3 separate implementations of big integers for reference counting, atomic reference counting and task-local tracing garbage collection. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From rick.richardson at gmail.com Sun Jun 22 14:09:03 2014 From: rick.richardson at gmail.com (Rick Richardson) Date: Sun, 22 Jun 2014 17:09:03 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: <53A744FF.7070501@gmail.com> References: <3978007.QzGOc367vL@tph-l13071> <53A744FF.7070501@gmail.com> Message-ID: Apologies if this has been suggested, but would it be possible to have a compiler switch that can add runtime checks and abort on overflow/underflow/carry for debugging purposes, but the default behavior is no check? IMO this would be the best of both worlds, because I would assume that one would really only care about checked math during testing and dev. On Sun, Jun 22, 2014 at 5:05 PM, Daniel Micay wrote: > On 22/06/14 09:31 AM, G?bor Lehel wrote: > > > > The prospect of future architectures with cheaper (free) overflow > > checking isn't my primary motivation, though if we also end up better > > prepared for them as a side effect, that's icing on the cake. > > It's never going to be free or even cheap. Replacing very common pure > operations with impure ones in most code will destroy performance. Rust > relies heavily on compiler optimizations to even come close to C level > performance. Anyway, no one has offered an explanation of how they're > planning on integrating this into LLVM and how they propose turning a > trapping operation into unwinding across various platforms. > > > My primary motivation is that, outside of a couple of specialized cases > > like hashing and checksums, wraparound semantics on overflow is > > **wrong**. It may be well-defined behavior, and it may be fast, but it's > > **wrong**. What's the value of a well-defined, performant semantics > > which does the wrong thing? > > Few functions check all of the necessary preconditions. For example, a > binary search implementation doesn't check to see if the array is > sorted. It's not incorrect to require a precondition from the caller > and overflow is only one of countless cases of this. > > Choosing to enforce invariants in the type system or checking them at > runtime is always a compromise, and Rust eschews runtime checks not > strictly required for memory safety. > > In some cases, the type system has been leveraged to enforce invariants > at compile-time (Ord vs. PartialOrd) but even though that's quite easy > to sidestep, it's not without drawbacks. > > > I also agree that performance is non-negotiable in this case, however. > > The only good thing about always wrong is that it's not that hard to do > > better. > > > > Given the circumstances, I think the least bad outcome we could achieve, > > and which we *should* aim to achieve, would be this: > > > > * Where performance is known to not be a requirement, Rust code in the > > wild uses either overflow-checked arithmetic or unbounded integer types, > > with the choice between them depending on ergonomic and semantic > > considerations. > > > > * When the performance requirement can't be ruled out, Rust code in the > > wild uses arithmetic for which overflow checking can be turned on or off > > with a compiler flag. For testing and debugging, it is turned on. For > > production and benchmarks, it is turned off. > > The Rust developers have been consistently opposed to introducing > dialects of the language via compiler switches. I brought up the issue > of macros and syntax extensions but you've chosen to ignore that. 
> > > * For code where wraparound semantics is desired, the appropriate > > facilities are also available. > > > > Given the discussion so far, the design I'd be leaning toward to > > accomplish the above might be something like this: > > > > * Two sets of fixed-sized integer types are available in the `prelude`. > > > > * `u8`..`u64`, `i8`..`i64`, `int`, and `uint` have unspecified results > > on overflow (**not** undefined behavior). A compiler flag turns overflow > > checks on or off. Essentially, the checks are `debug_assert`s, though > > whether they should be controlled by the same flag is open to debate. > > > > * `uc8`..`uc64`, `ic8`..`ic64`, `intc`, and `uintc` are *always* > > checked for overflow, regardless of flags. (Names are of course open to > > bikeshedding.) > > > > * Given that these are not really different semantically, automatic > > coercions between corresponding types can be considered. (Even then, for > > `a + b` where `a: int` and `b: intc`, explicit disambiguation would > > presumably still be required.) > > > > * Unbounded integer types using owned memory allocation are available > > in the `prelude`. I might prefer to call them `Integer` and `Natural` > > instead of `BigInt` and `BigUint`. > > > > * Types and/or operations which wrap around on overflow are available > > in the standard library. Given how specialized the use cases for these > > seem to be, perhaps they could even go directly in the `hash` module. > > It's not clear to me yet whether a separate set of types (`uw8`..`uw64`, > > `iw8`..`iw64`) or just a separate set of operations on the `prelude` > > types (e.g. `trait WrappingAdd`) would be preferable. > > A `Vec` and `Vec` would be entirely distinct types. That > alone is going to cause performance issues and will make the language > more painful to use. It's already pushing the boundaries of what people > will be willing to accept with features like strictly checked move > semantics and reference lifetimes. > > > * Unbounded integer types which use garbage collected allocation are > > available in the `gc` module. > > It doesn't sense to have 3 separate implementations of big integers for > reference counting, atomic reference counting and task-local tracing > garbage collection. > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -- "Historically, the most terrible things - war, genocide, and slavery - have resulted not from disobedience, but from obedience" -- Howard Zinn -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Sun Jun 22 14:12:11 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sun, 22 Jun 2014 14:12:11 -0700 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: <158B23C7-BEED-4096-853D-23DC8A94CB10@mozilla.com> For some applications, Rust?s bounds checks and the inability of rustc to eliminate them in nontrivial cases will already be too much of a performance sacrifice. What do we say to those people? Is it just that memory safety is important because of its security implications, and other forms of program correctness are not? 
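For concreteness, the cost in question is the per-element check in the indexed form below; the iterator form expresses the same loop without per-element checks and without reaching for unchecked indexing (a rough sketch, not code from any real library):

    // Indexed loop: each `xs[i]` and `ys[i]` carries a bounds check unless
    // the optimizer can prove the index is in range and drop it.
    fn dot_indexed(xs: &[f64], ys: &[f64]) -> f64 {
        let mut acc = 0.0;
        for i in 0..xs.len().min(ys.len()) {
            acc += xs[i] * ys[i];
        }
        acc
    }

    // Iterator form: `zip` encodes the bounds once, so there is no
    // per-element check and no need for unsafe unchecked indexing.
    fn dot_zip(xs: &[f64], ys: &[f64]) -> f64 {
        xs.iter().zip(ys.iter()).map(|(&x, &y)| x * y).sum()
    }
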
I am wary of circling around on this topic again, but I feel that the biggest mistake in this discussion js that checked overflow in a language requires a potential trap on every single integer operation. Languages like Ada (and Swift, to a lesser extent), allow for slightly imprecise exceptions in the case of integer overflow. A fairly simple rule is that a check for overflow only needs to occur before the incorrect result may have externally visible side effects. Ada?s rule even lets you go a step further and remove overflow checks from loops in some cases (without violating control dependence of the eventual overflowing operation). One model like this that has been proposed for C/C++ is the ?As Infinitely Ranged? model (see http://www.cert.org/secure-coding/tools/air-integer-model.cfm), where operations either give the result that would be correct if integers had infinite precision, or cause a trap. This allows for more optimizations to be performed, and although it is questionable to me whether they preserved control dependence of overflow behavior in all cases, they report a 5.5% slowdown on SPEC2006 with GCC 4.5 (compared to a 13% slowdown with more plentiful checks) using a very naive implementation. A lot of those checks in SPEC2006 could probably just be eliminated if the language itself distinguished between overflow-trapping and overflow-permissive operations. If a compiler optimizer understood the semantics of potentially trapping integer operations better (or at all), it could reduce the overhead of the checks. I know that some people don?t want to require new compiler techniques (or are afraid of relying on something outside of the scope of what LLVM can handle today), but it would be unfortunate for Rust to make the wrong decision here based on such incidental details rather than what is actually possible. Cameron On Jun 22, 2014, at 9:21 AM, Benjamin Striegel wrote: > I apologize for being hostile. As Florian has noted, we're just arguing about the default behavior here. It is my opinion that checked behavior by default will make Rust unsuitable for filling C++'s niche, and send the message that we are not serious about performance. > > > On Sun, Jun 22, 2014 at 12:10 PM, Evan G wrote: > I don't think I was ever "Railing against the incorrectness of overflow semantics"? I was just pointing out that your (imo pretty hostile?) message about "If you don't require absolute speed, why are you using Rust?" doesn't really ring true. Most C++ programmers don't even require absolute speed. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zwarich at mozilla.com Sun Jun 22 14:18:13 2014 From: zwarich at mozilla.com (Cameron Zwarich) Date: Sun, 22 Jun 2014 14:18:13 -0700 Subject: [rust-dev] &self/&mut self in traits considered harmful(?) In-Reply-To: <539F6D81.30708@mozilla.com> References: <53985939.3010603@aim.com> <539B9B72.7020509@mozilla.com> <539F2B16.60608@mozilla.com> <539F4DBE.2010308@gmail.com> <539F6076.5090507@mozilla.com> <5300CBA1-043E-4334-9190-B7695C2040DB@mozilla.com> <539F6D81.30708@mozilla.com> Message-ID: <2CD73ECE-F50C-4AE3-8959-3F8C315803A8@mozilla.com> On Jun 16, 2014, at 3:19 PM, Patrick Walton wrote: > On 6/16/14 3:17 PM, Cameron Zwarich wrote: >> I stated the right case, but the wrong reason. 
It?s not for >> vectorization, it?s because it?s not easy to reuse the storage of a >> matrix while multiplying into it. > > Wouldn't most matrices be implicitly copyable (and thus optimized--or at least optimizable--into by-ref at the ABI level)? Sorry for the super-late reply, but if you reuse the same argument multiple times, you will have made multiple copies of it, right? A sufficiently optimizing compiler would probably be able to optimize it if everything is inlined. However, that also only applies to dense matrices. Sparse matrices are unlikely to be Copy. Cameron From danielmicay at gmail.com Sun Jun 22 14:23:52 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sun, 22 Jun 2014 17:23:52 -0400 Subject: [rust-dev] Integer overflow, round -2147483648 In-Reply-To: References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> Message-ID: <53A74968.9020505@gmail.com> On 22/06/14 11:39 AM, Evan G wrote: > Because of memory safety? Most modern languages are memory safe. They're also significantly easier to use than Rust, because the compiler / runtime is responsible for managing object lifetimes. > Because you want low-level control without absolute speed? I'm not really sure what you mean by "low-level control". I'm also not sure why you would want such a thing if not for performance. > Because of a small memory footprint? The memory footprint depends more on the application code than anything else. A language using garbage collection might use 3x as much memory or more, but using implicit atomic reference counting like Swift avoids the need for the programmer to manage ownership / lifetimes without introducing significant memory usage overhead. Rust is a bad choice if performance isn't critical, because those features exist for the sake of performance. > Because of having a good async story without giving up a lot of speed? Rust has exactly zero support for async / non-blocking I/O. The only form of I/O in Rust will block a task, meaning each waiting I/O operation is consuming an entire stack. Avoiding the overhead of a stack per I/O concurrent operation is the rationale behind async / non-blocking I/O. > There are plenty of other features to Rust then "absolute speed". Just > because that's /your/ usecase for it doesn't mean you should force it on > others. The set of design compromises made by the language is ill-suited to a use case where performance isn't critical. There's a high cost to having the programmer manage object lifetimes, and a language without the same focus on performance can avoid this entirely. Swift shares many of the design characteristics of Rust (like traits), with C#-style value types and reference types. Rather than references as values where the programmer has to deal with lifetimes, reference types have their lifetime managed by a garbage collector (C#) or reference counting (Swift). I have a hard time seeing why someone would choose Rust over a language like that if they didn't care about a focus on performance *across the application* rather than just for inner loops. Rust's strength is the ability to build high-level abstractions without making performance compromises. You can still have low-level control in Swift, but you can't have both low-level control and high-level / safe abstractions. -------------- next part -------------- A non-text attachment was scrubbed... 
From danielmicay at gmail.com Sun Jun 22 14:26:47 2014
From: danielmicay at gmail.com (Daniel Micay)
Date: Sun, 22 Jun 2014 17:26:47 -0400
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To:
References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com>
Message-ID: <53A74A17.7050800@gmail.com>

On 22/06/14 11:32 AM, Benjamin Striegel wrote:
>> Even though Rust is a performance conscious language (since it aims at displacing C and C++), the 80/20 rule still applies and most of Rust code should not require absolute speed
>
> This is a mistaken assumption. Systems programming exists on the extreme end of the programming spectrum where edge cases are the norm, not the exception, and where 80/20 does not apply. If you don't require absolute speed, why are you using Rust?

Rust's design is based on the assumption that performance cannot be achieved simply by having highly optimized inner loops. It takes a whole-program approach to performance by exposing references as first-class values and enforcing safety via type-checked lifetimes. You can write an efficient low-level loop in Haskell or Swift, but you can't build high-level safe abstractions without paying a runtime cost.

If someone isn't interested in this approach, then I have a hard time understanding why they would be using Rust.

From danielmicay at gmail.com Sun Jun 22 14:31:51 2014
From: danielmicay at gmail.com (Daniel Micay)
Date: Sun, 22 Jun 2014 17:31:51 -0400
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To:
References: <3978007.QzGOc367vL@tph-l13071> <53A744FF.7070501@gmail.com>
Message-ID: <53A74B47.3020209@gmail.com>

On 22/06/14 05:09 PM, Rick Richardson wrote:
> Apologies if this has been suggested, but would it be possible to have a compiler switch that can add runtime checks and abort on overflow/underflow/carry for debugging purposes, but the default behavior is no check? IMO this would be the best of both worlds, because I would assume that one would really only care about checked math during testing and dev.

You would need to build an entirely separate set of standard libraries with checked overflow.

Adding new dialects of the language via compiler switches is never the right answer. It seems that every time an issue like this comes up, people propose a compiler switch as the answer. If we had compiler switches for abort vs. unwinding, no tracing gc support vs. tracing gc support, no integer overflow checks vs. integer overflow checks and more, we would have a truly ridiculous number of language dialects. I think even 2 dialects is too much...
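As a concrete illustration of the trade-off being debated, the "check only while debugging" idea can be sketched at the level of a single helper function rather than a compiler switch. This is only a sketch in present-day Rust syntax: the helper name is invented, and `debug_assert!`, `checked_add` and `wrapping_add` are today's standard-library names, not necessarily those of the 2014 libraries.

    // Checked in debug builds, plain wrapping add in release builds.
    // The assertion compiles away entirely when debug assertions are off.
    fn add_dbg_checked(a: u32, b: u32) -> u32 {
        debug_assert!(a.checked_add(b).is_some(), "u32 addition overflowed");
        a.wrapping_add(b)
    }

    fn main() {
        println!("{}", add_dbg_checked(2_000_000_000, 1_000_000_000));
        // add_dbg_checked(u32::MAX, 1) would panic in a debug build and
        // silently wrap to 0 in a release build.
    }

Note that this only moves the choice into library code per call site; it does not answer the objection above about needing two builds of the standard library if the default behaviour itself were switchable.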
From masklinn at masklinn.net Sun Jun 22 14:48:13 2014
From: masklinn at masklinn.net (Masklinn)
Date: Sun, 22 Jun 2014 23:48:13 +0200
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To: <53A74B47.3020209@gmail.com>
References: <3978007.QzGOc367vL@tph-l13071> <53A744FF.7070501@gmail.com> <53A74B47.3020209@gmail.com>
Message-ID: <21ADF883-0995-4BE0-B64D-F07F545F688A@masklinn.net>

On 2014-06-22, at 23:31 , Daniel Micay wrote:
> On 22/06/14 05:09 PM, Rick Richardson wrote:
>> Apologies if this has been suggested, but would it be possible to have a compiler switch that can add runtime checks and abort on overflow/underflow/carry for debugging purposes, but the default behavior is no check? IMO this would be the best of both worlds, because I would assume that one would really only care about checked math during testing and dev.
>
> You would need to build an entirely separate set of standard libraries with checked overflow.

From my understanding, everything would be built with checked overflow (unless explicitly disabled/bypassed), and the overflow check could be disabled at compile-time.

I don't think that's a good solution, but that's what Swift's `-Ofast` does: it completely removes a number of checks (including overflow checking), essentially making the language unsafe but much faster.

As a side-question, was the performance of ftrapv (in clang) ever actually tested? There was some discussion on testing the impact in Firefox, but that ended up with GCC's ftrapv being broken and not doing anything, and Firefox not working with clang -ftrapv. I've not seen any numbers since, just lots of assertions that it's far too slow to be an option.

From danielmicay at gmail.com Sun Jun 22 14:58:11 2014
From: danielmicay at gmail.com (Daniel Micay)
Date: Sun, 22 Jun 2014 17:58:11 -0400
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To: <158B23C7-BEED-4096-853D-23DC8A94CB10@mozilla.com>
References: <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> <158B23C7-BEED-4096-853D-23DC8A94CB10@mozilla.com>
Message-ID: <53A75173.50509@gmail.com>

On 22/06/14 05:12 PM, Cameron Zwarich wrote:
> For some applications, Rust's bounds checks and the inability of rustc to eliminate them in nontrivial cases will already be too much of a performance sacrifice. What do we say to those people? Is it just that memory safety is important because of its security implications, and other forms of program correctness are not?

Rust's goal has been to bring modern language features to the systems programming world (sum types, traits, pattern matching) and make the necessary sacrifices to achieve memory safety. Rust isn't intended to be all things to all people. Most of the complexity budget has been spent providing memory safety without making performance sacrifices.

Bounds checks are already one of the most serious barriers to adoption as a replacement for C and C++. A lot of effort will need to go into exposing safe APIs (like iterators) for sidestepping this overhead. At the moment, the unchecked indexing methods are required in many cases to avoid a large performance loss relative to C, and the code ends up being far uglier than it would have been in C.

Rust could provide checked arithmetic, but it can't make use of it in the standard libraries.
It wouldn't improve the safety of the language because any performance-critical low-level code using `unsafe` is going to want to avoid paying the cost of bounds checks. It's not simply a language issue, because libraries would need to expose methods with and without the overflow checks whenever they're doing arithmetic with the parameters.

> I am wary of circling around on this topic again, but I feel that the biggest mistake in this discussion is the assumption that checked overflow in a language requires a potential trap on every single integer operation. Languages like Ada (and Swift, to a lesser extent) allow for slightly imprecise exceptions in the case of integer overflow.
>
> A fairly simple rule is that a check for overflow only needs to occur before the incorrect result may have externally visible side effects. Ada's rule even lets you go a step further and remove overflow checks from loops in some cases (without violating control dependence of the eventual overflowing operation).
>
> One model like this that has been proposed for C/C++ is the "As Infinitely Ranged" model (see http://www.cert.org/secure-coding/tools/air-integer-model.cfm), where operations either give the result that would be correct if integers had infinite precision, or cause a trap. This allows for more optimizations to be performed, and although it is questionable to me whether they preserved control dependence of overflow behavior in all cases, they report a 5.5% slowdown on SPEC2006 with GCC 4.5 (compared to a 13% slowdown with more plentiful checks) using a very naive implementation. A lot of those checks in SPEC2006 could probably just be eliminated if the language itself distinguished between overflow-trapping and overflow-permissive operations. If a compiler optimizer understood the semantics of potentially trapping integer operations better (or at all), it could reduce the overhead of the checks.
>
> I know that some people don't want to require new compiler techniques (or are afraid of relying on something outside of the scope of what LLVM can handle today), but it would be unfortunate for Rust to make the wrong decision here based on such incidental details rather than what is actually possible.

Graydon was always adamant that the language should use proven techniques and should avoid depending on a non-existent compiler optimization or immature research. For a language calling itself pragmatic, it has certainly spent a great deal of time getting to 1.0 and already ventures a fair bit outside of proven techniques (lifetimes).

The focus should be on releasing an elegant language with memory safety and competitive performance with C++. Adding new features to the language at this point rather than refining the existing ones would be a huge mistake. A feature like checked arithmetic or tail call elimination can and should go through experiments behind feature gates, but it doesn't belong in a 1.0 release. If you don't agree with the fundamental language design at this point, that's too bad because it's 6 months away from 1.0 and you're too late.
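For concreteness, the indexing trade-off described above looks roughly like this in present-day Rust: bounds-checked indexing, the safe iterator API, and the `unsafe` unchecked-indexing escape hatch. This is only a sketch; the function names are invented and the exact 2014 method names differed.

    // Indexing: each a[i] / b[i] access carries a bounds check that the
    // optimizer may or may not manage to eliminate.
    fn dot_indexed(a: &[f64], b: &[f64]) -> f64 {
        let n = a.len().min(b.len());
        let mut sum = 0.0;
        for i in 0..n {
            sum += a[i] * b[i];
        }
        sum
    }

    // Iterators: the safe API already encodes the bounds, so there is no
    // per-element panic branch to pay for.
    fn dot_iter(a: &[f64], b: &[f64]) -> f64 {
        a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
    }

    // Unchecked indexing: the `unsafe` escape hatch mentioned above; the
    // caller is responsible for keeping the indices in range.
    fn dot_unchecked(a: &[f64], b: &[f64]) -> f64 {
        let n = a.len().min(b.len());
        let mut sum = 0.0;
        for i in 0..n {
            unsafe {
                sum += *a.get_unchecked(i) * *b.get_unchecked(i);
            }
        }
        sum
    }

    fn main() {
        let a = [1.0, 2.0, 3.0];
        let b = [4.0, 5.0, 6.0];
        assert_eq!(dot_indexed(&a, &b), 32.0);
        assert_eq!(dot_iter(&a, &b), 32.0);
        assert_eq!(dot_unchecked(&a, &b), 32.0);
    }

Whether the checks in the first version actually survive optimization depends on how much the compiler can prove about the loop bound relative to the slice lengths, which is exactly why the iterator style is the preferred safe way to sidestep the overhead.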
From danielmicay at gmail.com Sun Jun 22 15:23:08 2014
From: danielmicay at gmail.com (Daniel Micay)
Date: Sun, 22 Jun 2014 18:23:08 -0400
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To: <21ADF883-0995-4BE0-B64D-F07F545F688A@masklinn.net>
References: <3978007.QzGOc367vL@tph-l13071> <53A744FF.7070501@gmail.com> <53A74B47.3020209@gmail.com> <21ADF883-0995-4BE0-B64D-F07F545F688A@masklinn.net>
Message-ID: <53A7574C.1040203@gmail.com>

On 22/06/14 05:48 PM, Masklinn wrote:
> On 2014-06-22, at 23:31 , Daniel Micay wrote:
>> On 22/06/14 05:09 PM, Rick Richardson wrote:
>>> Apologies if this has been suggested, but would it be possible to have a compiler switch that can add runtime checks and abort on overflow/underflow/carry for debugging purposes, but the default behavior is no check? IMO this would be the best of both worlds, because I would assume that one would really only care about checked math during testing and dev.
>>
>> You would need to build an entirely separate set of standard libraries with checked overflow.
>
> From my understanding, everything would be built with checked overflow (unless explicitly disabled/bypassed), and the overflow check could be disabled at compile-time.
>
> I don't think that's a good solution, but that's what Swift's `-Ofast` does: it completely removes a number of checks (including overflow checking), essentially making the language unsafe but much faster.
>
> As a side-question, was the performance of ftrapv (in clang) ever actually tested? There was some discussion on testing the impact in Firefox, but that ended up with GCC's ftrapv being broken and not doing anything, and Firefox not working with clang -ftrapv.

It's important to note that `-ftrapv` only adds checks to signed types and doesn't work with GCC. You need to pass both `-fsanitize=signed-integer-overflow` and `-fsanitize=unsigned-integer-overflow` to clang.

> I've not seen any numbers since, just lots of assertions that it's far too slow to be an option.

Here's a benchmark of random level generation that made the rounds on /r/rust and #rust already:

https://github.com/logicchains/levgen-benchmarks

Here is the result with `clang -O3 C.c`:

    ./a.out 10 &> /dev/null  0.20s user 0.00s system 99% cpu 0.205 total

Here is the result with `clang -O3 -fsanitize=signed-integer-overflow -fsanitize=unsigned-integer-overflow`:

    ./a.out 10 &> /dev/null  0.31s user 0.00s system 98% cpu 0.313 total

So in a real-world use case consisting partly of integer arithmetic, it's an absolutely unacceptable 50% increase in running time. At that point, Rust would be a much slower language than Java by default and no one would use it.
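In Rust terms, the difference the sanitizer run above is measuring is roughly the difference between these two loops. This is only an illustrative sketch (the benchmark above is C compiled with clang, not this code), using today's `wrapping_add`/`checked_add` names rather than whatever the 2014 library called them.

    // Unchecked (wrapping) sum: one add per element, no branch.
    fn sum_wrapping(xs: &[u32]) -> u32 {
        let mut acc: u32 = 0;
        for &x in xs {
            acc = acc.wrapping_add(x);
        }
        acc
    }

    // Checked sum: every add carries an overflow test plus a branch that
    // aborts the computation, which is what the sanitizer adds in the C run.
    fn sum_checked(xs: &[u32]) -> u32 {
        let mut acc: u32 = 0;
        for &x in xs {
            acc = acc.checked_add(x).expect("u32 overflow");
        }
        acc
    }

    fn main() {
        let xs: Vec<u32> = (0..1_000u32).collect();
        // For this small input both variants agree; the checked one simply
        // pays an extra test per addition.
        assert_eq!(sum_wrapping(&xs), sum_checked(&xs));
        println!("{}", sum_checked(&xs));
    }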
From danielmicay at gmail.com Sun Jun 22 15:25:45 2014
From: danielmicay at gmail.com (Daniel Micay)
Date: Sun, 22 Jun 2014 18:25:45 -0400
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To:
References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com>
Message-ID: <53A757E9.4070306@gmail.com>

On 22/06/14 06:37 AM, Matthieu Monrocq wrote:
> I am not a fan of having wrap-around and non-wrap-around types, because whether you use wrap-around arithmetic or not is, in the end, an implementation detail, and having to switch types left and right whenever going from one mode to the other is going to be a lot of boilerplate.
>
> Instead, why not take the same road as Swift and map +, -, * and / to non-wrap-around operators and declare new (more verbose) operators for the rare case where performance matters or wrap-around is the right semantics ?

That's the wrong default for a performance-centric language.

> Even though Rust is a performance conscious language (since it aims at displacing C and C++), the 80/20 rule still applies and most of Rust code should not require absolute speed; so let's make it convenient to write safe code and prevent newcomers from shooting themselves in the foot by providing safety by default, and for those who profiled their applications or are writing hashing algorithms *also* provide the necessary escape hatches.

Reducing performance of programs making heavy use of integer arithmetic by 50%+ is unacceptable.

> This way we can have our cake and eat it too... or am I missing something ?

No one will use the language after seeing that it's slower than Java by default.

From cg.wowus.cg at gmail.com Sun Jun 22 15:43:16 2014
From: cg.wowus.cg at gmail.com (Clark Gaebel)
Date: Sun, 22 Jun 2014 18:43:16 -0400
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To: <53A757E9.4070306@gmail.com>
References: <3978007.QzGOc367vL@tph-l13071> <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> <53A757E9.4070306@gmail.com>
Message-ID:

I think a reasonable middle ground is to have checked operators that look a little funny. Kind of like Swift, but in reverse:

> malloc((number_of_elements +~ 12) *~ size_of::())

Where adding a ~ to the end of an operator makes it check for overflow. This would certainly look nicer than stuff like:

> malloc(number_of_elements.checked_add(12).checked_mul(size_of::()))

lying around in low-level data structures code. It also keeps the default fast, which is very important.

  - Clark

On Sun, Jun 22, 2014 at 6:25 PM, Daniel Micay wrote:

> On 22/06/14 06:37 AM, Matthieu Monrocq wrote:
> > I am not a fan of having wrap-around and non-wrap-around types, because whether you use wrap-around arithmetic or not is, in the end, an implementation detail, and having to switch types left and right whenever going from one mode to the other is going to be a lot of boilerplate.
> >
> > Instead, why not take the same road as Swift and map +, -, * and / to non-wrap-around operators and declare new (more verbose) operators for the rare case where performance matters or wrap-around is the right semantics ?
>
> That's the wrong default for a performance-centric language.
>
> > Even though Rust is a performance conscious language (since it aims at displacing C and C++), the 80/20 rule still applies and most of Rust code should not require absolute speed; so let's make it convenient to write safe code and prevent newcomers from shooting themselves in the foot by providing safety by default, and for those who profiled their applications or are writing hashing algorithms *also* provide the necessary escape hatches.
>
> Reducing performance of programs making heavy use of integer arithmetic by 50%+ is unacceptable.
>
> > This way we can have our cake and eat it too... or am I missing something ?
>
> No one will use the language after seeing that it's slower than Java by default.

--
Clark.

Key ID : 0x78099922
Fingerprint: B292 493C 51AE F3AB D016 DD04 E5E3 C36F 5534 F907

From hansjorg at gmail.com Sun Jun 22 15:55:46 2014
From: hansjorg at gmail.com (Hans Jørgen Hoel)
Date: Mon, 23 Jun 2014 00:55:46 +0200
Subject: [rust-dev] Rust CI
In-Reply-To:
References: <53A6FE28.4090300@exyr.org>
Message-ID:

The PPA nightlies should be up to date now if you're using an Ubuntu version that's still supported by Canonical (i.e. supported by Launchpad).

--
Hans Jørgen

On 22 June 2014 19:24, Daniel Fath wrote:
>> Are there advantages or disadvantages with using nightlies from the PPA rather than those from rust-lang.org?
>
> PPA nightlies are woefully out of date but they are easier to reinstall - they automatically notify you when you're out of date, which I don't think rust-lang nightlies do. Also, having nightlies will do wonders for acceptance amongst Ubuntu/Debian-derivative programmers.
>
> Rust-lang nightlies are bleeding edge but are a bit harder to maintain - you need to remember to run the update script.

From pcwalton at mozilla.com Sun Jun 22 16:12:01 2014
From: pcwalton at mozilla.com (Patrick Walton)
Date: Sun, 22 Jun 2014 16:12:01 -0700
Subject: [rust-dev] Integer overflow, round -2147483648
In-Reply-To: <158B23C7-BEED-4096-853D-23DC8A94CB10@mozilla.com>
References: <53A5FC2D.7080106@gmail.com> <53A600DD.1030008@gmail.com> <53A61101.1020103@gmail.com> <158B23C7-BEED-4096-853D-23DC8A94CB10@mozilla.com>
Message-ID: <53A762C1.8040809@mozilla.com>

On 6/22/14 2:12 PM, Cameron Zwarich wrote:
> For some applications, Rust's bounds checks and the inability of rustc to eliminate them in nontrivial cases will already be too much of a performance sacrifice. What do we say to those people? Is it just that memory safety is important because of its security implications, and other forms of program correctness are not?
>
> I am wary of circling around on this topic again, but I feel that the biggest mistake in this discussion is the assumption that checked overflow in a language requires a potential trap on every single integer operation. Languages like Ada (and Swift, to a lesser extent) allow for slightly imprecise exceptions in the case of integer overflow.

I believe that it is possible that the overhead of integer overflow checking will be negligible in the future, and that paper is exciting to me too! But I feel that:

1. Integer overflow is primarily a security concern when it compromises memory safety.
Quoting OWASP [1], emphasis mine: "An integer overflow condition exists when an integer, which has not been properly sanity checked, is used in the *determination of an offset or size for memory allocation, copying, concatenation, or similarly*." Other features in Rust, such as bounds checks, unsigned indexing, and the borrow check, defend against the memory safety problems. So the security benefit of defending against integer overflow is likely to be quite different in Rust than in C. It's a question of costs versus benefits, as always.

2. Signed integer overflow is not undefined behavior in Rust as it is in C. This mitigates some of the more frightening risks associated with it.

3. The As-If-Infinitely-Ranged paper is research. Like all research, the risk of adopting integer overflow checks is somewhat high; it might still not work out to be acceptable in practice when we've exhausted all of the potential compiler optimizations that it allows. That risk has to be compared against the potential reward, which is likely to be smaller in Rust than in C because of the reasons outlined in (1) and (2).

4. We have a pretty tight shipping schedule at this point: 1.0, which is the backwards-compatible release, is on track to be released this year. We are making excellent progress on backwards-incompatible language changes (closed 10 out of 45 issues over the past 2 weeks!), but we must be consciously fi