From ncm at cantrip.org Sat Feb 1 00:56:31 2014 From: ncm at cantrip.org (Nathan Myers) Date: Sat, 01 Feb 2014 00:56:31 -0800 Subject: [rust-dev] Pointer syntax In-Reply-To: References: Message-ID: <52ECB6BF.7020502@cantrip.org> On 01/31/2014 09:43 PM, Eric Summers wrote: > I think I like the mut syntax in let expressions, but I still like shoving the pointer next to the type like I would do in C/C++ for something like fn drop(&mut self) {}. > > I guess it is somewhat rare to use mutable pointers as function parameters, so maybe not a big deal. While we're talking about syntax, hasn't anybody noticed that prefix pointer-designator and dereference operators are crazy, especially for otherwise left-to-right declaration order? C++ had no choice, but Rust can make the sensible choice: the only one that Pascal got right. (Pascal used the caret, also an eerily apt choice.) Nathan Myers From gaetan at xeberon.net Sat Feb 1 01:39:31 2014 From: gaetan at xeberon.net (Gaetan) Date: Sat, 1 Feb 2014 10:39:31 +0100 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <52EAEDDB.4030402@active-4.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> Message-ID: There is more to worry about than API changes. Sometimes, from one minor version to the next, a feature gets silently broken (a silent regression). It might not impact libA, which depends on the new version, but it may break libB, which also depends on that library, though against a previous version. As a result, libA forces installation of the dependency without any problem (all the features it uses work), but libB breaks without any warning. That is the real mess to deal with. It happened at my job this week... I would much prefer each library to be self-contained, i.e., if libA depends on libZ version X.X.X, and libB depends on libZZ version Y.Y.Y, just let each one be installed and used at its own version.
That is perfectly acceptable (and even recommended) for software that is not integrated into the system (for example, when a company wants to build software with minimal system dependencies that will run on any version of Ubuntu, with libc as the only dependency). On the other hand, when the software gets integrated into a distribution (Ubuntu, Red Hat, Homebrew), let the distribution's version manager do its job. ----- Gaetan 2014-02-01 Tony Arcieri : > On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden wrote: > >> This would be counterproductive. If a library cannot be upgraded to 1.9, >> or even 2.2, because some app REQUIRES 1.4, then that causes SERIOUS, >> SECURITY issues. >> > > Yes, these are exactly the types of problems I want to help solve. Many > people on this thread are talking about pinning to specific versions of > libraries. This will prevent upgrades in the event of a security problem. > > Good dependency resolvers work on constraints, not specific versions. > > The ONLY realistic way I can see to solve this, is to have all higher >> version numbers of the same package be backwards compatible, and have >> incompatible packages be DIFFERENT packages, as I mentioned before. >> >> Really, there is a contract here: an API contract. > > > Are you familiar with semantic versioning? > > http://semver.org/ > > Semantic Versioning would stipulate that a backwards incompatible change > in an API would necessitate a MAJOR version bump. This indicates a break in > the original contract. > > Ideally if people are using multiple major versions of the same package, > and a security vulnerability is discovered which affects all versions of a > package, that the package maintainers release a hotfix for all major > versions. > > -- > Tony Arcieri > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matthieu.monrocq at gmail.com Sat Feb 1 03:05:12 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Sat, 1 Feb 2014 12:05:12 +0100 Subject: [rust-dev] What of semi-automated segmented stacks ? In-Reply-To: <20140131215410.GD25089@Mr-Bennet> References: <20140131215410.GD25089@Mr-Bennet> Message-ID: On Fri, Jan 31, 2014 at 10:54 PM, Niko Matsakis wrote: > One option that might make a lot of sense is having an API like this: > > start_extensible_stack(initial_size, || ...) > consider_stack_switch(required, || ...) > > The idea is that `start_extensible_stack` allocates a stack chunk of > size `initial_size` and runs the closure in > it. `consider_stack_switch` basically does what the old prelude did: > compare the bounds and jump if needed. If you failed to insert > `consider_stack_switch` at all the required places, you'd wind up with > stack overruns. Presumably some mechanism for detecting stack overrun > (e.g. guard pages) is still required. > > Unlike the older system, which was intended for universal use, this > would be a power user's API. You get precise control over when to > switch and how much to grow the stack by. This could help avoid the > performance pitfalls we encountered when trying to find a "one size > fits all" heuristic for stack growth. For example, it'd be trivial to > create a system where you start out with a very small stack per > request but switch over to a large stack if the request proves complex > -- but perhaps you only switch once, since you don't want to grow > indefinitely. You should also be better able to avoid performance > pathologies where the stack switch occurs on a frequently traversed > call edge. > > On the flip side, of course, it's not clear to me that the API is > really *usable* -- certainly static analysis could identify points of > arbitrary recursion where a call to `consider_stack_switch` may be > needed.
One downside is the `start_extensible_stack` routine, since > there is already a start frame to start from. Maybe it's not needed, > and we can just always set the "end of stack" register, but not > necessarily read its contents. That'd be nicer. > > I would rather, indeed, not have to use `start_extensible_stack` if possible. It seems to me that setting the "end of stack" address somewhere (probably runtime-dependent) is all that would be needed, and I hope it is cheap enough that it could be done indiscriminately. With this, we are down to a single intrinsic/library function: `consider_stack_switch`. Quite the minimalist API :) -- Matthieu > Anyway, just thinking aloud here. > > > Niko > > > On Thu, Jan 30, 2014 at 06:27:27PM +0100, Matthieu Monrocq wrote: > > Hello, > > > > Segmented stacks were ditched because of performance issues that were > never > > fully resolved, especially when every opaque call (C, ...) required > > allocating a large stack up-front. > > > > Still, there are platforms (FreeBSD) with small stacks where the idea of > > segmented stacks could ease development... so what if we let the developer > > step in ?
> > > > > > The idea of semi-automated segmented stacks would be: > > > > - to expose to the user how many bytes worth of stack are remaining > > > > - to let the user trigger a stack switch > > > > > > This system should keep the penalty close to null for those who do not > > care, and be relatively orthogonal to the rest of the implementation: > > > > - how many bytes remaining carries little to no penalty: just a pointer > > subtraction between the current stack pointer and the "end-of-stack" > > pointer (which can be set once and for all at thread start-up) > > > > - the stack switch is voluntary, and can include a prelude on the new > stack > > that automatically comes back to its parent so that most code should not > > care, no penalty in regular execution (without it) > > > > - I foresee some potential implementation difficulty for the unwinder, > did > > it ever work on segmented stacks ? Was it difficult/slow ? Does > performance > > of unwind matter that much ? > > > > > > Work-around: > > > > In the absence of segmented stacks, the user can only resort to using > > another task to get a "free" stack. Unfortunately, because of the > (great!) > > memory safety of Rust this other task cannot readily access its parent > > task's memory. > > > > > > I do not remember whether this idea was investigated when segmented stacks > > were removed. I thought it might be interesting to consider, although... > it > > is probably useless for 1.0 anyway. > > > > -- Matthieu > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matthieu.monrocq at gmail.com Sat Feb 1 03:27:15 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Sat, 1 Feb 2014 12:27:15 +0100 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <52EAEDDB.4030402@active-4.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> Message-ID: Just two small remarks: 1. C++11 actually introduced the idea of *inline namespaces* specifically to detect the use of multiple (incompatible) versions of a single library. The *compiling* phase sees through inline namespaces as if they were not there; however, the symbols get mangled differently, creating a linking/loading error when trying to resolve symbols. 2. I appreciate the idea of using both version A.B.C and X.Y.Z of a particular library, however what happens when I pass an object "Dummy" created by A.B.C to a function of X.Y.Z expecting a "Dummy": - is there any guarantee that the memory layout is identical ? - could there be an issue with virtual tables (for traits) ? If X.Y.Z calls "slot 3" of the virtual table (known as "(int, string) -> int") and it turns out that A.B.C had a completely different function there (say "(string, string) -> string") then I am foreseeing a crash. The latter is a very real issue in C++, as there is no portable way to insert a virtual method into a class hierarchy. It just so happens that the Itanium ABI guarantees that you can get away with introducing such a method as the last virtual method in the last class of the hierarchy that introduces new virtual methods... but this is quite error-prone, and it is impossible to know whether a user of your library will further refine your class hierarchy... In short, isn't there a risk of crashes if one accidentally links two versions of a given library and starts exchanging objects ?
It seems impractical to prove that objects created by one version cannot accidentally end up being passed to the other version: - unless the types differ at compilation time (seems awkward) - or you can prove that at most one version of the library may "leak" those types (ie, all other dependencies use the other versions purely internally) Escape analysis is pretty tough in general; however, should you have a mechanism for external/internal dependencies (the latter not being exposed in the interface), then the compiler could possibly be augmented to prove it, I guess. And thus you would just have to ensure, for each library, that no two dependencies reference the same library (but different versions) as an external dependency of theirs. Well, maybe not so small remarks... -- Matthieu On Sat, Feb 1, 2014 at 10:39 AM, Gaetan wrote: > There is not only API change. Sometime, from a minor version to another, a > feature get silently broken (that is silent regression). While it might not > impact libA which depends on it, but it may fail libB which also depends on > it, but with a previous version. > As a result, libA force installation of this dependency without any > concern (all its features works) but libB get broken without any concern. > > And that the real mess to deal with. > > That's happened this week at my job... > > I largely prefer each library be self contained, ie, if libA depends on > libZ version X.X.X, and libB depends on libZZ version Y.Y.Y, just let each > one be installed and used at there own version. That is perfectly > acceptable (and even recommended) for a non system integrated software (for > example when a companie want to build a software with minimum system > dependency that would run on any version of Ubuntu, with the only > dependency on libc. > On the other hand, when the software get integrated into the distribution > (ubuntu, redhat, homebrew), let the distrib version manager do its job.
> > > > ----- > Gaetan > > > > 2014-02-01 Tony Arcieri : > >> On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden wrote: >> >>> This would be counterproductive. If a library cannot be upgraded to >>> 1.9, or even 2.2, because some app REQUIRES 1.4, then that causes SERIOUS, >>> SECURITY issues. >>> >> >> Yes, these are exactly the types of problems I want to help solve. Many >> people on this thread are talking about pinning to specific versions of >> libraries. This will prevent upgrades in the event of a security problem. >> >> Good dependency resolvers work on constraints, not specific versions. >> >> The ONLY realistic way I can see to solve this, is to have all higher >>> version numbers of the same package be backwards compatible, and have >>> incompatible packages be DIFFERENT packages, as I mentioned before. >>> >>> Really, there is a contract here: an API contract. >> >> >> Are you familiar with semantic versioning? >> >> http://semver.org/ >> >> Semantic Versioning would stipulate that a backwards incompatible change >> in an API would necessitate a MAJOR version bump. This indicates a break in >> the original contract. >> >> Ideally if people are using multiple major versions of the same package, >> and a security vulnerability is discovered which affects all versions of a >> package, that the package maintainers release a hotfix for all major >> versions. >> >> -- >> Tony Arcieri >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From niko at alum.mit.edu Sat Feb 1 04:57:48 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Sat, 1 Feb 2014 07:57:48 -0500 Subject: [rust-dev] Syntax for custom type bounds In-Reply-To: References: Message-ID: <20140201125738.GA21688@Mr-Bennet> Regarding the marker types, they are somewhat awkward, and are not the approach I originally favored. But they have some real advantages: - Easily extensible as we add new requirements, unlike syntax. - Easily documented. - These bounds are only used for unsafe code, so it's not something ordinary users should have to stumble over. What concerns me more is that marker types are "opt in" -- so if you don't know that you need them, and you build a datatype founded on unsafe code, you can get incorrect behavior. There may be some steps we can take to mitigate that in some cases. In any case, the use of marker types is also quite orthogonal to your other concerns: > This also makes the intent much more clear. Currently, one would have to > dig into the definition of MutItems<'a,T> to figure out that the lifetime > parameter 'a is used to create a dummy borrow field into the vector, so > that the whole iterator is then treated as a mutable borrow. This feels > very convoluted, if you ask me. I disagree -- I think treating lifetime and type parameters uniformly feels cleaner than permitting lifetime bounds to appear in random places. Presumably `'a Foo` would be syntactic sugar for `Foo<'a, T>`? There's an obvious ambiguity here with `&'a T`. > On a slightly different note, is there a strong reason for having to name > lifetime parameters explicitly? Could we simply re-use actual parameter > names prefixed with ' as their lifetimes? It is plausible we could say that a lifetime parameter name that is the same as a local variable binding whose type is a borrowed pointer refers to the lifetime of that borrowed pointer. To me, it feels like a rather ad hoc rule, though I can see it would sometimes be convenient.
The current rules are intended to showcase how lifetime parameters work precisely like type parameters. In other words, we write: fn copy(t: T) -> T; we do not write: fn copy(t) -> t; In the same way, we identify and declare lifetime parameters. Note that lifetime parameters do not have a natural one-to-one relationship with variables. It's certainly possible (and reasonable) to declare a function like: fn foo<'a, 'b, 'c>(x: &'a Foo<'b, 'c>) In which case, the meaning of `'x` is pretty unclear to me. > The above could then be reduced to this: > > pub trait MutableVector { > fn mut_iter(self) -> 'self MutItems; > ... > } > > This used to be valid syntax, btw, though it worked because 'self lifetime > was special, IIRC. Writing `'self` was valid syntax, but it didn't have the meaning you are proposing. Which is one of the reasons we removed it. Niko From leebraid at gmail.com Sat Feb 1 05:07:14 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 13:07:14 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> Message-ID: <52ECF182.1000004@gmail.com> On 01/02/14 00:05, Vladimir Lushnikov wrote: > I disagree. With the same analogy, I could sell you a blunt knife - > you would use it as a letter opener, and then I would fix the bug. Now > you might actually cut yourself! No. The correct analogy would be: 1) I create a recipe for a sharp knife. The first IMPLEMENTATION of that recipe is a blunt knife, and so the IMPLEMENTATION is buggy. 2) You see that the IMPLEMENTATION is useful in its own right, and have thus invented the concept of a letter opener. 3) You fork the code at that point, creating a new "Letter opener" recipe from the buggy, "blunt knife" implementation of the "sharp knife" recipe. 4) I fix my blunt knife implementation to create a sharp knife. 5) Everyone is happy. 
> I think this is a different concern. It should be up to the library > author what they deem as 'compatible'. It does not excuse the > consumers of the library from doing their own testing that it produces the > results they want (a version number in any form is not a proof of > correctness). Absolutely not. If people want to make their OWN, unpublished code incompatible, then yes, that's their business. But as soon as they release a library to others, they are making certain promises and guarantees -- the library API's contract. They are saying that "we're providing this code, which can do this, and will help with your code. You can use it in your own projects, and save yourself time. What's more, many people can use the same code, and save time/space reimplementing the same things". If they make such an offer to help others, then go back on it by breaking the code, it's simply irresponsible: bad code maintenance. We should ABSOLUTELY discourage that, if our goal is reliable, reusable libraries which everyone can use with some certainty. -- Lee From dwrenshaw at gmail.com Sat Feb 1 05:54:04 2014 From: dwrenshaw at gmail.com (David Renshaw) Date: Sat, 1 Feb 2014 08:54:04 -0500 Subject: [rust-dev] Syntax for custom type bounds In-Reply-To: <20140201125738.GA21688@Mr-Bennet> References: <20140201125738.GA21688@Mr-Bennet> Message-ID: I am excited about the new marker types! I think that, combined with a fix for issue 5121 (https://github.com/mozilla/rust/issues/5121), they will greatly increase my ability to express memory safety invariants in capnproto-rust. On Sat, Feb 1, 2014 at 7:57 AM, Niko Matsakis wrote: > Regarding the marker types, they are somewhat awkward, and are not the > approach I originally favored. But they have some real advantages: > > - Easily extensible as we add new requirements, unlike syntax. > - Easily documented. > - These bounds are only used for unsafe code, so it's not something > ordinary users should have to stumble over.
> > What concerns me more is that marker types are "opt in" -- so if you > don't know that you need them, and you build a datatype founded on > unsafe code, you can get incorrect behavior. There may be some steps > we can take to mitigate that in some cases. > > In any case, the use of marker types are also quite orthogonal to your > other concerns: > > > This also makes the intent much more clear. Currently, one would have > to > > dig into the definition of MutItems<'a,T> to figure out that the lifetime > > parameter 'a is used to create a dummy borrow field into the vector, so > > that the whole iterator is then treated as a mutable borrow. This feels > > very convoluted, if you ask me. > > I disagree -- I think treating lifetime and type parameters uniformly > feels cleaner than permitting lifetime bounds to appear in random > places. Presumably `'a Foo` would be syntactic sugar for `Foo<'a, T>`? > There's an obvious ambiguity here with `&'a T`. > > > On a slightly different note, is there a strong reason for having to name > > lifetime parameters explicitly? Could we simply re-use actual parameter > > names prefixed with ' as their lifetimes? > > It is plausible we could say that a lifetime parameter name that is > the same as a local variable binding whose type is a borrowed pointer > refers to the lifetime of that borrowed pointer. To me, it feels like > a rather ad hoc rule, though I can see it would sometimes be convenient. > > The current rules are intended to showcase how lifetime parameters work > precisely like type parameters. In other words, we write: > > fn copy(t: T) -> T; > > we do not write: > > fn copy(t) -> t; > > In the same way, we identify and declare lifetime parameters. > > Note that lifetime parameters do not have a natural one-to-one > relationship with variables. 
It's certainly possible (and reasonable) > to declare a function like: > > fn foo<'a, 'b, 'c>(x: &'a Foo<'b, 'c>) > > In which case, the meaning of `'x` is pretty unclear to me. > > > The above could then be reduced to this: > > > > pub trait MutableVector { > > fn mut_iter(self) -> 'self MutItems; > > ... > > } > > > > This used to be valid syntax, btw, though it worked because 'self > lifetime > > was special, IIRC. > > Writing `'self` was valid syntax, but it didn't have the meaning you > are proposing. Which is one of the reasons we removed it. > > > > Niko > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leebraid at gmail.com Sat Feb 1 06:01:01 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 14:01:01 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> Message-ID: <52ECFE1D.6010400@gmail.com> On 01/02/14 00:09, Tony Arcieri wrote: > On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden > wrote: > > This would be counterproductive. If a library cannot be upgraded > to 1.9, or even 2.2, because some app REQUIRES 1.4, then that > causes SERIOUS, SECURITY issues. > > > Yes, these are exactly the types of problems I want to help solve. > Many people on this thread are talking about pinning to specific > versions of libraries. This will prevent upgrades in the event of a > security problem. > > Good dependency resolvers work on constraints, not specific versions. > Agreed. > Are you familiar with semantic versioning? > > http://semver.org/ > > Semantic Versioning would stipulate that a backwards incompatible > change in an API would necessitate a MAJOR version bump. This > indicates a break in the original contract. 
> I'm familiar, in the sense that it's what many libs/apps do, but again, I don't believe that library 29.x should be backwards-incompatible with 28.x. Major versions of a package, to me, should indicate major new features, but not abandonment of old features. If you want to redesign some code base so it's incompatible (i.e., no longer the same thing), then it deserves a new name. Let's compare the mindsets of backwards-compatible library design, vs.... oh, let's call it "major-breakage" ;) library design: Let's say you follow a common major-breakage approach, and do this: 1) Create a "general-compression-library", version 1.0, which uses the LZ algorithm, and exposes some details of that. 2) During the course of development, you get ideas for version 2.0 3) You publish the 1.x library 4) Create a "general-compression-library", version 2.0. This, you decide, will use the LZMA algorithm, and exposes some details of that. 5) You publish the 2.x library. 6) You receive a patch from someone, adding BZIP support, for 1.x. It includes code to make 1.x more general. However, it's incompatible with 2.x, and you've moved on, so you drop it, or backport your 2.x stuff. Maybe you publish 3.x, but now it's incompatible with 2.x AND 1.x... 7) All the while, people have been using your libraries in products, and some depend on 1.x, some on 2.x, some on 3.x. It's a mess of compatibility hell, with no clear direction, security issues due to unmaintained code, etc. Because details are exposed in each, 2.0 breaks compatibility with 1.x. Under a model where version 2.x can be incompatible with version 1.x, you say, "OK, fine. Slightly broken stuff, but new features. People can upgrade and use the new stuff, or not. Up to them." The problem, though, is that the thinking behind all this is wrong-headed --- beginning from bad assumptions --- and the acceptance of backward-incompatibility encourages that way of thinking.
Let's enforce backwards-compatibility, and see what *might* happen, instead: 1) You create a "general-compression-library", version 1.0. You use the LZ algorithm, and expose details of that. 2) During the course of development, you get ideas for 2.0 3) You're about to publish the library, and realise that your 2.0 changes won't be backwards compatible, because 1.x exposes API details in a non-futureproof way. 4) You do a little extra work on 1.x, making it more general -- i.e., living up to its name. 5) You publish 1.x 6) You create version 2.x, which ALSO supports LZMA. 7) You publish version 2.x, which now has twice as many features, does what it says on the tin by being a general compression library, etc. 8) You receive a patch from someone, adding BZIP support, for 1.x. You merge it in, and publish 3.x, which now supports 3 compression formats. 9) All the while, people have been using your libraries in products: they all work with general compression library x.x, later versions being better, minor OR major. No security issues, because you just upgrade to the latest library version. Now, instead of one base library and two forks, you have one library with three versions, each backwards-compatible, each building features over the last. That's a MUCH better outcome. Now, that does involve a bit more foresight, but I think it's the kind of foresight that enforcing backwards compatibility encourages, and rightly so. I said *might* happen. Let's explore another turn of events, and imagine that you didn't have the foresight in step 3 above: you create "general-compression-library", never realising that it's not general at all, and that 1.x is going to be incompatible with 2.x, until 1.x is published, and you come to create 2.x. Under a backwards-compatibility model, that might go like this: 1) You create general-compression-library, version 1.0, with LZ support, expose details of that, and publish it.
2) You want to add LZMA support to this library, but can't because it breaks backwards compatibility. 3) Instead, you create a new library, "universal-compression-library", 1.0, with plugin support, including built-in plugins for both LZMA and (via general-compression-library 1.0), LZ support. 4) You publish this as universal-compression-library, v 1.0 5) You receive a patch for BZIP support, for general-compression-library 1.x. It adds new features to general compression library, to support both LZ and BZIP. You thank the contributor for his patches, publish g-c-l 2.0, create a plugin for u-c-l, to support it as well. 6) All the while, people have been using your libraries in products: some depend on the latest version of general-compression library, version x.x, later versions being better. Some use the newer universal-compression-library, version x.x, later versions being better. There are two libraries, which are maintained to some extent for now. Security issues are reduced over the first example. So this is NOT as great an outcome as the second example, admittedly. However, it's still much better than the first example. In contrast to the first example, there is now a clear direction: a newer, more future-proof, more compatible library is coming to the fore, clearly distinguished by a new name. More products are using it, and, if general-compression-library is ever fully deprecated, its code can be ported to the universal lib / replaced with other, non-deprecated plugins for the same functionality. Only products that directly depended on the old, broken design of general-compression-library are at risk due to unmaintained code. In that case, someone is likely to port the application code to use universal-compression-library instead: especially if the deprecation notice tells them to. The difference? In the original example, you have multiple forks of one library, distinguished only by version numbers.
Different forks are incompatible with each other, patches go to either, and no fork is clearly superior, because now you have multiple solutions for the same problem, under the same name. Really, all we're saying here is, "don't switch things out from under people". If your library is meant to do X, when you do A, then don't make it suddenly crash if you don't do B beforehand. If one recipe requires two steps, and recipe requires one, then they are different recipes and should have different names. If two recipes use a set of steps, and yet produce different outcomes, then there are ingredients in there that you need to be aware of, and so really, they are different recipes, with different ingredients, and deserve different names. -- Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir at slate-project.org Sat Feb 1 06:18:30 2014 From: vladimir at slate-project.org (Vladimir Lushnikov) Date: Sat, 1 Feb 2014 14:18:30 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ECFE1D.6010400@gmail.com> References: <52E7071A.50702@mozilla.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> <52ECFE1D.6010400@gmail.com> Message-ID: There are some great points here (that should probably go into some sort of 'best practices' doc for rust package authors). However there is a fundamental flaw IMO - we are talking about open-source code, where the author is not obligated to do *any* of this. Most open-source licenses explicitly state that there are no implied warranties of *any* kind. Indeed, I think we would see far less open-source software published if we started imposing requirements on how to go about it (this includes the versioning of libraries). (Whether this is a good thing for open-source in general is open to debate). In an enterprise-only world, this would obviously work because the companies that are providing your libraries actually have to give you a QoS. 
But with open-source software, you can pick any library you want or fork any library you want - if you want something better that follows your requirements, then just fork it and make the changes you want. Which library fork will end up the most used is essentially a popularity contest. Of course this excludes the standard library because that approach definitely does not work there (most notably the D Tango vs. Phobos train-wreck). The 'don't switch things from under people' idea is definitely sound. But if you care about your application's stability, you test each new library upgrade for changes with your unit, regression and integration tests. That is the only way to be sure that nothing is broken and semantic versioning does not address that, except by making some instances where things are really likely to be incompatible much clearer. If nothing else, this is why even if you allow constraints as the units of dependency resolution, at any given time you are relying upon a single pinned version. You should rebuild and retest if you want to upgrade *anything*. Whether this can or should be done dynamically (at runtime) is another question. I disagree with the 'breaking-changes ==> new version' idea. The Rust developers have already said that whatever 2.0 will be, it may break backward compatibility. This is a *good thing* because it's a chance to clean up. One of the reasons C++ is so huge is because it almost never removes features, and this leads to unnecessary complexity. Someone mentioned passing two objects with the same 'type' from different versions of a library and how this would work in terms of memory layout. But with 'slots', this wouldn't be allowed by the linker, because effectively it sees the two different versions of the library as different libraries (even though they have the same name).
On Sat, Feb 1, 2014 at 2:01 PM, Lee Braiden wrote: > On 01/02/14 00:09, Tony Arcieri wrote: > > On Fri, Jan 31, 2014 at 4:03 PM, Lee Braiden wrote: > >> This would be counterproductive. If a library cannot be upgraded to 1.9, >> or even 2.2, because some app REQUIRES 1.4, then that causes SERIOUS, >> SECURITY issues. >> > > Yes, these are exactly the types of problems I want to help solve. Many > people on this thread are talking about pinning to specific versions of > libraries. This will prevent upgrades in the event of a security problem. > > Good dependency resolvers work on constraints, not specific versions. > > > Agreed. > > > Are you familiar with semantic versioning? > > http://semver.org/ > > Semantic Versioning would stipulate that a backwards incompatible change > in an API would necessitate a MAJOR version bump. This indicates a break in > the original contract. > > > I'm familiar, in the sense that it's what many libs/apps do, but again, I > don't believe that library 29.x should be backwards-incompatible with > 28.x. Major versions of a package, to me, should indicate major new > features, but not abandonment of old features. If you want to redesign > some code base so it's incompatible (i.e., no longer the same thing), then > it deserves a new name. > > Let's compare the mindsets of backwards-compatible library design, vs.... > oh, let's call it "major-breakage" ;) language design: > > Let's say you follow a common major-breakage approach, and do this: > > 1) Create a "general-compression-library", version 1.0, which uses the LZ > algorithm, and exposes some details of that. > 2) During the course of development, you get ideas for version 2.0 > 2) You publish the 1.x library > 3) Create a "general-compression-library", version 2.0. This, you decide, > will use LZMA algorithm, and exposes some details of that. > 4) You publish the 2.x library. > 5) You receive a patch from someone, adding BZIP support, for 1.x. 
It > includes code to make 1.x more general. However, it's incompatible with > 2.x, and you've moved on, so you drop it, or backport your 2.x stuff. > Maybe you publish 3.x, but now it's incompatible with 2.x AND 1.x... > 6) All the while, people have been using your libraries in products, and > some depend on 1.x, some on 2.x, some on 3.x. It's a mess of compatibility > hell, with no clear direction, security issues due to unmaintained code, > etc. > > Because details are exposed in each, 2.0 breaks compatibility with 1.x. > Under a model where version 2.x can be incompatible with version 1.x, you > say, "OK, fine. Slightly broken stuff, but new features. People can > upgrade and use the new stuff, or not. Up to them." > > > The problem though, is that the thinking behind all this is wrong-headed > --- beginning from bad assumptions --- and the acceptance of > backward-incompatibility encourages that way of thinking. > > Let's enforce backwards-compatiblity, and see what *might* happen, instead: > > 1) You create a "general-compression-library", version 1.0. You use the > LZ algorithm, and expose details of that. > 2) During the course of development, you get ideas for 2.0 > 3) You're about to publish the library, and realise that your 2.0 changes > won't be backwards compatible, because 1.x exposes API details in a > non-futureproof way. > 4) You do a little extra work on 1.x, making it more general -- i.e., > living up to its name. > 5) You publish 1.x > 6) You create version 2.x, which ALSO supports LZMA. > 7) You publish version 2.x, which now has twice as many features, does > what it says on the tin by being a general compression library, etc. > 8) You receive a patch from someone, adding BZIP support, for 1.x. You > merge it in, and publish 3.x, which now supports 3 compression formats. > 9) All the while, people have been using your libraries in products: they > all work with general compression library x.x, later versions being better, > minor OR major. 
No security issues, because you just upgrade to the latest > library version. > > Now, instead of one base library and two forks, you have a one library > with three versions, each backwards-compatible, each building features over > the last. That's a MUCH better outcome. > > Now, that does involve a bit more foresight, but I think it's the kind of > foresight that enforcing backwards compatibility encourages, and rightly so. > > > > I said *might* happen. Let's explore another turn of events, and imagine > that you didn't have the foresight in step 3 above: you create > "general-compression-library", never realising that it's not general at > all, and that 1.x is going to be incompatible with 2.x, until 1.x is > published, and you come to create 2.x. Under a backwards-compatibility > model, that might go like this: > > 1) You create general-compression-library, version 1.0, with LZ support, > expose details of that, and publish it. > 2) You want to add LZMA support to this library, but can't because it > breaks backwards compatibility. > 3) Instead, you create a new library, "universal-compression-library", > 1.0, with plugin support, including a built-in plugins for both LZMA and > (via general-compression-library 1.0), LZ support. > 4) You publish this as universal-compression-library, v 2.0 > 5) You receive a patch for BZIP support, for general-compression-library > 1.x. It adds new features to general compression library, to support both > LZ and BZIP. You thank the contributor for his patches, publish g-c-l 2.0, > create a plugin for u-c-l, to support it as well. > 6) All the while, people have been using your libraries in products: some > depend on the latest version of general-compression library, version x.x, > later versions being better. Some use the newer > universal-compression-library, version x.x, later versions being better. > There are two libraries, which are maintained to some extent for now. > Security issues are reduced over the first example. 
> > > So this is NOT as great an outcome as the second example, admittedly. > However, it's still much better than the first example. In contrast to the > first example, there is now a clear direction: a newer, more future-proof, > more compatible library is coming to the fore, clearly distinguished by a > new name. More products are using it, and, if general-compression-library > is ever fully deprecated, its code can be ported to the universal lib / > replaced with other, non-deprecated plugins for the same functionality. > Only products that directly depended on the old, broken design of > general-compression-library are at risk due to unmaintained code. In that > case, someone is likely to port the application code to use > universal-compression-library instead: especially if the deprecation notice > tells them to. > > > The difference? In the original example, you have multiple forks. No one > forks, distinguished only by version numbers. Different forks are > incompatible with each other, patches go to either, and no fork is clearly > superior, because now you have multiple solutions for the same problem, > under the same name. > > > Really, all we're saying here is, "don't switch things out from under > people". If your library is meant to do X, when you do A, then don't make > it suddenly crash if you don't do B beforehand. If one recipe requires two > steps, and recipe requires one, then they are different recipes and should > have different names. If two recipes use a set of steps, and yet produce > different outcomes, then there are ingredients in there that you need to be > aware of, and so really, they are different recipes, with different > ingredients, and deserve different names. > > -- > Lee > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
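The backwards-compatible path Lee describes for the hypothetical general-compression-library can be sketched as follows. This is an invented illustration of the principle only: a defaulted parameter lets new algorithms arrive in a minor release while the original 1.x call shape keeps working.

```python
# Toy sketch of a backwards-compatible API extension. All names are
# hypothetical; the codecs are stand-ins, not real compressors.

def lz_compress(data):      # the original 1.0 algorithm
    return b"LZ:" + data

def lzma_compress(data):    # added later, in a backwards-compatible release
    return b"LZMA:" + data

_ALGORITHMS = {"lz": lz_compress, "lzma": lzma_compress}

def compress(data, algorithm="lz"):
    """The 1.x signature was compress(data); a defaulted parameter keeps
    every existing call site working while exposing the new codecs."""
    return _ALGORITHMS[algorithm](data)

print(compress(b"abc"))           # old callers: unchanged behavior
print(compress(b"abc", "lzma"))   # new callers opt in explicitly
```

Adding BZIP support in the same style is another dictionary entry, which is the "one library, three versions, each building on the last" outcome rather than a fork.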
URL: From vladimir at slate-project.org Sat Feb 1 06:19:12 2014 From: vladimir at slate-project.org (Vladimir Lushnikov) Date: Sat, 1 Feb 2014 14:19:12 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> <52ECFE1D.6010400@gmail.com> Message-ID: Oops, that should have been: 'breaking-changes ==> new package name' > I disagree with the 'breaking-changes ==> new version' idea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From leebraid at gmail.com Sat Feb 1 06:38:15 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 14:38:15 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> Message-ID: <52ED06D7.40405@gmail.com> On 01/02/14 00:12, Tony Arcieri wrote: > On Fri, Jan 31, 2014 at 4:07 PM, Vladimir Lushnikov > > wrote: > > Just to be clear, I think what you are saying is that you want > version pinning to be dynamic? I.e. when a new version of a > library dependency becomes available, upgrade the package with > that dependency? > > > I would like a constraints-based system that is able to calculate the > latest API-compatible version of a given package based on rules less > strict than version = X.Y.Z > Agreed, but that's a VERY low bar for requirements; I think we need to be more specific.
Apt, debian's package manager, for example, can have package dependency rules like these:

some-package:
    Version 4.11_amd64
    Depends: X-bin (ver == 2.4, ver > 3.1 && ver < 3.7) | opengl-dev
    Source-Package: some-source
    Build-depends: X-devel, (scons-builder (ver >= 3 && ver != 3.3) | basic-make)

nvidia-headers:
    Provides: opengl-dev

ati-radeon-hd-devel:
    Provides: radeon-dev

GNUMake:
    Provides: basic-make

BSDMake:
    Provides: basic-make

Which says that:

* You can build some-package 4.11_amd64 from some-source-4.11, any version of X-devel, version 3.x of scons-builder (except for 3.3, which is broken somehow), and that anything providing basic make functionality is needed, whether it's BSD's make or GNU's.
* However, if you just want to install the binary version, you only need one of X-bin or opengl-dev.

You could get around the fact that "android-native-devkit" is a whole bunch of tools and libraries which don't conform to the package system by creating a dummy package requiring android-native-devkit, and saying that it provides basic-make and opengl-dev, so that the dependencies all work out. As another example, you could break opengl-dev into API versions, saying that android-native-devkit provides opengl-dev, opengl2-dev, and opengl3-dev, but that ati-radeon-hd-dev provides only opengl-dev and opengl2-dev. Then you can say, for example, "get android-native-devkit from here, and always use the latest, most unstable version, but give me the most stable version of BSDMake, and make sure X-bin is the stable version, but with the latest security patches". One thing you can't do (without chroot/jails/containers) is to say, "Install these packages here, and install version 1.x of packageN here, with 3.x there, and 2.5 there." That's pretty important for virtual hosting, and development, for example.
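Dependency rules like `scons-builder (ver >= 3 && ver != 3.3)` are mechanically checkable. A minimal sketch, assuming dotted integer versions and composing constraints as plain predicate functions rather than Apt's real syntax:

```python
# Hedged sketch of dependency EXPRESSIONS: version comparisons combined
# with boolean logic. The syntax here is invented for illustration.
import operator

def parse(v):
    """'3.3' -> (3, 3); tuples compare component-wise."""
    return tuple(int(x) for x in v.split("."))

def ver(op, rhs):
    """Build a predicate comparing a candidate version string against rhs."""
    ops = {"==": operator.eq, "!=": operator.ne, ">": operator.gt,
           "<": operator.lt, ">=": operator.ge, "<=": operator.le}
    return lambda v: ops[op](parse(v), parse(rhs))

def all_of(*preds):   # boolean AND over sub-expressions
    return lambda v: all(p(v) for p in preds)

def any_of(*preds):   # boolean OR over sub-expressions
    return lambda v: any(p(v) for p in preds)

# scons-builder (ver >= 3 && ver != 3.3) from the example above:
scons_ok = all_of(ver(">=", "3"), ver("!=", "3.3"))
print([v for v in ["2.9", "3.0", "3.3", "3.4"] if scons_ok(v)])
```

Negation ("none of the packages in this sub-expression are compatible") falls out the same way, as a predicate wrapping `not`.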
In short, Rust's package system should probably support:

* Package names, independent of sources
* Parseable versions
* Dependency EXPRESSIONS, including boolean logic, comparison operators, negation (i.e., none of the packages in this sub-expression are compatible), etc.
* Virtual packages which include other packages or wrap other packages
* Multiple installation paths, with a list of packages to be installed / maintained there
* Some way to use different installation paths based on which project you're in
* Some way to specify the local installation path during development, the default installation path for public packages, and a way to override the default installation path for specific sysadmin purposes.
* Some way to specify dependencies on third-party libraries / tools, from other languages / package managers. I've little idea of what to do about that. Probably just print an error message and quit, to begin with?

-- Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir at slate-project.org Sat Feb 1 06:49:43 2014 From: vladimir at slate-project.org (Vladimir Lushnikov) Date: Sat, 1 Feb 2014 14:49:43 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ED06D7.40405@gmail.com> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED06D7.40405@gmail.com> Message-ID: Portage has a very similar syntax/way of specifying runtime vs. build-time dependencies: http://devmanual.gentoo.org/general-concepts/dependencies/. Apt doesn't have support for slots and USE flags (code that is included/excluded at compile time for optional features). On Sat, Feb 1, 2014 at 2:38 PM, Lee Braiden wrote: Agreed, but that's VERY low bar for requirements; I think we need to be
Apt, debian's package manager, for example, can have > package dependency rules like these: > > some-package: > Version 4.11_amd64 > Depends: X-bin (ver == 2.4, ver > 3.1 && < 3.7) | opengl-dev > Source-Package: some-source > Build-depends: X-devel, (scons-builder (ver >= 3 && ver != 3.3) | > basic-make > > nvidia-headers: > Provides: opengl-dev > > ati-radeon-hd-devel: > Provides: radeon-dev > > GNUMake: > Provides: basic-make > > BSDMake: > Provides: basic-make > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpx.infinity at gmail.com Sat Feb 1 06:55:13 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Sat, 1 Feb 2014 18:55:13 +0400 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <1391030003.16129.76945009.289252ED@webmail.messagingengine.com> <52EAEDDB.4030402@active-4.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> Message-ID: Is it possible at all to find the latest version of a library which is still compatible completely automatically? Incompatibilites can be present on logic level, so the compilation wiith incompatible version will succeed, but the program will work incorrectly. I don't think that this can be solved without assumptions about versioning (like semver) and/or without manual intervention. Couldn't we just use more loose variant of version pinning inside semantic versioning, with manual user intervention when it is needed? For example, assuming there is something like semantic versioning adopted, packages specify dependencies on certain major version, and the dependency resolver downloads latest available package inside this major version. If for some reason automatically selected dependency is incompatible with our package or other dependencies of our package, the user can manually override this selection, maybe even with another major version. 
This is, as far as I understand, the system of slots used by Portage, as Vladimir Lushnikov described. Slots correspond to major versions in semver terms, and other packages depend on a concrete slot. But the user has the ultimate power to select whichever version they need, overriding the automatic choice. In short, we allow the dependency resolver to use the latest possible packages which should be compatible according to semantic versioning, and if that fails, we provide the user with the ability to override the dependency resolver's choices. 2014-02-01 Tony Arcieri : > On Fri, Jan 31, 2014 at 3:59 PM, Jack Moffitt wrote: >> >> The algorithm here is rather simple. We try to satisfy rust-logger and >> rust-rest. rust-rest has a version (or could be a tag like 1.x) so we >> go get that. It depends on rust-json 2.0 so we get that. Then we try >> to look for rust-logger, whatever version is latest (in rustpkg this >> would mean current master since no version or tag is given). This >> pulls in rust-json 1.0 since 1.0 != 2.0 and those have specific tags. >> Everything is built and linked as normal. Whether rust-json's >> constraints are exact revisions or they are intervals (< 2.0 and >= >> 2.0 for example), makes little difference I think. > > > To reiterate, it sounds like you're describing every package pinning its > dependencies to a specific version, which I'd consider an antipattern. > > What is to prevent a program using this (still extremely handwavey) > algorithm from depending on rust-json 1.0, 1.1, 1.2, 1.3, 1.4, 2.0, 2.1, and > 2.2 simultaneously? > > What if some of these are buggy, but the fixed versions aren't used due to > version pinning? > > What if rust-json 1.0 has a security issue?
> > -- > Tony Arcieri > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From leebraid at gmail.com Sat Feb 1 07:00:08 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 15:00:08 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> Message-ID: <52ED0BF8.7080202@gmail.com> On 01/02/14 09:39, Gaetan wrote: > There is not only API change. Sometime, from a minor version to > another, a feature get silently broken (that is silent regression). > While it might not impact libA which depends on it, but it may fail > libB which also depends on it, but with a previous version. Silent regressions are the exceptional case though, not the norm. As a general rule, upgrades are important and necessary, at least for security reasons. It's kind of up to developers, up to distro maintainers, and certainly up to mission-critical sysadmins, to choose software (libs, and apps which use those libs) which is QA'd well enough to avoid this. Breakage SOMETIMES happens, but, much like recovering from a failed write to disk, you just have to weigh the odds, try it, then back out if it didn't work out. In fact, you could think of the process of upgrading a library as a simple write followed by a verify. If you do it properly, like a good admin would, it'll all be wrapped in a transaction that you can roll back. BUT, the important part is that you'll probably still need to upgrade version x.19->x.21, even if upgrading x.19->x.20 fails for some reason. Either you have a system which just works, isn't connected to the net, has no bugs, and no security risks associated with it, or you upgrade sooner or later.
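The "write followed by a verify" model above can be sketched as a tiny transaction: attempt the upgrade, run a verification step, and roll back to the previous version if it fails. All names here are invented; an in-memory map stands in for a real package database.

```python
# Toy sketch of upgrade-as-transaction with rollback.

def upgrade_with_rollback(installed, package, new_version, verify):
    """Install new_version; keep it only if verify(installed) passes."""
    previous = installed.get(package)
    installed[package] = new_version   # the "write"
    if verify(installed):              # the "verify"
        return True
    installed[package] = previous      # roll back the transaction
    return False

system = {"libfoo": "x.19"}
upgrade_with_rollback(system, "libfoo", "x.20",
                      verify=lambda s: False)   # x.19 -> x.20 fails, rolled back
print(system["libfoo"])                         # -> x.19
ok = upgrade_with_rollback(system, "libfoo", "x.21",
                           verify=lambda s: True)  # x.19 -> x.21 succeeds
print(system["libfoo"])                         # -> x.21
```

This mirrors the point in the email: a failed x.19 -> x.20 upgrade leaves the system working, but does not excuse you from eventually reaching x.21.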
In most cases, if you're acting responsibly, you CANNOT just install version x.19, call that a working system, and forget about it, installing x.21 only for newer customers / systems. Not if those systems are connected to the internet, at least. -- Lee From leebraid at gmail.com Sat Feb 1 07:43:20 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 15:43:20 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> Message-ID: <52ED1618.8090701@gmail.com> On 01/02/14 14:55, Vladimir Matveev wrote: > Is it possible at all to find the latest version of a library which is > still compatible completely automatically? Incompatibilites can be > present on logic level, so the compilation wiith incompatible version > will succeed, but the program will work incorrectly. I don't think > that this can be solved without assumptions about versioning (like > semver) and/or without manual intervention. No, it's not. It's always going to be the library developer's / programmer's responsibility, to some extent. For example, if a library adds three new functions to fit within some begin/end wrapper, it may modify the begin/end functions to behave differently. If the library author does that in a way that breaks existing logic, then that's a bug, to my mind, or a deliberate divergence / API contract breakage. At that point, what the author has REALLY done is decided that his original design for begin() and end(), and for that whole part of the library in general, is wrong, and needs a REDESIGN. What he can then do is:

a) Create different functions, which have extended functionality, and support the three new in-wrapper functions. So, you could call:

begin()
old_funcs...
end()

OR:

extended_begin()
old_funcs()
new_funcs()
extended_end()

b) Create a new library, similar to the old one, but with new functionality, new API guarantees, etc.
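Option (a) can be rendered as a toy, using the corrected nesting Lee posts later in the thread (the extended pair nests inside the original begin()/end() pair). Every function here is hypothetical and just records that it ran:

```python
# Toy sketch of extending a begin/end wrapper without breaking old callers.
# All function names are invented for illustration.

log = []

def begin():            log.append("begin")
def end():              log.append("end")
def old_func():         log.append("old")
def extended_begin():   log.append("ext-begin")
def extended_end():     log.append("ext-end")
def new_func():         log.append("new")

# Old callers are untouched by the extension:
begin(); old_func(); end()

# New callers opt in, nesting the extended wrapper inside begin()/end():
begin(); old_func(); extended_begin(); new_func(); extended_end(); end()

print(log)
```

The point of the sketch is that the original begin()/end() contract never changes, so no existing call sequence can be broken by the additions.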
Ignoring the problem just creates a mess though, which ripples throughout the development space (downstream products, library forks, etc.), and no package manager will completely solve it after the fact, except to acknowledge the mess and install separate packages for every program that needs them (but that has security / feature-loss issues). > Couldn't we just use more loose variant of version pinning inside > semantic versioning, with manual user intervention when it is needed? > For example, assuming there is something like semantic versioning > adopted, packages specify dependencies on certain major version, and > the dependency resolver downloads latest available package inside this > major version. You can do that within a major version, except for one case - multiple developers creating diverged versions of 2.13, based on 2.12, each with their own features. Really, though, what you're doing is just shifting/brushing the compatibility issue under the rug each time: y is OK in x.y because x guarantees backwards compatibility. Fork1 in x.y.fork1 is OK, because x.y guarantees backwards compatibility... and so on, ad infinitum. Whatever level you're at, you have two issues: a) Backwards compatibility between library versions b) The official, named version of the library, vs. unofficial code. Assuming you guarantee A in some way (backwards compatibility in general, across all versions of the library, or backwards compatibility for minor versions), you still have incompatibility if (b) arises, which it will in all distributed repository scenarios, UNLESS you can do something like git's version tracking per branch, where any version number is unique, and also implies every version before. Then you're back to whether you want to do that per major version, or overall. But doing it per major version recursively raises the question of which major version is authorised: what if you have a single library at 19.x, and TWO people create 20.0 independently? 
Again, you have incompatibility. So, you're back to the question of (a): is it the same library, or should an author simply stay within the bounds of a library's API, and fork a new CONCEPTUALLY DIFFERENT new lib (most likely with a new name) when they break that API? > If for some reason automatically selected dependency is > incompatible with our package or other dependencies of our package, > the user can manually override this selection But what does the user know about library APIs? He needs to dig into the logic of the program, and worse, the logic of underlying libraries, to figure out that: somelib::begin() from github://somelib/someplace/v23.2/src/module1/submod2/utils.rs, line 24 does not mean the same as: somelib::begin() from github://somelib/otherplace/v23.2/src/module1/submod2/utils.rs, line 35 ! ;) > major version. This is, as far as I understand, the system of slots > used by Portage as Vladimir Lushnikov described. Slots correspond to > major versions in semver terms, and other packages depend on concrete > slot. This sounds interesting (I'll have to track down Vladimir's original post on that), but so far, I'm not sure it solves the problem of a forked minor version, any more than other methods solve a forked major version. It seems to me that it always comes back to people choosing to break library APIs, and other people trying to clean it up in one way or another, which ultimately fails, at some point -- major, minor, fork, repository, branch, or otherwise -- wherever the guarantee of backwards-compatibility is no longer given. > But the user has ultimate power to select whichever version they > need, overriding automatic choice. I agree that overriding choices is always important. 
For example, a library may make NO guarantees about performance, or may make guarantees to improve performance in certain ways, but if you know that one version happens to perform well in your particular environment, and later versions don't, then you may choose to use that version anyway, assuming it "just works" and you don't want to maintain it or upgrade it for security, etc. Those are big trade-offs, though, and should not be encouraged. Again, the right solution is to fork the code at the version that works performance-wise, and introduce new API guarantees for different performance characteristics. At that point, your library with different performance should have a different name, but should still be getting security updates etc., rather than just being pinned at one version forever. -- Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From leebraid at gmail.com Sat Feb 1 07:45:47 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 15:45:47 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED06D7.40405@gmail.com> Message-ID: <52ED16AB.2050505@gmail.com> On 01/02/14 14:49, Vladimir Lushnikov wrote: > Portage has a very similar syntax/way of specifying runtime vs. > build-time dependencies: > http://devmanual.gentoo.org/general-concepts/dependencies/. > > Apt doesn't have support for slots and USE flags (code that is > included/excluded at compile time for optional features). > Agreed; use flags are very nice :) I find them a bit clunky / breakable, though -- it's very hard to know what the valid range of flags is, and how that will affect every package on your system. If Rust gets something similar, the exact circumstances under which they're used, the range of valid values, and the effects of each, should be EXTREMELY clear.
-- Lee From vladimir at slate-project.org Sat Feb 1 07:48:21 2014 From: vladimir at slate-project.org (Vladimir Lushnikov) Date: Sat, 1 Feb 2014 15:48:21 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ED16AB.2050505@gmail.com> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED06D7.40405@gmail.com> <52ED16AB.2050505@gmail.com> Message-ID: I think USE flags are more appropriate for library features (which is exactly the way portage uses them). So you have your rust app with conditional code that depends on a particular cfg ( https://github.com/mozilla/rust/wiki/Doc-attributes) and then you expose a list of these in your package specification so that others can know to say - I use the json library but with built-in URI support. On Sat, Feb 1, 2014 at 3:45 PM, Lee Braiden wrote: > On 01/02/14 14:49, Vladimir Lushnikov wrote: > >> Portage has a very similar syntax/way of specifying runtime vs. >> build-time dependencies: http://devmanual.gentoo.org/ >> general-concepts/dependencies/. >> >> Apt doesn't have support for slots and USE flags (code that is >> included/excluded at compile time for optional features). >> >> > Agreed; use flags are very nice :) I find them a bit clunky / breakable, > though -- it's very hard to know what the valid range of flags is, and how > that will affect every package on your system. If Rust gets something > similar, the exact circumstances under which they're used, the range valid > values, and the effects of each, should be EXTREMELY clear. > > > -- > Lee > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From leebraid at gmail.com Sat Feb 1 10:27:05 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 18:27:05 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED06D7.40405@gmail.com> <52ED16AB.2050505@gmail.com> Message-ID: <52ED3C79.7020401@gmail.com> On 01/02/14 15:48, Vladimir Lushnikov wrote: > I think USE flags are more appropriate for library features (which is > exactly the way portage uses them). So you have your rust app with > conditional code that depends on a particular cfg > (https://github.com/mozilla/rust/wiki/Doc-attributes) and then you > expose a list of these in your package specification so that others > can know to say - I use the json library but with built-in URI support. Interesting. I was thinking more of compiling for specific CPU optimisations, etc. For the "use this optional library" thing, debian seems to mostly just use optional / recommended dependencies. The package manager informs you that a package is recommended / optional, and you can install them if you want. Then the ./configure script or whatever will normally just use it if it's there, by default if that's considered sensible as a default, or you can build it with extra flags manually, to make it build in a non-default way. I like that Debian exposes those optional packages at the package manager level, but the global / local (iirc) use flags make a lot of sense too. Some hybrid that had option flags when installing/building, and informed you of additional packages needed (much like when you select "features to install" in a GUI installer), folding that back into the package management/dependencies etc. might be best, but it would be relatively complex to implement. 
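The optional-features idea in this exchange (USE flags, cfg-gated code, "the json library but with built-in URI support") can be sketched as data: a package declares extra dependencies keyed by feature flag, and unknown flags are rejected so the valid range of flags stays explicit. The package names and fields below are invented for illustration.

```python
# Hedged sketch of USE-flag-style optional features in a package spec.

PACKAGE = {
    "name": "json-lib",
    "depends": ["core-lib"],          # always required
    "optional": {                     # feature name -> extra dependencies
        "uri": ["uri-lib"],
        "simd": ["simd-intrinsics"],
    },
}

def resolve_depends(pkg, enabled_features):
    """Base dependencies plus those pulled in by enabled features.
    Unknown feature names raise, keeping the flag set discoverable."""
    deps = list(pkg["depends"])
    for feature in enabled_features:
        if feature not in pkg["optional"]:
            raise ValueError("unknown feature: " + feature)
        deps.extend(pkg["optional"][feature])
    return deps

print(resolve_depends(PACKAGE, ["uri"]))   # -> ['core-lib', 'uri-lib']
```

Because the flags live in the package specification, a tool can enumerate them and their dependency effects up front, which is exactly the clarity Lee asks for.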
-- Lee From leebraid at gmail.com Sat Feb 1 10:31:40 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 18:31:40 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ED1618.8090701@gmail.com> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> Message-ID: <52ED3D8C.8000801@gmail.com> Ah, this: On 01/02/14 15:43, Lee Braiden wrote: > extended_begin() > old_funcs() > new_funcs() > extended_end() should read more like:

begin()
old_funcs()
extended_begin()
new_funcs()
extended_end()
end()

-- Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan at xeberon.net Sat Feb 1 10:54:12 2014 From: gaetan at xeberon.net (Gaetan) Date: Sat, 1 Feb 2014 19:54:12 +0100 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <1391030003.16129.76945009.289252ED@webmail.messagingengine.com> <52EAEDDB.4030402@active-4.com> <6734580107559889337@gmail297201516> Message-ID: why not enforce, in one way or another, an API compatibility test suite ensuring at least a certain level of compatibility between two versions? I think it is something quite doable, and moreover this would kinda force the package maintainer to write unit tests, which is always a good practice. ----- Gaetan 2014-01-31 Sean McArthur : > On Fri, Jan 31, 2014 at 1:05 PM, Tony Arcieri wrote: > >> IMO, a system that respects semantic versioning, allows you to constrain >> the dependency to a particular *major* version without requiring pinning >> to a *specific* version. >> >> I would call anything that requires pinning to a specific version an >> antipattern. Among other things, pinning to specific versions precludes >> software updates which may be security-critical. >> >> > It's perfectly reasonable to require a certain *minor* version, since > minor versions (in semver) can include API additions that you may depend on.
> > Also, nodejs and npm supposedly support semver, but it's impossible to > enforce library authors actually do this, so you'll get libraries with > breaking changes going from 1.1.2 to 1.1.3 because reasons. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leebraid at gmail.com Sat Feb 1 10:59:06 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 18:59:06 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <52EAEDDB.4030402@active-4.com> <6734580107559889337@gmail297201516> Message-ID: <52ED43FA.8050207@gmail.com> On 01/02/14 18:54, Gaetan wrote: > why not enforcing in a way or another a API compatibility test suite > for ensuring at least a certain level of compatibility between two > version? I think it is something quite doable, and moreover this would > kinda force the package manager to write unit tests which is always a > good practice. At the moment, we're trying to agree the policy. After the policy is agreed, tools could be created to help ensure that those policies are met. People would then use them if they see fit, or they could be built into package creation / version upload tools as standard. The first thing is to agree a reliable, sensible policy that improves the quality of software / package management, and is WORTH enforcing, though. 
-- Lee From leebraid at gmail.com Sat Feb 1 11:02:15 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 19:02:15 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <52EAEDDB.4030402@active-4.com> <6734580107559889337@gmail297201516> Message-ID: <52ED44B7.4000302@gmail.com> On 01/02/14 18:54, Gaetan wrote: > why not enforcing in a way or another a API compatibility test suite > for ensuring at least a certain level of compatibility between two > version? I think it is something quite doable, and moreover this would > kinda force the package manager to write unit tests which is always a > good practice. > One other thing: I don't believe "a certain level of compatibility" is a useful attribute to track in releases. Either something is fully compatible, or it breaks existing software. It might be useful to judge the suitability of software for release (i.e., software passes one release-readiness test when it's fully compatible with a previous release), but that's a different thing, imho. -- Lee From vladimir at slate-project.org Sat Feb 1 11:03:06 2014 From: vladimir at slate-project.org (Vladimir Lushnikov) Date: Sat, 1 Feb 2014 19:03:06 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ED43FA.8050207@gmail.com> References: <52E7071A.50702@mozilla.com> <52EAEDDB.4030402@active-4.com> <6734580107559889337@gmail297201516> <52ED43FA.8050207@gmail.com> Message-ID: Just read the document at http://semver.org/ fully and this sounds like a sensible approach. Since I think with crate metadata it should be obvious (at the symbol level) if the public api changes, then this can be tracked somewhere by an automatic tool where we have a public list of rust packages (rustpi or travis etc). 
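[Editor's note: the symbol-level check Vladimir describes above can be sketched concretely. This is purely an illustrative sketch, not an existing tool — `required_bump` and the bare-string symbol sets are invented for the example; a real checker would compare full signatures taken from crate metadata.]

```rust
use std::collections::BTreeSet;

// Which semver bump a change to the public API surface would require.
#[derive(Debug, PartialEq)]
enum Bump {
    Major, // something public was removed or changed
    Minor, // only additions to the public API
    Patch, // identical public surface
}

// Classify the required bump by diffing two sets of public symbols.
fn required_bump(old: &BTreeSet<&str>, new: &BTreeSet<&str>) -> Bump {
    if old.difference(new).next().is_some() {
        Bump::Major
    } else if new.difference(old).next().is_some() {
        Bump::Minor
    } else {
        Bump::Patch
    }
}

fn main() {
    let v1: BTreeSet<_> = ["begin", "old_funcs"].into_iter().collect();
    let v2: BTreeSet<_> = ["begin", "old_funcs", "new_funcs"].into_iter().collect();
    // Adding new_funcs is backwards compatible: a minor bump suffices.
    assert_eq!(required_bump(&v1, &v2), Bump::Minor);

    let v3: BTreeSet<_> = ["begin", "new_funcs"].into_iter().collect();
    // Dropping old_funcs breaks the contract: major bump required.
    assert_eq!(required_bump(&v1, &v3), Bump::Major);
}
```

In this toy form the check only sees additions and removals, not behavioral regressions — which is exactly the gap Gaetan's test-suite suggestion is aimed at.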
On Sat, Feb 1, 2014 at 6:59 PM, Lee Braiden wrote: > On 01/02/14 18:54, Gaetan wrote: > >> why not enforcing in a way or another a API compatibility test suite for >> ensuring at least a certain level of compatibility between two version? I >> think it is something quite doable, and moreover this would kinda force the >> package manager to write unit tests which is always a good practice. >> > > At the moment, we're trying to agree the policy. After the policy is > agreed, tools could be created to help ensure that those policies are met. > People would then use them if they see fit, or they could be built into > package creation / version upload tools as standard. The first thing is to > agree a reliable, sensible policy that improves the quality of software / > package management, and is WORTH enforcing, though. > > > -- > Lee > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpx.infinity at gmail.com Sat Feb 1 11:32:09 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Sat, 1 Feb 2014 23:32:09 +0400 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ED1618.8090701@gmail.com> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> Message-ID: <20140201233209.7ffa7b5a@lightyear> > You can do that within a major version, except for one case - multiple > developers creating diverged versions of 2.13, based on 2.12, each with > their own features. > ... > But doing it per major version recursively raises the question of which > major version is authorised: what if you have a single library at 19.x, > and TWO people create 20.0 independently? Again, you have > incompatibility. 
So, you're back to the question of (a): is it the same > library, or should an author simply stay within the bounds of a > library's API, and fork a new CONCEPTUALLY DIFFERENT new lib (most > likely with a new name) when they break that API? I think that forks should be considered as completely different libraries. This shouldn't be a problem when certain naming scheme is used, for example, two-level names like in Java world. Central repository will certainly help, because each entry in it will be controlled by concrete user. These entries can also be linked with version control stream which represents main development line. No ambiguities here. It may be desirable then to use specific fork instead of the mainline project. This can be a feature of overriding system, which will be present anyway. If the user wants to use a fork instead of a library (all its versions or a specific version), he/she will be able to specify this requirement somehow, and dependency resolver will take it into account. Obviously, package authors will be able to choose default "fork" which they want to use. > But what does the user know about library APIs? He needs to dig into > the logic of the program, and worse, the logic of underlying libraries, > to figure out that: > > somelib::begin() from > github://somelib/someplace/v23.2/src/module1/submod2/utils.rs, line 24 > > does not mean the same as: > > somelib::begin() from > github://somelib/otherplace/v23.2/src/module1/submod2/utils.rs, line 35 > > ! ;) > When this API is used directly by the package, then the user *should* know about it. He's using it, after all. If this API belongs to a transitive dependency, then I don't think there is an ideal solution. Either the version is pinned (like in Java world), or it is chosen by the dependency resolver. In the former case all transitive dependencies are guaranteed to be intercompatible, because these pinned versions were deliberately chosen by libraries developers. 
In the latter case there is always a possibility of compatibility problems, because it is impossible to guarantee complete compatibility - libraries are written by people, after all. Then it is the user's responsibility to resolve these problems, no one else will be able to do this. From ml at isaac.cedarswampstudios.org Sat Feb 1 11:59:57 2014 From: ml at isaac.cedarswampstudios.org (Isaac Dupree) Date: Sat, 01 Feb 2014 14:59:57 -0500 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <6734580107559889337@gmail297201516> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> Message-ID: <52ED523D.6060604@isaac.cedarswampstudios.org> On 02/01/2014 06:27 AM, Matthieu Monrocq wrote: > In short, isn't there a risk of crashes if one accidentally links two > versions of a given library and start exchanging objects ? It seems > impractical to prove that objects created by one version cannot > accidentally end up being passed to the other version: > > - unless the types differ at compilation time (seems awkward) Haskell does this. Types are equal if their {package, package-version, module-name, type-name} is the same. (Or maybe it is even more rigorous about type equality.) Using multiple versions of some packages turns out not to be awkward at all, such as libraries for writing tests and libraries that don't export important data types. -Isaac From leebraid at gmail.com Sat Feb 1 12:03:13 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 20:03:13 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <20140201233209.7ffa7b5a@lightyear> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> <20140201233209.7ffa7b5a@lightyear> Message-ID: <52ED5301.7010404@gmail.com> On 01/02/14 19:32, Vladimir Matveev wrote: > When this API is used directly by the package, then the user *should* > know about it. 
He's using it, after all. There are developers (direct library users), and then distro maintainers/admins/users who need to manage libraries installed on their system. The former should know, but the others shouldn't have to think about it, yet should (must) be able to override the defaults if they need to, at least for shared libraries. Presumably we want shared libraries and static libraries to function similarly, except for whether the user chooses static or dynamic linkage. > If this API belongs to a transitive dependency, then I don't think > there is an ideal solution. Either the version is pinned (like in Java > world), or it is chosen by the dependency resolver. If we're talking about pinning to an absolute version (no upgrades), then I think that's a security / bugfix issue, unless we're also talking about static linkage in that case (which is reasonable, because then the bug is essentially part of the black box that is the software the user is installing, and in that case the software maintainer is also responsible for releasing updates to fix bugs within the statically linked code). > In the former case all transitive dependencies are guaranteed to be > intercompatible Are they? What if the statically pinned version of a scanner library doesn't support the user's new scanner, there's an update to support his scanner, but it's ignored because the software allows only an absolute version number? > because these pinned versions were deliberately chosen by libraries > developers. Who are not infallible, and do/should not get to choose everything about the target system's libraries. There is also a freedom issue, regarding someone's right to implement a new version of the library, say, to port it to a new GUI toolkit. > In the latter case there is always a possibility of compatibility > problems, because it is impossible to guarantee complete compatibility > - libraries are written by people, after all. 
Yes, but we can encourage it, just like we encourage immutability, even though we can't force everyone to use it. > Then it is the user's responsibility to resolve these problems, no one > else will be able to do this. But the user can't do this, if new libraries break old programs, or old programs won't allow upgrading. -- Lee From leebraid at gmail.com Sat Feb 1 12:10:41 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 01 Feb 2014 20:10:41 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ED523D.6060604@isaac.cedarswampstudios.org> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <52EC39C5.90801@gmail.com> <52ED523D.6060604@isaac.cedarswampstudios.org> Message-ID: <52ED54C1.9080904@gmail.com> On 01/02/14 19:59, Isaac Dupree wrote: > On 02/01/2014 06:27 AM, Matthieu Monrocq wrote: >> In short, isn't there a risk of crashes if one accidentally links two >> versions of a given library and start exchanging objects ? It seems >> impractical to prove that objects created by one version cannot >> accidentally end up being passed to the other version: >> >> - unless the types differ at compilation time (seems awkward) > > Haskell does this. Types are equal if their {package, package-version, > module-name, type-name} is the same. (Or maybe it is even more > rigorous about type equality.) Using multiple versions of some > packages turns out not to be awkward at all, such as libraries for > writing tests and libraries that don't export important data types. > This sounds useful, but still seems like it's prone to error, unless you can define versions in some reliable way, which works despite distributed repositories, branches on those repositories, etc. Does anyone have a proposal for methods of doing that? I think it would require tracking version + hash of all code --- a bit like the way git tracks the head of a branch. Is that what the hash in rust libraries currently includes? 
-- Lee From dpx.infinity at gmail.com Sat Feb 1 13:28:45 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Sun, 2 Feb 2014 01:28:45 +0400 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52ED5301.7010404@gmail.com> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> <20140201233209.7ffa7b5a@lightyear> <52ED5301.7010404@gmail.com> Message-ID: <20140202012845.47f9e66c@lightyear> To clarify, when I was writing "user" I meant the developer who uses this package, not the end user of complete program. > On 01/02/14 19:32, Vladimir Matveev wrote: > > When this API is used directly by the package, then the user *should* > > know about it. He's using it, after all. > > There are developers (direct library users), and then distro > maintainers/admins/users who need to manage libraries installed on their > system. The former should know, but the others shouldn't have to think > about it, yet should (must) be able to override the defaults if they > need to, at least for shared libraries. Presumably we want shared > libraries and static libraries to function similarly, except for whether > the user chooses static or dynamic linkage. Well, it seems that working for a long time with a code targeting virtual machines is corrupting :) I completely forgot about different models of compilation. I see your point. But I think that developing and distributing should be considered separately. Package manager for developers should be a part of language infrastructure (like rustpkg is now for Rust and, for example, go tool for Go language or cabal for Haskell). This package manager allows flexible management of Rust libraries and their dependencies, and it should be integrated with the build system (or *be* this build system). 
It is used by developers to create applications and libraries and by maintainers to prepare these applications and libraries for integration with the distribution system for end users. Package manager for general users (I'll call it system package manager), however, depends on the OS, and it is maintainer's task to determine correct dependencies for each package. Rust package manager should not depend in any way on the system package manager and its packages, because each system has its own package manager, and it is just impossible to support them all. Rust also should not force usage of concrete user-level package manager (like 0install, for example), because this means additional unrelated software on the user installation. Go and Haskell do not have this problem because they are linked statically, and their binary packages do not have any library dependencies at all. Rust is a different story as it supports and encourages dynamic linkage. I think that maintainers should choose "standard" set of Rust libraries which is OK for most applications, and support and update them and their dependent applications. If there are conflicts between versions (for example, some application started to depend on another fork of a library), then maintainers should resolve this in the standard way of their distribution system (e.g. slots for Portage, name suffixes in apt and so on). Essentially there is a large graph of packages in the Rust world, consisting of packages under Rust package manager control (main graph). Developers are only working with this graph. Then for each distribution system maintainers of this system pull packages from the main graph and adapt it to their system in a way this system allows and encourages. I don't think that it is possible to achieve anything better than this. We cannot and should not force end users to use something other than their packaging system. 
> > If this API belongs to a transitive dependency, then I don't think > > there is an ideal solution. Either the version is pinned (like in Java > > world), or it is chosen by the dependency resolver. > > If we're talking about pinning to an absolute version (no upgrades), > then I think that's a security / bugfix issue, unless we're also talking > about static linkage in that case (which is reasonable because then the > bug is essentially part of the black box that is the software the user > is installing, and in that case, the software maintainer is also > responsible for releasing updates to fix bugs within the statically > linked code. > > > In the former case all transitive dependencies are guaranteed to be > > intercompatible > > Are they? What if the statically pinned version of a scanner library > doesn't support the user's new scanner, there's an update to support his > scanner, but it's ignored because the software allows only an absolute > version number? I don't think your example is related. By guaranteed intercompatibility I meant something like the following. Suppose your package is called `package`. It depends on `foo-x` who in turn depends on `bar-y`. When versions are always pinned by their developers, `foo` author deliberately has chosen `bar-y` version, and he knows that `foo-x` library will work properly with `bar-y`. This is how Java ecosystem works now. New scanner, however, is not an API feature. Your example seems to support the general point about outdated dependencies, and I generally agree with it. > > because these pinned versions were deliberately chosen by libraries > > developers. > > Who are not infallible, and do/should not get to choose everything about > the target system's libraries. There is also a freedom issue, regarding > someone's right to implement a new version of the library, say, to port > it to a new GUI toolkit. 
Well, developers choose the libraries they work with, and it is absolutely reasonable for them to expect that the user will have a compatible version installed. > > In the latter case there is always a possibility of compatibility > > problems, because it is impossible to guarantee complete compatibility > > - libraries are written by people, after all. > > Yes, but we can encourage it, just like we encourage immutability, even > though we can't force everyone to use it. Absolutely agree with this. Isn't semantic versioning intended just for that? > > Then it is the user's responsibility to resolve these problems, no one > > else will be able to do this. > > But the user can't do this, if new libraries break old programs, or old > programs won't allow upgrading. I believe you mean the end user here. Then it highly depends on the system package manager of this user. If, for example, he or she uses Portage, then the problem can be solved (at least partially) with its slots mechanism. If it is some other system, then it should be solved by its means. See above. From corey at octayn.net Sat Feb 1 14:39:51 2014 From: corey at octayn.net (Corey Richardson) Date: Sat, 1 Feb 2014 17:39:51 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax Message-ID: Hey all, bjz and I have worked out a nice proposal[0] for a slight syntax change, reproduced here. It is a breaking change to the syntax, but it is one that I think brings many benefits. Summary ======= Change the following syntax: ``` struct Foo<T, U> { ... } impl<T, U> Trait for Foo<T, U> { ... } fn foo<T, U>(...) { ... } ``` to: ``` forall<T, U> struct Foo { ... } forall<T, U> impl Trait for Foo { ... } forall<T, U> fn foo(...) { ... } ``` The Problem =========== The immediate, and most pragmatic, problem is that in today's Rust one cannot easily search for implementations of a trait. Why? `grep 'impl Clone'` is itself not sufficient, since many types have parametric polymorphism. 
Now I need to come up with some sort of regex that can handle this. An easy first attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to type and remember. (Here I ignore the issue of tooling, as I do not find the argument of "But a tool can do it!" valid in language design.) A deeper, more pedagogical problem is the mismatch between how `struct Foo<...> { ... }` is read and how it is actually treated. The straightforward, left-to-right reading says "There is a struct Foo which, given the types ... has the members ...". This might lead one to believe that `Foo` is a single type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with type `int`) is not the same type as `Foo<uint>` (that is, type `Foo` instantiated with type `uint`). Of course, with a small amount of experience or a very simple explanation, that becomes obvious. Something less obvious is the treatment of functions. What does `fn foo<...>(...) { ... }` say? "There is a function foo which, given types ... and arguments ..., does the following computation: ..." is not very adequate. It leads one to believe there is a *single* function `foo`, whereas there is actually a single `foo` for every substitution of type parameters! This also holds for implementations (both of traits and of inherent methods). Another minor problem is that nicely formatting long lists of type parameters or type parameters with many bounds is difficult. Proposed Solution ================= Introduce a new keyword, `forall`. This choice of keyword reads very well and will not conflict with any identifiers in code which follows the [style guide](https://github.com/mozilla/rust/wiki/Note-style-guide). Change the following declarations from ``` struct Foo<T, U> { ... } impl<T, U> Trait for Foo<T, U> { ... } fn foo<T, U>(...) { ... } ``` to: ``` forall<T, U> struct Foo { ... } forall<T, U> impl Trait for Foo { ... } forall<T, U> fn foo(...) { ... } ``` These read very well. 
"for all types T and U, there is a struct Foo ...", "for all types T and U, there is a function foo ...", etc. These reflect that there are in fact multiple functions `foo` and structs `Foo` and implementations of `Trait`, due to monomorphization. [0]: http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/ From kevin at sb.org Sat Feb 1 14:54:38 2014 From: kevin at sb.org (Kevin Ballard) Date: Sat, 1 Feb 2014 14:54:38 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <6F10B059-8BB2-419C-90B2-000937616817@sb.org> On Feb 1, 2014, at 2:39 PM, Corey Richardson wrote: > The immediate, and most pragmatic, problem is that in today's Rust one cannot > easily search for implementations of a trait. Why? `grep 'impl Clone'` is > itself not sufficient, since many types have parametric polymorphism. Now I > need to come up with some sort of regex that can handle this. An easy > first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to > type and remember. (Here I ignore the issue of tooling, as I do not find the > argument of "But a tool can do it!" valid in language design.) Putting your other arguments aside, I am not convinced by the grep argument. With the syntax as it is today, I use `grep 'impl.*Clone'` if I want to find Clone impls. Yes, it can match more than just Clone impls. But that's true too even with this change. At the very least, any sort of multiline comment or string can contain text that matches even the most rigorously specified grep. The only way to truly guarantee you're only matching real impls is to actually parse the file with a real parser. -Kevin From bytbox at gmail.com Sat Feb 1 14:53:13 2014 From: bytbox at gmail.com (Scott Lawrence) Date: Sat, 1 Feb 2014 17:53:13 -0500 (EST) Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: May as well throw my 2 cents in. 
This is a pretty nice idea (I've always found 'impl' to be particularly confusing anyway). It does lose a nice property, though. Previously, there was a nice parallelism between struct Foo<T> and let foo: Foo<T> and so the syntax was quite obvious for beginners. The extra complexity of forall kills this. Of course, one could write forall<T: Clone> struct Foo<T: Clone> { but that's just ugly. On Sat, 1 Feb 2014, Corey Richardson wrote: > Hey all, > > bjz and I have worked out a nice proposal[0] for a slight syntax > change, reproduced here. It is a breaking change to the syntax, but it > is one that I think brings many benefits. > > Summary > ======= > > Change the following syntax: > > ``` > struct Foo<T, U> { ... } > impl<T, U> Trait for Foo<T, U> { ... } > fn foo<T, U>(...) { ... } > ``` > > to: > > ``` > forall<T, U> struct Foo { ... } > forall<T, U> impl Trait for Foo { ... } > forall<T, U> fn foo(...) { ... } > ``` > > The Problem > =========== > > The immediate, and most pragmatic, problem is that in today's Rust one cannot > easily search for implementations of a trait. Why? `grep 'impl Clone'` is > itself not sufficient, since many types have parametric polymorphism. Now I > need to come up with some sort of regex that can handle this. An easy > first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to > type and remember. (Here I ignore the issue of tooling, as I do not find the > argument of "But a tool can do it!" valid in language design.) > > A deeper, more pedagogical problem, is the mismatch between how `struct > Foo<...> { ... }` is read and how it is actually treated. The straightforward, > left-to-right reading says "There is a struct Foo which, given the types ... > has the members ...". This might lead one to believe that `Foo` is a single > type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with type > `int`) is not the same type as `Foo<uint>` (that is, type `Foo` instantiated > with type `uint`). 
Of course, with a small amount of experience or a very > simple explanation, that becomes obvious. > > Something less obvious is the treatment of functions. What does `fn > foo<...>(...) { ... }` say? "There is a function foo which, given types ... > and arguments ..., does the following computation: ..." is not very adequate. > It leads one to believe there is a *single* function `foo`, whereas there is > actually a single `foo` for every substitution of type parameters! This also > holds for implementations (both of traits and of inherent methods). > > Another minor problem is that nicely formatting long lists of type parameters > or type parameters with many bounds is difficult. > > Proposed Solution > ================= > > Introduce a new keyword, `forall`. This choice of keyword reads very well and > will not conflict with any identifiers in code which follows the [style > guide](https://github.com/mozilla/rust/wiki/Note-style-guide). > > Change the following declarations from > > ``` > struct Foo { ... } > impl Trait for Foo { ... } > fn foo(...) { ... } > ``` > > to: > > ``` > forall struct Foo { ... } > forall impl Trait for Foo { ... } > forall fn foo(...) { ... } > ``` > > These read very well. "for all types T and U, there is a struct Foo ...", "for > all types T and U, there is a function foo ...", etc. These reflect that there > are in fact multiple functions `foo` and structs `Foo` and implementations of > `Trait`, due to monomorphization. 
> > > [0]: http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/ > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- Scott Lawrence From corey at octayn.net Sat Feb 1 14:54:34 2014 From: corey at octayn.net (Corey Richardson) Date: Sat, 1 Feb 2014 17:54:34 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: <6F10B059-8BB2-419C-90B2-000937616817@sb.org> References: <6F10B059-8BB2-419C-90B2-000937616817@sb.org> Message-ID: On Sat, Feb 1, 2014 at 5:54 PM, Kevin Ballard wrote: > On Feb 1, 2014, at 2:39 PM, Corey Richardson wrote: > >> The immediate, and most pragmatic, problem is that in today's Rust one cannot >> easily search for implementations of a trait. Why? `grep 'impl Clone'` is >> itself not sufficient, since many types have parametric polymorphism. Now I >> need to come up with some sort of regex that can handle this. An easy >> first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite inconvenient to >> type and remember. (Here I ignore the issue of tooling, as I do not find the >> argument of "But a tool can do it!" valid in language design.) > > Putting your other arguments aside, I am not convinced by the grep argument. > With the syntax as it is today, I use `grep 'impl.*Clone'` if I want to find Clone > impls. Yes, it can match more than just Clone impls. But that's true too even with this > change. At the very least, any sort of multiline comment or string can contain text that > matches even the most rigorously specified grep. The only way to truly guarantee you're > only matching real impls is to actually parse the file with a real parser. > Sure. I find the monomorphization and formatting arguments to be more compelling, personally, but I was initially motivated by a failed grep. grep can't find derived implementations, either. 
From ben.striegel at gmail.com Sat Feb 1 14:55:54 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sat, 1 Feb 2014 17:55:54 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: First of all, why a new keyword? Reusing `for` here would be totally unambiguous. :P And also save us from creating the precedent of multi-word keywords. Secondly, currently Rust has a philosophy of use-follows-declaration (i.e. the syntax for using something mirrors the syntax for declaring it). This would eliminate that. Thirdly, I've actually been thinking about something like this for quite a while. The reason is that our function signatures are LOOONG, and I've always thought that it would be great to be able to declare the type parameters above the function, in an attribute or something. But you could just as easily split after your closing > for the same effect. If people are fine with ditching use-follows-declaration, then this could be pretty nice. On Sat, Feb 1, 2014 at 5:39 PM, Corey Richardson wrote: > Hey all, > > bjz and I have worked out a nice proposal[0] for a slight syntax > change, reproduced here. It is a breaking change to the syntax, but it > is one that I think brings many benefits. > > Summary > ======= > > Change the following syntax: > > ``` > struct Foo { ... } > impl Trait for Foo { ... } > fn foo(...) { ... } > ``` > > to: > > ``` > forall struct Foo { ... } > forall impl Trait for Foo { ... } > forall fn foo(...) { ... } > ``` > > The Problem > =========== > > The immediate, and most pragmatic, problem is that in today's Rust one > cannot > easily search for implementations of a trait. Why? `grep 'impl Clone'` is > itself not sufficient, since many types have parametric polymorphism. Now I > need to come up with some sort of regex that can handle this. An easy > first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite > inconvenient to > type and remember. 
(Here I ignore the issue of tooling, as I do not find > the > argument of "But a tool can do it!" valid in language design.) > > A deeper, more pedagogical problem, is the mismatch between how `struct > Foo<...> { ... }` is read and how it is actually treated. The > straightforward, > left-to-right reading says "There is a struct Foo which, given the types > ... > has the members ...". This might lead one to believe that `Foo` is a single > type, but it is not. `Foo` (that is, type `Foo` instantiated with type > `int`) is not the same type as `Foo` (that is, type `Foo` > instantiated > with type `uint`). Of course, with a small amount of experience or a very > simple explanation, that becomes obvious. > > Something less obvious is the treatment of functions. What does `fn > foo<...>(...) { ... }` say? "There is a function foo which, given types ... > and arguments ..., does the following computation: ..." is not very > adequate. > It leads one to believe there is a *single* function `foo`, whereas there > is > actually a single `foo` for every substitution of type parameters! This > also > holds for implementations (both of traits and of inherent methods). > > Another minor problem is that nicely formatting long lists of type > parameters > or type parameters with many bounds is difficult. > > Proposed Solution > ================= > > Introduce a new keyword, `forall`. This choice of keyword reads very well > and > will not conflict with any identifiers in code which follows the [style > guide](https://github.com/mozilla/rust/wiki/Note-style-guide). > > Change the following declarations from > > ``` > struct Foo { ... } > impl Trait for Foo { ... } > fn foo(...) { ... } > ``` > > to: > > ``` > forall struct Foo { ... } > forall impl Trait for Foo { ... } > forall fn foo(...) { ... } > ``` > > These read very well. "for all types T and U, there is a struct Foo ...", > "for > all types T and U, there is a function foo ...", etc. 
These reflect that
> there are in fact multiple functions `foo` and structs `Foo` and
> implementations of `Trait`, due to monomorphization.
>
> [0]:
> http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/

From corey at octayn.net Sat Feb 1 14:56:28 2014
From: corey at octayn.net (Corey Richardson)
Date: Sat, 1 Feb 2014 17:56:28 -0500
Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax
In-Reply-To: References: Message-ID:

On Sat, Feb 1, 2014 at 5:53 PM, Scott Lawrence wrote:
> and so the syntax was quite obvious for beginners. The extra complexity of
> forall kills this. Of course, one could write
>
> forall<T: Clone> struct Foo<T> {
>
> but that's just ugly.

Unrelated, but struct declarations can't have bounds. I don't have a
good way to re-unify the syntax declaration and use..

From corey at octayn.net Sat Feb 1 14:58:34 2014
From: corey at octayn.net (Corey Richardson)
Date: Sat, 1 Feb 2014 17:58:34 -0500
Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax
In-Reply-To: References: Message-ID:

On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel wrote:
> First of all, why a new keyword? Reusing `for` here would be totally
> unambiguous. :P And also save us from creating the precedent of multi-word
> keywords.

I'd be equally happy with for instead of forall.

> Secondly, currently Rust has a philosophy of use-follows-declaration (i.e.
> the syntax for using something mirrors the syntax for declaring it). This
> would eliminate that.

Yes, and I don't have a solution for that.
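The searchability complaint in the proposal is easy to reproduce at a shell. A quick sketch (the sample file path and type names are invented for the demo); note that POSIX extended regexes have no non-greedy `.*?`, so `[^>]*` stands in for the proposal's `<.*?>`:

```shell
# Write a tiny sample file with one plain impl and one generic impl
# (the types are made up purely for this demonstration).
cat > /tmp/sample.rs <<'EOF'
impl Clone for Plain { /* ... */ }
impl<T: Clone> Clone for Wrapper<T> { /* ... */ }
EOF

# The naive search finds only the non-generic impl:
grep -c 'impl Clone' /tmp/sample.rs

# Allowing an optional type-parameter list catches both:
grep -Ec 'impl(<[^>]*>)? Clone' /tmp/sample.rs
```

The second pattern works, but as the proposal says, it is not something most people will type from memory when skimming a codebase.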
From bytbox at gmail.com Sat Feb 1 14:58:33 2014 From: bytbox at gmail.com (Scott Lawrence) Date: Sat, 1 Feb 2014 17:58:33 -0500 (EST) Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sat, 1 Feb 2014, Corey Richardson wrote: > On Sat, Feb 1, 2014 at 5:53 PM, Scott Lawrence wrote: >> and so the syntax was quite obvious for beginners. The extra complexity of >> forall kills this. Of course, one could write >> >> forall struct Foo { >> >> but that's just ugly. >> > > Unrelated but struct declarations can't have bounds. Bah of course. Just trying to point out that the bounds would only need to be used in the first parameter set. I think it's a pretty good idea anyway, if the loss of concision is considered acceptable. > I don't have a > good way to re-unify the syntax declaration and use.. > -- Scott Lawrence From ben.striegel at gmail.com Sat Feb 1 14:59:55 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sat, 1 Feb 2014 17:59:55 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: > Yes, and I don't have a solution for that. Well, it's not like we don't already stumble here a bit, what with requiring ::<> instead of just <>. Not sure how much other people value the consistency here. On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson wrote: > On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel > wrote: > > First of all, why a new keyword? Reusing `for` here would be totally > > unambiguous. :P And also save us from creating the precedent of > multi-word > > keywords. > > > > I'd be equally happy with for instead of forall. > > > Secondly, currently Rust has a philosophy of use-follows-declaration > (i.e. > > the syntax for using something mirrors the syntax for declaring it). This > > would eliminate that. > > > > Yes, and I don't have a solution for that. 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cadencemarseille at gmail.com Sat Feb 1 15:05:38 2014 From: cadencemarseille at gmail.com (Cadence Marseille) Date: Sat, 1 Feb 2014 18:05:38 -0500 Subject: [rust-dev] Replacement for #[link_args] Message-ID: Hello, It seems that support for #[link_args] was recently removed (even with #[feature(link_args)]), so now the -L argument is not being passed to the linker command: https://travis-ci.org/cadencemarseille/rust-pcre/builds/18054206 How do you specify a library directory when building a package with rustpkg? Cadence -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir at slate-project.org Sat Feb 1 15:05:45 2014 From: vladimir at slate-project.org (Vladimir Lushnikov) Date: Sat, 1 Feb 2014 23:05:45 +0000 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: I like the idea (grepability) in theory, but 'for all' to me means that you do *not* monomorphise the type immediately. This is especially obvious when considering single function bounds and not trait bounds (but I guess there are no plans to support HoF so it does not matter anyway). On Sat, Feb 1, 2014 at 10:59 PM, Benjamin Striegel wrote: > > Yes, and I don't have a solution for that. > > Well, it's not like we don't already stumble here a bit, what with > requiring ::<> instead of just <>. Not sure how much other people value the > consistency here. > > > On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson wrote: > >> On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel >> wrote: >> > First of all, why a new keyword? Reusing `for` here would be totally >> > unambiguous. :P And also save us from creating the precedent of >> multi-word >> > keywords. >> > >> >> I'd be equally happy with for instead of forall. >> >> > Secondly, currently Rust has a philosophy of use-follows-declaration >> (i.e. 
>> > the syntax for using something mirrors the syntax for declaring it).
>> > This would eliminate that.
>>
>> Yes, and I don't have a solution for that.

From ben.striegel at gmail.com Sat Feb 1 15:06:06 2014
From: ben.striegel at gmail.com (Benjamin Striegel)
Date: Sat, 1 Feb 2014 18:06:06 -0500
Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax
In-Reply-To: References: Message-ID:

Another point in favor of this plan is that it would eliminate the need to
put type parameters directly after the `impl`, which to be honest *is*
pretty weird and inconsistent with the rest of the language. But I'm still
not sure how I feel about the look of it:

for<T, U> fn foo(t: T, u: U) -> (T, U) {

If you choose *not* to wrap after the type parameters there, you're really
obscuring what the heck you're trying to declare.

Heck, maybe what we're really asking for is the ability to have
"generic blocks" within which type parameters can be declared once:

for<T, U> {
    fn foo(t: T, u: U) -> (T, U) {

...but that's even *more* boilerplate!

On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel wrote:
> > Yes, and I don't have a solution for that.
>
> Well, it's not like we don't already stumble here a bit, what with
> requiring ::<> instead of just <>. Not sure how much other people value
> the consistency here.
>
> On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson wrote:
>
>> On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel wrote:
>> > First of all, why a new keyword? Reusing `for` here would be totally
>> > unambiguous. :P And also save us from creating the precedent of
>> > multi-word keywords.
>>
>> I'd be equally happy with for instead of forall.
>> >> > Secondly, currently Rust has a philosophy of use-follows-declaration >> (i.e. >> > the syntax for using something mirrors the syntax for declaring it). >> This >> > would eliminate that. >> > >> >> Yes, and I don't have a solution for that. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkuehn at cmu.edu Sat Feb 1 15:10:18 2014 From: tkuehn at cmu.edu (Tim Kuehn) Date: Sat, 1 Feb 2014 15:10:18 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sat, Feb 1, 2014 at 3:06 PM, Benjamin Striegel wrote: > Another point in favor of this plan is that it would eliminate the need to > put type parameters directly after the `impl`, which to be honest *is* > pretty weird and inconsistent with the rest of the language. But I'm still > not sure how I feel about the look of it: > > for fn foo(t: T, u: U) -> (T, U) { > > If you choose *not* to wrap after the type parameters there, you're really > obscuring what the heck you're trying to declare. > > Heck, maybe what we're really asking for is for the ability to have > "generic blocks" within which type parameters can be declared once: > > for { > fn foo(t: T, u: U) -> (T, U) { > > ...but that's even *more* boilerplate! > > It'd mirror how impls work, though, which would be nice. It could be optional. > > On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel wrote: > >> > Yes, and I don't have a solution for that. >> >> Well, it's not like we don't already stumble here a bit, what with >> requiring ::<> instead of just <>. Not sure how much other people value the >> consistency here. >> >> >> On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson wrote: >> >>> On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel >>> wrote: >>> > First of all, why a new keyword? Reusing `for` here would be totally >>> > unambiguous. :P And also save us from creating the precedent of >>> multi-word >>> > keywords. 
>>> >
>>>
>>> I'd be equally happy with for instead of forall.
>>>
>>> > Secondly, currently Rust has a philosophy of use-follows-declaration
>>> > (i.e. the syntax for using something mirrors the syntax for declaring
>>> > it). This would eliminate that.
>>>
>>> Yes, and I don't have a solution for that.

From vladimir at slate-project.org Sat Feb 1 15:12:46 2014
From: vladimir at slate-project.org (Vladimir Lushnikov)
Date: Sat, 1 Feb 2014 23:12:46 +0000
Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax
In-Reply-To: References: Message-ID:

Placing type bounds before the name of the thing you are trying to declare
feels unnatural to me. And the generic block is far too much boilerplate!

How about supporting type aliases as Scala does? So you write:

type MyT = Clone + Eq
fn foo<T: MyT, U: MyT>(t: T, u: U) -> ...

and the 'type' is just an alias for any type that has Clone + Eq?

Obviously the above only solves repeating yourself for complex type
constraints.

Also, reusing 'for' would be confusing as well, because you expect a loop
there, not a generic type bound. How about 'any':

any<T, U> fn foo (t: T, u: U) -> ...

?

On Sat, Feb 1, 2014 at 11:06 PM, Benjamin Striegel wrote:
> Another point in favor of this plan is that it would eliminate the need to
> put type parameters directly after the `impl`, which to be honest *is*
> pretty weird and inconsistent with the rest of the language. But I'm still
> not sure how I feel about the look of it:
>
> for<T, U> fn foo(t: T, u: U) -> (T, U) {
>
> If you choose *not* to wrap after the type parameters there, you're really
> obscuring what the heck you're trying to declare.
> > Heck, maybe what we're really asking for is for the ability to have > "generic blocks" within which type parameters can be declared once: > > for { > fn foo(t: T, u: U) -> (T, U) { > > ...but that's even *more* boilerplate! > > > > > On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel wrote: > >> > Yes, and I don't have a solution for that. >> >> Well, it's not like we don't already stumble here a bit, what with >> requiring ::<> instead of just <>. Not sure how much other people value the >> consistency here. >> >> >> On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson wrote: >> >>> On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel >>> wrote: >>> > First of all, why a new keyword? Reusing `for` here would be totally >>> > unambiguous. :P And also save us from creating the precedent of >>> multi-word >>> > keywords. >>> > >>> >>> I'd be equally happy with for instead of forall. >>> >>> > Secondly, currently Rust has a philosophy of use-follows-declaration >>> (i.e. >>> > the syntax for using something mirrors the syntax for declaring it). >>> This >>> > would eliminate that. >>> > >>> >>> Yes, and I don't have a solution for that. >>> >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Sat Feb 1 15:16:09 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sat, 1 Feb 2014 18:16:09 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: > How about supporting type aliases as Scala does? In theory I think that should be achievable today, using trait inheritance: trait MyT : Clone, Eq {} ...at least, I *think* we allow multiple trait inheritance. Not sure what the syntax is! 
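The trait-inheritance "alias" being discussed can be spelled out in full. Below is a sketch in modern Rust syntax (the thread's 2014 syntax differs in details), with `+` between the supertraits and the blanket `impl MyT for T` needed so that every qualifying type picks the alias up automatically; `MyT` and `foo` are the thread's illustrative names, and `PartialEq` substitutes for the thread's `Eq` so plain integers qualify:

```rust
// "Alias" trait: a single name for the bound list `Clone + PartialEq`.
trait MyT: Clone + PartialEq {}

// Blanket impl: any type already satisfying the bounds is MyT,
// so no type has to opt in individually.
impl<T: Clone + PartialEq> MyT for T {}

// The single named bound now replaces the repeated bound list.
fn foo<T: MyT>(t: T) -> (T, T) {
    let copy = t.clone();
    (copy, t)
}

fn main() {
    assert_eq!(foo(7), (7, 7));
    println!("ok");
}
```

Note that this only *names* a bound set; unlike a true alias it introduces a distinct trait, which is exactly why the extra blanket impl is required.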
On Sat, Feb 1, 2014 at 6:12 PM, Vladimir Lushnikov < vladimir at slate-project.org> wrote: > Placing type bounds before the name of the thing you are trying to declare > feels unnatural to me. And the generic block is far too much boilerplate! > > How about supporting type aliases as Scala does? So you write: > > type MyT = Clone + Eq > fn foo(t: T, u: U) -> ... > > and the 'type' is just an alias for any type that has Clone + Eq? > > Obviously the above only solves repeating yourself for complex type > constraints. > > Also, reusing 'for' would be confusing as well, because you expect a loop > there, not a generic type bound. How about 'any': > > any fn foo (t: T, u: U) -> ... > > ? > > > On Sat, Feb 1, 2014 at 11:06 PM, Benjamin Striegel > wrote: > >> Another point in favor of this plan is that it would eliminate the need >> to put type parameters directly after the `impl`, which to be honest *is* >> pretty weird and inconsistent with the rest of the language. But I'm still >> not sure how I feel about the look of it: >> >> for fn foo(t: T, u: U) -> (T, U) { >> >> If you choose *not* to wrap after the type parameters there, you're >> really obscuring what the heck you're trying to declare. >> >> Heck, maybe what we're really asking for is for the ability to have >> "generic blocks" within which type parameters can be declared once: >> >> for { >> fn foo(t: T, u: U) -> (T, U) { >> >> ...but that's even *more* boilerplate! >> >> >> >> >> On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel > > wrote: >> >>> > Yes, and I don't have a solution for that. >>> >>> Well, it's not like we don't already stumble here a bit, what with >>> requiring ::<> instead of just <>. Not sure how much other people value the >>> consistency here. >>> >>> >>> On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson wrote: >>> >>>> On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel >>>> wrote: >>>> > First of all, why a new keyword? Reusing `for` here would be totally >>>> > unambiguous. 
:P And also save us from creating the precedent of >>>> multi-word >>>> > keywords. >>>> > >>>> >>>> I'd be equally happy with for instead of forall. >>>> >>>> > Secondly, currently Rust has a philosophy of use-follows-declaration >>>> (i.e. >>>> > the syntax for using something mirrors the syntax for declaring it). >>>> This >>>> > would eliminate that. >>>> > >>>> >>>> Yes, and I don't have a solution for that. >>>> >>> >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Sat Feb 1 15:17:29 2014 From: corey at octayn.net (Corey Richardson) Date: Sat, 1 Feb 2014 18:17:29 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: Would also need to add an `impl MyT for T`, so we've gone in a circle. On Sat, Feb 1, 2014 at 6:16 PM, Benjamin Striegel wrote: >> How about supporting type aliases as Scala does? > > In theory I think that should be achievable today, using trait inheritance: > > trait MyT : Clone, Eq {} > > ...at least, I *think* we allow multiple trait inheritance. Not sure what > the syntax is! > > > On Sat, Feb 1, 2014 at 6:12 PM, Vladimir Lushnikov > wrote: >> >> Placing type bounds before the name of the thing you are trying to declare >> feels unnatural to me. And the generic block is far too much boilerplate! >> >> How about supporting type aliases as Scala does? So you write: >> >> type MyT = Clone + Eq >> fn foo(t: T, u: U) -> ? >> >> and the 'type' is just an alias for any type that has Clone + Eq? >> >> Obviously the above only solves repeating yourself for complex type >> constraints. >> >> Also, reusing 'for' would be confusing as well, because you expect a loop >> there, not a generic type bound. How about 'any': >> >> any fn foo (t: T, u: U) -> ? >> >> ? 
>> >> >> On Sat, Feb 1, 2014 at 11:06 PM, Benjamin Striegel >> wrote: >>> >>> Another point in favor of this plan is that it would eliminate the need >>> to put type parameters directly after the `impl`, which to be honest *is* >>> pretty weird and inconsistent with the rest of the language. But I'm still >>> not sure how I feel about the look of it: >>> >>> for fn foo(t: T, u: U) -> (T, U) { >>> >>> If you choose *not* to wrap after the type parameters there, you're >>> really obscuring what the heck you're trying to declare. >>> >>> Heck, maybe what we're really asking for is for the ability to have >>> "generic blocks" within which type parameters can be declared once: >>> >>> for { >>> fn foo(t: T, u: U) -> (T, U) { >>> >>> ...but that's even *more* boilerplate! >>> >>> >>> >>> >>> On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel >>> wrote: >>>> >>>> > Yes, and I don't have a solution for that. >>>> >>>> Well, it's not like we don't already stumble here a bit, what with >>>> requiring ::<> instead of just <>. Not sure how much other people value the >>>> consistency here. >>>> >>>> >>>> On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson >>>> wrote: >>>>> >>>>> On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel >>>>> wrote: >>>>> > First of all, why a new keyword? Reusing `for` here would be totally >>>>> > unambiguous. :P And also save us from creating the precedent of >>>>> > multi-word >>>>> > keywords. >>>>> > >>>>> >>>>> I'd be equally happy with for instead of forall. >>>>> >>>>> > Secondly, currently Rust has a philosophy of use-follows-declaration >>>>> > (i.e. >>>>> > the syntax for using something mirrors the syntax for declaring it). >>>>> > This >>>>> > would eliminate that. >>>>> > >>>>> >>>>> Yes, and I don't have a solution for that. 
>>>> >>>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From bytbox at gmail.com Sat Feb 1 15:16:42 2014 From: bytbox at gmail.com (Scott Lawrence) Date: Sat, 1 Feb 2014 18:16:42 -0500 (EST) Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: It seems to use a '+' instead of ','. On Sat, 1 Feb 2014, Benjamin Striegel wrote: >> How about supporting type aliases as Scala does? > > In theory I think that should be achievable today, using trait inheritance: > > trait MyT : Clone, Eq {} > > ...at least, I *think* we allow multiple trait inheritance. Not sure what > the syntax is! > > > On Sat, Feb 1, 2014 at 6:12 PM, Vladimir Lushnikov < > vladimir at slate-project.org> wrote: > >> Placing type bounds before the name of the thing you are trying to declare >> feels unnatural to me. And the generic block is far too much boilerplate! >> >> How about supporting type aliases as Scala does? So you write: >> >> type MyT = Clone + Eq >> fn foo(t: T, u: U) -> ... >> >> and the 'type' is just an alias for any type that has Clone + Eq? >> >> Obviously the above only solves repeating yourself for complex type >> constraints. >> >> Also, reusing 'for' would be confusing as well, because you expect a loop >> there, not a generic type bound. How about 'any': >> >> any fn foo (t: T, u: U) -> ... >> >> ? >> >> >> On Sat, Feb 1, 2014 at 11:06 PM, Benjamin Striegel >> wrote: >> >>> Another point in favor of this plan is that it would eliminate the need >>> to put type parameters directly after the `impl`, which to be honest *is* >>> pretty weird and inconsistent with the rest of the language. 
But I'm still >>> not sure how I feel about the look of it: >>> >>> for fn foo(t: T, u: U) -> (T, U) { >>> >>> If you choose *not* to wrap after the type parameters there, you're >>> really obscuring what the heck you're trying to declare. >>> >>> Heck, maybe what we're really asking for is for the ability to have >>> "generic blocks" within which type parameters can be declared once: >>> >>> for { >>> fn foo(t: T, u: U) -> (T, U) { >>> >>> ...but that's even *more* boilerplate! >>> >>> >>> >>> >>> On Sat, Feb 1, 2014 at 5:59 PM, Benjamin Striegel >>> wrote: >>> >>>>> Yes, and I don't have a solution for that. >>>> >>>> Well, it's not like we don't already stumble here a bit, what with >>>> requiring ::<> instead of just <>. Not sure how much other people value the >>>> consistency here. >>>> >>>> >>>> On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson wrote: >>>> >>>>> On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel >>>>> wrote: >>>>>> First of all, why a new keyword? Reusing `for` here would be totally >>>>>> unambiguous. :P And also save us from creating the precedent of >>>>> multi-word >>>>>> keywords. >>>>>> >>>>> >>>>> I'd be equally happy with for instead of forall. >>>>> >>>>>> Secondly, currently Rust has a philosophy of use-follows-declaration >>>>> (i.e. >>>>>> the syntax for using something mirrors the syntax for declaring it). >>>>> This >>>>>> would eliminate that. >>>>>> >>>>> >>>>> Yes, and I don't have a solution for that. 
>>>>> >>>> >>>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> > -- Scott Lawrence From corey at octayn.net Sat Feb 1 15:18:41 2014 From: corey at octayn.net (Corey Richardson) Date: Sat, 1 Feb 2014 18:18:41 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sat, Feb 1, 2014 at 6:12 PM, Vladimir Lushnikov wrote: > Also, reusing 'for' would be confusing as well, because you expect a loop > there, not a generic type bound. How about 'any': > any is a super useful identifier and is already used. I do not want to reserve it. From ecreed at cs.washington.edu Sat Feb 1 15:24:36 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Sat, 1 Feb 2014 15:24:36 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: Responses inlined. > Hey all, > > bjz and I have worked out a nice proposal[0] for a slight syntax > change, reproduced here. It is a breaking change to the syntax, but it > is one that I think brings many benefits. > > Summary > ======= > > Change the following syntax: > > ``` > struct Foo { ... } > impl Trait for Foo { ... } > fn foo(...) { ... } > ``` > > to: > > ``` > forall struct Foo { ... } > forall impl Trait for Foo { ... } > forall fn foo(...) { ... } > ``` > > The Problem > =========== > > The immediate, and most pragmatic, problem is that in today's Rust one > cannot > easily search for implementations of a trait. Why? `grep 'impl Clone'` is > itself not sufficient, since many types have parametric polymorphism. Now I > need to come up with some sort of regex that can handle this. An easy > first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite > inconvenient to > type and remember. (Here I ignore the issue of tooling, as I do not find > the > argument of "But a tool can do it!" 
valid in language design.)

I think what I've done in the past was just `grep impl | grep Clone`.

> A deeper, more pedagogical problem, is the mismatch between how `struct
> Foo<...> { ... }` is read and how it is actually treated. The
> straightforward, left-to-right reading says "There is a struct Foo which,
> given the types ..., has the members ...". This might lead one to believe
> that `Foo` is a single type, but it is not. `Foo<int>` (that is, type `Foo`
> instantiated with type `int`) is not the same type as `Foo<uint>` (that is,
> type `Foo` instantiated with type `uint`). Of course, with a small amount
> of experience or a very simple explanation, that becomes obvious.

I strongly disagree with this reasoning.
There IS only one type Foo. It's a type constructor with kind * -> * (where
* means proper type).
Foo<int> and Foo<uint> are two different applications of Foo and are proper
types (i.e. *) because Foo is * -> * and both int and uint are *.
Regarding people confusing Foo, Foo<int> and Foo<uint>, I think the proposed
`forall<T> struct Foo {...}` syntax is actually more confusing.
With the current syntax, it's never legal to write Foo without type
parameters, but with the proposed syntax it would be.

> Something less obvious is the treatment of functions. What does `fn
> foo<...>(...) { ... }` say? "There is a function foo which, given types ...
> and arguments ..., does the following computation: ..." is not very
> adequate. It leads one to believe there is a *single* function `foo`,
> whereas there is actually a single `foo` for every substitution of type
> parameters! This also holds for implementations (both of traits and of
> inherent methods).

Again, I strongly disagree here.
There IS only one function foo. Some of its arguments are types. foo's
behavior *does not change* based on the type parameters because of
parametricity.
That the compiler monomorphizes generic functions is just an implementation
detail and doesn't change the semantics of the function.
> Another minor problem is that nicely formatting long lists of type
> parameters or type parameters with many bounds is difficult.

I'm not sure how this proposal would address this problem. All of your
proposed examples are longer than the current syntax equivalents.

> Proposed Solution
> =================
>
> Introduce a new keyword, `forall`. This choice of keyword reads very well
> and will not conflict with any identifiers in code which follows the [style
> guide](https://github.com/mozilla/rust/wiki/Note-style-guide).
>
> Change the following declarations from
>
> ```
> struct Foo<T, U: Trait> { ... }
> impl<T, U: Trait> Trait for Foo<T, U> { ... }
> fn foo<T, U: Trait>(...) { ... }
> ```
>
> to:
>
> ```
> forall<T, U: Trait> struct Foo { ... }
> forall<T, U: Trait> impl Trait for Foo<T, U> { ... }
> forall<T, U: Trait> fn foo(...) { ... }
> ```
>
> These read very well. "for all types T and U, there is a struct Foo ...",
> "for all types T and U, there is a function foo ...", etc. These reflect
> that there are in fact multiple functions `foo` and structs `Foo` and
> implementations of `Trait`, due to monomorphization.

I don't have a preference between the current syntax and your proposed
syntax, but I generally disagree that what you claim is a problem is in fact
a problem, or that your proposal would address it.

From eric.summers at me.com Sat Feb 1 15:24:51 2014
From: eric.summers at me.com (Eric Summers)
Date: Sat, 01 Feb 2014 17:24:51 -0600
Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax
In-Reply-To: References: Message-ID:

> ```
> forall<T, U: Trait> struct Foo { ... }
> forall<T, U: Trait> impl Trait for Foo<T, U> { ... }
> forall<T, U: Trait> fn foo(...) { ... }
> ```

I'm new to Rust, so maybe this doesn't make sense, but would it make sense
to have a variation of this syntax to make implementing related traits and
functions more DRY? Essentially allow the `forall` to be shared.
While I've been skimming code to learn Rust, I noticed trait restrictions
in particular seem to be repeated a lot in functions and traits that are
related to each other.

forall<T> {
    impl BinaryEncoder for MyStruct<T> { ... }
    impl BinaryDecoder for MyStruct<T> { ... }
}

I also like how it breaks across lines:

forall<T, U: Trait>
struct Foo { ... }

It looks like someone else suggested this while I was typing, but I like
the aesthetics of it.

-Eric

From corey at octayn.net Sat Feb 1 15:31:59 2014
From: corey at octayn.net (Corey Richardson)
Date: Sat, 1 Feb 2014 18:31:59 -0500
Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax
In-Reply-To: References: Message-ID:

On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed wrote:
> Responses inlined.
>
>> Hey all,
>>
>> bjz and I have worked out a nice proposal[0] for a slight syntax
>> change, reproduced here. It is a breaking change to the syntax, but it
>> is one that I think brings many benefits.
>>
>> Summary
>> =======
>>
>> Change the following syntax:
>>
>> ```
>> struct Foo<T, U: Trait> { ... }
>> impl<T, U: Trait> Trait for Foo<T, U> { ... }
>> fn foo<T, U: Trait>(...) { ... }
>> ```
>>
>> to:
>>
>> ```
>> forall<T, U: Trait> struct Foo { ... }
>> forall<T, U: Trait> impl Trait for Foo<T, U> { ... }
>> forall<T, U: Trait> fn foo(...) { ... }
>> ```
>>
>> The Problem
>> ===========
>>
>> The immediate, and most pragmatic, problem is that in today's Rust one
>> cannot easily search for implementations of a trait. Why? `grep 'impl
>> Clone'` is itself not sufficient, since many types have parametric
>> polymorphism. Now I need to come up with some sort of regex that can
>> handle this. An easy first attempt is `grep 'impl(<.*?>)? Clone'` but
>> that is quite inconvenient to type and remember. (Here I ignore the
>> issue of tooling, as I do not find the argument of "But a tool can do
>> it!" valid in language design.)
>
> I think what I've done in the past was just `grep impl | grep Clone`.
>
>> A deeper, more pedagogical problem, is the mismatch between how `struct
>> Foo<...> { ...
}` is read and how it is actually treated. The
>> straightforward, left-to-right reading says "There is a struct Foo
>> which, given the types ..., has the members ...". This might lead one to
>> believe that `Foo` is a single type, but it is not. `Foo<int>` (that is,
>> type `Foo` instantiated with type `int`) is not the same type as
>> `Foo<uint>` (that is, type `Foo` instantiated with type `uint`). Of
>> course, with a small amount of experience or a very simple explanation,
>> that becomes obvious.
>
> I strongly disagree with this reasoning.
> There IS only one type Foo. It's a type constructor with kind * -> *
> (where * means proper type).
> Foo<int> and Foo<uint> are two different applications of Foo and are
> proper types (i.e. *) because Foo is * -> * and both int and uint are *.
> Regarding people confusing Foo, Foo<int> and Foo<uint>, I think the
> proposed `forall<T> struct Foo {...}` syntax is actually more confusing.
> With the current syntax, it's never legal to write Foo without type
> parameters, but with the proposed syntax it would be.

I've yet to see a proposal for HKT, but with them that interpretation
would be valid and indeed make this proposal's argument weaker.

>> Something less obvious is the treatment of functions. What does `fn
>> foo<...>(...) { ... }` say? "There is a function foo which, given types
>> ... and arguments ..., does the following computation: ..." is not very
>> adequate. It leads one to believe there is a *single* function `foo`,
>> whereas there is actually a single `foo` for every substitution of type
>> parameters! This also holds for implementations (both of traits and of
>> inherent methods).
>
> Again, I strongly disagree here.
> There IS only one function foo. Some of its arguments are types. foo's
> behavior *does not change* based on the type parameters because of
> parametricity.
> That the compiler monomorphizes generic functions is just an
> implementation detail and doesn't change the semantics of the function.
> It can if it uses Any, size_of, etc. eddyb had "integers in the typesystem" by using size_of and [u8, ..N]. Anything using the "properties" of types or the tydescs *will* change for each instantiation. >> >> Another minor problem is that nicely formatting long lists of type >> parameters >> or type parameters with many bounds is difficult. > > > I'm not sure how this proposal would address this problem. All of your > proposed examples are longer than the current syntax equivalents. > The idea is there is an obvious place to insert a newline (after the forall), though bjz would have to comment more on that. From banderson at mozilla.com Sat Feb 1 15:33:34 2014 From: banderson at mozilla.com (Brian Anderson) Date: Sat, 01 Feb 2014 15:33:34 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <52ED844E.1080404@mozilla.com> On 02/01/2014 02:59 PM, Benjamin Striegel wrote: > > Yes, and I don't have a solution for that. > > Well, it's not like we don't already stumble here a bit, what with > requiring ::<> instead of just <>. Not sure how much other people > value the consistency here. Yeah, the existing solution is bad, and also rare. If changing the declaration might happen then you might as well make another minor change for consistency, possibly for the better. > > > On Sat, Feb 1, 2014 at 5:58 PM, Corey Richardson > wrote: > > On Sat, Feb 1, 2014 at 5:55 PM, Benjamin Striegel > > wrote: > > First of all, why a new keyword? Reusing `for` here would be totally > > unambiguous. :P And also save us from creating the precedent of > multi-word > > keywords. > > > > I'd be equally happy with for instead of forall. > > > Secondly, currently Rust has a philosophy of > use-follows-declaration (i.e. > > the syntax for using something mirrors the syntax for declaring > it). This > > would eliminate that. > > > > Yes, and I don't have a solution for that. 
> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Sat Feb 1 15:36:55 2014 From: corey at octayn.net (Corey Richardson) Date: Sat, 1 Feb 2014 18:36:55 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sat, Feb 1, 2014 at 6:31 PM, Corey Richardson wrote: > On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed wrote: >> Again, I strongly disagree here. >> There IS only one function foo. Some of its arguments are types. foo's >> behavior *does not change* based on the type parameters because of >> parametricity. >> That the compiler monomorphizes generic functions is just an implementation >> detail and doesn't change the semantics of the function. >> > > It can if it uses Any, size_of, etc. eddyb had "integers in the > typesystem" by using size_of and [u8, ..N]. Anything using the > "properties" of types or the tydescs *will* change for each > instantiation. > Furthermore, I don't consider monomorphic instantiation to be an implementation detail. Without it the difference between trait objects and generics is nonsensical, and iirc there's code that depends on the addresses of different instantiations being different (though I might be confusing that with statics). It's also important to understanding the performance characteristics of Rust, esp binary size and why metadata is so huge. It's a vital detail to understanding Rust, and any use of it needs to consider it. If it is indeed considered an implementation detail, it's probably the most important implementation detail I've seen in anything. Given Rust's target market, it'd be irresponsible to ignore it...
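[Editor's note: the generics-versus-trait-objects distinction Corey leans on here can be sketched as follows — a hypothetical illustration in today's Rust syntax, not code from the thread. The generic function is monomorphized per concrete type; the trait-object version is a single function dispatching through a vtable.]

```rust
use std::mem::size_of;

trait Greet {
    fn greet(&self) -> String;
}

struct World;

impl Greet for World {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

// Static dispatch: a fresh copy of this function is generated
// (monomorphized) for every concrete T it is instantiated with.
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Dynamic dispatch: exactly one copy of this function exists; the call
// goes through the trait object's vtable at runtime.
fn greet_dyn(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    let w = World;
    assert_eq!(greet_static(&w), "hello");
    assert_eq!(greet_dyn(&w), "hello");
    // The representations differ too: a plain reference is one pointer,
    // while a trait object reference is a (data, vtable) pair.
    assert_eq!(size_of::<&World>(), size_of::<usize>());
    assert_eq!(size_of::<&dyn Greet>(), 2 * size_of::<usize>());
}
```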
From vadimcn at gmail.com Sat Feb 1 15:42:45 2014 From: vadimcn at gmail.com (Vadim) Date: Sat, 1 Feb 2014 15:42:45 -0800 Subject: [rust-dev] Syntax for custom type bounds In-Reply-To: <20140201125738.GA21688@Mr-Bennet> References: <20140201125738.GA21688@Mr-Bennet> Message-ID: On Sat, Feb 1, 2014 at 4:57 AM, Niko Matsakis wrote: > Regarding the marker types, they are somewhat awkward, and are not the > approach I originally favored. But they have some real advantages: > > - Easily extensible as we add new requirements, unlike syntax. > - Easily documented. > - These bounds are only used for unsafe code, so it's not something > ordinary users should have to stumble over. > > What concerns me more is that marker types are "opt in" -- so if you > don't know that you need them, and you build a datatype founded on > unsafe code, you can get incorrect behavior. There may be some steps > we can take to mitigate that in some cases. > > In any case, the use of marker types are also quite orthogonal to your > other concerns: I meant that marker types seem like more of the same approach that was taken with lifetimes in iterators (i.e. declaring a dummy field). I don't have a firm opinion about what syntax I'd prefer for markers in general, but I do have some ideas about lifetimes, which are probably the most commonly used type bound in Rust. > > This also makes the intent much more clear. Currently, one would have > to > > dig into the definition of MutItems<'a,T> to figure out that the lifetime > > parameter 'a is used to create a dummy borrow field into the vector, so > > that the whole iterator is then treated as a mutable borrow. This feels > > very convoluted, if you ask me. > > I disagree -- I think treating lifetime and type parameters uniformly > feels cleaner than permitting lifetime bounds to appear in random > places. Presumably `'a Foo` would be syntactic sugar for `Foo<'a, T>`? > There's an obvious ambiguity here with `&'a T`. 
> Since &'a Foo currently means "the return value is a reference into something that has lifetime 'a", 'a Foo feels like a natural extension for saying "the return value is a reference-like thing whose safety depends on something that has lifetime 'a still being around". Foo<'a,T>, on the other hand... it is not obvious to me why it would necessarily mean that. Is this because the only way to use a lifetime parameter in a type is to create a reference field into something with that lifetime? If so, it feels like one logical deduction too many for the reader of the code to make. And what if the lifetime parameter isn't used at all? After all, I can do that with regular type parameters (i.e. declare, but not use). Then Foo<'a,T> would only appear as having lifetime 'a, without actually being so? > On a slightly different note, is there a strong reason for having to name > lifetime parameters explicitly? Could we simply re-use actual parameter > names prefixed with ' as their lifetimes? > > It is plausible we could say that a lifetime parameter name that is > the same as a local variable binding whose type is a borrowed pointer > refers to the lifetime of that borrowed pointer. To me, it feels like > a rather ad hoc rule, though I can see it would sometimes be convenient. > > The current rules are intended to showcase how lifetime parameters work > precisely like type parameters. In other words, we write: > > fn copy<T>(t: T) -> T; > > we do not write: > > fn copy(t) -> t; > > In the same way, we identify and declare lifetime parameters. > > Note that lifetime parameters do not have a natural one-to-one > relationship with variables. It's certainly possible (and reasonable) > to declare a function like: > > fn foo<'a, 'b, 'c>(x: &'a Foo<'b, 'c>) > > In which case, the meaning of `'x` is pretty unclear to me. > I'd like it to mean "the lifetime of whatever x points to", i.e. 'x == 'a.
I realize that this is somewhat problematic, because x itself is the reference, not something it points to, but... you know, because auto-dereferencing... :-) > > The above could then be reduced to this: > > > > pub trait MutableVector<T> { > > fn mut_iter(self) -> 'self MutItems<T>; > > ... > > } > > > > This used to be valid syntax, btw, though it worked because 'self > lifetime > > was special, IIRC. > > Writing `'self` was valid syntax, but it didn't have the meaning you > are proposing. Which is one of the reasons we removed it. > I've been around Rust for almost a year now, and certainly since the time the current lifetime notation has been introduced, and I *still* could not explain to somebody why a lifetime parameter appearing among the type parameters of a trait or a struct refers to the lifetime of that trait or struct. It isn't used to declare any reference fields... (and traits can't have fields, of course). I'd understand if the above example had to be written as: pub trait MutableVector<T> { fn mut_iter<'a>(&'a mut self) -> 'a MutItems<T>; ... } But the current notation completely evades me. Regarding 'self: Ok, say what you want about reusing parameter names for lifetimes in general, but having syntax sugar for the lifetime of the current struct was totally worth it, IMHO. We already have sugar for "self", or else we'd be writing "trait Foo { fn method(self:&Foo) ... }", so why not for its lifetime? In my estimation, referencing the current struct constitutes like 90% of all lifetime parameter usage. Back when we had 'self, Rust sources looked way less noisy. Vadim -------------- next part -------------- An HTML attachment was scrubbed...
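[Editor's note: the point Niko and Vadim are circling — that a struct's lifetime parameter exists because the struct holds a borrow, tying its validity to the borrowed data — can be sketched with a minimal stand-in for the `MutItems<'a, T>` iterator under discussion. This is a simplified illustration in today's Rust syntax, not the actual standard-library code.]

```rust
// The 'a parameter is here because the struct holds a borrow into the
// vector; that borrow is what makes the iterator act as a mutable
// borrow of the whole vector for its entire lifetime.
struct MutItems<'a, T> {
    rest: &'a mut [T],
}

impl<'a, T> Iterator for MutItems<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<&'a mut T> {
        // Move the slice out (replacing it with an empty one) so we can
        // split off its head without fighting the borrow checker.
        let rest = std::mem::take(&mut self.rest);
        match rest.split_first_mut() {
            Some((head, tail)) => {
                self.rest = tail;
                Some(head)
            }
            None => None,
        }
    }
}

fn main() {
    let mut v = [1, 2, 3];
    for x in (MutItems { rest: &mut v }) {
        *x += 10;
    }
    // The iterator is gone, so the mutable borrow of v has ended.
    assert_eq!(v, [11, 12, 13]);
}
```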
URL: From comexk at gmail.com Sat Feb 1 16:25:47 2014 From: comexk at gmail.com (comex) Date: Sat, 1 Feb 2014 19:25:47 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sat, Feb 1, 2014 at 5:39 PM, Corey Richardson wrote: > A deeper, more pedagogical problem, is the mismatch between how `struct > Foo<...> { ... }` is read and how it is actually treated. The straightforward, > left-to-right reading says "There is a struct Foo which, given the types ... > has the members ...". I read "struct Foo<...> { ... }" the same way as "fn foo(...) -> ...". In the latter case, given some value parameters, I get a return value; in the former, given some type parameters, I get a struct. On the contrary, I would find the idea that "forall fn" is specified with "fn::<...>", like in C++ (template <...>) relatively confusing. For bulk generics I would rather have the concept of a generic module than just copy the generic arguments onto each item. (For that matter, I think I'd like to have traits on modules or something like that, so that you can have something like a list trait which comes with a type for the list and a type for its iterator, without having to write two generic parameters on everything. But that's a different story.) Also, I think the syntax for generics is verbose enough as it is; I'd rather see it shortened than lengthened. From eric.summers at me.com Sat Feb 1 16:49:26 2014 From: eric.summers at me.com (Eric Summers) Date: Sat, 01 Feb 2014 18:49:26 -0600 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <3B128C57-53F8-4CC7-A4F9-69438A8A2CCE@me.com> > > forall<T> { > impl BinaryEncoder for MyStruct<T> { ... } > impl BinaryDecoder for MyStruct<T> { ... } > } comex mentioned the idea of a generic module. That would be interesting. I like that idea better than this. > > I also like how it breaks across lines: > > forall<T> > struct Foo { > ...
> } > I guess it currently breaks ok for long type params: impl<T, U> Trait for Foo<T, U> { ... } I think the grep issue will be solved by libsyntax being integrated in text editor plugins. -Eric From armin.ronacher at active-4.com Sat Feb 1 17:19:04 2014 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Sun, 02 Feb 2014 01:19:04 +0000 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <52ED9D08.6030905@active-4.com> Hi, On 01/02/2014 22:58, Corey Richardson wrote: > I'd be equally happy with for instead of forall. +1 on not using forall, it sounds confusing and is actually quite a bit to type considering how frequent these things are. As an alternative to "for" I would want to throw "be" and "use" into the mix. "be" is currently already reserved, but not sure how well it sounds. Regards, Armin From lists at ncameron.org Sat Feb 1 17:45:44 2014 From: lists at ncameron.org (Nick Cameron) Date: Sun, 2 Feb 2014 14:45:44 +1300 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: I prefer the existing syntax. - As pointed out above, there are solutions to the non-grep-ability. - The change adds boilerplate and nomenclature that is likely unfamiliar to our target audience - 'for all' is well known to functional programmers, but I believe that is not true for most users of C++ (or Java). Being closer to the C++/Java syntax for generics is probably more 'intuitive'. - I do not think generics imply polymorphic implementation - C++ programmers are used to generics having monomorphic implementations. - Starting an item definition with the most important keyword is nice and we would lose that - currently it is easy to scan down the start of lines and see that something is a fn, impl, struct, etc. With the change you just see that something is generic or not, which is not what you are interested in when scanning.
Put another way, I believe this change prioritises automatic search (grep, which can be fixed) over visual search (which cannot). (I do agree that formatting lists of type params is difficult) Cheers, Nick On Sun, Feb 2, 2014 at 11:39 AM, Corey Richardson wrote: > Hey all, > > bjz and I have worked out a nice proposal[0] for a slight syntax > change, reproduced here. It is a breaking change to the syntax, but it > is one that I think brings many benefits. > > Summary > ======= > > Change the following syntax: > > ``` > struct Foo<T, U> { ... } > impl<T, U> Trait for Foo<T, U> { ... } > fn foo<T, U>(...) { ... } > ``` > > to: > > ``` > forall<T, U> struct Foo { ... } > forall<T, U> impl Trait for Foo { ... } > forall<T, U> fn foo(...) { ... } > ``` > > The Problem > =========== > > The immediate, and most pragmatic, problem is that in today's Rust one > cannot > easily search for implementations of a trait. Why? `grep 'impl Clone'` is > itself not sufficient, since many types have parametric polymorphism. Now I > need to come up with some sort of regex that can handle this. An easy > first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite > inconvenient to > type and remember. (Here I ignore the issue of tooling, as I do not find > the > argument of "But a tool can do it!" valid in language design.) > > A deeper, more pedagogical problem, is the mismatch between how `struct > Foo<...> { ... }` is read and how it is actually treated. The > straightforward, > left-to-right reading says "There is a struct Foo which, given the types > ... > has the members ...". This might lead one to believe that `Foo` is a single > type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with type > `int`) is not the same type as `Foo<uint>` (that is, type `Foo` > instantiated > with type `uint`). Of course, with a small amount of experience or a very > simple explanation, that becomes obvious. > > Something less obvious is the treatment of functions. What does `fn > foo<...>(...) { ... }` say?
"There is a function foo which, given types ... > and arguments ..., does the following computation: ..." is not very > adequate. > It leads one to believe there is a *single* function `foo`, whereas there > is > actually a single `foo` for every substitution of type parameters! This > also > holds for implementations (both of traits and of inherent methods). > > Another minor problem is that nicely formatting long lists of type > parameters > or type parameters with many bounds is difficult. > > Proposed Solution > ================= > > Introduce a new keyword, `forall`. This choice of keyword reads very well > and > will not conflict with any identifiers in code which follows the [style > guide](https://github.com/mozilla/rust/wiki/Note-style-guide). > > Change the following declarations from > > ``` > struct Foo<T, U> { ... } > impl<T, U> Trait for Foo<T, U> { ... } > fn foo<T, U>(...) { ... } > ``` > > to: > > ``` > forall<T, U> struct Foo { ... } > forall<T, U> impl Trait for Foo { ... } > forall<T, U> fn foo(...) { ... } > ``` > > These read very well. "for all types T and U, there is a struct Foo ...", > "for > all types T and U, there is a function foo ...", etc. These reflect that > there > are in fact multiple functions `foo` and structs `Foo` and implementations > of > `Trait`, due to monomorphization. > > > [0]: > http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/ > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Sat Feb 1 18:23:12 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Sat, 1 Feb 2014 18:23:12 -0800 Subject: [rust-dev] "let mut" <-> "let !" In-Reply-To: References: Message-ID: We've been steadily reducing the amount of punctuation in the language, because people tend not to like it.
Plus, in this case, `mut` being longer than `!` or any other symbol is useful: mutability should be a teeny bit painful. From ecreed at cs.washington.edu Sat Feb 1 18:27:54 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Sat, 1 Feb 2014 18:27:54 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: I'm going to respond to Any and size_of separately because there's a significant difference IMO. It's true that Any and trait bounds on type parameters in general can let function behavior depend on the passed type, but only in the specific behavior defined by the trait. Everything that's not a trait function is still independent of the passed type (contrast this with a setup where this wasn't true. `fn foo<A>() -> int` could return 2i for int and spin up a tetris game then crash for uint). Any just happens to be powerful enough to allow complete variance, which is expected since it's just dynamic typing, but there's an important distinction still: behavior variance because of Any *is* part of the function because you need to do explicit type tests. I wasn't aware of mem::size_of before, but I'm rather annoyed to find out we've started adding bare A -> B functions since it breaks parametricity. I'd much rather put size_of in a trait, at which point it's just a weaker version of Any. Being able to tell how a function's behavior might vary just from the type signature is a very nice property, and I'd like Rust to keep it. Now, onto monomorphization. I agree that distinguishing static and dynamic dispatch is important for performance characterization, but static dispatch != monomorphization (or if it currently does, then it probably shouldn't) because not all statically dispatched code needs to be monomorphized.
Consider a function like this: fn foo<A, B>(ox: Option<~A>, f: |~A| -> ~B) -> Option<~B> { match ox { Some(x) => Some(f(x)), None => None, } } It's quite generic, but AFAIK there's no need to monomorphize it for static dispatch. It uses a constant amount of stack space (not counting what `f` uses when called) and could run the exact same code for any types A or B (check discriminant, potentially call a function pointer, and return). I would guess most cases require monomorphization, but I consider universal monomorphization a way of implementing static dispatch (as opposed to partial monomorphization). I agree that understanding monomorphization is important for understanding the performance characteristics of code generated by *rustc*, but rustc != Rust. Unless universal monomorphization for static dispatch makes its way into the Rust language spec, I'm going to consider it an implementation detail for rustc. On Sat, Feb 1, 2014 at 3:31 PM, Corey Richardson wrote: > On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed > wrote: > > Responses inlined. > > > >> > >> Hey all, > >> > >> bjz and I have worked out a nice proposal[0] for a slight syntax > >> change, reproduced here. It is a breaking change to the syntax, but it > >> is one that I think brings many benefits. > >> > >> Summary > >> ======= > >> > >> Change the following syntax: > >> > >> ``` > >> struct Foo<T, U> { ... } > >> impl<T, U> Trait for Foo<T, U> { ... } > >> fn foo<T, U>(...) { ... } > >> ``` > >> > >> to: > >> > >> ``` > >> forall<T, U> struct Foo { ... } > >> forall<T, U> impl Trait for Foo { ... } > >> forall<T, U> fn foo(...) { ... } > >> ``` > >> > >> The Problem > >> =========== > >> > >> The immediate, and most pragmatic, problem is that in today's Rust one > >> cannot > >> easily search for implementations of a trait. Why? `grep 'impl Clone'` > is > >> itself not sufficient, since many types have parametric polymorphism. > Now > >> I > >> need to come up with some sort of regex that can handle this.
An easy > >> first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite > >> inconvenient to > >> type and remember. (Here I ignore the issue of tooling, as I do not find > >> the > >> argument of "But a tool can do it!" valid in language design.) > > > > > > I think what I've done in the past was just `grep impl | grep Clone`. > > > >> > >> A deeper, more pedagogical problem, is the mismatch between how `struct > >> Foo<...> { ... }` is read and how it is actually treated. The > >> straightforward, > >> left-to-right reading says "There is a struct Foo which, given the types > >> ... > >> has the members ...". This might lead one to believe that `Foo` is a > >> single > >> type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with > >> type > >> `int`) is not the same type as `Foo<uint>` (that is, type `Foo` > >> instantiated > >> with type `uint`). Of course, with a small amount of experience or a > very > >> simple explanation, that becomes obvious. > > > > > > I strongly disagree with this reasoning. > > There IS only one type Foo. It's a type constructor with kind * -> * > (where > > * means proper type). > > Foo<int> and Foo<uint> are two different applications of Foo and are > proper > > types (i.e. *) because Foo is * -> * and both int and uint are *. > > Regarding people confusing Foo, Foo<int> and Foo<uint>, I think the > proposed > > forall<T> struct Foo {...} syntax is actually more confusing. > > With the current syntax, it's never legal to write Foo without type > > parameters, but with the proposed syntax it would be. > > > > I've yet to see a proposal for HKT, but with them that interpretation > would be valid and indeed make this proposal's argument weaker. > > >> > >> Something less obvious is the treatment of functions. What does `fn > >> foo<...>(...) { ... }` say? "There is a function foo which, given types > >> ... > >> and arguments ..., does the following computation: ..." is not very > >> adequate.
> >> It leads one to believe there is a *single* function `foo`, whereas > there > >> is > >> actually a single `foo` for every substitution of type parameters! This > >> also > >> holds for implementations (both of traits and of inherent methods). > > > > > > Again, I strongly disagree here. > > There IS only one function foo. Some of its arguments are types. foo's > > behavior *does not change* based on the type parameters because of > > parametricity. > > That the compiler monomorphizes generic functions is just an > implementation > > detail and doesn't change the semantics of the function. > > > > It can if it uses Any, size_of, etc. eddyb had "integers in the > typesystem" by using size_of and [u8, ..N]. Anything using the > "properties" of types or the tydescs *will* change for each > instantiation. > > >> > >> Another minor problem is that nicely formatting long lists of type > >> parameters > >> or type parameters with many bounds is difficult. > > > > > > I'm not sure how this proposal would address this problem. All of your > > proposed examples are longer than the current syntax equivalents. > > > > The idea is there is an obvious place to insert a newline (after the > forall), though bjz would have to comment more on that. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at metajack.im Sat Feb 1 18:40:13 2014 From: jack at metajack.im (Jack Moffitt) Date: Sat, 1 Feb 2014 19:40:13 -0700 Subject: [rust-dev] Replacement for #[link_args] In-Reply-To: References: Message-ID: This might be a recent regression of rustpkg, but rustpkg should pass through compiler options you give it. `rustpkg install foo -L some/path` I think is supposed to work. If nothing else, rustc will definitely take -L arguments. Also, in your crate source you want to annotate your extern block with `#[link(name=...)]`. jack.
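[Editor's note: the `#[link(name=...)]` annotation Jack mentions can be sketched as below — a minimal hypothetical example, assuming a Unix-like system where the C math library is available as libm. It declares where the extern block's symbols come from, so no linker flags are needed for this case.]

```rust
// Declare that the symbols in this extern block live in the system math
// library (libm; assumed present on a Unix-like host).
#[link(name = "m")]
extern "C" {
    fn cos(x: f64) -> f64;
    fn sqrt(x: f64) -> f64;
}

fn main() {
    // Calling into a foreign library is unsafe: the compiler cannot verify
    // the declared signatures against the library's actual symbols.
    let (c, s) = unsafe { (cos(0.0), sqrt(9.0)) };
    assert_eq!(c, 1.0);
    assert_eq!(s, 3.0);
}
```

For libraries outside the default search path, a `-L` flag to rustc (as discussed above) tells the linker where to find them.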
On Sat, Feb 1, 2014 at 4:05 PM, Cadence Marseille wrote: > Hello, > > It seems that support for #[link_args] was recently removed (even with > #[feature(link_args)]), so now the -L argument is not being passed to the > linker command: > https://travis-ci.org/cadencemarseille/rust-pcre/builds/18054206 > > How do you specify a library directory when building a package with rustpkg? > > Cadence > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From danielmicay at gmail.com Sat Feb 1 18:43:36 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 1 Feb 2014 21:43:36 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed wrote: > > I wasn't aware of mem::size_of before, but I'm rather annoyed to find out > we've started adding bare A -> B functions since it breaks parametricity. > I'd much rather put size_of in a trait, at which point it's just a weaker > version of Any. You do realize how widely used size_of is, right? I don't think it makes sense to say we've *started* adding this stuff when being able to get the size/alignment has pretty much always been there. From pcwalton at mozilla.com Sat Feb 1 18:46:28 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Sat, 01 Feb 2014 18:46:28 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <52EDB184.7020702@mozilla.com> On 2/1/14 6:43 PM, Daniel Micay wrote: > On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed wrote: >> >> I wasn't aware of mem::size_of before, but I'm rather annoyed to find out >> we've started adding bare A -> B functions since it breaks parametricity. >> I'd much rather put size_of in a trait, at which point it's just a weaker >> version of Any. > > You do realize how widely used size_of is, right?
I don't think it > makes sense to say we've *started* adding this stuff when being able > to get the size/alignment has pretty much always been there. `transmute()` breaks parametricity too, which is annoying to me because you can get C++-template-expansion-style errors in translation time ("transmute called on types of different sizes"). I proposed changing it to a dynamic runtime failure if the types had different sizes, which eliminates ad-hoc templates leaking into our trait system, but that met with extremely strong objections from pretty much everyone. Patrick From danielmicay at gmail.com Sat Feb 1 18:50:35 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 1 Feb 2014 21:50:35 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: <52EDB184.7020702@mozilla.com> References: <52EDB184.7020702@mozilla.com> Message-ID: On Sat, Feb 1, 2014 at 9:46 PM, Patrick Walton wrote: > On 2/1/14 6:43 PM, Daniel Micay wrote: >> >> On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed >> wrote: >>> >>> >>> I wasn't aware of mem::size_of before, but I'm rather annoyed to find out >>> we've started adding bare A -> B functions since it breaks parametricity. >>> I'd much rather put size_of in a trait, at which point it's just a weaker >>> version of Any. >> >> >> You do realize how widely used size_of is, right? I don't think it >> makes sense to say we've *started* adding this stuff when being able >> to get the size/alignment has pretty much always been there. > > > `transmute()` breaks parametricity too, which is annoying to me because you > can get C++-template-expansion-style errors in translation time ("transmute > called on types of different sizes"). I proposed changing it to a dynamic > runtime failure if the types had different sizes, which eliminates ad-hoc > templates leaking into our trait system, but that met with extremely strong > objections from pretty much everyone.
> > Patrick This could be restricted to `unsafe` code by making reflection features `unsafe` and mandating that safe functions must compile for types meeting the bounds. The `size_of` functionality is absolutely required to have any hope of writing smart pointers and containers in the library, without using ~T and ~[T] as the sole allocators. From pcwalton at mozilla.com Sat Feb 1 18:51:47 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Sat, 01 Feb 2014 18:51:47 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: <52EDB184.7020702@mozilla.com> Message-ID: <52EDB2C3.4020600@mozilla.com> On 2/1/14 6:50 PM, Daniel Micay wrote: > This could be restricted to `unsafe` code by making reflection > features `unsafe` and mandating that safe functions must compile for > types meeting the bounds. > > The `size_of` functionality is absolutely required to have any hope of > writing smart pointers and containers in the library, without using ~T > and ~[T] as the sole allocators. Oh, don't worry, I'm not proposing removing either transmute or sizeof. Just saying it bugs the theorist in me. :) Patrick From ecreed at cs.washington.edu Sat Feb 1 19:12:25 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Sat, 1 Feb 2014 19:12:25 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: Well there's only 260 uses of the string "size_of" in rustc's src/ according to grep and only 3 uses of "size_of" in servo according to GitHub, so I think you may be overestimating its usage. Either way, I'm not proposing we get rid of size_of. I just think we should put it in an automatically derived trait instead of defining a function on all types. 
Literally the only thing that would change would be code like this: fn foo<T>(t: T) { let size = mem::size_of(t); } would have to be changed to: fn foo<T: SizeOf>(t: T) { let size = SizeOf::size_of(t); // or t.size_of() } Is that really so bad? Now the function's type signature documents that the function's behavior depends on the size of the type. If you see a signature like `fn foo<T>(t: T)`, then you know that it doesn't. There's no additional performance overhead and it makes size_of like other intrinsic operators (+, ==, etc.). I seriously don't see what downside this could possibly have. On Sat, Feb 1, 2014 at 6:43 PM, Daniel Micay wrote: > On Sat, Feb 1, 2014 at 9:27 PM, Eric Reed > wrote: > > > > I wasn't aware of mem::size_of before, but I'm rather annoyed to find out > > we've started adding bare A -> B functions since it breaks parametricity. > > I'd much rather put size_of in a trait, at which point it's just a weaker > > version of Any. > > You do realize how widely used size_of is, right? I don't think it > makes sense to say we've *started* adding this stuff when being able > to get the size/alignment has pretty much always been there. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Sat Feb 1 19:12:28 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 1 Feb 2014 22:12:28 -0500 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <20140202012845.47f9e66c@lightyear> References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> <20140201233209.7ffa7b5a@lightyear> <52ED5301.7010404@gmail.com> <20140202012845.47f9e66c@lightyear> Message-ID: On Sat, Feb 1, 2014 at 4:28 PM, Vladimir Matveev > > Well, it seems that working for a long time with a code targeting virtual > machines is corrupting :) I completely forgot about different models of > compilation. I see your point.
But I think that developing and distributing > should be considered separately. Package manager for developers should be a > part of language infrastructure (like rustpkg is now for Rust and, for example, > go tool for Go language or cabal for Haskell). This package manager allows > flexible management of Rust libraries and their dependencies, and it should be > integrated with the build system (or *be* this build system). It is used by > developers to create applications and libraries and by maintainers to prepare > these applications and libraries for integration with the distribution system > for end users. How will it handle external dependencies? > Package manager for general users (I'll call it system package manager), > however, depends on the OS, and it is maintainer's task to determine correct > dependencies for each package. Rust package manager should not depend in any > way on the system package manager and its packages, because each system has its > own package manager, and it is just impossible to support them all. Rust also > should not force usage of concrete user-level package manager (like 0install, > for example), because this means additional unrelated software on the user > installation. I don't understand this. A package manager specific to Rust is additional software, just like 0install. 0install has full support for installing dependencies via the system package manager on many systems if desired. http://0install.net/distribution-integration.html From danielmicay at gmail.com Sat Feb 1 19:18:06 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sat, 1 Feb 2014 22:18:06 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sat, Feb 1, 2014 at 10:12 PM, Eric Reed wrote: > Well there's only 260 uses of the string "size_of" in rustc's src/ according > to grep and only 3 uses of "size_of" in servo according to GitHub, so I > think you may be overestimating its usage. 
The number of calls to `size_of` isn't a useful metric. It's the building block required to allocate memory (vectors, unique pointers) and in the slice iterators (to perform pointer arithmetic). If it requires a bound, then so will any code using a slice iterator.

> Either way, I'm not proposing we get rid of size_of. I just think we should
> put it in an automatically derived trait instead of defining a function on
> all types.
> Literally the only thing that would change would be code like this:
>
> fn foo<T>(t: T) {
>     let size = mem::size_of(t);
> }
>
> would have to be changed to:
>
> fn foo<T: SizeOf>(t: T) {
>     let size = SizeOf::size_of(t); // or t.size_of()
> }
>
> Is that really so bad?

Yes, it is.

> Now the function's type signature documents that the function's behavior
> depends on the size of the type.
> If you see a signature like `fn foo<T>(t: T)`, then you know that it
> doesn't.
> There's no additional performance overhead and it makes size_of like other
> intrinsic operators (+, ==, etc.).

The operators are not implemented for every type as they are for `size_of`.

> I seriously don't see what downside this could possibly have.

Using unique pointers, vectors and even slice iterators will require a semantically irrelevant `SizeOf` bound. Whether or not you allocate a unique pointer to store a value internally shouldn't be part of the function signature. From dbau.pp at gmail.com Sat Feb 1 19:22:13 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Sun, 02 Feb 2014 14:22:13 +1100 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <52EDB9E5.80807@gmail.com> On 02/02/14 14:18, Daniel Micay wrote: > On Sat, Feb 1, 2014 at 10:12 PM, Eric Reed wrote: >> Well there's only 260 uses of the string "size_of" in rustc's src/ according >> to grep and only 3 uses of "size_of" in servo according to GitHub, so I >> think you may be overestimating its usage. > The number of calls to `size_of` isn't a useful metric.
> It's the
> building block required to allocate memory (vectors, unique pointers)
> and in the slice iterators (to perform pointer arithmetic). If it
> requires a bound, then so will any code using a slice iterator.
>
>> Either way, I'm not proposing we get rid of size_of. I just think we should
>> put it in an automatically derived trait instead of defining a function on
>> all types.
>> Literally the only thing that would change would be code like this:
>>
>> fn foo<T>(t: T) {
>>     let size = mem::size_of(t);
>> }
>>
>> would have to be changed to:
>>
>> fn foo<T: SizeOf>(t: T) {
>>     let size = SizeOf::size_of(t); // or t.size_of()
>> }
>>
>> Is that really so bad?
> Yes, it is.
>
>> Now the function's type signature documents that the function's behavior
>> depends on the size of the type.
>> If you see a signature like `fn foo<T>(t: T)`, then you know that it
>> doesn't.
>> There's no additional performance overhead and it makes size_of like other
>> intrinsic operators (+, ==, etc.).
> The operators are not implemented for every type as they are for `size_of`.
>
>> I seriously don't see what downside this could possibly have.
> Using unique pointers, vectors and even slice iterators will require a
> semantically irrelevant `SizeOf` bound. Whether or not you allocate a
> unique pointer to store a value internally shouldn't be part of the
> function signature.
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev

To add to this, a SizeOf bound would be essentially equivalent to the Sized bound from DST, and I believe experimentation a while ago decided that requiring Sized is the common case (or, at least, so common that it would be extremely annoying to require it be explicit).
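Daniel's claim that `size_of` is the building block for slice iteration can be sketched with raw-pointer arithmetic (a simplified illustration in modern Rust syntax, not the actual libcore iterator code):

```rust
use std::mem;

// Stepping to the next element of a slice means advancing the pointer
// by size_of::<T>() bytes — this is what slice iterators do internally.
unsafe fn next_elem<T>(p: *const T) -> *const T {
    (p as usize + mem::size_of::<T>()) as *const T
}

fn main() {
    let xs = [10i32, 20, 30];
    let mut p = xs.as_ptr();
    let mut sum = 0;
    for _ in 0..xs.len() {
        unsafe {
            sum += *p;
            p = next_elem(p); // after the last element this points one-past-end, never dereferenced
        }
    }
    assert_eq!(sum, 60);
}
```

Huon's observation held up historically: `Sized` became an implicit default bound on type parameters in Rust, with `?Sized` as the explicit opt-out.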
Huon From dpx.infinity at gmail.com Sat Feb 1 23:20:16 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Sun, 2 Feb 2014 11:20:16 +0400 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> <20140201233209.7ffa7b5a@lightyear> <52ED5301.7010404@gmail.com> <20140202012845.47f9e66c@lightyear> Message-ID: > How will it handle external dependencies? I don't think it should. External dependencies are way too complex. They come in different flavors on different systems. On Windows, for example, you don't have a package manager, and you'll have to ship these dependencies with the program using an installer. On each Linux distro there is custom package manager, each having its own strategy of naming things and its own versioning policy. It is impossible to unify them, and I don't think that Rust package manager should attempt to do this. > I don't understand this. A package manager specific to Rust is > additional software, just like 0install. 0install has full support for > installing dependencies via the system package manager on many systems > if desired. *End users* won't need Rust package manager at all (unless they want to install development versions of Rust software). Only package maintainers and developers have to use it. End users just use their native package manager to obtain packages created by maintainers. If Rust would depend on zero install, however, end user will be *forced* to use zero install. I'm against of using zero install for the following reasons. First, it is just a packaging system. It is not supposed to help in building Rust software. But resolving build dependencies and invoking the compiler with correct paths to installed dependencies is crucial. How, for example, zero install would handle dependency to master branch of some source repository? 
What if I'm developing several packages which depend on different versions of the same package? Zero install allows installing multiple versions of the same package, yes, but how should I specify where these libraries are located to the compiler? How should I specify build dependencies for people who want to hack on my package? Majority of direct dependencies will be from the Rust world, and dedicated building/packaging tool would be able to download and build them automatically as a part of build process, and only external dependencies would have to be installed manually. With zero install you will have to install everything, including Rust-world dependencies, by yourself. Second, it is another package manager which is foreign to the system (unless the system uses zero install as its package manager, but I think only very minor Linux distros have that). Not only this is bad because having multiple distribution systems is confusing for the end users (BTW, as far as I can see, both Linux and Windows users don't want it. Linux users wouldn't want to have additional package manager, and majority of Windows users just don't know what package manager is and why they have to install some program which downloads other programs just in order for them to use this small utility. They are used to one-click self-contained installers. On Android you are forced to have self-contained packages, and zero install won't work there at all. Don't know anything about Mac, haven't used it). It also means additional impact on distribution maintainers. If Rust adopts zero install universally, then, because distribution maintainers won't support any build system but their own, they will have either to abandon Rust software at all or build and resolve their dependencies (including Rust ones) manually, as it is done with C/C++ now. They won't be able to use zero install because they simply can't depend on it. I think this will hurt Rust adoption a lot. 
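The thread's recurring point about constraint-based version resolution (semver ranges rather than exact pins) can be sketched minimally; the `Version` type and the caret-style compatibility rule here are illustrative assumptions, not any real tool's API:

```rust
// Hypothetical semver-style compatibility check: a requirement like
// "compatible with 1.4" accepts any 1.x at or above 1.4, but not 2.x.
#[derive(Clone, Copy)]
struct Version {
    major: u32,
    minor: u32,
    patch: u32,
}

fn compatible(req: Version, candidate: Version) -> bool {
    // Same major version, and at least the requested minor/patch.
    candidate.major == req.major
        && (candidate.minor, candidate.patch) >= (req.minor, req.patch)
}

fn main() {
    let req = Version { major: 1, minor: 4, patch: 0 };
    // A security fix in 1.9.2 is still accepted, unlike with an exact pin.
    assert!(compatible(req, Version { major: 1, minor: 9, patch: 2 }));
    // A breaking 2.0.0 release is rejected.
    assert!(!compatible(req, Version { major: 2, minor: 0, patch: 0 }));
}
```

This is the contrast Tony draws earlier in the thread: a resolver matching ranges can pick up fixes, while exact pins cannot.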
Zero install may have integration with package systems, but looks like it is very brittle. According to [this page](http://0install.net/distribution-integration.html) it is package owner's duty to specify how native package dependencies should be resolved in each distribution. This is extremely fragile. I don't use Debian, for example, how would I know how my dependency is called there? When package owners won't write these mappings for every distribution, then this integration immediately becomes completely pointless. Also I'm not sure that zero install can integrate with every package manager it supports without additional tools, which means even more unneeded dependencies. Zero install package base is also very small. Distributions usually provide a lot more packages. Then, if zero install is the only way Rust software is packaged, I just won't be able to use these libraries unless someone, possibly myself, writes a zero install package definition for it. And this is extremely bad - I don't want to resolve transitive dependencies for my external dependencies, this is already done by my distribution maintainers. Having custom flexible build and packaging system which is used by developers and maintainers is a separation of concerns with very clean interfaces between software developers and software users. I believe that it should be created in order for Rust to be adopted at large, and rustpkg provides solid base for it. From ben at 0x539.de Sun Feb 2 01:30:51 2014 From: ben at 0x539.de (Benjamin Herr) Date: Sun, 02 Feb 2014 10:30:51 +0100 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <1391333451.2966.30.camel@vigil> On Sun, 2014-02-02 at 14:45 +1300, Nick Cameron wrote: > - The change adds boilerplate and nomenclature that is likely > unfamiliar to our target audience - 'for all' is well known to > functional programmers, but I believe that is not true for most users > of C++ (or Java). 
> Being closer to the C++/Java syntax for generics is
> probably more 'intuitive'.

So instead of 'for all', use 'template', and we're closer to C++ syntax than ever!

template<T, U> struct Foo { ... }
template<T, U> impl Trait for Foo { ... }
template<T, U> fn foo(...) { ... }

;-)

fwiw, like C++, Java generic methods also put the <> type parameter list in front of the function signature rather than behind the function name:

public static <T extends Comparable<T>> int countGreaterThan(T[] anArray, T elem) { ... }

... while C# apparently compromises and puts the type parameters between the function name and value parameter list, but leaves the bounds for later:

public static bool Contains<T>(IEnumerable<T> collection, T item) where T : IComparable<T>;

Neither approach translates too well into Rust, but that Rust is almost the odd one out here makes me sympathetic to the desire to avoid breaking up particularly function declarations between the name and the value parameter list too much, in spite of the familiarity argument. Of course, like everything else, that has to be balanced with avoiding superficial but far-reaching overhauls of the language at the eleventh hour. Alas! -benh From jurily at gmail.com Sun Feb 2 05:50:22 2014 From: jurily at gmail.com (=?ISO-8859-1?Q?Gy=F6rgy_Andrasek?=) Date: Sun, 02 Feb 2014 14:50:22 +0100 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: <52EE4D1E.1050504@gmail.com> On 02/01/2014 11:39 PM, Corey Richardson wrote:

> ```
> forall<T, U> struct Foo { ... }
> forall<T, U> impl Trait for Foo { ... }
> forall<T, U> fn foo(...) { ... }
> ```

Why not

```
fn foo: <T, U> pub unsafe => (f: |T| -> U, arg: T) -> U { f(arg) }
struct foo: <T> => { ... }
impl Foo: <T> => Trait { ... }
```

Can we please not put more stuff in front of the identifier? Have we run out of space after it? What will this look like once the types start looking like `template <...> class hashed_index`? Do we really need all that in front of a function name?
> The immediate, and most pragmatic, problem is that in today's Rust one cannot
> easily search for implementations of a trait.

Why? Because the only existing Rust parser is geared towards a native Rust compiler, not a non-Rust IDE. That's the problem, not `grep`, and syntax changes won't help unless you want to redesign the language to be completely regex-compatible. Write a C-bindable parser, plug it into `ack`, `ctags` and a couple of IDEs, and everyone will be happy.

> (Here I ignore the issue of tooling, as I do not find
> the argument of "But a tool can do it!" valid in language design.)

Then why have such a parsing-oriented grammar in the first place? Following this logic, Rust should look more like Haskell, with a Hello Kitty binop `(=^.^=)`. (I swear I've seen this in real code somewhere) From jfager at gmail.com Sun Feb 2 05:55:30 2014 From: jfager at gmail.com (Jason Fager) Date: Sun, 2 Feb 2014 08:55:30 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: I'm not a huge fan of this proposal. It makes declarations longer, and it removes the visual consistency of Foo<T> everywhere, which I think introduces its own pedagogical issue. The recent addition of default type parameters, though, makes me think there's a reasonable change that increases consistency and shortens declarations in a few common cases. From what I understand, the reason we can't just have

impl Trait for Foo<T, U>

is because it's ambiguous whether T and U are intended to be concrete or generic type names; i.e.,

impl<T> Trait for Foo<T, U>

tells the compiler that we expect U to be a concrete type name. Our new default type parameter declarations look like:

struct Foo<T, U = Bar>

So what if to actually make generic types concrete, we always used the '='?

struct Foo<T, U>
impl Trait for Foo<T, U = Bar>

This saves a character over 'impl<T> Trait for Foo<T, Bar>', solves the greppability problem, and makes intuitive sense given how defaults are declared.
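The default type parameters Jason builds on did land in Rust for type definitions; a small sketch in modern syntax (`Pair` is an illustrative name, not from the thread):

```rust
// A default type parameter is used when the parameter is omitted at the
// use site, and can still be overridden explicitly.
#[derive(Debug, PartialEq)]
struct Pair<A, B = A> {
    first: A,
    second: B,
}

fn main() {
    // B defaults to A, so `Pair<i32>` means `Pair<i32, i32>`.
    let p: Pair<i32> = Pair { first: 1, second: 2 };
    assert_eq!(p.second, 2);
    // The default can be overridden.
    let q: Pair<i32, &str> = Pair { first: 1, second: "two" };
    assert_eq!(q.second, "two");
}
```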
It also has a nice parallel with how ':' is used - ':' adds restrictions, '=' fully locks in place. So what is today something like

impl<T: Clone> Trait for Foo<T, Bar>

would become

impl Trait for Foo<T: Clone, U = Bar>

The rule would be that the first use of a type variable T would introduce its bounds, so for instance:

impl Trait<T: Clone> for Foo<T>

would be fine, and

impl Trait<T> for Foo<T: Clone>

would be an error. More nice fallout:

struct Foo<A, B = Bar>
impl Foo<A, B> {
    fn one(a: A) -> B
    fn two(a: A) -> B
    fn three(a: A) -> B
}

means that if I ever want to go back and change the name of Bar, I only have to do it in one place, or if Bar is actually some complicated type, I only had to write it once, like a little local typedef. I'm sure this has some glaring obvious flaw I'm not thinking of. It would be nice to have less syntax for these declarations, but honestly I'm ok with how it is now. On Sat, Feb 1, 2014 at 5:39 PM, Corey Richardson wrote: > Hey all, > > bjz and I have worked out a nice proposal[0] for a slight syntax > change, reproduced here. It is a breaking change to the syntax, but it > is one that I think brings many benefits. > > Summary > ======= > > Change the following syntax: > > ``` > struct Foo<T, U> { ... } > impl<T, U> Trait for Foo<T, U> { ... } > fn foo<T, U>(...) { ... } > ``` > > to: > > ``` > forall<T, U> struct Foo { ... } > forall<T, U> impl Trait for Foo { ... } > forall<T, U> fn foo(...) { ... } > ``` > > The Problem > =========== > > The immediate, and most pragmatic, problem is that in today's Rust one > cannot > easily search for implementations of a trait. Why? `grep 'impl Clone'` is > itself not sufficient, since many types have parametric polymorphism. Now I > need to come up with some sort of regex that can handle this. An easy > first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite > inconvenient to > type and remember. (Here I ignore the issue of tooling, as I do not find > the > argument of "But a tool can do it!" valid in language design.)
> > A deeper, more pedagogical problem, is the mismatch between how `struct > Foo<...> { ... }` is read and how it is actually treated. The > straightforward, > left-to-right reading says "There is a struct Foo which, given the types > ... > has the members ...". This might lead one to believe that `Foo` is a single > type, but it is not. `Foo` (that is, type `Foo` instantiated with type > `int`) is not the same type as `Foo` (that is, type `Foo` > instantiated > with type `uint`). Of course, with a small amount of experience or a very > simple explanation, that becomes obvious. > > Something less obvious is the treatment of functions. What does `fn > foo<...>(...) { ... }` say? "There is a function foo which, given types ... > and arguments ..., does the following computation: ..." is not very > adequate. > It leads one to believe there is a *single* function `foo`, whereas there > is > actually a single `foo` for every substitution of type parameters! This > also > holds for implementations (both of traits and of inherent methods). > > Another minor problem is that nicely formatting long lists of type > parameters > or type parameters with many bounds is difficult. > > Proposed Solution > ================= > > Introduce a new keyword, `forall`. This choice of keyword reads very well > and > will not conflict with any identifiers in code which follows the [style > guide](https://github.com/mozilla/rust/wiki/Note-style-guide). > > Change the following declarations from > > ``` > struct Foo { ... } > impl Trait for Foo { ... } > fn foo(...) { ... } > ``` > > to: > > ``` > forall struct Foo { ... } > forall impl Trait for Foo { ... } > forall fn foo(...) { ... } > ``` > > These read very well. "for all types T and U, there is a struct Foo ...", > "for > all types T and U, there is a function foo ...", etc. These reflect that > there > are in fact multiple functions `foo` and structs `Foo` and implementations > of > `Trait`, due to monomorphization. 
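The monomorphization point the quoted proposal ends on can be made concrete in current Rust syntax: each type substitution really does yield a distinct type and a distinct function instantiation, observable here through layout sizes:

```rust
use std::mem;

// `Foo<u8>` and `Foo<u32>` are distinct types with distinct layouts.
struct Foo<T> {
    #[allow(dead_code)]
    x: T,
}

// One `foo` per substitution of T: each instantiation is compiled
// against its own concrete type.
fn foo<T>(x: T) -> usize {
    mem::size_of_val(&x)
}

fn main() {
    assert_eq!(mem::size_of::<Foo<u8>>(), 1);
    assert_eq!(mem::size_of::<Foo<u32>>(), 4);
    assert_eq!(foo(0u8), 1);
    assert_eq!(foo(0u32), 4);
}
```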
> > > [0]: > http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/ > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.striegel at gmail.com Sun Feb 2 09:08:51 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Sun, 2 Feb 2014 12:08:51 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: After sleeping on it I'm not convinced that this would be a net improvement over our current situation. With a few caveats I'm really rather happy with the syntax as it is. On Sun, Feb 2, 2014 at 8:55 AM, Jason Fager wrote: > I'm not a huge fan of this proposal. It makes declarations longer, and it > removes the visual consistency of Foo everywhere, which I think > introduces its own pedagogical issue. > > The recent addition of default type parameters, though, makes me think > there's a reasonable change that increases consistency and shortens > declarations in a few common cases. > > From what I understand, the reason we can't just have > > impl Trait for Foo > > is because it's ambiguous whether T and U are intended to be concrete or > generic type names; i.e., > > impl Trait for Foo > > tells the compiler that we expect U to be a concrete type name. > > Our new default type parameter declarations look like: > > struct Foo > > So what if to actually make generic types concrete, we always used the '='? > > struct Foo > impl Trait for Foo > > This saves a character over 'impl Trait for Foo', solves > the greppability problem, and makes intuitive sense given how defaults are > declared. > > It also has a nice parallel with how ':' is used - ':' adds restrictions, > '=' fully locks in place. 
So what is today something like > > impl Trait for Foo > > would become > > impl Trait for Foo > > The rule would be that the first use of a type variable T would introduce > its bounds, so for instance: > > impl Trait for Foo > > would be fine, and > > impl Trait for Foo > > would be an error. > > More nice fallout: > > struct Foo > impl Foo { > fn one(a: A) -> B > fn two(a: A) -> B > fn three(a: A) -> B > } > > means that if I ever want to go back and change the name of Bar, I only > have to do it in one place, or if Bar is actually some complicated type, I > only had to write it once, like a little local typedef. > > I'm sure this has some glaring obvious flaw I'm not thinking of. It would > be nice to have less syntax for these declarations, but honestly I'm ok > with how it is now. > > > > > > > > > > > > > > > On Sat, Feb 1, 2014 at 5:39 PM, Corey Richardson wrote: > >> Hey all, >> >> bjz and I have worked out a nice proposal[0] for a slight syntax >> change, reproduced here. It is a breaking change to the syntax, but it >> is one that I think brings many benefits. >> >> Summary >> ======= >> >> Change the following syntax: >> >> ``` >> struct Foo { ... } >> impl Trait for Foo { ... } >> fn foo(...) { ... } >> ``` >> >> to: >> >> ``` >> forall struct Foo { ... } >> forall impl Trait for Foo { ... } >> forall fn foo(...) { ... } >> ``` >> >> The Problem >> =========== >> >> The immediate, and most pragmatic, problem is that in today's Rust one >> cannot >> easily search for implementations of a trait. Why? `grep 'impl Clone'` is >> itself not sufficient, since many types have parametric polymorphism. Now >> I >> need to come up with some sort of regex that can handle this. An easy >> first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite >> inconvenient to >> type and remember. (Here I ignore the issue of tooling, as I do not find >> the >> argument of "But a tool can do it!" valid in language design.) 
>> >> A deeper, more pedagogical problem, is the mismatch between how `struct >> Foo<...> { ... }` is read and how it is actually treated. The >> straightforward, >> left-to-right reading says "There is a struct Foo which, given the types >> ... >> has the members ...". This might lead one to believe that `Foo` is a >> single >> type, but it is not. `Foo` (that is, type `Foo` instantiated with >> type >> `int`) is not the same type as `Foo` (that is, type `Foo` >> instantiated >> with type `uint`). Of course, with a small amount of experience or a very >> simple explanation, that becomes obvious. >> >> Something less obvious is the treatment of functions. What does `fn >> foo<...>(...) { ... }` say? "There is a function foo which, given types >> ... >> and arguments ..., does the following computation: ..." is not very >> adequate. >> It leads one to believe there is a *single* function `foo`, whereas there >> is >> actually a single `foo` for every substitution of type parameters! This >> also >> holds for implementations (both of traits and of inherent methods). >> >> Another minor problem is that nicely formatting long lists of type >> parameters >> or type parameters with many bounds is difficult. >> >> Proposed Solution >> ================= >> >> Introduce a new keyword, `forall`. This choice of keyword reads very well >> and >> will not conflict with any identifiers in code which follows the [style >> guide](https://github.com/mozilla/rust/wiki/Note-style-guide). >> >> Change the following declarations from >> >> ``` >> struct Foo { ... } >> impl Trait for Foo { ... } >> fn foo(...) { ... } >> ``` >> >> to: >> >> ``` >> forall struct Foo { ... } >> forall impl Trait for Foo { ... } >> forall fn foo(...) { ... } >> ``` >> >> These read very well. "for all types T and U, there is a struct Foo ...", >> "for >> all types T and U, there is a function foo ...", etc. 
These reflect that >> there >> are in fact multiple functions `foo` and structs `Foo` and >> implementations of >> `Trait`, due to monomorphization. >> >> >> [0]: >> http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/ >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.monrocq at gmail.com Sun Feb 2 10:35:43 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Sun, 2 Feb 2014 19:35:43 +0100 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Sun, Feb 2, 2014 at 6:08 PM, Benjamin Striegel wrote: > After sleeping on it I'm not convinced that this would be a net > improvement over our current situation. With a few caveats I'm really > rather happy with the syntax as it is. > > > On Sun, Feb 2, 2014 at 8:55 AM, Jason Fager wrote: > >> I'm not a huge fan of this proposal. It makes declarations longer, and >> it removes the visual consistency of Foo everywhere, which I think >> introduces its own pedagogical issue. >> >> The recent addition of default type parameters, though, makes me think >> there's a reasonable change that increases consistency and shortens >> declarations in a few common cases. >> >> From what I understand, the reason we can't just have >> >> impl Trait for Foo >> >> is because it's ambiguous whether T and U are intended to be concrete or >> generic type names; i.e., >> >> impl Trait for Foo >> >> tells the compiler that we expect U to be a concrete type name. 
>> >> Our new default type parameter declarations look like: >> >> struct Foo >> >> So what if to actually make generic types concrete, we always used the >> '='? >> >> struct Foo >> impl Trait for Foo >> >> This saves a character over 'impl Trait for Foo', solves >> the greppability problem, and makes intuitive sense given how defaults are >> declared. >> >> It also has a nice parallel with how ':' is used - ':' adds restrictions, >> '=' fully locks in place. So what is today something like >> >> impl Trait for Foo >> >> would become >> >> impl Trait for Foo >> >> The rule would be that the first use of a type variable T would introduce >> its bounds, so for instance: >> >> impl Trait for Foo >> >> would be fine, and >> >> impl Trait for Foo >> >> would be an error. >> >> More nice fallout: >> >> struct Foo >> impl Foo { >> fn one(a: A) -> B >> fn two(a: A) -> B >> fn three(a: A) -> B >> } >> >> means that if I ever want to go back and change the name of Bar, I only >> have to do it in one place, or if Bar is actually some complicated type, I >> only had to write it once, like a little local typedef. >> >> I'm sure this has some glaring obvious flaw I'm not thinking of. It >> would be nice to have less syntax for these declarations, but honestly I'm >> ok with how it is now. >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> On Sat, Feb 1, 2014 at 5:39 PM, Corey Richardson wrote: >> >>> Hey all, >>> >>> bjz and I have worked out a nice proposal[0] for a slight syntax >>> change, reproduced here. It is a breaking change to the syntax, but it >>> is one that I think brings many benefits. >>> >>> Summary >>> ======= >>> >>> Change the following syntax: >>> >>> ``` >>> struct Foo { ... } >>> impl Trait for Foo { ... } >>> fn foo(...) { ... } >>> ``` >>> >>> to: >>> >>> ``` >>> forall struct Foo { ... } >>> forall impl Trait for Foo { ... } >>> forall fn foo(...) { ... } >>> ``` >>> >>> >From a readability point of view, I am afraid this might be awkward though. 
Coming from C++, I have welcomed the switch from `typedef` to `using` (aliases) because of alignment issues; consider:

typedef std::map<K, V> MapType;
typedef std::vector<std::pair<K, V>> VectorType;

vs

using MapType = std::map<K, V>;
using VectorType = std::vector<std::pair<K, V>>;

In the latter, the entities being declared are at a constant offset from the left-hand margin; and close too; whereas in the former, the eyes are strained as they keep looking for what is declared. And now, let's look at your proposal:

fn foo(a: int, b: int) -> int { }
fn foo<T, U>(a: T, b: U) -> T { }
forall<T, U> fn foo(a: T, b: U) -> T { }

See how "forall" causes a "bump" that forces you to start looking where that name is ? It was so smooth until then ! So, it might be a net win in terms of grep-ability, but to be honest it seems LESS readable to me. -- Matthieu >>> The Problem >>> =========== >>> >>> The immediate, and most pragmatic, problem is that in today's Rust one >>> cannot >>> easily search for implementations of a trait. Why? `grep 'impl Clone'` is >>> itself not sufficient, since many types have parametric polymorphism. >>> Now I >>> need to come up with some sort of regex that can handle this. An easy >>> first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite >>> inconvenient to >>> type and remember. (Here I ignore the issue of tooling, as I do not find >>> the >>> argument of "But a tool can do it!" valid in language design.) >>> >>> A deeper, more pedagogical problem, is the mismatch between how `struct >>> Foo<...> { ... }` is read and how it is actually treated. The >>> straightforward, >>> left-to-right reading says "There is a struct Foo which, given the types >>> ... >>> has the members ...". This might lead one to believe that `Foo` is a >>> single >>> type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with >>> type >>> `int`) is not the same type as `Foo<uint>` (that is, type `Foo` >>> instantiated >>> with type `uint`).
Of course, with a small amount of experience or a very >>> simple explanation, that becomes obvious. >>> >>> Something less obvious is the treatment of functions. What does `fn >>> foo<...>(...) { ... }` say? "There is a function foo which, given types >>> ... >>> and arguments ..., does the following computation: ..." is not very >>> adequate. >>> It leads one to believe there is a *single* function `foo`, whereas >>> there is >>> actually a single `foo` for every substitution of type parameters! This >>> also >>> holds for implementations (both of traits and of inherent methods). >>> >>> Another minor problem is that nicely formatting long lists of type >>> parameters >>> or type parameters with many bounds is difficult. >>> >>> Proposed Solution >>> ================= >>> >>> Introduce a new keyword, `forall`. This choice of keyword reads very >>> well and >>> will not conflict with any identifiers in code which follows the [style >>> guide](https://github.com/mozilla/rust/wiki/Note-style-guide). >>> >>> Change the following declarations from >>> >>> ``` >>> struct Foo { ... } >>> impl Trait for Foo { ... } >>> fn foo(...) { ... } >>> ``` >>> >>> to: >>> >>> ``` >>> forall struct Foo { ... } >>> forall impl Trait for Foo { ... } >>> forall fn foo(...) { ... } >>> ``` >>> >>> These read very well. "for all types T and U, there is a struct Foo >>> ...", "for >>> all types T and U, there is a function foo ...", etc. These reflect that >>> there >>> are in fact multiple functions `foo` and structs `Foo` and >>> implementations of >>> `Trait`, due to monomorphization. 
>>> >>> >>> [0]: >>> http://cmr.github.io/blog/2014/02/01/polymorphic-declaration-syntax-in-rust/ >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Sun Feb 2 10:39:24 2014 From: corey at octayn.net (Corey Richardson) Date: Sun, 2 Feb 2014 13:39:24 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: Also after sleeping on it I'm not as big of a fan of this proposal. But, I find the idea raised earlier of having "generic blocks" to group implementations etc that have the same implementation nice. Fully backwards compat though, so I'm not going to worry about it. From talex5 at gmail.com Sun Feb 2 11:47:30 2014 From: talex5 at gmail.com (Thomas Leonard) Date: Sun, 02 Feb 2014 19:47:30 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> <20140201233209.7ffa7b5a@lightyear> <52ED5301.7010404@gmail.com> <20140202012845.47f9e66c@lightyear> Message-ID: [ I don't want to start another argument, but since you guys are discussing 0install, maybe I can provide some useful input... ] On 2014-02-02 07:20, Vladimir Matveev wrote: >> How will it handle external dependencies? > I don't think it should. External dependencies are way too complex. > They come in different flavors on different systems. 
On Windows, for > example, you don't have a package manager, and you'll have to ship > these dependencies with the program using an installer. On each Linux > distro there is a custom package manager, each having its own strategy > of naming things and its own versioning policy. It is impossible to > unify them, and I don't think that a Rust package manager should attempt > to do this. > >> I don't understand this. A package manager specific to Rust is >> additional software, just like 0install. 0install has full support for >> installing dependencies via the system package manager on many systems >> if desired. > *End users* won't need a Rust package manager at all (unless they want > to install development versions of Rust software). Only package > maintainers and developers have to use it. End users just use their > native package manager to obtain packages created by maintainers. If > Rust were to depend on zero install, however, end users would be *forced* > to use zero install. I don't follow this. Whether the developer uses 0install to get the build dependencies doesn't make any difference to the generated binary. Of course, you *can* distribute the binary using 0install too, but you're not required to. > I'm against using zero install for the following reasons. First, it > is just a packaging system. It is not supposed to help in building > Rust software. But resolving build dependencies and invoking the > compiler with correct paths to installed dependencies is crucial. > How, for example, would zero install handle a dependency on the master branch of > some source repository? 0install doesn't automatically check out Git repositories (although that would be handy).
Here's how we currently do it: - Your program depends on libfoo >= 1.0-post - The latest released version of libfoo is only 1.0 - You "git clone" the libfoo repository yourself and register the metadata (feed) file inside it: $ git clone git://.../libfoo $ 0install add-feed libfoo/feed.xml - 0install now sees that libfoo 1.0 and 1.0-post are both available. Since your program requires libfoo >= 1.0-post, it will select the Git checkout version. > What if I'm developing several packages which depend on different > versions of the same package? Zero install allows > installing multiple versions of the same package, yes, but how should > I specify where these libraries are located to the compiler? Given a set of requirements, 0install will tell you where some suitable versions of the dependencies are. For example: $ cd /tmp $ git clone https://github.com/0install/hello-scons.git $ cd hello-scons $ 0install download Hello-scons.xml --source --show - URI: /tmp/hello-scons/Hello-scons.xml Version: 1.1-post Path: /tmp/hello-scons - URI: http://0install.net/2006/3rd-party/SCons.xml Version: 2.0.1 Path: /var/cache/0install.net/implementations/sha1new=86311df9d410de36d75bc51762d2927f2f045ebf - URI: http://repo.roscidus.com/python/python Version: 2.7.6-1 Path: (package:arch:python2:2.7.6-1:x86_64) This says that the build dependencies are: - This package's source code (in /tmp/hello-scons) - The SCons build tool (which 0install has placed in /var/cache) - Python (provided by the distribution) The source could also specify library dependencies. How do you get this information to the build tool? The usual way is to tell 0install how to run the build tool in the XML. In this case, by running SCons on the project's SConstruct file. But you could get the information to it some other way. For example, a "rustpkg" tool that invokes "0install download ... --source --xml" behind the scenes and does something with the machine-readable selections document produced. 
> How should I specify build dependencies for people who want to hack on my > package? List them in the XML file that is in your project's source repository. Users should then be able to clone your git repository and build, with build dependencies handled for them. > Majority of direct dependencies will be from the Rust world, > and dedicated building/packaging tool would be able to download and > build them automatically as a part of build process, and only external > dependencies would have to be installed manually. With zero install > you will have to install everything, including Rust-world > dependencies, by yourself. 0install should be able to handle all build dependencies (e.g. libraries, the Rust compiler, build tools, documentation tools, etc). > Second, it is another package manager which is foreign to the system > (unless the system uses zero install as its package manager, but I > think only very minor Linux distros have that). Not only this is bad > because having multiple distribution systems is confusing for the end > users (BTW, as far as I can see, both Linux and Windows users don't > want it. Linux users wouldn't want to have additional package manager, > and majority of Windows users just don't know what package manager is > and why they have to install some program which downloads other > programs just in order for them to use this small utility. They are > used to one-click self-contained installers. On Android you are forced > to have self-contained packages, and zero install won't work there at > all. This is all about run time dependencies, but I think the discussion here is about build time, right? You'll have the same issues with any system. > Don't know anything about Mac, haven't used it). It also means > additional impact on distribution maintainers. 
If Rust adopts zero > install universally, then, because distribution maintainers won't > support any build system but their own, they will have either to > abandon Rust software at all or build and resolve their dependencies > (including Rust ones) manually, as it is done with C/C++ now. They > won't be able to use zero install because they simply can't depend on > it. I think this will hurt Rust adoption a lot. I think any build tool (including go, cabal, pip, rustpkg) will have this problem. Ideally, you want distributions to be able to turn upstream packages into their preferred format automatically. Whatever system you settle on this should be possible, as long as you have some kind of machine-readable dependency information. > Zero install may have integration with package systems, but looks like > it is very brittle. According to [this > page](http://0install.net/distribution-integration.html) it is package > owner's duty to specify how native package dependencies should be > resolved in each distribution. This is extremely fragile. I don't use > Debian, for example, how would I know how my dependency is called > there? When package owners won't write these mappings for every > distribution, then this integration immediately becomes completely > pointless. Also I'm not sure that zero install can integrate with > every package manager it supports without additional tools, which > means even more unneeded dependencies. To be clear, it's the upstream of the library who specify what their library is called on each distribution. As a user of the library, you don't need to care. If upstream doesn't specify the native package name, then it just means 0install will download the upstream version rather than using the distribution's copy (which would be more efficient). Distributions can also specify the association at their end, although they generally don't bother. > Zero install package base is also very small. Distributions usually > provide a lot more packages. 
Then, if zero install is the only way > Rust software is packaged, I just won't be able to use these libraries > unless someone, possibly myself, writes a zero install package > definition for it. And this is extremely bad - I don't want to resolve > transitive dependencies for my external dependencies, this is already > done by my distribution maintainers. You mean that you might want to depend on a non-Rust package (e.g. python-sphinx) for your build, but 0install won't be able to provide it for you? That could happen, although a) it's no worse than when using a Rust-only package manager b) you have the option to make it work by writing a little XML > Having custom flexible build and packaging system which is used by > developers and maintainers is a separation of concerns with very clean > interfaces between software developers and software users. I believe > that it should be created in order for Rust to be adopted at large, > and rustpkg provides solid base for it. I hope the above has clarified things a bit. From vladimir at slate-project.org Sun Feb 2 12:08:55 2014 From: vladimir at slate-project.org (Vladimir Lushnikov) Date: Sun, 2 Feb 2014 20:08:55 +0000 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> <20140201233209.7ffa7b5a@lightyear> <52ED5301.7010404@gmail.com> <20140202012845.47f9e66c@lightyear> Message-ID: A general observation (not particularly replying to your post, Thomas). For both python and haskell (just to name two languages), distribution (where things end up on the filesystem ready to be used) can be done by both the built-in tools (cabal-install, pip) and the distribution-specific tools. Gentoo even has a tool to take a cabal package and generate an ebuild from it - https://github.com/gentoo-haskell/hackport. 
In the Haskell world cabal and cabal-install are separated ( http://ivanmiljenovic.wordpress.com/2010/03/15/repeat-after-me-cabal-is-not-a-package-manager/) - which is probably a good thing and maybe something we can consider for rustpkg. (The link btw is quite interesting in its own right and perhaps some more inspiration could be taken from there). My point is that a building tool should be able to either fetch or look up dependencies that already exist in a well-specified layout on the filesystem (for the case of development and production deployment using a distro package manager respectively). Whether that is a single tool or two tools is a point of design; I think both are necessary. I feel there is enough that's been discussed on this thread for a write-up on a wiki or the beginnings of a design/goals document that can later be presented for another discussion. I don't see an existing place on the wiki for this though - where should it go? On Sun, Feb 2, 2014 at 7:47 PM, Thomas Leonard wrote: > [ I don't want to start another argument, but since you guys are > discussing 0install, maybe I can provide some useful input... ] > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Sun Feb 2 22:55:49 2014 From: corey at octayn.net (Corey Richardson) Date: Mon, 3 Feb 2014 01:55:49 -0500 Subject: [rust-dev] Using Default Type Parameters Message-ID: Default typarams are awesome, but they're gated, and there's some concern that they'll interact unpleasantly with extensions to the type system (most specifically, I've seen concern raised around HKT, where there is conflicting tension about whether to put the "defaults" at the start or end of the typaram list). 
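[Editor's note: for readers of the archive, the gated feature under discussion did eventually stabilize for type definitions. A minimal sketch in today's syntax — `Pair` is an invented illustration, not a std API:]

```rust
// The second type parameter defaults to the first, so in type position
// `Pair<i32>` means `Pair<i32, i32>`.
struct Pair<A, B = A> {
    first: A,
    second: B,
}

fn main() {
    // Default used: B falls back to A (= i32).
    let same: Pair<i32> = Pair { first: 1, second: 2 };
    // Default overridden: B named explicitly.
    let mixed: Pair<i32, &str> = Pair { first: 1, second: "two" };
    assert_eq!(same.first + same.second, 3);
    assert_eq!(mixed.second, "two");
}
```

Note that the default may mention earlier parameters (as `B = A` does here), which is exactly why the "least important" parameters want to sit at the end of the list — the tension with HKT that Corey mentions.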
I've already come across situations where default typarams will make for a nicer API, but I'm wondering whether I should use them without hesitation, looking forward to when they are no longer gated, or whether I should shun them because they will make my code incompatible with future changes to the language. Are there any thoughts on this so far? This question also applies to other feature gates; that is, the level of assurance with which one can use a given feature without having to deal with explosive breakage down the line. From glaebhoerl at gmail.com Sun Feb 2 23:41:54 2014 From: glaebhoerl at gmail.com (=?ISO-8859-1?Q?G=E1bor_Lehel?=) Date: Mon, 3 Feb 2014 08:41:54 +0100 Subject: [rust-dev] Using Default Type Parameters In-Reply-To: References: Message-ID: On Mon, Feb 3, 2014 at 7:55 AM, Corey Richardson wrote: > Default typarams are awesome, but they're gated, and there's some > concern that they'll interact unpleasantly with extensions to the type > system (most specifically, I've seen concern raised around HKT, where > there is conflicting tension about whether to put the "defaults" at > the start or end of the typaram list). > Just for reference, this was discussed here: https://github.com/mozilla/rust/pull/11217 (The tension is essentially that with default type args you want to put the "least important" types at the end, so they can be defaulted, while with HKT you want to put them at the front, so they don't get in the way of abstracting over the important ones.) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dpx.infinity at gmail.com Mon Feb 3 02:00:00 2014 From: dpx.infinity at gmail.com (Vladimir Matveev) Date: Mon, 3 Feb 2014 14:00:00 +0400 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: References: <52E7071A.50702@mozilla.com> <-7815470346720532493@gmail297201516> <-5222466096720818471@gmail297201516> <52ED1618.8090701@gmail.com> <20140201233209.7ffa7b5a@lightyear> <52ED5301.7010404@gmail.com> <20140202012845.47f9e66c@lightyear> Message-ID: 2014-02-02 Thomas Leonard : > [ I don't want to start another argument, but since you guys are discussing > 0install, maybe I can provide some useful input... ] > > I don't follow this. Whether the developer uses 0install to get the build > dependencies doesn't make any difference to the generated binary. > > Of course, you *can* distribute the binary using 0install too, but you're > not required to. I probably have left this part in that formulation by accident. I apologize for that, I had been writing this message in several passes. Yes, of course it does not matter for the developer where he gets build dependencies from, provided these dependencies are readily available for the build process and are easily managed. > > 0install doesn't automatically check out Git repositories (although that > would be handy). Here's how we currently do it: > > - Your program depends on libfoo >= 1.0-post > > - The latest released version of libfoo is only 1.0 > > - You "git clone" the libfoo repository yourself and register the > metadata (feed) file inside it: > > $ git clone git://.../libfoo > $ 0install add-feed libfoo/feed.xml > > - 0install now sees that libfoo 1.0 and 1.0-post are both available. > Since your program requires libfoo >= 1.0-post, it will select the > Git checkout version. Seems to be a lot of manual work. This could be automated by Rust package/build manager, though. > > Given a set of requirements, 0install will tell you where some suitable > versions of the dependencies are. 
For example: > > $ cd /tmp > $ git clone https://github.com/0install/hello-scons.git > $ cd hello-scons > $ 0install download Hello-scons.xml --source --show > - URI: /tmp/hello-scons/Hello-scons.xml > Version: 1.1-post > Path: /tmp/hello-scons > > - URI: http://0install.net/2006/3rd-party/SCons.xml > Version: 2.0.1 > Path: > /var/cache/0install.net/implementations/sha1new=86311df9d410de36d75bc51762d2927f2f045ebf > > - URI: http://repo.roscidus.com/python/python > Version: 2.7.6-1 > Path: (package:arch:python2:2.7.6-1:x86_64) > > This says that the build dependencies are: > > - This package's source code (in /tmp/hello-scons) > - The SCons build tool (which 0install has placed in /var/cache) > - Python (provided by the distribution) > > The source could also specify library dependencies. How do you get this > information to the build tool? The usual way is to tell 0install how to run > the build tool in the XML. In this case, by running SCons on the project's > SConstruct file. > > But you could get the information to it some other way. For example, a > "rustpkg" tool that invokes "0install download ... --source --xml" behind > the scenes and does something with the machine-readable selections document > produced. Thanks for the explanation, I didn't know that 0install can run build tools and that it could provide the information about libraries locations. This certainly answers my question. >> How should I specify build dependencies for people who want to hack on my >> package? > > > List them in the XML file that is in your project's source repository. Users > should then be able to clone your git repository and build, with build > dependencies handled for them. Again, didn't know that 0install can handle build dependencies. > > This is all about run time dependencies, but I think the discussion here is > about build time, right? You'll have the same issues with any system. Usually build dependencies are a superset of runtime dependencies, aren't they? 
Nonetheless, this was not about runtime dependencies, this was about general approach. But I think given your explanation of 0install operation this point can be discarded. > > I think any build tool (including go, cabal, pip, rustpkg) will have this > problem. Ideally, you want distributions to be able to turn upstream > packages into their preferred format automatically. Whatever system you > settle on this should be possible, as long as you have some kind of > machine-readable dependency information. Yes, you're quite correct on that ideally upstream packages should be converted to distribution packages automatically. For the new hypothetical build system I see it like the following: a maintainer downloads sources for a package, invokes some distro-specific tool which in turn invokes `rustpkg` to build the package and assemble dependency information, which is then converted to a distribution package. Then the maintainer manually adds external dependencies to the list. Something like that is already done for Haskell in Arch Linux, for example. It seems that it could be done with 0install, at least, to some extent. >> Zero install may have integration with package systems, but looks like >> it is very brittle. According to [this >> page](http://0install.net/distribution-integration.html) it is package >> owner's duty to specify how native package dependencies should be >> resolved in each distribution. This is extremely fragile. I don't use >> Debian, for example, how would I know how my dependency is called >> there? When package owners won't write these mappings for every >> distribution, then this integration immediately becomes completely >> pointless. Also I'm not sure that zero install can integrate with >> every package manager it supports without additional tools, which >> means even more unneeded dependencies. > > > To be clear, it's the upstream of the library who specify what their library > is called on each distribution. 
As a user of the library, you don't need to > care. This is again misunderstanding on my side. For some reason I thought that it is developer who are using that library should specify how it should be resolved. Don't know why I thought so then >_< However, even if only the library owner should write this binding, it is very likely that he won't write 0install package at all. See below. > > If upstream doesn't specify the native package name, then it just means > 0install will download the upstream version rather than using the > distribution's copy (which would be more efficient). Distributions can also > specify the association at their end, although they generally don't bother. > >> Zero install package base is also very small. Distributions usually >> provide a lot more packages. Then, if zero install is the only way >> Rust software is packaged, I just won't be able to use these libraries >> unless someone, possibly myself, writes a zero install package >> definition for it. And this is extremely bad - I don't want to resolve >> transitive dependencies for my external dependencies, this is already >> done by my distribution maintainers. > > > You mean that you might want to depend on a non-Rust package (e.g. > python-sphinx) for your build, but 0install won't be able to provide it for > you? That could happen, although > > a) it's no worse than when using a Rust-only package manager > b) you have the option to make it work by writing a little XML This is the biggest problem I see now. 0install could partially provide external dependencies, but not all of them (and I dare say, not even the majority of them, given the number of available 0install packages and comparing it to the number of packages in standard repositories). Then developers and maintainers have to manually track which dependencies are provided via 0install packages and which should be taken from the system package manager. 
You're correct that this can happen (and will happen) with Rust-only manager too, but I think it is in fact worse than if the list of external dependencies is managed outside of any packaging system because it may give false impression that if everything that the package needs can be provided by 0install. Additionally, it is not clear what will happen if Rust package developers start writing bindings with various distribution packages in their 0install feeds. This could potentially interfere with distribution maintainers, as far as I can see. Given that maintainers would need to convert 0install dependencies to regular dependencies anyway, I think that value of 0install vanishes. We still will have to create some kind of infrastructure for Rust packages (possibly central repository, various conventions, naming (I don't think that multilevel naming system, if we will use some, would interact well with 0install, but I may be wrong again on that), improvements in build system), and all we get if we adopt 0install is highly generic package management which does not take Rust needs into account. Again, given that end users won't see 0install at all, I don't think it is a really good choice. To summarize, I think that it is better to keep Rust package world and external package world completely separate. If they are intermixed, especially with not so widely used packaging system, this will lead to problems. > >> Having custom flexible build and packaging system which is used by >> developers and maintainers is a separation of concerns with very clean >> interfaces between software developers and software users. I believe >> that it should be created in order for Rust to be adopted at large, >> and rustpkg provides solid base for it. > > > I hope the above has clarified things a bit. Yes, indeed it does. 
Thank you very much for your explanation, I don't feel so strongly against 0install now; however, I still think that a Rust-only package manager would be much more convenient in the long run. From glaebhoerl at gmail.com Mon Feb 3 05:35:09 2014 From: glaebhoerl at gmail.com (=?ISO-8859-1?Q?G=E1bor_Lehel?=) Date: Mon, 3 Feb 2014 14:35:09 +0100 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: Just because Any is a trait doesn't mean it doesn't break parametricity. Look at this: http://static.rust-lang.org/doc/master/src/std/home/rustbuild/src/rust-buildbot/slave/doc/build/src/libstd/any.rs.html#37-63 Because we have `impl<T: 'static> Any for T`, it can be used with *any type* (except borrowed data), including type parameters, whether or not they declare the `T: Any` bound explicitly (which is essentially redundant in this situation). The proper thing would be for the compiler to generate an `impl Any for MyType` for each individual type separately, rather than a single generic impl which is valid for all types. I also think we should guarantee parametricity for safe code and make `size_of` an unsafe fn. Its legitimate uses in unsafe code (e.g. smart pointers) are well encapsulated and don't expose parametricity violations, and I don't believe safe code has a legitimate reason to use it (does it?). On Sun, Feb 2, 2014 at 3:27 AM, Eric Reed wrote: > I'm going to respond to Any and size_of separately because there's a > significant difference IMO. > > It's true that Any and trait bounds on type parameters in general can let > function behavior depend on the passed type, but only in the specific > behavior defined by the trait. Everything that's not a trait function is > still independent of the passed type (contrast this with a setup where this > wasn't true: `fn foo<T>() -> int` could return 2i for int and spin up a > tetris game then crash for uint).
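[Editor's note: the parametricity leak Gábor describes can be sketched in today's Rust (`dyn Any` and `i32` in place of the 2014-era spellings; the function and its return strings are invented for illustration). The signature looks parametric in T, yet the body branches on the concrete type, courtesy of the blanket impl:]

```rust
use std::any::Any;

// Nothing in this signature says behavior depends on T, but the
// blanket `impl<T: 'static> Any for T` lets the body inspect T anyway.
fn describe<T: Any>(value: &T) -> &'static str {
    let any: &dyn Any = value;
    if any.is::<i32>() {
        "got an i32"
    } else {
        "got something else"
    }
}

fn main() {
    assert_eq!(describe(&5i32), "got an i32");
    assert_eq!(describe(&"hello"), "got something else");
}
```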
Any just happens to be powerful enough to > allow complete variance, which is expected since it's just dynamic typing, > but there's an important distinction still: behavior variance because of > Any *is* part of the function because you need to do explicit type tests. > > I wasn't aware of mem::size_of before, but I'm rather annoyed to find out > we've started adding bare A -> B functions since it breaks parametricity. > I'd much rather put size_of in a trait, at which point it's just a weaker > version of Any. > Being able to tell how a function's behavior might vary just from the type > signature is a very nice property, and I'd like Rust to keep it. > > Now, onto monomorphization. > I agree that distinguishing static and dynamic dispatch is important for > performance characterization, but static dispatch != monomorphization (or > if it currently does, then it probably shouldn't) because not all > statically dispatched code needs to be monomorphized. Consider a function > like this: > > fn foo<A, B>(ox: Option<~A>, f: |~A| -> ~B) -> Option<~B> { > match ox { > Some(x) => Some(f(x)), > None => None, > } > } > > It's quite generic, but AFAIK there's no need to monomorphize it for > static dispatch. It uses a constant amount of stack space (not counting > what `f` uses when called) and could run the exact same code for any types > A or B (check discriminant, potentially call a function pointer, and > return). I would guess most cases require monomorphization, but I consider > universal monomorphization a way of implementing static dispatch (as > opposed to partial monomorphization). > I agree that understanding monomorphization is important for understanding > the performance characteristics of code generated by *rustc*, but rustc != > Rust. > Unless universal monomorphization for static dispatch makes its way into > the Rust language spec, I'm going to consider it an implementation detail > for rustc.
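[Editor's note: for later readers, Eric's example translates to modern Rust roughly as below — `~A` became `Box<A>`, and the stack-closure type `|~A| -> ~B` is approximated here with a generic `FnOnce` bound. This is a sketch of the shape of the argument, not the original code; today's rustc does in fact monomorphize it per instantiation, which is precisely Eric's point about implementation versus language. Since every `Box` is pointer-sized, a single compiled body could in principle serve all A and B:]

```rust
// One body could serve all A, B: check the discriminant, maybe call f.
fn foo<A, B>(ox: Option<Box<A>>, f: impl FnOnce(Box<A>) -> Box<B>) -> Option<Box<B>> {
    match ox {
        Some(x) => Some(f(x)),
        None => None,
    }
}

fn main() {
    let doubled = foo(Some(Box::new(21i32)), |x| Box::new(*x * 2));
    assert_eq!(*doubled.unwrap(), 42);
    let none = foo(None::<Box<i32>>, |x| Box::new(*x * 2));
    assert!(none.is_none());
}
```

(The same shape exists in std as `Option::map`, which is equally generic.)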
> > > On Sat, Feb 1, 2014 at 3:31 PM, Corey Richardson wrote: >> On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed >> wrote: >> > Responses inlined. >> > >> >> >> >> Hey all, >> >> >> >> bjz and I have worked out a nice proposal[0] for a slight syntax >> >> change, reproduced here. It is a breaking change to the syntax, but it >> >> is one that I think brings many benefits. >> >> >> >> Summary >> >> ======= >> >> >> >> Change the following syntax: >> >> >> >> ``` >> >> struct Foo<T, U> { ... } >> >> impl<T, U> Trait for Foo<T, U> { ... } >> >> fn foo<T, U>(...) { ... } >> >> ``` >> >> >> >> to: >> >> >> >> ``` >> >> forall<T, U> struct Foo { ... } >> >> forall<T, U> impl Trait for Foo<T, U> { ... } >> >> forall<T, U> fn foo(...) { ... } >> >> ``` >> >> >> >> The Problem >> >> =========== >> >> >> >> The immediate, and most pragmatic, problem is that in today's Rust one >> >> cannot >> >> easily search for implementations of a trait. Why? `grep 'impl Clone'` >> is >> >> itself not sufficient, since many types have parametric polymorphism. >> Now >> >> I >> >> need to come up with some sort of regex that can handle this. An easy >> >> first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite >> >> inconvenient to >> >> type and remember. (Here I ignore the issue of tooling, as I do not >> find >> >> the >> >> argument of "But a tool can do it!" valid in language design.) >> > >> > >> > I think what I've done in the past was just `grep impl | grep Clone`. >> > >> >> >> >> A deeper, more pedagogical problem is the mismatch between how `struct >> >> Foo<...> { ... }` is read and how it is actually treated. The >> >> straightforward, >> >> left-to-right reading says "There is a struct Foo which, given the >> types >> >> ... >> >> has the members ...". This might lead one to believe that `Foo` is a >> >> single >> >> type, but it is not. `Foo<int>` (that is, type `Foo` instantiated with >> >> type >> >> `int`) is not the same type as `Foo<uint>` (that is, type `Foo` >> >> instantiated >> >> with type `uint`).
Of course, with a small amount of experience or a >> very >> simple explanation, that becomes obvious. >> > >> > I strongly disagree with this reasoning. >> > There IS only one type Foo. It's a type constructor with kind * -> * (where >> > * means proper type). >> > Foo<int> and Foo<uint> are two different applications of Foo and are >> proper >> > types (i.e. *) because Foo is * -> * and both int and uint are *. >> > Regarding people confusing Foo, Foo<int> and Foo<uint>, I think the >> proposed >> > forall<T, U> struct Foo {...} syntax is actually more confusing. >> > With the current syntax, it's never legal to write Foo without type >> > parameters, but with the proposed syntax it would be. >> > >> >> I've yet to see a proposal for HKT, but with them that interpretation >> would be valid and indeed make this proposal's argument weaker. >> >> >> >> >> Something less obvious is the treatment of functions. What does `fn >> >> foo<...>(...) { ... }` say? "There is a function foo which, given types >> >> ... >> >> and arguments ..., does the following computation: ..." is not very >> >> adequate. >> >> It leads one to believe there is a *single* function `foo`, whereas >> there >> >> is >> >> actually a single `foo` for every substitution of type parameters! This >> >> also >> >> holds for implementations (both of traits and of inherent methods). >> > >> > >> > Again, I strongly disagree here. >> > There IS only one function foo. Some of its arguments are types. foo's >> > behavior *does not change* based on the type parameters because of >> > parametricity. >> > That the compiler monomorphizes generic functions is just an >> implementation >> > detail and doesn't change the semantics of the function. >> > >> >> It can if it uses Any, size_of, etc. eddyb had "integers in the >> typesystem" by using size_of and [u8, ..N]. Anything using the >> "properties" of types or the tydescs *will* change for each >> instantiation.
>> >> >> >> >> Another minor problem is that nicely formatting long lists of type >> >> parameters >> >> or type parameters with many bounds is difficult. >> > >> > >> > I'm not sure how this proposal would address this problem. All of your >> > proposed examples are longer than the current syntax equivalents. >> > >> >> The idea is there is an obvious place to insert a newline (after the >> forall), though bjz would have to comment more on that. >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kodafox at gmail.com Mon Feb 3 08:26:27 2014 From: kodafox at gmail.com (Jake Kerr) Date: Tue, 4 Feb 2014 01:26:27 +0900 Subject: [rust-dev] Proposal: Unify closure and proc declaration syntax? Message-ID: Hello rust-dev, I imagine there are reasons I've not considered for why proc declaration syntax is the way it is, but I couldn't find any discussion on this list or in the issues so thought I'd make a proposal. I think it would be nice if procs and closures had a more similar syntax. Specifically I'd like to see proc lose its function-like () parameter list and gain the block-like || parameter list as closures have. So rather than the current: spawn(proc(x,y) { /* Do some work. */ }); We would write: spawn(proc |x,y| { /* Do some work. */ }); A minor change, for sure, but I find the latter easier to visually parse and recognize as a closure. With the current syntax I find that for a split second it looks to me like a function call to some function "proc" with a block after it. I thought for a moment that maybe proc was a function with a final hidden block argument and that this was a magic syntactic sugar for giving this "proc" function a block (kind of like the old do syntax) but then I read the parser and learned that proc is in fact part of the syntax. 
There are probably some caveats that I haven't considered since I'm not 100% familiar with all of the closure types. If there is a glaring reason why this is a bad idea please do tell; else let's discuss. Thanks for reading! From ben.striegel at gmail.com Mon Feb 3 09:07:34 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Mon, 3 Feb 2014 12:07:34 -0500 Subject: [rust-dev] Proposal: Unify closure and proc declaration syntax? In-Reply-To: References: Message-ID: It might be good to revisit proc syntax, but I don't think that it's yet time for that. There's a nebulous plan in the works involving reimplementing closures as traits (or... something?) and how that shakes out would likely impact the syntax that we want to provide. On Mon, Feb 3, 2014 at 11:26 AM, Jake Kerr wrote: > Hello rust-dev, > > I imagine there are reasons I've not considered for why proc > declaration syntax is the way it is, but I couldn't find any > discussion on this list or in the issues so thought I'd make a > proposal. > > I think it would be nice if procs and closures had a more similar syntax. > Specifically I'd like to see proc loose it's function like () > parameter list and gain the block like || params list as closures > have. > > So rather than the current: > spawn(proc(x,y) { /* Do some work. */ }); > > We would write: > spawn(proc |x,y| { /* Do some work. */ }); > > A minor change, for sure, but I find the later easier to visually > parse and recognize as a closure. With the current syntax I find that > for a split second it looks to me like a function call to some > function "proc" with a block after it. > I thought for a moment that maybe proc was a function with a > final hidden block argument and that this was a magic syntactic sugar > for giving this "proc" function a block (kind of like the old do > syntax) but then I read the parser and learned that proc is in fact part > of the syntax. 
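[Editor's note: `proc` was removed before Rust 1.0, and closures kept the `|args|` form Jake prefers here. A sketch in modern syntax of passing a two-argument closure to a higher-order function; `run_with` is a hypothetical stand-in for `spawn`, which today takes a zero-argument closure:]

```rust
// Hypothetical higher-order function standing in for `spawn`: it accepts a
// two-argument closure written with the |x, y| form the proposal favors.
fn run_with<F: FnOnce(i32, i32) -> i32>(f: F) -> i32 {
    f(2, 3)
}

fn main() {
    let sum = run_with(|x, y| x + y);
    let product = run_with(|x, y| x * y);
    assert_eq!(sum, 5);
    assert_eq!(product, 6);
    println!("sum = {}, product = {}", sum, product);
}
```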
> > There are probably some caveats that I haven't considered since I'm not > 100% familiar with all of the closure types. If there is a glaring > reason why this is a bad idea please do tell; else let's discuss. > > Thanks for reading! > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.monrocq at gmail.com Mon Feb 3 09:34:23 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Mon, 3 Feb 2014 18:34:23 +0100 Subject: [rust-dev] Using Default Type Parameters In-Reply-To: References: Message-ID: On Mon, Feb 3, 2014 at 8:41 AM, G?bor Lehel wrote: > On Mon, Feb 3, 2014 at 7:55 AM, Corey Richardson wrote: > >> Default typarams are awesome, but they're gated, and there's some >> concern that they'll interact unpleasantly with extensions to the type >> system (most specifically, I've seen concern raised around HKT, where >> there is conflicting tension about whether to put the "defaults" at >> the start or end of the typaram list). >> > > Just for reference, this was discussed here: > https://github.com/mozilla/rust/pull/11217 > > (The tension is essentially that with default type args you want to put > the "least important" types at the end, so they can be defaulted, while > with HKT you want to put them at the front, so they don't get in the way of > abstracting over the important ones.) > > Thinking out loud: could parameters be "keyed", like named functions arguments ? If they were, then their position would matter little. -- Matthieu > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From glaebhoerl at gmail.com Mon Feb 3 09:41:34 2014 From: glaebhoerl at gmail.com (=?ISO-8859-1?Q?G=E1bor_Lehel?=) Date: Mon, 3 Feb 2014 18:41:34 +0100 Subject: [rust-dev] Using Default Type Parameters In-Reply-To: References: Message-ID: Possibly, but it's not particularly well-trodden ground (I think Ur/Web might have something like it?). And would you really want to write `HashMap`? On Mon, Feb 3, 2014 at 6:34 PM, Matthieu Monrocq wrote: > > > > On Mon, Feb 3, 2014 at 8:41 AM, G?bor Lehel wrote: > >> On Mon, Feb 3, 2014 at 7:55 AM, Corey Richardson wrote: >> >>> Default typarams are awesome, but they're gated, and there's some >>> concern that they'll interact unpleasantly with extensions to the type >>> system (most specifically, I've seen concern raised around HKT, where >>> there is conflicting tension about whether to put the "defaults" at >>> the start or end of the typaram list). >>> >> >> Just for reference, this was discussed here: >> https://github.com/mozilla/rust/pull/11217 >> >> (The tension is essentially that with default type args you want to put >> the "least important" types at the end, so they can be defaulted, while >> with HKT you want to put them at the front, so they don't get in the way of >> abstracting over the important ones.) >> >> > Thinking out loud: could parameters be "keyed", like named functions > arguments ? If they were, then their position would matter little. > > -- Matthieu > > >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
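[Editor's note: default type parameters did land for type definitions, with `HashMap`'s hasher parameter as the canonical motivating case. A minimal sketch in modern Rust syntax; `Pair` is a hypothetical struct:]

```rust
// B defaults to i32 when the second parameter is omitted, in the same way
// HashMap's hasher parameter defaults today.
#[derive(Debug, PartialEq)]
struct Pair<A, B = i32> {
    first: A,
    second: B,
}

fn main() {
    // `Pair<&str>` is shorthand for `Pair<&str, i32>` thanks to the default.
    let p: Pair<&str> = Pair { first: "x", second: 7 };
    assert_eq!(p.second, 7);
    println!("{:?}", p);
}
```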
URL: From slabode at aim.com Mon Feb 3 10:05:31 2014 From: slabode at aim.com (SiegeLord) Date: Mon, 03 Feb 2014 13:05:31 -0500 Subject: [rust-dev] Using Default Type Parameters In-Reply-To: References: Message-ID: <52EFDA6B.6040709@aim.com> On 02/03/2014 12:41 PM, Gábor Lehel wrote: > Possibly, but it's not particularly well-trodden ground (I think Ur/Web > might have something like it?). > > And would you really want to write `HashMap`? Naturally it would be optional subject to some rules. I think this has nice parallels with the yet non-existent keyword/default function arguments. -SL From niko at alum.mit.edu Mon Feb 3 11:33:54 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Mon, 3 Feb 2014 14:33:54 -0500 Subject: [rust-dev] Syntax for custom type bounds In-Reply-To: References: <20140201125738.GA21688@Mr-Bennet> Message-ID: <20140203193354.GA2334@Mr-Bennet> On Sat, Feb 01, 2014 at 03:42:45PM -0800, Vadim wrote: > Since &'a Foo currently means "the return value is a reference into > something that has lifetime 'a", 'a Foo feels like a natural extension > for saying "the return value is a reference-like thing whose safety depends > on something that has lifetime 'a still being around". > Foo<'a,T>, on the other hand... it is not obvious to me why would it > necessarily mean that. It does not, in fact, *necessarily* mean that, though certainly it most commonly does. It will depend on the definition of the `Foo` type and how the lifetime parameter is used within the type, as you say. It seems then that you did not mean for `'a Foo` to be syntactic sugar for `Foo<'a, T>` but rather a new kind of type: kind of a "by value that is limited to 'a". > I've been around Rust for almost a year now, and certainly since the time > the current lifetime notation has been introduced, and I *still* could not > explain to somebody why a lifetime parameter appearing among the type > parameters of a trait or a struct refers to the lifetime of that trait or > struct. 
As I wrote above, a lifetime parameter does not by itself have any effect, just like a type parameter. Both type and lifetime parameters are simply substituted into the struct body, and any limitations arise from there. That is, if I have struct Foo<'a> { x: &'a int } then `Foo<'xyz>` is limited to the lifetime `'xyz` because `Foo` contains a field `x` whose type is (after substitution) `&'xyz int`, and that *field* cannot escape the lifetime `'xyz`. Thus, there are in fact corner cases where the lifetime parameter has no effect, and so it is not the case that `SomeType<'xyz>` is necessarily limited to `'xyz` (the most obvious being when 'xyz is unused within the struct body, as you suggest). Niko From ecreed at cs.washington.edu Mon Feb 3 14:20:17 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Mon, 3 Feb 2014 14:20:17 -0800 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: Actually this isn't the case. fn foo<T: 'static>(t: T) -> TypeId { t.get_type_id() } compiles just fine, but fn bar<T>(t: T) -> TypeId { t.get_type_id() } fails with "error: instantiating a type parameter with incompatible type `T`, which does not fulfill `'static`". Just <T> does not imply <T: 'static>, so parametricity is not violated. I had the same thought about making size_of and friends unsafe functions. I think that might be a reasonable idea. On Mon, Feb 3, 2014 at 5:35 AM, Gábor Lehel wrote: > Just because Any is a trait doesn't mean it doesn't break parametricity. > Look at this: > > > http://static.rust-lang.org/doc/master/src/std/home/rustbuild/src/rust-buildbot/slave/doc/build/src/libstd/any.rs.html#37-63 > > Because we have `impl<T: 'static> Any for T`, it can be used with *any > type* (except borrowed data), including type parameters, whether or not > they declare the `T: Any` bound explicitly (which is essentially redundant > in this situation). 
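[Editor's note: this machinery survives in modern Rust as `std::any::{Any, TypeId}`, where `Any` has a `'static` supertrait bound, so Eric's `foo` corresponds to something like the following sketch (modern syntax; `type_of` is a hypothetical helper, and `get_type_id` is now `Any::type_id`):]

```rust
use std::any::{Any, TypeId};

// Any has a 'static supertrait, so `T: Any` implies `T: 'static` and
// TypeId::of::<T>() is available; a bare `T` parameter would not compile here.
fn type_of<T: Any>(_t: &T) -> TypeId {
    TypeId::of::<T>()
}

fn main() {
    assert_eq!(type_of(&1i32), TypeId::of::<i32>());
    assert!(type_of(&1i32) != type_of(&1u32));
    // Any also gives the checked downcasting (dynamic typing) discussed above.
    let boxed: Box<dyn Any> = Box::new(42i32);
    assert_eq!(boxed.downcast_ref::<i32>(), Some(&42));
}
```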
> > The proper thing would be for the compiler to generate an `impl Any for > MyType` for each individual type separately, rather than a single generic > impl which is valid for all types. > > I also think we should guarantee parametricity for safe code and make > `size_of` an unsafe fn. Its legitimate uses in unsafe code (e.g. smart > pointers) are well encapsulated and don't expose parametricity violations, > and I don't believe safe code has a legitimate reason to use it (does it?). > > > On Sun, Feb 2, 2014 at 3:27 AM, Eric Reed wrote: > >> I'm going to respond to Any and size_of separately because there's a >> significant difference IMO. >> >> It's true that Any and trait bounds on type parameters in general can let >> function behavior depend on the passed type, but only in the specific >> behavior defined by the trait. Everything that's not a trait function is >> still independent of the passed type (contrast this with a setup where this >> wasn't true. `fn foo() -> int' could return 2i for int and spin up a >> tetris game then crash for uint). Any just happens to be powerful enough to >> allow complete variance, which is expected since it's just dynamic typing, >> but there's an important distinction still: behavior variance because of >> Any *is* part of the function because you need to do explicit type tests. >> >> I wasn't aware of mem::size_of before, but I'm rather annoyed to find out >> we've started adding bare A -> B functions since it breaks parametricity. >> I'd much rather put size_of in a trait, at which point it's just a weaker >> version of Any. >> Being able to tell how a function's behavior might vary just from the >> type signature is a very nice property, and I'd like Rust to keep it. >> >> Now, onto monomorphization. 
>> I agree that distinguishing static and dynamic dispatch is important for >> performance characterization, but static dispatch != monomorphization (or >> if it currently does, then it probably shouldn't) because not all >> statically dispatched code needs to be monomorphizied. Consider a function >> like this: >> >> fn foo(ox: Option<~A>, f: |~A| -> ~B) -> Option<~B> { >> match ox { >> Some(x) => Some(f(x)), >> None => None, >> } >> } >> >> It's quite generic, but AFAIK there's no need to monomorphize it for >> static dispatch. It uses a constant amount of stack space (not counting >> what `f' uses when called) and could run the exact same code for any types >> A or B (check discriminant, potentially call a function pointer, and >> return). I would guess most cases require monomorphization, but I consider >> universal monomorphization a way of implementing static dispatch (as >> opposed to partial monomorphization). >> I agree that understanding monomorphization is important for >> understanding the performance characteristics of code generated by *rustc*, >> but rustc != Rust. >> Unless universal monomorphization for static dispatch makes its way into >> the Rust language spec, I'm going to consider it an implementation detail >> for rustc. >> >> >> >> On Sat, Feb 1, 2014 at 3:31 PM, Corey Richardson wrote: >> >>> On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed >>> wrote: >>> > Responses inlined. >>> > >>> >> >>> >> Hey all, >>> >> >>> >> bjz and I have worked out a nice proposal[0] for a slight syntax >>> >> change, reproduced here. It is a breaking change to the syntax, but it >>> >> is one that I think brings many benefits. >>> >> >>> >> Summary >>> >> ======= >>> >> >>> >> Change the following syntax: >>> >> >>> >> ``` >>> >> struct Foo { ... } >>> >> impl Trait for Foo { ... } >>> >> fn foo(...) { ... } >>> >> ``` >>> >> >>> >> to: >>> >> >>> >> ``` >>> >> forall struct Foo { ... } >>> >> forall impl Trait for Foo { ... } >>> >> forall fn foo(...) { ... 
} >>> >> ``` >>> >> >>> >> The Problem >>> >> =========== >>> >> >>> >> The immediate, and most pragmatic, problem is that in today's Rust one >>> >> cannot >>> >> easily search for implementations of a trait. Why? `grep 'impl >>> Clone'` is >>> >> itself not sufficient, since many types have parametric polymorphism. >>> Now >>> >> I >>> >> need to come up with some sort of regex that can handle this. An easy >>> >> first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite >>> >> inconvenient to >>> >> type and remember. (Here I ignore the issue of tooling, as I do not >>> find >>> >> the >>> >> argument of "But a tool can do it!" valid in language design.) >>> > >>> > >>> > I think what I've done in the past was just `grep impl | grep Clone'. >>> > >>> >> >>> >> A deeper, more pedagogical problem, is the mismatch between how >>> `struct >>> >> Foo<...> { ... }` is read and how it is actually treated. The >>> >> straightforward, >>> >> left-to-right reading says "There is a struct Foo which, given the >>> types >>> >> ... >>> >> has the members ...". This might lead one to believe that `Foo` is a >>> >> single >>> >> type, but it is not. `Foo` (that is, type `Foo` instantiated with >>> >> type >>> >> `int`) is not the same type as `Foo` (that is, type `Foo` >>> >> instantiated >>> >> with type `uint`). Of course, with a small amount of experience or a >>> very >>> >> simple explanation, that becomes obvious. >>> > >>> > >>> > I strongly disagree with this reasoning. >>> > There IS only one type Foo. It's a type constructor with kind * -> * >>> (where >>> > * means proper type). >>> > Foo and Foo are two different applications of Foo and are >>> proper >>> > types (i.e. *) because Foo is * -> * and both int and uint are *. >>> > Regarding people confusing Foo, Foo and Foo, I think the >>> proposed >>> > forall struct Foo {...} syntax is actually more confusing. 
>>> > With the current syntax, it's never legal to write Foo without type >>> > parameters, but with the proposed syntax it would be. >>> > >>> >>> I've yet to see a proposal for HKT, but with them that interpretation >>> would be valid and indeed make this proposal's argument weaker. >>> >>> >> >>> >> Something less obvious is the treatment of functions. What does `fn >>> >> foo<...>(...) { ... }` say? "There is a function foo which, given >>> types >>> >> ... >>> >> and arguments ..., does the following computation: ..." is not very >>> >> adequate. >>> >> It leads one to believe there is a *single* function `foo`, whereas >>> there >>> >> is >>> >> actually a single `foo` for every substitution of type parameters! >>> This >>> >> also >>> >> holds for implementations (both of traits and of inherent methods). >>> > >>> > >>> > Again, I strongly disagree here. >>> > There IS only one function foo. Some of it's arguments are types. foo's >>> > behavior *does not change* based on the type parameters because of >>> > parametricity. >>> > That the compiler monomporphizes generic functions is just an >>> implementation >>> > detail and doesn't change the semantics of the function. >>> > >>> >>> It can if it uses Any, size_of, etc. eddyb had "integers in the >>> typesystem" by using size_of and [u8, ..N]. Anything using the >>> "properties" of types or the tydescs *will* change for each >>> instantiation. >>> >>> >> >>> >> Another minor problem is that nicely formatting long lists of type >>> >> parameters >>> >> or type parameters with many bounds is difficult. >>> > >>> > >>> > I'm not sure how this proposal would address this problem. All of your >>> > proposed examples are longer than the current syntax equivalents. >>> > >>> >>> The idea is there is an obvious place to insert a newline (after the >>> forall), though bjz would have to comment more on that. 
>>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Mon Feb 3 14:24:45 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Mon, 3 Feb 2014 17:24:45 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Mon, Feb 3, 2014 at 5:20 PM, Eric Reed wrote: > Actually this isn't the case. > > fn foo<T: 'static>(t: T) -> TypeId { > t.get_type_id() > } > > compiles just fine, but > > fn bar<T>(t: T) -> TypeId { > t.get_type_id() > } > > fails with "error: instantiating a type parameter with incompatible type > `T`, which does not fulfill `'static`". Just <T> does not imply <T: > 'static>, so parametricity is not violated. > > I had the same thought about making size_of and friends unsafe functions. I > think that might be a reasonable idea. The 'static bound is there as a workaround for an implementation limitation. In all likelihood, it will no longer be required in the future as it has no fundamental relation to reflection. From glaebhoerl at gmail.com Mon Feb 3 14:49:56 2014 From: glaebhoerl at gmail.com (=?ISO-8859-1?Q?G=E1bor_Lehel?=) Date: Mon, 3 Feb 2014 23:49:56 +0100 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: References: Message-ID: On Mon, Feb 3, 2014 at 11:20 PM, Eric Reed wrote: > Actually this isn't the case. > > fn foo<T: 'static>(t: T) -> TypeId { > t.get_type_id() > } > > compiles just fine, but > > fn bar<T>(t: T) -> TypeId { > t.get_type_id() > } > > fails with "error: instantiating a type parameter with incompatible type > `T`, which does not fulfill `'static`". Just <T> does not imply <T: > 'static>, so parametricity is not violated. > 
I would not expect this to imply "oh, and you can also try casting it to any type". I'm not sure what a precise definition of parametricity is that we could apply here, but I'd be very surprised if this flies. It should mean something like "only information that is provided may be used", not "if no information is provided, nothing may be assumed, but if even a little information is provided, well feel free to do whatever you like". > > I had the same thought about making size_of and friends unsafe functions. > I think that might be a reasonable idea. > > > On Mon, Feb 3, 2014 at 5:35 AM, G?bor Lehel wrote: > >> Just because Any is a trait doesn't mean it doesn't break parametricity. >> Look at this: >> >> >> http://static.rust-lang.org/doc/master/src/std/home/rustbuild/src/rust-buildbot/slave/doc/build/src/libstd/any.rs.html#37-63 >> >> Because we have `impl Any for T`, it can be used with *any >> type* (except borrowed data), including type parameters, whether or not >> they declare the `T: Any` bound explicitly (which is essentially redundant >> in this situation). >> >> The proper thing would be for the compiler to generate an `impl Any for >> MyType` for each individual type separately, rather than a single generic >> impl which is valid for all types. >> >> I also think we should guarantee parametricity for safe code and make >> `size_of` an unsafe fn. Its legitimate uses in unsafe code (e.g. smart >> pointers) are well encapsulated and don't expose parametricity violations, >> and I don't believe safe code has a legitimate reason to use it (does it?). >> >> >> On Sun, Feb 2, 2014 at 3:27 AM, Eric Reed wrote: >> >>> I'm going to respond to Any and size_of separately because there's a >>> significant difference IMO. >>> >>> It's true that Any and trait bounds on type parameters in general can >>> let function behavior depend on the passed type, but only in the specific >>> behavior defined by the trait. 
Everything that's not a trait function is >>> still independent of the passed type (contrast this with a setup where this >>> wasn't true. `fn foo() -> int' could return 2i for int and spin up a >>> tetris game then crash for uint). Any just happens to be powerful enough to >>> allow complete variance, which is expected since it's just dynamic typing, >>> but there's an important distinction still: behavior variance because of >>> Any *is* part of the function because you need to do explicit type tests. >>> >>> I wasn't aware of mem::size_of before, but I'm rather annoyed to find >>> out we've started adding bare A -> B functions since it breaks >>> parametricity. >>> I'd much rather put size_of in a trait, at which point it's just a >>> weaker version of Any. >>> Being able to tell how a function's behavior might vary just from the >>> type signature is a very nice property, and I'd like Rust to keep it. >>> >>> Now, onto monomorphization. >>> I agree that distinguishing static and dynamic dispatch is important for >>> performance characterization, but static dispatch != monomorphization (or >>> if it currently does, then it probably shouldn't) because not all >>> statically dispatched code needs to be monomorphizied. Consider a function >>> like this: >>> >>> fn foo(ox: Option<~A>, f: |~A| -> ~B) -> Option<~B> { >>> match ox { >>> Some(x) => Some(f(x)), >>> None => None, >>> } >>> } >>> >>> It's quite generic, but AFAIK there's no need to monomorphize it for >>> static dispatch. It uses a constant amount of stack space (not counting >>> what `f' uses when called) and could run the exact same code for any types >>> A or B (check discriminant, potentially call a function pointer, and >>> return). I would guess most cases require monomorphization, but I consider >>> universal monomorphization a way of implementing static dispatch (as >>> opposed to partial monomorphization). 
>>> I agree that understanding monomorphization is important for >>> understanding the performance characteristics of code generated by *rustc*, >>> but rustc != Rust. >>> Unless universal monomorphization for static dispatch makes its way into >>> the Rust language spec, I'm going to consider it an implementation detail >>> for rustc. >>> >>> >>> >>> On Sat, Feb 1, 2014 at 3:31 PM, Corey Richardson wrote: >>> >>>> On Sat, Feb 1, 2014 at 6:24 PM, Eric Reed >>>> wrote: >>>> > Responses inlined. >>>> > >>>> >> >>>> >> Hey all, >>>> >> >>>> >> bjz and I have worked out a nice proposal[0] for a slight syntax >>>> >> change, reproduced here. It is a breaking change to the syntax, but >>>> it >>>> >> is one that I think brings many benefits. >>>> >> >>>> >> Summary >>>> >> ======= >>>> >> >>>> >> Change the following syntax: >>>> >> >>>> >> ``` >>>> >> struct Foo { ... } >>>> >> impl Trait for Foo { ... } >>>> >> fn foo(...) { ... } >>>> >> ``` >>>> >> >>>> >> to: >>>> >> >>>> >> ``` >>>> >> forall struct Foo { ... } >>>> >> forall impl Trait for Foo { ... } >>>> >> forall fn foo(...) { ... } >>>> >> ``` >>>> >> >>>> >> The Problem >>>> >> =========== >>>> >> >>>> >> The immediate, and most pragmatic, problem is that in today's Rust >>>> one >>>> >> cannot >>>> >> easily search for implementations of a trait. Why? `grep 'impl >>>> Clone'` is >>>> >> itself not sufficient, since many types have parametric >>>> polymorphism. Now >>>> >> I >>>> >> need to come up with some sort of regex that can handle this. An easy >>>> >> first-attempt is `grep 'impl(<.*?>)? Clone'` but that is quite >>>> >> inconvenient to >>>> >> type and remember. (Here I ignore the issue of tooling, as I do not >>>> find >>>> >> the >>>> >> argument of "But a tool can do it!" valid in language design.) >>>> > >>>> > >>>> > I think what I've done in the past was just `grep impl | grep Clone'. 
>>>> > >>>> >> >>>> >> A deeper, more pedagogical problem, is the mismatch between how >>>> `struct >>>> >> Foo<...> { ... }` is read and how it is actually treated. The >>>> >> straightforward, >>>> >> left-to-right reading says "There is a struct Foo which, given the >>>> types >>>> >> ... >>>> >> has the members ...". This might lead one to believe that `Foo` is a >>>> >> single >>>> >> type, but it is not. `Foo` (that is, type `Foo` instantiated >>>> with >>>> >> type >>>> >> `int`) is not the same type as `Foo` (that is, type `Foo` >>>> >> instantiated >>>> >> with type `uint`). Of course, with a small amount of experience or a >>>> very >>>> >> simple explanation, that becomes obvious. >>>> > >>>> > >>>> > I strongly disagree with this reasoning. >>>> > There IS only one type Foo. It's a type constructor with kind * -> * >>>> (where >>>> > * means proper type). >>>> > Foo and Foo are two different applications of Foo and are >>>> proper >>>> > types (i.e. *) because Foo is * -> * and both int and uint are *. >>>> > Regarding people confusing Foo, Foo and Foo, I think the >>>> proposed >>>> > forall struct Foo {...} syntax is actually more confusing. >>>> > With the current syntax, it's never legal to write Foo without type >>>> > parameters, but with the proposed syntax it would be. >>>> > >>>> >>>> I've yet to see a proposal for HKT, but with them that interpretation >>>> would be valid and indeed make this proposal's argument weaker. >>>> >>>> >> >>>> >> Something less obvious is the treatment of functions. What does `fn >>>> >> foo<...>(...) { ... }` say? "There is a function foo which, given >>>> types >>>> >> ... >>>> >> and arguments ..., does the following computation: ..." is not very >>>> >> adequate. >>>> >> It leads one to believe there is a *single* function `foo`, whereas >>>> there >>>> >> is >>>> >> actually a single `foo` for every substitution of type parameters! 
>>>> This >>>> >> also >>>> >> holds for implementations (both of traits and of inherent methods). >>>> > >>>> > >>>> > Again, I strongly disagree here. >>>> > There IS only one function foo. Some of it's arguments are types. >>>> foo's >>>> > behavior *does not change* based on the type parameters because of >>>> > parametricity. >>>> > That the compiler monomporphizes generic functions is just an >>>> implementation >>>> > detail and doesn't change the semantics of the function. >>>> > >>>> >>>> It can if it uses Any, size_of, etc. eddyb had "integers in the >>>> typesystem" by using size_of and [u8, ..N]. Anything using the >>>> "properties" of types or the tydescs *will* change for each >>>> instantiation. >>>> >>>> >> >>>> >> Another minor problem is that nicely formatting long lists of type >>>> >> parameters >>>> >> or type parameters with many bounds is difficult. >>>> > >>>> > >>>> > I'm not sure how this proposal would address this problem. All of your >>>> > proposed examples are longer than the current syntax equivalents. >>>> > >>>> >>>> The idea is there is an obvious place to insert a newline (after the >>>> forall), though bjz would have to comment more on that. >>>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Mon Feb 3 18:00:52 2014 From: alex at crichton.co (Alex Crichton) Date: Mon, 3 Feb 2014 18:00:52 -0800 Subject: [rust-dev] Handling I/O errors Message-ID: Greetings Rustaceans! Upon updating your nightly builds tonight some of you may realize that all I/O code will fail to compile. Fear not, this simply means that #11946 has landed! The summary of this change is the same as its title, "remove io::io_error". This is quite a far-reaching change, despite its simple summary. 
All I/O now returns a value of type `io::IoResult<T>` which is just a typedef to `Result<T, IoError>`. By returning a Result from all function calls, it's not much cleaner to handle errors (condition syntax is quite awkward), less overhead (registering a condition handler was expensive), and clearer what I/O should do now (should you raise? should you return a "0 value"?). Handling errors is always a tricky situation, so the compiler now provides you two tools to assist you in handling errors: 1. The new unused_must_use lint. This lint mode will tell you when you don't use an IoResult. The purpose of this lint is to help you find out where in your program you're silently ignoring errors (often by accident). If you want even more warnings, you can turn on the unused_result lint which will warn about *all* unused results, not just those of type Result/IoResult. 2. The new if_ok!() macro. This macro has a fairly simple definition [0], and the idea is to return-early if an Err is encountered, and otherwise unwrap the Ok value. Some sample usage looks like: fn fun1() -> io::IoResult<int> { ... } fn fun2() -> io::IoResult<uint> { ... } fn fun3() -> io::IoResult<()> { ... } fn foo() -> io::IoResult<uint> { if_ok!(fun3()); let val = if_ok!(fun1()) as uint + if_ok!(fun2()); Ok(val) } These two tools are in place to help you handle errors unobtrusively as well as identify locations where you've forgotten to handle errors. Sorry about the awful rebasings you'll have to do in advance, but it's worth it! [0] - https://github.com/mozilla/rust/blob/master/src/libstd/macros.rs#L202-L204 From alex at crichton.co Mon Feb 3 18:19:42 2014 From: alex at crichton.co (Alex Crichton) Date: Mon, 3 Feb 2014 18:19:42 -0800 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: > By returning a Result from all function calls, it's not much cleaner > to handle errors Oops, wrong word there, I meant to indicate that it *is* much cleaner to handle errors with Result rather than conditions. 
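[Editor's note: `if_ok!()` was later renamed `try!()` and eventually became the `?` operator, with the same early-return-on-Err semantics. A sketch of Alex's sample in modern syntax; the function bodies, elided in the original, are filled in with placeholder `Ok` values for illustration:]

```rust
use std::io;

// Placeholder bodies; the shapes mirror Alex's if_ok!() sample above.
fn fun1() -> io::Result<i32> { Ok(2) }
fn fun2() -> io::Result<usize> { Ok(3) }
fn fun3() -> io::Result<()> { Ok(()) }

fn foo() -> io::Result<usize> {
    fun3()?; // early-returns the Err, otherwise unwraps Ok, like if_ok!()
    let val = fun1()? as usize + fun2()?;
    Ok(val)
}

fn main() {
    assert_eq!(foo().unwrap(), 5);
    println!("foo() = {:?}", foo());
}
```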
From banderson at mozilla.com Mon Feb 3 18:20:28 2014 From: banderson at mozilla.com (Brian Anderson) Date: Mon, 03 Feb 2014 18:20:28 -0800 Subject: [rust-dev] Nick Cameron joins the Rust team at Mozilla Message-ID: <52F04E6C.7020105@mozilla.com> Hi Rusties, I'm just thrilled to announce today that Nick Cameron (nrc) has joined Mozilla's Rust team full-time. Nick has a PhD in programming language theory from Imperial College London and has been hacking on Gecko's graphics and layout for two years, but now that he's all ours you'll be seeing him eradicate critical Rust bugs by the dozens. Good luck, Nick, and welcome to the party. Regards, Brian From alex at crichton.co Mon Feb 3 18:51:15 2014 From: alex at crichton.co (Alex Crichton) Date: Mon, 3 Feb 2014 18:51:15 -0800 Subject: [rust-dev] Nick Cameron joins the Rust team at Mozilla In-Reply-To: <52F04E6C.7020105@mozilla.com> References: <52F04E6C.7020105@mozilla.com> Message-ID: Welcome Nick! I can't wait to see that 1.0 issue count go down! On Mon, Feb 3, 2014 at 6:20 PM, Brian Anderson wrote: > Hi Rusties, > > I'm just thrilled to announce today that Nick Cameron (nrc) has joined > Mozilla's Rust team full-time. Nick has a PhD in programming language theory > from Imperial College London and has been hacking on Gecko's graphics and > layout for two years, but now that he's all ours you'll be seeing him > eradicate critical Rust bugs by the dozens. Good luck, Nick, and welcome to > the party. > > Regards, > Brian > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From palmercox at gmail.com Mon Feb 3 19:01:32 2014 From: palmercox at gmail.com (Palmer Cox) Date: Mon, 3 Feb 2014 22:01:32 -0500 Subject: [rust-dev] Will smart-pointer deref operator allow making iter::ByRef more generic? 
Message-ID: Hi, I was looking at some structs in Rust that implement the Decorator pattern - structs that take some value, wrap it, and then provide the same interface as the value that they are wrapping while providing some extra behavior. BufferedReader is an example here - it takes another Reader and provides buffering functionality while still presenting the same interface. Anyway, I noticed that there seem to be two different patterns that are being used in Rust to do this - some structs take ownership of the value to be wrapped while other structs take a borrowed reference to the object to be wrapped. Both approaches have advantages and disadvantages. If you take ownership you have the advantage that the resulting struct is Send, however if all you have is a reference to the object you want to wrap, it won't work. The following, for example, won't compile: let mut r = File::open(&Path::new("/")).unwrap(); let br = BufferedReader::new(&mut r); If the wrapping struct takes a reference, the example above will work. However, the resulting struct won't be Send. So, in both cases, the result is that some use-cases won't work. I noticed that the iter module addressed this issue with the ByRef struct. All Iterators that wrap other iterators take ownership of the value to be wrapped. If you want to prevent that, you can use ByRef (which holds a reference to the value to be wrapped) and pass it to the wrapping Iterator which takes ownership of the ByRef instance as opposed to the iterator you are wrapping. The problem with ByRef, though, is that it only works for Iterators. It's possible to implement different versions of ByRef for other types as well. However, this seems a bit tedious. This is a relatively common and fairly useful pattern, so it would seem that it would be nice to be able to support all of the use cases for all types.
Anyway, it's my understanding that future plans for Rust call for a custom dereference trait to be added to allow for the implementation of custom smart pointers. What I was wondering is whether that trait would also allow for making iter::ByRef into a smart pointer that could provide this type of behavior for any type as opposed to just Iterators. If that's the case, it seems like all of the structs that implement the decorator pattern could be modified to take the value they are wrapping by value and let the caller use the ByRef struct if they would prefer not to transfer ownership of the value to be wrapped. The upsides are that it seems that this would allow all use cases to be met and it would make all the Decorators consistent; the downside is that it would make wrapping a borrowed reference a little bit more verbose. I don't think that would be all that bad, though. This doesn't look too bad to me: let mut r = File::open(&Path::new("/")).unwrap(); let br = BufferedReader::new(ByRef::new(&mut r)); Thoughts? Things I'm wrong on? Thanks, -Palmer Cox -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkuehn at cmu.edu Mon Feb 3 19:11:03 2014 From: tkuehn at cmu.edu (Tim Kuehn) Date: Mon, 3 Feb 2014 19:11:03 -0800 Subject: [rust-dev] Nick Cameron joins the Rust team at Mozilla In-Reply-To: References: <52F04E6C.7020105@mozilla.com> Message-ID: Fantastic! Good luck, Nick! On Mon, Feb 3, 2014 at 6:51 PM, Alex Crichton wrote: > Welcome Nick! > > I can't wait to see that 1.0 issue count go down! > > On Mon, Feb 3, 2014 at 6:20 PM, Brian Anderson > wrote: > > Hi Rusties, > > > > I'm just thrilled to announce today that Nick Cameron (nrc) has joined > > Mozilla's Rust team full-time. Nick has a PhD in programming language > theory > > from Imperial College London and has been hacking on Gecko's graphics and > > layout for two years, but now that he's all ours you'll be seeing him > > eradicate critical Rust bugs by the dozens.
Good luck, Nick, and welcome > to > > the party. > > > > Regards, > > Brian > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From palmercox at gmail.com Mon Feb 3 19:25:33 2014 From: palmercox at gmail.com (Palmer Cox) Date: Mon, 3 Feb 2014 22:25:33 -0500 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: I like this change quite a bit. However, is there a more succinct way than the following to take an action on EOF: match reader.read(buff) { Ok(cnt) => { // Do something } Err(io_error) => match io_error.kind { EndOfFile => { // Do something for EOF } _ => return Err(io_error) } } -Palmer Cox On Mon, Feb 3, 2014 at 9:19 PM, Alex Crichton wrote: > > By returning a Result from all function calls, it's not much cleaner > > to handle errors > > Oops, wrong word there, I meant to indicate that it *is* much cleaner > to handle errors with Result rather than conditions. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Mon Feb 3 19:32:45 2014 From: alex at crichton.co (Alex Crichton) Date: Mon, 3 Feb 2014 19:32:45 -0800 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: I'd recommend one of two solutions, one is a match guard: match reader.read(buf) { Ok(cnt) => { /* ... */ } Err(ref e) if e.kind == io::EndOfFile => { /* ... */ } Err(e) => return Err(e) } and the other would be to home-grow your own macro if you find yourself writing the same pattern frequently. 
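[Editorial note: for readers following along in modern Rust, the same match-guard shape looks like this. read_four is an illustrative helper, and today's std::io reports a short read_exact as ErrorKind::UnexpectedEof rather than a separate EndOfFile kind.]

```rust
use std::io::{self, ErrorKind, Read};

// The match-guard shape suggested above, in modern std::io terms:
// an UnexpectedEof is translated into Ok(None), every other error is
// passed through to the caller unchanged.
fn read_four<R: Read>(reader: &mut R) -> io::Result<Option<[u8; 4]>> {
    let mut buf = [0u8; 4];
    match reader.read_exact(&mut buf) {
        Ok(()) => Ok(Some(buf)),
        Err(ref e) if e.kind() == ErrorKind::UnexpectedEof => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() {
    // &[u8] implements Read, so slices make convenient test readers.
    let mut short: &[u8] = &[1, 2];
    assert_eq!(read_four(&mut short).unwrap(), None);

    let mut long: &[u8] = &[1, 2, 3, 4, 5];
    assert_eq!(read_four(&mut long).unwrap(), Some([1, 2, 3, 4]));
}
```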
On Mon, Feb 3, 2014 at 7:25 PM, Palmer Cox wrote: > I like this change quite a bit. However, is there a more succinct way than > the following to take an action on EOF: > > match reader.read(buff) { > Ok(cnt) => { > // Do something > } > Err(io_error) => match io_error.kind { > EndOfFile => { > // Do something for EOF > } > _ => return Err(io_error) > } > } > > -Palmer Cox > > > > On Mon, Feb 3, 2014 at 9:19 PM, Alex Crichton wrote: >> >> > By returning a Result from all function calls, it's not much cleaner >> > to handle errors >> >> Oops, wrong word there, I meant to indicate that it *is* much cleaner >> to handle errors with Result rather than conditions. >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > From sfackler at gmail.com Mon Feb 3 19:36:27 2014 From: sfackler at gmail.com (Steven Fackler) Date: Mon, 3 Feb 2014 22:36:27 -0500 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: You can also use a nested pattern: match reader.read(buf) { Ok(cnt) => { /* stuff */ } Err(IoError { kind: EndOfFile, .. }) => { /* eof stuff */ } Err(e) => return Err(e) } Steven Fackler On Mon, Feb 3, 2014 at 10:32 PM, Alex Crichton wrote: > I'd recommend one of two solutions, one is a match guard: > > match reader.read(buf) { > Ok(cnt) => { /* ... */ } > Err(ref e) if e.kind == io::EndOfFile => { /* ... */ } > Err(e) => return Err(e) > } > > and the other would be to home-grow your own macro if you find > yourself writing the same pattern frequently.
However, is there a more succinct way > than > > the following to take an action on EOF: > > > > match reader.read(buff) { > > Ok(cnt) => { > > // Do something > > } > > Err(io_error) => match io_error.kind { > > EndOfFile => { > > // Do something for EOF > > } > > _ => return Err(io_error) > > } > > } > > > > -Palmer Cox > > > > > > > > On Mon, Feb 3, 2014 at 9:19 PM, Alex Crichton wrote: > >> > >> > By returning a Result from all function calls, it's not much cleaner > >> > to handle errors > >> > >> Oops, wrong word there, I meant to indicate that it *is* much cleaner > >> to handle errors with Result rather than conditions. > >> _______________________________________________ > >> Rust-dev mailing list > >> Rust-dev at mozilla.org > >> https://mail.mozilla.org/listinfo/rust-dev > > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From palmercox at gmail.com Mon Feb 3 19:44:55 2014 From: palmercox at gmail.com (Palmer Cox) Date: Mon, 3 Feb 2014 22:44:55 -0500 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: Cool, thanks! -Palmer Cox On Mon, Feb 3, 2014 at 10:36 PM, Steven Fackler wrote: > You can also use a nested pattern: > > match reader.read(buf) { > Ok(cnt) => { /* stuff */ } > Err(IoError { kind: EndOfFile, .. } => { /* eof stuff */ } > Err(e) => return Err(e) > } > > Steven Fackler > > > On Mon, Feb 3, 2014 at 10:32 PM, Alex Crichton wrote: > >> I'd recommend one of two solutions, one is a match guard: >> >> match reader.read(buf) { >> Ok(cnt) => { /* ... */ } >> Err(ref e) if e.kind == io::EndOfFile => { /* ... */ } >> Err(e) => return Err(e) >> } >> >> and the other would be to home-grow your own macro if you find >> yourself writing the same pattern frequently. 
>> >> On Mon, Feb 3, 2014 at 7:25 PM, Palmer Cox wrote: >> > I like this change quite a bit. However, is there a more succinct way >> than >> > the following to take an action on EOF: >> > >> > match reader.read(buff) { >> > Ok(cnt) => { >> > // Do something >> > } >> > Err(io_error) => match io_error.kind { >> > EndOfFile => { >> > // Do something for EOF >> > } >> > _ => return Err(io_error) >> > } >> > } >> > >> > -Palmer Cox >> > >> > >> > >> > On Mon, Feb 3, 2014 at 9:19 PM, Alex Crichton wrote: >> >> >> >> > By returning a Result from all function calls, it's not much cleaner >> >> > to handle errors >> >> >> >> Oops, wrong word there, I meant to indicate that it *is* much cleaner >> >> to handle errors with Result rather than conditions. >> >> _______________________________________________ >> >> Rust-dev mailing list >> >> Rust-dev at mozilla.org >> >> https://mail.mozilla.org/listinfo/rust-dev >> > >> > >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bill_myers at outlook.com Mon Feb 3 19:47:08 2014 From: bill_myers at outlook.com (Bill Myers) Date: Tue, 4 Feb 2014 03:47:08 +0000 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: , , , Message-ID: Are we sure that this is the correct design, as opposed to having read return 0 for EoF or perhaps returning None with a Result<Option<uint>, IoError> return type? After all, EOF is unlike all other I/O errors, since it is guaranteed to happen on all readers, and the fact that it needs to be special cased might be an indication that the design is wrong. Also, in addition to "raw" eof, a partial read due to end of file causes the same issues, so it seems that code needs to have two match conditions to handle eof, while it would only need a single one if eof was represented by Ok(0).
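[Editorial note: the Ok(0)-for-EOF design suggested here is, for what it's worth, the one today's std::io::Read ended up with. A minimal modern-Rust sketch of the single-match-arm loop it enables:]

```rust
use std::io::{self, Read};

// With Ok(0) meaning end-of-file, one loop handles both data and EOF:
// no separate error arm for the end-of-stream case is needed.
fn count_bytes<R: Read>(reader: &mut R) -> io::Result<u64> {
    let mut total = 0u64;
    let mut buf = [0u8; 8];
    loop {
        match reader.read(&mut buf)? {
            0 => return Ok(total), // EOF: the "single match condition"
            n => total += n as u64,
        }
    }
}

fn main() {
    // &[u8] implements Read and reports EOF as Ok(0) once exhausted.
    let mut data: &[u8] = &[7u8; 20];
    assert_eq!(count_bytes(&mut data).unwrap(), 20);
}
```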
From bill_myers at outlook.com Mon Feb 3 19:49:26 2014 From: bill_myers at outlook.com (Bill Myers) Date: Tue, 4 Feb 2014 03:49:26 +0000 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: , , , , , , , Message-ID: > it is guaranteed to happen on all readers I meant all finite readers, such as those for normal disk files. From bill_myers at outlook.com Mon Feb 3 20:04:20 2014 From: bill_myers at outlook.com (Bill Myers) Date: Tue, 4 Feb 2014 04:04:20 +0000 Subject: [rust-dev] Will smart-pointer deref operator allow making iter::ByRef more generic? In-Reply-To: References: Message-ID: I don't think so, because the fact that the particular instance of T implements the Deref trait cannot have any effect on the decorator code, since it's not in the bounds for T. What instead would work is to change the language so that if type Type implements Trait and all Trait methods take &self or &mut self (as opposed to by value self or ~self), then an implementation of Trait for &'a mut Type is automatically generated (with the obvious implementation). Likewise if all Trait methods take &self, then an implementation of Trait for &'a Type is also automatically generated. Then what you want to do will just work without the need of any wrapper or special syntax. One could then, as an additional step, automatically generate an implementation of Trait for MutDeref if Trait is implemented by &mut Type (possibly due to the above technique), but this would not be required for the example. From ben.striegel at gmail.com Mon Feb 3 21:12:18 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Tue, 4 Feb 2014 00:12:18 -0500 Subject: [rust-dev] Nick Cameron joins the Rust team at Mozilla In-Reply-To: References: <52F04E6C.7020105@mozilla.com> Message-ID: Congratulations! On Mon, Feb 3, 2014 at 10:11 PM, Tim Kuehn wrote: > Fantastic! Good luck, Nick! > > > On Mon, Feb 3, 2014 at 6:51 PM, Alex Crichton wrote: > >> Welcome Nick! 
>> >> I can't wait to see that 1.0 issue count go down! >> >> On Mon, Feb 3, 2014 at 6:20 PM, Brian Anderson >> wrote: >> > Hi Rusties, >> > >> > I'm just thrilled to announce today that Nick Cameron (nrc) has joined >> > Mozilla's Rust team full-time. Nick has a PhD in programming language >> theory >> > from Imperial College London and has been hacking on Gecko's graphics >> and >> > layout for two years, but now that he's all ours you'll be seeing him >> > eradicate critical Rust bugs by the dozens. Good luck, Nick, and >> welcome to >> > the party. >> > >> > Regards, >> > Brian >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leebraid at gmail.com Tue Feb 4 00:29:26 2014 From: leebraid at gmail.com (Lee Braiden) Date: Tue, 04 Feb 2014 08:29:26 +0000 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: , , , , , , , Message-ID: <52F0A4E6.8070103@gmail.com> On 04/02/14 03:49, Bill Myers wrote: >> it is guaranteed to happen on all readers > I meant all finite readers, such as those for normal disk files. What are you getting at here, Bill? I thought you intended the opposite: to hint that it could happen a lot on (async) infinite streams.
-- Lee From leebraid at gmail.com Tue Feb 4 00:37:37 2014 From: leebraid at gmail.com (Lee Braiden) Date: Tue, 04 Feb 2014 08:37:37 +0000 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: , , , Message-ID: <52F0A6D1.4060602@gmail.com> On 04/02/14 03:47, Bill Myers wrote: > Are we sure that this is the correct design, as opposed to having read return 0 for EoF or perhaps returning None with a Result<Option<uint>, IoError> return type? > > After all, EOF is unlike all other I/O errors, since it is guaranteed to happen on all readers, and the fact that it needs to be special cased might be an indication that the design is wrong. > > Also, in addition to "raw" eof, a partial read due to end of file causes the same issues, so it seems that code needs to have two match conditions to handle eof, while it would only need a single one if eof was represented by Ok(0). This makes a lot of sense. Although I wonder if the issue isn't more of a distinction between two use-cases: read_up_to_n_bytes(stream_buf, n) vs: read_n_bytes(my_struct, n) In the first, you intend to only read as much as possible, within some limit, then decode it. In the second, you intend to read a specific amount, because that's what's REQUIRED by your file format spec. Another approach I've used for handling this is to distinguish: read(buf, n): definitely read the next n bytes, or fail -- used when you know what should be next. This moves the cursor, and probably consumes internal file buffers peek(buf, n): attempt to read n bytes, if available, for later decoding -- used for lookahead, when you don't know what's next, and want to check. This does NOT move the cursor, and possibly buffers more data for a later read() Note that this approach works well for buffered infinite streams, as well as files.
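[Editorial note: the read/peek split described above can be sketched with BufRead from modern std::io, where fill_buf looks at buffered bytes without moving the cursor and read_exact consumes them. The function names below are illustrative, not from any proposed API.]

```rust
use std::io::{self, BufRead, BufReader, Read};

// peek: look at the next byte without consuming it. fill_buf buffers
// more data if needed but leaves the cursor where it was; at EOF it
// returns an empty slice, which we map to None.
fn peek_byte<R: Read>(r: &mut BufReader<R>) -> io::Result<Option<u8>> {
    let buf = r.fill_buf()?;
    Ok(buf.first().copied())
}

// read: definitely consume the next byte, or fail.
fn read_byte<R: Read>(r: &mut BufReader<R>) -> io::Result<u8> {
    let mut b = [0u8; 1];
    r.read_exact(&mut b)?; // moves the cursor
    Ok(b[0])
}

fn main() {
    let mut r = BufReader::new(&b"abc"[..]);
    assert_eq!(peek_byte(&mut r).unwrap(), Some(b'a')); // lookahead only
    assert_eq!(read_byte(&mut r).unwrap(), b'a'); // same byte, now consumed
    assert_eq!(read_byte(&mut r).unwrap(), b'b');
}
```

Because the buffering lives in BufReader, the same lookahead works over a socket or other infinite stream, not just a seekable file.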
-- Lee From corey at octayn.net Tue Feb 4 00:45:43 2014 From: corey at octayn.net (Corey Richardson) Date: Tue, 4 Feb 2014 03:45:43 -0500 Subject: [rust-dev] Using Rust with LLVM's Sanitizers Message-ID: Hey all, As you are probably aware, we use LLVM for code generation and optimization. It is a large project, and one of its cooler features is the variety of "sanitizers" it provides (http://clang.llvm.org/docs/index.html#using-clang-as-a-compiler). These are not clang-specific, but indeed can be used with Rust! For example, to use AddressSanitizer (asan) on my system, I do: rustc --passes asan,asan-module --link-args "/usr/bin/../lib/clang/3.4/lib/linux/libclang_rt.asan-x86_64.a" foo.rs I got the path to the necessary asan library by looking at "clang -v -fsanitize=address foo.c" For example, the following program: use std::libc::{free,malloc}; fn main() { unsafe { let p = malloc(42); free(p); free(p); } } Outputs https://gist.github.com/cmr/8800111 Similarly, you can use ThreadSanitizer (tsan) by using `--passes tsan` and replacing asan with tsan in the link-args. Also useful is msan (for detecting uninitialized reads) and tsan (for detecting data races). One caveat is that although LLVM will happily run all of the sanitizer passes for you, you can actually only use one at a time (you will see this when you try to link the multiple *san libraries together). Now, these aren't going to be as useful in Rust given its focus on safety, but they can still be useful when debugging unsafe code, writing bindings, etc. Any time valgrind would be useful, often one of the sanitizers can be used with less overhead. 
Cheers, cmr From j.boggiano at seld.be Tue Feb 4 02:29:22 2014 From: j.boggiano at seld.be (Jordi Boggiano) Date: Tue, 04 Feb 2014 11:29:22 +0100 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52E7931D.70808@gmail.com> References: <52E7071A.50702@mozilla.com> <52E76C1C.50005@gmail.com> <52E77969.1020902@gmail.com> <52E7931D.70808@gmail.com> Message-ID: <52F0C102.8040003@seld.be> On 28/01/2014 12:23, Matthew Frazier wrote: > Have you ever used Composer ? I know that > Jordi Boggiano, one of the authors, has been involved with the Rust > community in the past. Some Composer features that I think are critical > for the new Rust package manager include: > > [snip] > > If I had more time and more Rust experience, I would be interested in > implementing a Composer-like package manager for Rust. Unfortunately I > have little of both. :-( I would be more than happy to give feedback to any new rustpkg plans (I did already talk with Tim about it last summer). I don't have time to go through the entire thread here, but from what I see it seems to me many systems devs here have a pretty old fashioned idea of what a good dependency manager can do in terms of ease of use and feature set. I just hope whoever starts working on a new spec announces it clearly here in a new thread so it does not go unnoticed. 
I'd love to help but I already have enough problems maintaining one package manager, for now anyway :) Cheers -- Jordi Boggiano @seldaek - http://nelm.io/jordi From bascule at gmail.com Tue Feb 4 02:36:27 2014 From: bascule at gmail.com (Tony Arcieri) Date: Tue, 4 Feb 2014 02:36:27 -0800 Subject: [rust-dev] Deprecating rustpkg In-Reply-To: <52F0C102.8040003@seld.be> References: <52E7071A.50702@mozilla.com> <52E76C1C.50005@gmail.com> <52E77969.1020902@gmail.com> <52E7931D.70808@gmail.com> <52F0C102.8040003@seld.be> Message-ID: On Tue, Feb 4, 2014 at 2:29 AM, Jordi Boggiano wrote: > I just hope whoever starts working on a new spec announces it clearly THIS -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.ronacher at active-4.com Tue Feb 4 02:10:34 2014 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Tue, 04 Feb 2014 10:10:34 +0000 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: <52F0BC9A.90400@active-4.com> Hi, Awesome. I have been waiting for this for a long time :D On 04/02/2014 02:00, Alex Crichton wrote: > 2. The new if_ok!() macro. This macro has a fairly simple definition > [0], and the idea is to return-early if an Err is encountered, and > otherwise unwrap the Ok value. Some sample usage looks like: This opens the question of how to compose errors or handle them if they are not self-contained. For instance, think of an SSL library that wraps a socket. All of a sudden it has to return all the IO errors (as it's still doing IO) but it also has to return additional SSL errors (like a failed handshake). Right now that does not work for two reasons: a) the IO interface is a trait, so you cannot change the signature. b) there is no nice way to compose errors. What's the general suggested solution for users for this?
I would love to see some sort of generic error trait that gives some basic interface to extract information about an error (maybe have a way to get an error identifier to test for and a human readable message). Regards, Armin PS.: since we have if_ok! now may I propose also introducing something like this? macro_rules! try_unwrap { ($expr:expr, $err_result:expr) => ( match ($expr) { Some(x) => x, None => { return $err_result }, } ) } From matthias.einwag at googlemail.com Tue Feb 4 03:15:21 2014 From: matthias.einwag at googlemail.com (Matthias Einwag) Date: Tue, 4 Feb 2014 12:15:21 +0100 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: 0 is not necessarily EOF. E.g. it's not EOF - when you requested to read 0 bytes, which is perfectly legal - when your IO Object is in nonblocking mode (yes, that's currently not supported in Rust, but might be in the future) And the Result<Option<uint>, IoError> will only create questions when the Err Result will be used as opposed to the None Option. I don't know if you would associate an Ok(None) with an EOF when you first encounter it. Therefore I think the current Err(EOF) is absolutely fine. 2014-02-04 Bill Myers : > Are we sure that this is the correct design, as opposed to having read > return 0 for EoF or perhaps returning None with a Result<Option<uint>, > IoError> return type? > > After all, EOF is unlike all other I/O errors, since it is guaranteed to > happen on all readers, and the fact that it needs to be special cased might > be an indication that the design is wrong. > > Also, in addition to "raw" eof, a partial read due to end of file causes
> _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko at alum.mit.edu Tue Feb 4 03:57:37 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Tue, 4 Feb 2014 06:57:37 -0500 Subject: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax In-Reply-To: <1391333451.2966.30.camel@vigil> References: <1391333451.2966.30.camel@vigil> Message-ID: <20140204115737.GB3630@Mr-Bennet> On Sun, Feb 02, 2014 at 10:30:51AM +0100, Benjamin Herr wrote: > ... while C# apparently compromises and puts the type parameters between > the function name and value parameter list, but leaves the bounds for > later: > > public static bool Contains(IEnumerable collection, T item) > where T : IComparable; I have actually been meaning to suggest this for Rust, most likely as an addition to the current syntax. The reasoning is that if we were to generalize our traits into the equivalent of Haskell's "multi-parameter type classes", there are things that the current syntax cannot express. Also I think it reads better if there are many bounds. But I'll expand in a separate e-mail, rather than tying the discussion to this thread, which is about Corey's proposal. Niko From bjzaba at yahoo.com.au Tue Feb 4 04:56:15 2014 From: bjzaba at yahoo.com.au (Brendan Zabarauskas) Date: Tue, 4 Feb 2014 23:56:15 +1100 Subject: [rust-dev] Nick Cameron joins the Rust team at Mozilla In-Reply-To: <52F04E6C.7020105@mozilla.com> References: <52F04E6C.7020105@mozilla.com> Message-ID: How exciting. Congratulations Nick! ~Brendan On 4 Feb 2014, at 1:20 pm, Brian Anderson wrote: > Hi Rusties, > > I'm just thrilled to announce today that Nick Cameron (nrc) has joined Mozilla's Rust team full-time. 
Nick has a PhD in programming language theory from Imperial College London and has been hacking on Gecko's graphics and layout for two years, but now that he's all ours you'll be seeing him eradicate critical Rust bugs by the dozens. Good luck, Nick, and welcome to the party. > > Regards, > Brian > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From dbau.pp at gmail.com Tue Feb 4 06:13:13 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Wed, 05 Feb 2014 01:13:13 +1100 Subject: [rust-dev] The rustpkg source lives on Message-ID: <52F0F579.6000806@gmail.com> Hi all, Some people have expressed unhappiness that rustpkg was removed rather than fixed, so I've extracted the last rustpkg source from mozilla/rust and thrown it up on github[1], keeping the git history. I'm personally not planning on doing any particular work on it, but I have copied the docs in, as well as activated travis (just building it, no tests, for now; and not on Rust CI yet either); and am going to attempt to copy the issues tagged A-pkg across from mozilla/rust. If anyone is interested in working on it (or thinks they may be possibly interested at some point), feel free to contact me to be added to the organisation. If there are people interested in being a leader/coordinator of some sort, that would be very nice... since I'm not going to be filling that role. [1]: https://github.com/rustpkg/rustpkg Huon From danielmicay at gmail.com Tue Feb 4 09:23:45 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Tue, 4 Feb 2014 12:23:45 -0500 Subject: [rust-dev] Using Rust with LLVM's Sanitizers In-Reply-To: References: Message-ID: On Tue, Feb 4, 2014 at 3:45 AM, Corey Richardson wrote: > Hey all, > > As you are probably aware, we use LLVM for code generation and > optimization. 
It is a large project, and one of its cooler features is > the variety of "sanitizers" it provides > (http://clang.llvm.org/docs/index.html#using-clang-as-a-compiler). > These are not clang-specific, but indeed can be used with Rust! > > For example, to use AddressSanitizer (asan) on my system, I do: > > rustc --passes asan,asan-module --link-args > "/usr/bin/../lib/clang/3.4/lib/linux/libclang_rt.asan-x86_64.a" foo.rs > > I got the path to the necessary asan library by looking at "clang -v > -fsanitize=address foo.c" > > For example, the following program: > > use std::libc::{free,malloc}; > > fn main() { > unsafe { let p = malloc(42); free(p); free(p); } > } > > Outputs https://gist.github.com/cmr/8800111 > > Similarly, you can use ThreadSanitizer (tsan) by using `--passes tsan` > and replacing asan with tsan in the link-args. Also useful is msan > (for detecting uninitialized reads) and tsan (for detecting data > races). One caveat is that although LLVM will happily run all of the > sanitizer passes for you, you can actually only use one at a time (you > will see this when you try to link the multiple *san libraries > together). > > Now, these aren't going to be as useful in Rust given its focus on > safety, but they can still be useful when debugging unsafe code, > writing bindings, etc. Any time valgrind would be useful, often one of > the sanitizers can be used with less overhead. > > Cheers, > cmr This mostly doesn't work without frontend support. It's true that it will add checks to certain function calls, but the frontend support is missing so the majority of the feature set is not there. It will be unsafe to find out-of-bounds array access, out-of-bounds pointer arithmetic, dereferences of dangling pointers, data races and other problems originating in Rust code. 
No error is produced for the following: fn main() { unsafe { let xs = [1, 2, 3]; xs.unsafe_ref(3); } } When using `clang`, you get the following error report: ================================================================= ==15431==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fff5cbe8eac at pc 0x47b92e bp 0x7fff5cbe8e10 sp 0x7fff5cbe8e08 READ of size 4 at 0x7fff5cbe8eac thread T0 #0 0x47b92d in main (/home/strcat/a.out+0x47b92d) #1 0x7ff1aecacb04 in __libc_start_main (/usr/lib/libc.so.6+0x21b04) #2 0x47b5ec in _start (/home/strcat/a.out+0x47b5ec) Address 0x7fff5cbe8eac is located in stack of thread T0 at offset 44 in frame #0 0x47b6bf in main (/home/strcat/a.out+0x47b6bf) This frame has 1 object(s): [32, 44) 'a' <== Memory access at offset 44 overflows this variable HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext (longjmp and C++ exceptions *are* supported) SUMMARY: AddressSanitizer: stack-buffer-overflow ??:0 main Shadow bytes around the buggy address: 0x10006b975180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b975190: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b9751a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b9751b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b9751c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x10006b9751d0: f1 f1 f1 f1 00[04]f4 f4 f3 f3 f3 f3 00 00 00 00 0x10006b9751e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b9751f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b975200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b975210: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x10006b975220: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Heap right redzone: fb Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack 
right redzone: f3 Stack partial redzone: f4 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 ASan internal: fe ==15431==ABORTING From corey at octayn.net Tue Feb 4 09:35:31 2014 From: corey at octayn.net (Corey Richardson) Date: Tue, 4 Feb 2014 12:35:31 -0500 Subject: [rust-dev] Using Rust with LLVM's Sanitizers In-Reply-To: References: Message-ID: On Tue, Feb 4, 2014 at 12:23 PM, Daniel Micay wrote: > On Tue, Feb 4, 2014 at 3:45 AM, Corey Richardson wrote: >> Hey all, >> >> As you are probably aware, we use LLVM for code generation and >> optimization. It is a large project, and one of its cooler features is >> the variety of "sanitizers" it provides >> (http://clang.llvm.org/docs/index.html#using-clang-as-a-compiler). >> These are not clang-specific, but indeed can be used with Rust! >> >> For example, to use AddressSanitizer (asan) on my system, I do: >> >> rustc --passes asan,asan-module --link-args >> "/usr/bin/../lib/clang/3.4/lib/linux/libclang_rt.asan-x86_64.a" foo.rs >> >> I got the path to the necessary asan library by looking at "clang -v >> -fsanitize=address foo.c" >> >> For example, the following program: >> >> use std::libc::{free,malloc}; >> >> fn main() { >> unsafe { let p = malloc(42); free(p); free(p); } >> } >> >> Outputs https://gist.github.com/cmr/8800111 >> >> Similarly, you can use ThreadSanitizer (tsan) by using `--passes tsan` >> and replacing asan with tsan in the link-args. Also useful is msan >> (for detecting uninitialized reads) and tsan (for detecting data >> races). One caveat is that although LLVM will happily run all of the >> sanitizer passes for you, you can actually only use one at a time (you >> will see this when you try to link the multiple *san libraries >> together). >> >> Now, these aren't going to be as useful in Rust given its focus on >> safety, but they can still be useful when debugging unsafe code, >> writing bindings, etc. 
Any time valgrind would be useful, often one of >> the sanitizers can be used with less overhead. >> >> Cheers, >> cmr > > This mostly doesn't work without frontend support. It's true that it > will add checks to certain function calls, but the frontend support is > missing so the majority of the feature set is not there. It will be > unsafe to find out-of-bounds array access, out-of-bounds pointer > arithmetic, dereferences of dangling pointers, data races and other > problems originating in Rust code. No error is produced for the > following: > > fn main() { > unsafe { > let xs = [1, 2, 3]; > xs.unsafe_ref(3); > } > } > > When using `clang`, you get the following error report: > > ================================================================= > ==15431==ERROR: AddressSanitizer: stack-buffer-overflow on address > 0x7fff5cbe8eac at pc 0x47b92e bp 0x7fff5cbe8e10 sp 0x7fff5cbe8e08 > READ of size 4 at 0x7fff5cbe8eac thread T0 > #0 0x47b92d in main (/home/strcat/a.out+0x47b92d) > #1 0x7ff1aecacb04 in __libc_start_main (/usr/lib/libc.so.6+0x21b04) > #2 0x47b5ec in _start (/home/strcat/a.out+0x47b5ec) > > Address 0x7fff5cbe8eac is located in stack of thread T0 at offset 44 in frame > #0 0x47b6bf in main (/home/strcat/a.out+0x47b6bf) > > This frame has 1 object(s): > [32, 44) 'a' <== Memory access at offset 44 overflows this variable > HINT: this may be a false positive if your program uses some custom > stack unwind mechanism or swapcontext > (longjmp and C++ exceptions *are* supported) > SUMMARY: AddressSanitizer: stack-buffer-overflow ??:0 main > Shadow bytes around the buggy address: > 0x10006b975180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b975190: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b9751a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b9751b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b9751c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > =>0x10006b9751d0: f1 f1 f1 f1 00[04]f4 f4 f3 f3 f3 f3 00 
00 00 00 > 0x10006b9751e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b9751f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b975200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b975210: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > 0x10006b975220: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > Shadow byte legend (one shadow byte represents 8 application bytes): > Addressable: 00 > Partially addressable: 01 02 03 04 05 06 07 > Heap left redzone: fa > Heap right redzone: fb > Freed heap region: fd > Stack left redzone: f1 > Stack mid redzone: f2 > Stack right redzone: f3 > Stack partial redzone: f4 > Stack after return: f5 > Stack use after scope: f8 > Global redzone: f9 > Global init order: f6 > Poisoned by user: f7 > ASan internal: fe > ==15431==ABORTING Vadim brought this up at https://github.com/mozilla/rust/issues/749#issuecomment-34040924. I'm working on emitting the necessary stuff but the C API is a pain. From kevin at sb.org Tue Feb 4 11:08:00 2014 From: kevin at sb.org (Kevin Ballard) Date: Tue, 4 Feb 2014 11:08:00 -0800 Subject: [rust-dev] Handling I/O errors In-Reply-To: References: Message-ID: <381B853D-9373-402A-91ED-C57E13DFEE04@sb.org> Perhaps it would help if we actually had a static io::EOF static EOF: IoError = IoError { kind: EndOfFile, desc: "end of file", detail: None } Then we could use this in match patterns as in match w.write(buf) { Err(EOF) => handle_eof(), Err(e) => handle_other_error(), Ok(n) => ... } -Kevin On Feb 4, 2014, at 3:15 AM, Matthias Einwag wrote: > 0 is not necessarily EOF. > E.g. it's not EOF > - when you requested to read 0 bytes, which is perfectly legal > - when your IO Object is in nonblocking mode (yes, that's currently not supported in Rust, but might be in the future) > > And the Result<Option<uint>, IoError> will only create questions when the Err Result will be used as opposed to the None Option. 
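[A side note for later readers: `IoError` and the `Err(EOF)` pattern discussed in this thread are pre-1.0 APIs that no longer exist. Modern `std::io` took the other route debated here: EOF is not an error, `read` simply returns `Ok(0)`. A minimal, hypothetical sketch of matching on that convention, not code from the thread:]

```rust
use std::io::Read;

// Classify one read() result. In today's std::io, EOF is signalled by
// Ok(0) from read() rather than by a special error value.
fn classify(res: std::io::Result<usize>) -> &'static str {
    match res {
        Ok(0) => "eof",
        Ok(_) => "data",
        Err(_) => "error",
    }
}

fn main() {
    // A byte slice implements Read and advances as it is consumed.
    let mut reader: &[u8] = b"hi";
    let mut buf = [0u8; 4];
    let first = reader.read(&mut buf);
    println!("{}", classify(first)); // data
    let second = reader.read(&mut buf);
    println!("{}", classify(second)); // eof
}
```

[The spirit of the `Err(EOF)` proposal survives in `io::ErrorKind::UnexpectedEof`, which `read_exact` reports when the stream ends early.]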
I don't know if you would associate an Ok(None) with an EOF if you first-time encounter it. > > Therefore I think the current Err(EOF) is absolutely fine. > > > 2014-02-04 Bill Myers : > Are we sure that this is the correct design, as opposed to having read return 0 for EoF or perhaps returning None with a Result<Option<uint>, IoError> return type? > > After all, EOF is unlike all other I/O errors, since it is guaranteed to happen on all readers, and the fact that it needs to be special cased might be an indication that the design is wrong. > > Also, in addition to "raw" eof, a partial read due to end of file causes the same issues, so it seems that code needs to have two match conditions to handle eof, while it would only need a single one if eof was represented by Ok(0). > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ncameron.org Tue Feb 4 12:04:13 2014 From: lists at ncameron.org (Nick Cameron) Date: Wed, 5 Feb 2014 09:04:13 +1300 Subject: [rust-dev] Nick Cameron joins the Rust team at Mozilla In-Reply-To: <52F04E6C.7020105@mozilla.com> References: <52F04E6C.7020105@mozilla.com> Message-ID: Thanks all! Really looking forward to working with everyone and getting stuck in to Rust. Cheers, Nick On Tue, Feb 4, 2014 at 3:20 PM, Brian Anderson wrote: > Hi Rusties, > > I'm just thrilled to announce today that Nick Cameron (nrc) has joined > Mozilla's Rust team full-time. Nick has a PhD in programming language > theory from Imperial College London and has been hacking on Gecko's > graphics and layout for two years, but now that he's all ours you'll be > seeing him eradicate critical Rust bugs by the dozens. 
Good luck, Nick, and > welcome to the party. > > Regards, > Brian > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko at alum.mit.edu Tue Feb 4 12:25:56 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Tue, 4 Feb 2014 15:25:56 -0500 Subject: [rust-dev] Futures in Rust In-Reply-To: References: Message-ID: <20140204202556.GC20082@Mr-Bennet> Within a single function we are more permissive, I think. I've been debating if we should stop that, just for consistency. There are also some bugs concerning closures. I hope to close those soon with my patch for #6801. On Wed, Jan 29, 2014 at 02:39:01PM -0800, Vadim wrote: > I've tried to simulate that with iterators, but it seems I can still read > the buffer. This compiles without errors: > > let mut buf = [0, ..1024]; > let mut iter = buf.mut_iter(); > let x = buf[0]; > *iter.next().unwrap() = 2; // just to make sure I can mutate via the > iterator > > > > > On Wed, Jan 29, 2014 at 2:06 PM, Daniel Micay wrote: > > > On Wed, Jan 29, 2014 at 5:03 PM, Vadim wrote: > > > > > > But maybe Rust type system could grow a new type of borrow that prevents > > all > > > object access while it is in scope, similarly to how iterators prevent > > > mutation of the container being iterated? > > > > > > Vadim > > > > An `&mut` borrow will prevent reads not through that borrow. > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From marcbowes at gmail.com Tue Feb 4 13:16:39 2014 From: marcbowes at gmail.com (Marc Bowes) Date: Tue, 4 Feb 2014 23:16:39 +0200 Subject: [rust-dev] struct that has a field which is a ref Message-ID: Hello, I'm trying to implement a struct where one of the fields is a reference and therefore has bounded lifetime. 
The reason I would like it to be a reference is to encourage sharing of the value in question as setup of said value might be expensive. In my specific example, the value is a session manager and opening said session is expensive. I have come up with the following ``` trait MyTrait { fn be_traity(&self); } struct MyImpl { my_field: u32 } impl MyTrait for MyImpl { fn be_traity(&self) {} } struct MyStruct<'a> { my_field: &'a MyTrait } impl<'a> MyStruct<'a> { fn new<T: MyTrait>(my_field: &'a T) -> MyStruct<'a> { MyStruct { my_field: my_field } } } fn main() { let my_field = MyImpl { my_field: 0 }; let my_struct = MyStruct::new(&my_field); } ``` This fails to compile: rust-lifetimes-with-references.rs:20:23: 20:31 error: value may contain references; add `'static` bound rust-lifetimes-with-references.rs:20 my_field: my_field ^~~~~~~~ This confuses me because "may contain references" is exactly what I want? I want to assign it as a ref.. If I slap on a & in the assignment (for no good reason other than being confused): MyStruct { my_field: &my_field } Then I get: rust-lifetimes-with-references.rs:20:23: 20:32 error: failed to find an implementation of trait MyTrait for &'a T rust-lifetimes-with-references.rs:20 my_field: &my_field --- I'm clearly doing something stupid but cannot find a reference example.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmgrosen at gmail.com Tue Feb 4 13:21:41 2014 From: jmgrosen at gmail.com (John Grosen) Date: Tue, 4 Feb 2014 13:21:41 -0800 Subject: [rust-dev] struct that has a field which is a ref In-Reply-To: References: Message-ID: <6F0201E18BC548AB8CA2E495C85B2BBB@gmail.com> The problem here is that you are using a trait object in the struct definition rather than a generic; at the moment, struct generics cannot have trait bounds, though, so the code for the struct would be simply: ``` struct MyStruct<'a, T> { my_field: &'a T } ``` Then the `impl` code should be exactly as you have now. 
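[For reference, here is John's suggestion assembled into a complete program, as a sketch in current Rust syntax; the `MyTrait`/`MyStruct` names are from the thread, the value `7` is arbitrary. Since struct generics could not carry trait bounds at the time, the bound moves to the `impl`:]

```rust
trait MyTrait {
    fn be_traity(&self);
}

struct MyImpl {
    my_field: u32,
}

impl MyTrait for MyImpl {
    fn be_traity(&self) {}
}

// The struct stores a plain generic reference; no trait bound here.
struct MyStruct<'a, T> {
    my_field: &'a T,
}

// The bound lives on the impl, so only code using MyTrait requires it.
impl<'a, T: MyTrait> MyStruct<'a, T> {
    fn new(my_field: &'a T) -> MyStruct<'a, T> {
        MyStruct { my_field }
    }
}

fn main() {
    let value = MyImpl { my_field: 7 };
    let wrapper = MyStruct::new(&value);
    wrapper.my_field.be_traity();
    println!("{}", wrapper.my_field.my_field); // 7
}
```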
-- John Grosen On Tuesday, February 4, 2014 at 1:16 PM, Marc Bowes wrote: > Hello, > > I'm trying to implement a struct where one of the fields is a reference and therefore has bounded lifetime. The reason I would like it to be a reference is to encourage sharing of the value in question as setup of said value might be expensive. In my specific example, the value is a session manager and opening said session is expensive. > > I have come up with the following > > ``` > trait MyTrait { > fn be_traity(&self); > } > > struct MyImpl { > my_field: u32 > } > > impl MyTrait for MyImpl { > fn be_traity(&self) {} > } > > struct MyStruct<'a> { > my_field: &'a MyTrait > } > > impl<'a> MyStruct<'a> { > fn new(my_field: &'a T) -> MyStruct { > MyStruct { > my_field: my_field > } > } > } > > fn main() { > let my_field = MyImpl { my_field: 0 }; > let my_struct = MyStruct::new(&my_field); > } > > ``` > > This fails to compile: > > rust-lifetimes-with-references.rs:20:23: 20:31 error: value may contain references; add `'static` bound > rust-lifetimes-with-references.rs:20 (http://rust-lifetimes-with-references.rs:20) my_field: my_field > ^~~~~~~~ > > > This confuses me because "may contain references" is exactly what I want? I want to assign it as a ref.. If I slap on a & in the assignment (for no good reason other than being confused): > > MyStruct { > my_field: &my_field > } > > > Then I get: > > rust-lifetimes-with-references.rs:20:23: 20:32 error: failed to find an implementation of trait MyTrait for &'a T > rust-lifetimes-with-references.rs:20 (http://rust-lifetimes-with-references.rs:20) my_field: &my_field > > --- > > I'm clearly doing something stupid but cannot find a reference example.. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org (mailto:Rust-dev at mozilla.org) > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marcbowes at gmail.com Tue Feb 4 13:31:57 2014 From: marcbowes at gmail.com (Marc Bowes) Date: Tue, 4 Feb 2014 23:31:57 +0200 Subject: [rust-dev] struct that has a field which is a ref In-Reply-To: <6F0201E18BC548AB8CA2E495C85B2BBB@gmail.com> References: <6F0201E18BC548AB8CA2E495C85B2BBB@gmail.com> Message-ID: I see, thanks. Just to be clear, is this correct then? ``` trait MyTrait { fn be_traity(&self); } struct MyImpl { my_field: u32 } impl MyTrait for MyImpl { fn be_traity(&self) {} } struct MyStruct<'a, T> { my_field: &'a T } impl<'a, T: MyTrait> MyStruct<'a, T> { fn new(my_field: &'a T) -> MyStruct<'a, T> { MyStruct { my_field: my_field } } } fn main() { let my_field = MyImpl { my_field: 0 }; let my_struct = MyStruct::new(&my_field); } ``` The main differences being: 1) The struct definition for MyStruct no longer gives any clue as to what my_field might be (this seems weird to me) 2) The impl now includes 'T: MyTrait' and returns the templated MyStruct How (if at all) will Niko's DST changes impact this? More broadly, what else might impact this in the (short/medium-term) future? #1 bothers me a bit (right now :-)). Thanks! On Tue, Feb 4, 2014 at 11:21 PM, John Grosen wrote: > The problem here is that you are using a trait object in the struct > definition rather than a generic; at the moment, struct generics cannot > have trait bounds, though, so the code for the struct would be simply: > > ``` > struct MyStruct<'a, T> { > my_field: &'a T > } > ``` > > Then the `impl` code should be exactly as you have now. > > -- > John Grosen > > On Tuesday, February 4, 2014 at 1:16 PM, Marc Bowes wrote: > > Hello, > > I'm trying to implement a struct where one of the fields is a reference > and therefore has bounded lifetime. The reason I would like it to be a > reference is to encourage sharing of the value in question as setup of said > value might be expensive. In my specific example, the value is a session > manager and opening said session is expensive. 
> > I have come up with the following > > ``` > trait MyTrait { > fn be_traity(&self); > } > > struct MyImpl { > my_field: u32 > } > > impl MyTrait for MyImpl { > fn be_traity(&self) {} > } > > struct MyStruct<'a> { > my_field: &'a MyTrait > } > > impl<'a> MyStruct<'a> { > fn new(my_field: &'a T) -> MyStruct { > MyStruct { > my_field: my_field > } > } > } > > fn main() { > let my_field = MyImpl { my_field: 0 }; > let my_struct = MyStruct::new(&my_field); > } > ``` > > This fails to compile: > > rust-lifetimes-with-references.rs:20:23: 20:31 error: value may contain > references; add `'static` bound > rust-lifetimes-with-references.rs:20 my_field: my_field > ^~~~~~~~ > > This confuses me because "may contain references" is exactly what I want? > I want to assign it as a ref.. If I slap on a & in the assignment (for no > good reason other than being confused): > > MyStruct { > my_field: &my_field > } > > Then I get: > > rust-lifetimes-with-references.rs:20:23: 20:32 error: failed to find an > implementation of trait MyTrait for &'a T > rust-lifetimes-with-references.rs:20 my_field: &my_field > > --- > > I'm clearly doing something stupid but cannot find a reference example.. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmgrosen at gmail.com Tue Feb 4 13:41:00 2014 From: jmgrosen at gmail.com (John Grosen) Date: Tue, 4 Feb 2014 13:41:00 -0800 Subject: [rust-dev] struct that has a field which is a ref In-Reply-To: References: <6F0201E18BC548AB8CA2E495C85B2BBB@gmail.com> Message-ID: #2 is definitely correct - sorry, I forgot that part of it :) #1 is a little more nuanced. For the moment, it's correct. The current line of thinking (at least, what I've gathered) is that you only require trait bounds on the methods that actually use that trait. 
For example, you could create a Matrix struct, and you could call Matrix::new with any type, but then the Add impl for Matrix would require T: Add, the Mul would require T: Mul, etc. However, IIRC, there was a proposal (maybe a PR?) that would allow and enforce trait bounds on generics in structs. I'm not sure what the opinion of it was, though. -- John Grosen On Tuesday, February 4, 2014 at 1:31 PM, Marc Bowes wrote: > I see, thanks. Just to be clear, is this correct then? > > ``` > trait MyTrait { > fn be_traity(&self); > } > > struct MyImpl { > my_field: u32 > } > > impl MyTrait for MyImpl { > fn be_traity(&self) {} > } > > struct MyStruct<'a, T> { > my_field: &'a T > } > > impl<'a, T: MyTrait> MyStruct<'a, T> { > fn new(my_field: &'a T) -> MyStruct<'a, T> { > MyStruct { > my_field: my_field > } > } > } > > fn main() { > let my_field = MyImpl { my_field: 0 }; > let my_struct = MyStruct::new(&my_field); > } > > ``` > > The main differences being: > > 1) The struct definition for MyStruct no longer gives any clue as to what my_field might be (this seems weird to me) > 2) The impl now includes 'T: MyTrait' and returns the templated MyStruct > > How (if at all) will Niko's DST changes impact this? More broadly, what else might impact this in the (short/medium-term) future? #1 bothers me a bit (right now :-)). > > Thanks! > > > On Tue, Feb 4, 2014 at 11:21 PM, John Grosen wrote: > > The problem here is that you are using a trait object in the struct definition rather than a generic; at the moment, struct generics cannot have trait bounds, though, so the code for the struct would be simply: > > > > ``` > > struct MyStruct<'a, T> { > > my_field: &'a T > > } > > ``` > > > > Then the `impl` code should be exactly as you have now. > > > > -- > > John Grosen > > > > > > On Tuesday, February 4, 2014 at 1:16 PM, Marc Bowes wrote: > > > > > > > > > Hello, > > > > I'm trying to implement a struct where one of the fields is a reference and therefore has bounded lifetime. 
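[John's Matrix illustration above, made concrete. This is a hypothetical sketch: the `Matrix` type and `add_elementwise` method are invented for illustration, and rather than implementing the `Add` operator for the struct itself, it uses a bounded inherent method to keep things short. The point is that `new` carries no bounds, while only the arithmetic impl constrains `T`:]

```rust
use std::ops::Add;

struct Matrix<T> {
    data: Vec<T>,
}

impl<T> Matrix<T> {
    // No bounds here: a Matrix of any element type can be constructed.
    fn new(data: Vec<T>) -> Matrix<T> {
        Matrix { data }
    }
}

impl<T: Add<Output = T> + Copy> Matrix<T> {
    // Only the arithmetic method demands that T support addition.
    fn add_elementwise(&self, other: &Matrix<T>) -> Matrix<T> {
        let data = self.data.iter().zip(&other.data).map(|(&a, &b)| a + b).collect();
        Matrix::new(data)
    }
}

fn main() {
    // Fine: new() is unbounded, so even non-numeric elements work.
    let _strings = Matrix::new(vec!["no", "Add", "needed"]);
    let a = Matrix::new(vec![1, 2, 3]);
    let b = Matrix::new(vec![10, 20, 30]);
    println!("{:?}", a.add_elementwise(&b).data); // [11, 22, 33]
}
```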
The reason I would like it to be a reference is to encourage sharing of the value in question as setup of said value might be expensive. In my specific example, the value is a session manager and opening said session is expensive. > > > > > > I have come up with the following > > > > > > ``` > > > trait MyTrait { > > > fn be_traity(&self); > > > } > > > > > > struct MyImpl { > > > my_field: u32 > > > } > > > > > > impl MyTrait for MyImpl { > > > fn be_traity(&self) {} > > > } > > > > > > struct MyStruct<'a> { > > > my_field: &'a MyTrait > > > } > > > > > > impl<'a> MyStruct<'a> { > > > fn new(my_field: &'a T) -> MyStruct { > > > MyStruct { > > > my_field: my_field > > > } > > > } > > > } > > > > > > fn main() { > > > let my_field = MyImpl { my_field: 0 }; > > > let my_struct = MyStruct::new(&my_field); > > > } > > > > > > ``` > > > > > > This fails to compile: > > > > > > rust-lifetimes-with-references.rs:20:23: 20:31 error: value may contain references; add `'static` bound > > > rust-lifetimes-with-references.rs:20 (http://rust-lifetimes-with-references.rs:20) my_field: my_field > > > ^~~~~~~~ > > > > > > > > > This confuses me because "may contain references" is exactly what I want? I want to assign it as a ref.. If I slap on a & in the assignment (for no good reason other than being confused): > > > > > > MyStruct { > > > my_field: &my_field > > > } > > > > > > > > > Then I get: > > > > > > rust-lifetimes-with-references.rs:20:23: 20:32 error: failed to find an implementation of trait MyTrait for &'a T > > > rust-lifetimes-with-references.rs:20 (http://rust-lifetimes-with-references.rs:20) my_field: &my_field > > > > > > --- > > > > > > I'm clearly doing something stupid but cannot find a reference example.. 
> > > _______________________________________________ > > > Rust-dev mailing list > > > Rust-dev at mozilla.org (mailto:Rust-dev at mozilla.org) > > > https://mail.mozilla.org/listinfo/rust-dev > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecreed at cs.washington.edu Tue Feb 4 13:41:36 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Tue, 4 Feb 2014 13:41:36 -0800 Subject: [rust-dev] struct that has a field which is a ref In-Reply-To: References: <6F0201E18BC548AB8CA2E495C85B2BBB@gmail.com> Message-ID: Niko's DST changes may involve allowing bounds in type definitions, which includes structs. You can read more here . On Tue, Feb 4, 2014 at 1:31 PM, Marc Bowes wrote: > I see, thanks. Just to be clear, is this correct then? > > ``` > trait MyTrait { > fn be_traity(&self); > } > > struct MyImpl { > my_field: u32 > } > > impl MyTrait for MyImpl { > fn be_traity(&self) {} > } > > struct MyStruct<'a, T> { > my_field: &'a T > } > > impl<'a, T: MyTrait> MyStruct<'a, T> { > fn new(my_field: &'a T) -> MyStruct<'a, T> { > MyStruct { > my_field: my_field > } > } > } > > fn main() { > let my_field = MyImpl { my_field: 0 }; > let my_struct = MyStruct::new(&my_field); > } > ``` > > The main differences being: > > 1) The struct definition for MyStruct no longer gives any clue as to what > my_field might be (this seems weird to me) > 2) The impl now includes 'T: MyTrait' and returns the templated MyStruct > > How (if at all) will Niko's DST changes impact this? More broadly, what > else might impact this in the (short/medium-term) future? #1 bothers me a > bit (right now :-)). > > Thanks! 
> > > On Tue, Feb 4, 2014 at 11:21 PM, John Grosen wrote: > >> The problem here is that you are using a trait object in the struct >> definition rather than a generic; at the moment, struct generics cannot >> have trait bounds, though, so the code for the struct would be simply: >> >> ``` >> struct MyStruct<'a, T> { >> my_field: &'a T >> } >> ``` >> >> Then the `impl` code should be exactly as you have now. >> >> -- >> John Grosen >> >> On Tuesday, February 4, 2014 at 1:16 PM, Marc Bowes wrote: >> >> Hello, >> >> I'm trying to implement a struct where one of the fields is a reference >> and therefore has bounded lifetime. The reason I would like it to be a >> reference is to encourage sharing of the value in question as setup of said >> value might be expensive. In my specific example, the value is a session >> manager and opening said session is expensive. >> >> I have come up with the following >> >> ``` >> trait MyTrait { >> fn be_traity(&self); >> } >> >> struct MyImpl { >> my_field: u32 >> } >> >> impl MyTrait for MyImpl { >> fn be_traity(&self) {} >> } >> >> struct MyStruct<'a> { >> my_field: &'a MyTrait >> } >> >> impl<'a> MyStruct<'a> { >> fn new(my_field: &'a T) -> MyStruct { >> MyStruct { >> my_field: my_field >> } >> } >> } >> >> fn main() { >> let my_field = MyImpl { my_field: 0 }; >> let my_struct = MyStruct::new(&my_field); >> } >> ``` >> >> This fails to compile: >> >> rust-lifetimes-with-references.rs:20:23: 20:31 error: value may contain >> references; add `'static` bound >> rust-lifetimes-with-references.rs:20 my_field: my_field >> ^~~~~~~~ >> >> This confuses me because "may contain references" is exactly what I want? >> I want to assign it as a ref.. 
If I slap on a & in the assignment (for no >> good reason other than being confused): >> >> MyStruct { >> my_field: &my_field >> } >> >> Then I get: >> >> rust-lifetimes-with-references.rs:20:23: 20:32 error: failed to find an >> implementation of trait MyTrait for &'a T >> rust-lifetimes-with-references.rs:20 my_field: &my_field >> >> --- >> >> I'm clearly doing something stupid but cannot find a reference example.. >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmgrosen at gmail.com Tue Feb 4 13:43:21 2014 From: jmgrosen at gmail.com (John Grosen) Date: Tue, 4 Feb 2014 13:43:21 -0800 Subject: [rust-dev] struct that has a field which is a ref In-Reply-To: References: <6F0201E18BC548AB8CA2E495C85B2BBB@gmail.com> Message-ID: <08D12BC152AD420CB81E117DC416A134@gmail.com> > Niko's DST changes may involve allowing bounds in type definitions, which includes structs. You can read more here (http://smallcultfollowing.com/babysteps/blog/2014/01/05/dst-take-5/). Okay, I guess it was part of DST, then. That proposal had a lot of content in it! -- John Grosen On Tuesday, February 4, 2014 at 1:41 PM, Eric Reed wrote: > Niko's DST changes may involve allowing bounds in type definitions, which includes structs. You can read more here (http://smallcultfollowing.com/babysteps/blog/2014/01/05/dst-take-5/). > > > On Tue, Feb 4, 2014 at 1:31 PM, Marc Bowes wrote: > > I see, thanks. Just to be clear, is this correct then? 
> > > > ``` > > trait MyTrait { > > fn be_traity(&self); > > } > > > > struct MyImpl { > > my_field: u32 > > } > > > > impl MyTrait for MyImpl { > > fn be_traity(&self) {} > > } > > > > struct MyStruct<'a, T> { > > my_field: &'a T > > } > > > > impl<'a, T: MyTrait> MyStruct<'a, T> { > > fn new(my_field: &'a T) -> MyStruct<'a, T> { > > MyStruct { > > my_field: my_field > > } > > } > > } > > > > fn main() { > > let my_field = MyImpl { my_field: 0 }; > > let my_struct = MyStruct::new(&my_field); > > } > > > > > > ``` > > > > The main differences being: > > > > 1) The struct definition for MyStruct no longer gives any clue as to what my_field might be (this seems weird to me) > > 2) The impl now includes 'T: MyTrait' and returns the templated MyStruct > > > > How (if at all) will Niko's DST changes impact this? More broadly, what else might impact this in the (short/medium-term) future? #1 bothers me a bit (right now :-)). > > > > Thanks! > > > > > > On Tue, Feb 4, 2014 at 11:21 PM, John Grosen wrote: > > > The problem here is that you are using a trait object in the struct definition rather than a generic; at the moment, struct generics cannot have trait bounds, though, so the code for the struct would be simply: > > > > > > ``` > > > struct MyStruct { > > > my_field: &?a T > > > } > > > ``` > > > > > > Then the `impl` code should be exactly as you have now. > > > > > > -- > > > John Grosen > > > > > > > > > On Tuesday, February 4, 2014 at 1:16 PM, Marc Bowes wrote: > > > > > > > > > > > > > Hello, > > > > > > > > I'm trying to implement a struct where one of the fields is a reference and therefore has bounded lifetime. The reason I would like it to be a reference is to encourage sharing of the value in question as setup of said value might be expensive. In my specific example, the value is a session manager and opening said session is expensive. 
> > > > > > > > I have come up with the following > > > > > > > > ``` > > > > trait MyTrait { > > > > fn be_traity(&self); > > > > } > > > > > > > > struct MyImpl { > > > > my_field: u32 > > > > } > > > > > > > > impl MyTrait for MyImpl { > > > > fn be_traity(&self) {} > > > > } > > > > > > > > struct MyStruct<'a> { > > > > my_field: &'a MyTrait > > > > } > > > > > > > > impl<'a> MyStruct<'a> { > > > > fn new(my_field: &'a T) -> MyStruct { > > > > MyStruct { > > > > my_field: my_field > > > > } > > > > } > > > > } > > > > > > > > fn main() { > > > > let my_field = MyImpl { my_field: 0 }; > > > > let my_struct = MyStruct::new(&my_field); > > > > } > > > > > > > > ``` > > > > > > > > This fails to compile: > > > > > > > > rust-lifetimes-with-references.rs:20:23: 20:31 error: value may contain references; add `'static` bound > > > > rust-lifetimes-with-references.rs:20 (http://rust-lifetimes-with-references.rs:20) my_field: my_field > > > > ^~~~~~~~ > > > > > > > > > > > > This confuses me because "may contain references" is exactly what I want? I want to assign it as a ref.. If I slap on a & in the assignment (for no good reason other than being confused): > > > > > > > > MyStruct { > > > > my_field: &my_field > > > > } > > > > > > > > > > > > Then I get: > > > > > > > > rust-lifetimes-with-references.rs:20:23: 20:32 error: failed to find an implementation of trait MyTrait for &'a T > > > > rust-lifetimes-with-references.rs:20 (http://rust-lifetimes-with-references.rs:20) my_field: &my_field > > > > > > > > --- > > > > > > > > I'm clearly doing something stupid but cannot find a reference example.. 
> > > > _______________________________________________ > > > > Rust-dev mailing list > > > > Rust-dev at mozilla.org (mailto:Rust-dev at mozilla.org) > > > > https://mail.mozilla.org/listinfo/rust-dev > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org (mailto:Rust-dev at mozilla.org) > > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcbowes at gmail.com Tue Feb 4 13:47:46 2014 From: marcbowes at gmail.com (Marc Bowes) Date: Tue, 4 Feb 2014 23:47:46 +0200 Subject: [rust-dev] struct that has a field which is a ref In-Reply-To: <08D12BC152AD420CB81E117DC416A134@gmail.com> References: <6F0201E18BC548AB8CA2E495C85B2BBB@gmail.com> <08D12BC152AD420CB81E117DC416A134@gmail.com> Message-ID: Thanks Eric. I thought it might change it (based on the presentation he gave). The blog post still requires (another) reread :-). On Tue, Feb 4, 2014 at 11:43 PM, John Grosen wrote: > Niko's DST changes may involve allowing bounds in type definitions, which > includes structs. You can read more here > . > > Okay, I guess it was part of DST, then. That proposal had a lot of content > in it! > > -- > John Grosen > > On Tuesday, February 4, 2014 at 1:41 PM, Eric Reed wrote: > > Niko's DST changes may involve allowing bounds in type definitions, which > includes structs. You can read more here > . > > > On Tue, Feb 4, 2014 at 1:31 PM, Marc Bowes wrote: > > I see, thanks. Just to be clear, is this correct then? 
> > ``` > trait MyTrait { > fn be_traity(&self); > } > > struct MyImpl { > my_field: u32 > } > > impl MyTrait for MyImpl { > fn be_traity(&self) {} > } > > struct MyStruct<'a, T> { > my_field: &'a T > } > > impl<'a, T: MyTrait> MyStruct<'a, T> { > fn new(my_field: &'a T) -> MyStruct<'a, T> { > MyStruct { > my_field: my_field > } > } > } > > fn main() { > let my_field = MyImpl { my_field: 0 }; > let my_struct = MyStruct::new(&my_field); > } > ``` > > The main differences being: > > 1) The struct definition for MyStruct no longer gives any clue as to what > my_field might be (this seems weird to me) > 2) The impl now includes 'T: MyTrait' and returns the templated MyStruct > > How (if at all) will Niko's DST changes impact this? More broadly, what > else might impact this in the (short/medium-term) future? #1 bothers me a > bit (right now :-)). > > Thanks! > > > On Tue, Feb 4, 2014 at 11:21 PM, John Grosen wrote: > > The problem here is that you are using a trait object in the struct > definition rather than a generic; at the moment, struct generics cannot > have trait bounds, though, so the code for the struct would be simply: > > ``` > struct MyStruct<'a, T> { > my_field: &'a T > } > ``` > > Then the `impl` code should be exactly as you have now. > > -- > John Grosen > > On Tuesday, February 4, 2014 at 1:16 PM, Marc Bowes wrote: > > Hello, > > I'm trying to implement a struct where one of the fields is a reference > and therefore has bounded lifetime. The reason I would like it to be a > reference is to encourage sharing of the value in question as setup of said > value might be expensive. In my specific example, the value is a session > manager and opening said session is expensive. 
> > I have come up with the following > > ``` > trait MyTrait { > fn be_traity(&self); > } > > struct MyImpl { > my_field: u32 > } > > impl MyTrait for MyImpl { > fn be_traity(&self) {} > } > > struct MyStruct<'a> { > my_field: &'a MyTrait > } > > impl<'a> MyStruct<'a> { > fn new(my_field: &'a T) -> MyStruct { > MyStruct { > my_field: my_field > } > } > } > > fn main() { > let my_field = MyImpl { my_field: 0 }; > let my_struct = MyStruct::new(&my_field); > } > ``` > > This fails to compile: > > rust-lifetimes-with-references.rs:20:23: 20:31 error: value may contain > references; add `'static` bound > rust-lifetimes-with-references.rs:20 my_field: my_field > ^~~~~~~~ > > This confuses me because "may contain references" is exactly what I want? > I want to assign it as a ref.. If I slap on a & in the assignment (for no > good reason other than being confused): > > MyStruct { > my_field: &my_field > } > > Then I get: > > rust-lifetimes-with-references.rs:20:23: 20:32 error: failed to find an > implementation of trait MyTrait for &'a T > rust-lifetimes-with-references.rs:20 my_field: &my_field > > --- > > I'm clearly doing something stupid but cannot find a reference example.. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Tue Feb 4 14:02:00 2014 From: banderson at mozilla.com (Brian Anderson) Date: Tue, 04 Feb 2014 14:02:00 -0800 Subject: [rust-dev] Faster communication between tasks In-Reply-To: References: <52700CB8.8010706@mozilla.com> Message-ID: <52F16358.2020202@mozilla.com> Thanks for the update! Once it's pushed, a writeup about the design and performance might be well-received on r/rust.
On 01/28/2014 06:05 PM, Simon Ruggier wrote: > A small update: I've gotten a resizable version of my disruptor > implementation working, and the performance looks pretty good so far. > I still have a few loose ends to tie up before I push out the changes. > I should have the updated code on GitHub hopefully within a couple of > weeks, depending on how much time I find to work on it. > > > On Sat, Nov 9, 2013 at 2:13 PM, Simon Ruggier > wrote: > > Hi all, I've tentatively come up with a design that would allow > the sender to reallocate the buffer as necessary, with very little > added performance cost. The sending side would bear the cost of > reallocation, and there would be an extra test that receivers > would have to make every time they process an item (no extra > atomic operations needed). However, it may be a few weeks or more > before I have a working implementation to demonstrate, so I > figured it might be worthwhile to mention now that I'll be working > on this. > > Also, I think it would be interesting to investigate doing > something like the Linux kernel's deadlock detection[1], but > generalized to apply to bounded queues, and implemented as a > static check. I know little about this, but even so, I can see how > it would be an enormous amount of work. On the other hand, I would > have thought the same thing about the memory safety rules that > Rust enforces. I'm hopeful that this will eventually be possible > as well. > > [1] https://www.kernel.org/doc/Documentation/lockdep-design.txt > > > On Wed, Oct 30, 2013 at 12:55 AM, Simon Ruggier > wrote: > > On Tue, Oct 29, 2013 at 3:30 PM, Brian Anderson > > wrote: > > On 10/28/2013 10:02 PM, Simon Ruggier wrote: >> Greetings fellow Rustians! >> >> First of all, thanks for working on such a great >> language. I really like the clean syntax, increased >> safety, separation of data from function definitions, and >> freedom from having to declare duplicate method >> prototypes in header files. 
>> >> I've been working on an alternate way to communicate >> between tasks in Rust, following the same approach as the >> LMAX Disruptor.[1] I'm hoping to eventually offer a >> superset of the functionality in the pipes API, and >> replace them as the default communication mechanism >> between tasks. Just as with concurrency in general, my >> main motivation in implementing this is to improve >> performance. For more information about the disruptor >> approach, there's a lot of information linked from their >> home page, in a variety of formats. > > This is really exciting work. Thanks for pursuing it. I've > been interested in exploring something like Disruptor in > Rust. The current channel types in Rust are indeed slow, > and fixing them is the topic of > https://github.com/mozilla/rust/issues/8568. > > I'll start paying attention to that. The Morrison & Afek 2013 > paper looks like something I should read. > > >> >> This is my first major contribution of new functionality >> to an open-source project, so I didn't want to discuss it >> in advance until I had a working system to demonstrate. I >> currently have a very basic proof of concept that >> achieves almost two orders of magnitude better >> performance than the pipes API. On my hardware[2], I >> currently see throughput of about 27 million items per >> second when synchronizing with a double-checked wait >> condition protocol between sender and receivers, 80+ >> million items with no blocking (i.e. busy waiting), and >> anywhere from 240,000 to 600,000 when using pipes. The >> LMAX Disruptor library gets up to 110 million items per >> second on the same hardware (using busy waiting and >> yielding), so there's definitely still room for >> significant improvement. > > Those are awesome results! > > > Thanks! When I first brought it up, it was getting about 14 > million with the busy waiting. 
Minimizing the number of atomic > operations (even with relaxed memory ordering) makes a big > difference in performance. The 2/3 drop in performance with > the blocking wait strategy comes from merely doing a > read-modify-write operation on every send (it currently uses > atomic swap, I haven't experimented with others yet). To be > fair, the only result I can take credit for is the blocking > algorithm. The other ideas are straight from the original > disruptor. > > >> I've put the code up on GitHub (I'm using rustc from >> master).[3] Currently, single and multi-stage pipelines >> of receivers are supported, while many features are >> missing, like multiple concurrent senders, multiple >> concurrent receivers, or mutation of the items as they >> pass through the pipeline. However, given what I have so >> far, now is probably the right time to start soliciting >> feedback and advice. I'm looking for review, >> suggestions/constructive criticism, and guidance about >> contributing this to the Rust codebase. > > I'm not deeply familiar with Disruptor, but I believe that > it uses bounded queues. My general feeling thus far is > that, as the general 'go-to' channel type, people should > not be using bounded queues that block the sender when > full because of the potential for unexpected deadlocks. I > could be convinced otherwise though if it's just not > possible to have reasonably fast unbounded channels. Note > that I don't think it's critical for the general-purpose > channel to be as fast as possible - it's more important to > be convenient. > > > Yes, it does. I'm divided on this, because unbounded queues > can also lead to memory exhaustion and added latency, but I > suspect that for many use cases, you're right. For performance > critical code, I think there's probably no argument: if a > queue is too large, it starts causing latency problems (like > with bufferbloat). 
A queue that accepts an unlimited number of > items is like an API that doesn't let the caller know about > errors. The caller needs to know that there's a large queue, > and adjust its behaviour. Because of this, I doubt any > performance-critical application would find it truly optimal > to use unbounded queues. My opinion on this is strongly > influenced by this post: > http://mechanical-sympathy.blogspot.co.uk/2012/05/apply-back-pressure-when-overloaded.html > > For general usage, though, I need to do more research. Any > application where latency is relevant really should be > designed to deal with back-pressure from queues, but there may > be some batch job style use cases where, as you say, it isn't > worth the extra effort. On the other hand, it's relevant to > think about how deadlocks occur, and decide whether or not > it's reasonable for developers to expect to be able to do > those things. I'll look into this and see what I come up with. > > If there were some general way to mitigate the deadlock issue > within the runtime, it would also solve this problem. > > As a last resort, I suspect that I could probably figure out a > way to have the sender resize the buffer when it fills, copy > the elements over, and then switch the consumers over to the > larger buffer. I don't know if I could do it without affecting > the fast path on the receiver side. > > Please keep working on this. I'm excited to see your results. 
> > > I appreciate the encouragement :) > > >> >> Thanks, >> Simon >> >> [1] http://lmax-exchange.github.io/disruptor/ >> [2] A 2.66GHz Intel P8800 CPU running in a Thinkpad T500 >> on Linux x86_64 >> [3] https://github.com/sruggier/rust-disruptor >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadimcn at gmail.com Tue Feb 4 14:15:39 2014 From: vadimcn at gmail.com (Vadim) Date: Tue, 4 Feb 2014 14:15:39 -0800 Subject: [rust-dev] Futures in Rust In-Reply-To: <20140204202556.GC20082@Mr-Bennet> References: <20140204202556.GC20082@Mr-Bennet> Message-ID: On Tue, Feb 4, 2014 at 12:25 PM, Niko Matsakis wrote: > Within a single function we are more permissive, I think. I've been > debating if we should stop that, just for consistency. I think that'd be too annoying. I'd rather see Rust going the other way, and permit calling functions which borrow immutably. BTW, what's the reason for disallowing that now? Is this because they might retain a reference somewhere? Would it be possible to extend the type system so we can model this case properly? In other words, functions could explicitly declare as effect such as "this function may store in region 'b a reference into region 'a", (where 'b is the region of the iterator and 'a is the region of the collection). Then the caller would know if it can permit that or not. Something like: pub trait MutableVector { fn mut_iter(&'a self) -> 'b MutItems where 'b mut ref 'a { ... } } Of course, in order to implement futures they way I want them, we'd also need an effect saying "something in region 'b needs an exclusive access to region 'a, so please rope it off completely". 
Hmm, is what I am asking for here tantamount to bolting the type system of Disciple onto Rust? :-) There are also > some bugs concerning closures. I hope to close those soon with my > patch for #6801. > > On Wed, Jan 29, 2014 at 02:39:01PM -0800, Vadim wrote: > > I've tried to simulate that with iterators, but it seems I can still read > > the buffer. This compiles without errors: > > > > let mut buf = [0, ..1024]; > > let mut iter = buf.mut_iter(); > > let x = buf[0]; > > *iter.next().unwrap() = 2; // just to make sure I can mutate via the > > iterator > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From palmercox at gmail.com Tue Feb 4 18:56:14 2014 From: palmercox at gmail.com (Palmer Cox) Date: Tue, 4 Feb 2014 21:56:14 -0500 Subject: [rust-dev] Will smart-pointer deref operator allow making iter::ByRef more generic? In-Reply-To: References: Message-ID: Ah, I think I see. I was expecting that after the deref trait lands, a type like Gc<T> would transparently implement all of the traits that T implemented. I guess that is not the case. So, if you want to pass the pointer to a function that expects an instance of one of the traits implemented by T, you'll have to call borrow() to get a reference and then pass that reference. However, if you just want to call a method on the smart-pointer, you can use "." to invoke the method. Is my understanding correct? Is there a particular reason that Rust doesn't autogenerate boilerplate impls like this one? This seems like something that would be fairly straightforward to do. Impls for ~Trait objects also seem like they could be auto-generated. Does doing the auto-generation cause the binary size to explode? Is there some other issue? Or is it just a case of no one having had time to get around to it yet?
Also, Is there a reason why: impl <'a, R: Reader> Reader for &'a mut R { fn read(&mut self, out: &mut [u8]) -> IoResult { self.read(out) } } doesn't currently exist in the standard library? Would a pull request to add it make sense? Thanks! -Palmer Cox On Mon, Feb 3, 2014 at 11:04 PM, Bill Myers wrote: > I don't think so, because the fact that the particular instance of T > implements the Deref trait cannot have any effect on the decorator code, > since it's not in the bounds for T. > > What instead would work is to change the language so that if type Type > implements Trait and all Trait methods take &self or &mut self (as opposed > to by value self or ~self), then an implementation of Trait for &'a mut > Type is automatically generated (with the obvious implementation). > > Likewise if all Trait methods take &self, then an implementation of Trait > for &'a Type is also automatically generated. > > Then what you want to do will just work without the need of any wrapper or > special syntax. > > One could then, as an additional step, automatically generate an > implementation of Trait for MutDeref if Trait is implemented by &mut > Type (possibly due to the above technique), but this would not be required > for the example. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vadimcn at gmail.com Tue Feb 4 23:41:21 2014 From: vadimcn at gmail.com (Vadim) Date: Tue, 4 Feb 2014 23:41:21 -0800 Subject: [rust-dev] Syntax for custom type bounds In-Reply-To: <20140203193354.GA2334@Mr-Bennet> References: <20140201125738.GA21688@Mr-Bennet> <20140203193354.GA2334@Mr-Bennet> Message-ID: On Mon, Feb 3, 2014 at 11:33 AM, Niko Matsakis wrote: > On Sat, Feb 01, 2014 at 03:42:45PM -0800, Vadim wrote: > > Since &'a Foo currently means "the return value is a reference into > > something that has lifetime 'a", 'a Foo feels like a natural > extension > > for saying "the return value is a reference-like thing whose safety > depends > > on something that has lifetime 'a still being around". > > Foo<'a,T>, of the other hand... it is not obvious to me why would it > > necessarily mean that. > > It does not, in fact, *necessarily* mean that, though certainly it > most commonly does. It will depend on the definition of the `Foo` type > and how the lifetime parameter is used within the type, as you say. It > seems then that you did not mean for `'a Foo` to be syntactic sugar > for `Foo<'a, T>` but rather a new kind of type: kind of a "by value > that is limited to 'a". > > > I've been around Rust for almost a year now, and certainly since the > time > > the current lifetime notation has been introduced, and I *still *could > not > > explain to somebody, why a lifetime parameter appearing among the type > > parameters of a trait or a struct refers to the lifetime of that trait or > > struct. > > As I wrote above, a lifetime parameter does not by itself have any > effect, just like a type parameter. Both type and lifetime parameters > are simply substituted into the struct body, and any limitations arise > from there. 
That is, if I have > > struct Foo<'a> { > x: &'a int > } > > then `Foo<'xyz>` is limited to the lifetime `'xyz` because `Foo` > contains a field `x` whose type is (after substitution) `&'xyz int`, > and that *field* cannot escape the lifetime `'xyz`. > > Thus, there are in fact corner cases where the lifetime parameter has > no effect, and so it is not the case that `SomeType<'xyz>` is > necessarily limited to `'xyz` (the most obvious being when 'xyz is > unused within the struct body, as you suggest). > > How does this apply to traits? If I look at "trait MutableVector<'a, T>" there's nothing that would connect 'a and self. Only when we get down to "impl<'a,T> MutableVector<'a, T> for &'a mut [T]" can you see that 'a is the lifetime of the type it's being implemented for. Which I guess is fine for static dispatch. But what about dynamic dispatch? Is the following supposed to work?: fn bar(x: &mut MutableVector) { let y = x.mut_iter(); ... } Vadim -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko at alum.mit.edu Wed Feb 5 05:28:13 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Wed, 5 Feb 2014 08:28:13 -0500 Subject: [rust-dev] Futures in Rust In-Reply-To: References: <20140204202556.GC20082@Mr-Bennet> Message-ID: <20140205132813.GA28153@Mr-Bennet> On Tue, Feb 04, 2014 at 02:15:39PM -0800, Vadim wrote: > I think that'd be too annoying. I'm not sure, I suspect it would have very little impact. It would probably simplify the code (always a win) and it would also have some very nice properties. In particular, if you took an `&mut` pointer to a variable, and gave it away, the possessor of that `&mut` pointer could be sure that nobody is even *reading* that data, and hence could be free to mutate it completely in parallel. /me thinks
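This exclusivity guarantee is exactly what later made scoped-thread mutation safe. A minimal sketch in modern Rust, using `std::thread::scope` (which did not exist at the time of this thread); the function and values are invented for illustration:

```rust
use std::thread;

// Holding an `&mut` means nobody else can even read the data for the
// duration of the borrow, so it is safe to hand it to another thread
// and mutate through it there.
fn double_all(data: &mut [i32]) {
    thread::scope(|s| {
        s.spawn(|| {
            for x in data.iter_mut() {
                *x *= 2;
            }
        });
        // the scope joins the thread before `data` can be used again
    });
}

fn main() {
    let mut v = vec![1, 2, 3];
    double_all(&mut v);
    assert_eq!(v, [2, 4, 6]);
    println!("{:?}", v); // prints [2, 4, 6]
}
```

Any attempt to read `v` from the spawning code while the scoped thread holds the reborrow is rejected at compile time, which is the "nobody is even *reading* that data" property described above.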
However, it's plausible to permit mutable and *read-only* pointers. We used to have those; we called them `const`. We wound up removing them for a number of reasons. The biggest one is that ensuring memory safety when you are dealing with memory that someone else might mutate is very hard, and thus any code that used const pointers was quite limited in what it could do (in particular you could not create references to the interior of ~ pointers or through enums). The second biggest one is that, for all the pain they brought, const data actually wasn't used that often. Today, I'd suggest that `Cell` or `RefCell` is generally a viable alternative. > In other words, functions could explicitly declare as effect such as > "this function may store in region 'b a reference into region 'a", > (where 'b is the region of the iterator and 'a is the region of the > collection). This seems like a rather different thing -- I don't quite see what the importance or meaning of such an annotation is. Maybe you can elaborate. > Of course, in order to implement futures they way I want them, we'd also > need an effect saying "something in region 'b needs an exclusive access to > region 'a, so please rope it off completely". I don't know why you need this. *If* we made the change I was suggesting earlier, I think we could do quite a nice job with futures using the type system we have today; just have your future take an `&mut` pointer to the data it plans to mutate. This is quite similar to -- but perhaps a generalization of, since it would permit more flexible control flow -- [the proposal I advanced here][1]. 
Niko [1]: http://smallcultfollowing.com/babysteps/blog/2013/06/11/data-parallelism-in-rust/ From niko at alum.mit.edu Wed Feb 5 05:50:47 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Wed, 5 Feb 2014 08:50:47 -0500 Subject: [rust-dev] Syntax for custom type bounds In-Reply-To: References: <20140201125738.GA21688@Mr-Bennet> <20140203193354.GA2334@Mr-Bennet> Message-ID: <20140205135047.GB28153@Mr-Bennet> On Tue, Feb 04, 2014 at 11:41:21PM -0800, Vadim wrote: > How does this apply to traits? If I look at "trait MutableVector<'a, T>" > there's nothing that would connect 'a and self. There is no necessary connection between 'a and the receiver. For example, you might have: impl MutableVector<'static, T> for SomeType { ... } > But what about dynamic dispatch? Is the following supposed to > work?: > > fn bar(x: &mut MutableVector) { > let y = x.mut_iter(); > ... > } This should work ok. If you expand out all the implicit lifetime parameters in this example, you would have a function like: > fn bar<'a,'b>(x: &'b mut MutableVector<'a, int>) { > let y = x.mut_iter(); > ... > } Now a function like `mut_iter()` should get an iterator with lifetime `'a`. Niko From erick.tryzelaar at gmail.com Wed Feb 5 14:13:51 2014 From: erick.tryzelaar at gmail.com (Erick Tryzelaar) Date: Wed, 5 Feb 2014 14:13:51 -0800 Subject: [rust-dev] 2/25 Rust Bay Area meetup - Cap'n Proto, Macros, and Testing Message-ID: Hello rusties! I'm happy to announce our next meetup on Tuesday, February 25th at the Mozilla San Francisco office: http://www.meetup.com/Rust-Bay-Area/events/156288462/ Here is our lineup: - David Renshaw: I will discuss my work-in-progress on writing a Rust implementation of the Cap'n Proto serialization library, which is like Protobuf but faster and with a better type system, and the Cap'n Proto remote procedure call protocol, which is an advanced distributed object-capability system.
I hope to highlight some of the fun puzzles I've run into as I have worked to translate the original object-oriented C++ code into idiomatic Rust. - Steven Fackler: Exportable Macros - Kevin Cantu: Testing in Rust As always, Mozilla will be graciously providing food and a live stream of the event. I hope you can make it! -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Wed Feb 5 16:20:32 2014 From: alex at crichton.co (Alex Crichton) Date: Wed, 5 Feb 2014 16:20:32 -0800 Subject: [rust-dev] Rust's 2013 issue churn Message-ID: Some of you may have already seen GitHub's new State of the Octoverse 2013 at http://octoverse.github.com/ I'd just like to point out that the rust repository closed the second most number of issues (6408) on all of GitHub. Just to reiterate, out of the millions of repositories on GitHub, we closed the *second highest* number of issues! Congratulations to everyone, this was truly a community effort. I look forward to closing even more issues this year! From ben.striegel at gmail.com Wed Feb 5 16:40:26 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Wed, 5 Feb 2014 19:40:26 -0500 Subject: [rust-dev] Rust's 2013 issue churn In-Reply-To: References: Message-ID: Awesome! What a cool find. As well, this lends credence to my previously unfounded suspicion that we were one of the most prolific users of Github's issue tracking system. This should give us some leverage the next time that we feel the need to complain about the UI. :P On Wed, Feb 5, 2014 at 7:20 PM, Alex Crichton wrote: > Some of you may have already seen GitHub's new State of the Octoverse > 2013 at http://octoverse.github.com/ > > I'd just like to point out that the rust repository closed the second > most number of issues (6408) on all of GitHub. Just to reiterate, out > of the millions of repositories on GitHub, we closed the *second > highest* number of issues!
> > Congratulations to everyone, this was truly a community effort. I look > forward to closing even more issues this year! > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jurily at gmail.com Wed Feb 5 23:02:55 2014 From: jurily at gmail.com (György Andrasek) Date: Thu, 06 Feb 2014 08:02:55 +0100 Subject: [rust-dev] Rust's 2013 issue churn In-Reply-To: References: Message-ID: <52F3339F.4070102@gmail.com> On 02/06/2014 01:20 AM, Alex Crichton wrote: > Some of you may have already seen GitHub's new State of the Octoverse > 2013 at http://octoverse.github.com/ > > I'd just like to point out that the rust repository closed the second > most number of issues (6408) on all of GitHub. Just to reiterate, out > of the millions of repositories on GitHub, we closed the *second > highest* number of issues! > > Congratulations to everyone, this was truly a community effort. I look > forward to closing even more issues this year! Homebrew shouldn't count. All they do is bump tarball links. From judofyr at gmail.com Thu Feb 6 05:19:06 2014 From: judofyr at gmail.com (Magnus Holm) Date: Thu, 6 Feb 2014 14:19:06 +0100 Subject: [rust-dev] Prefix on extern blocks Message-ID: Hi, Idea: #[prefix="http_parser_"] extern { pub fn init(parser: *http_parser, _type: enum_http_parser_type); ... } This has a few niceties: * You don't have to repeat the prefix in every function * You can now write http_parser::init(...) to call the function However, let's also introduce another concept: #[autoprefix] extern { pub fn init(parser: *http_parser, _type: enum_http_parser_type); ... } The autoprefix attribute prefixes every function with the crate name and crate SHA1. You can then compile your C library with -DPREFIX=`rustc --crate-autoprefix`. -- So, the reason I want this is to integrate with C libraries that I control (i.e.
they are bundled with the Rust package) and at the same time support Rust models of allowing multiple versions of the same crate in the same binary. In this particular crate I'm using a slightly modified http_parser, and I'd rather not "leak" the symbols into the global namespace. This is basically a poor-man's namespace functionality for C libraries. Even if you don't completely control the C library (e.g. you're just bundling http_parser as-is) you can use objcopy or other tools to autoprefix after you've compiled it into an object file. Thoughts? I've already tried to accomplish the same with linker flags, but I haven't found a cross-platform way of hiding specific symbols in a safe way. // Magnus Holm From edbalint at inf.u-szeged.hu Thu Feb 6 08:30:01 2014 From: edbalint at inf.u-szeged.hu (Edit Balint) Date: Thu, 06 Feb 2014 17:30:01 +0100 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM Message-ID: <52F3B889.7010008@inf.u-szeged.hu> Dear Rust Developers! My name is Edit Balint, I'm a software developer at University of Szeged, Hungary. We have a research project regarding Servo and Rust. Our main goal is to cross-compile and run Rust and Servo on ARM Linux (not Android). We have several issues with the cross-compiling procedure. Is there any guide how to achieve this? Any help would be appreciated. Best regards: Edit Balint From explodingmind at gmail.com Thu Feb 6 08:55:35 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Thu, 6 Feb 2014 11:55:35 -0500 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: <52F3B889.7010008@inf.u-szeged.hu> References: <52F3B889.7010008@inf.u-szeged.hu> Message-ID: I've done a lot of work with Rust on ARMHF, but mostly bootstrapped. Luqman's binary builds are a great cheatcode for spinning up a build - using --enable-local-rust and the images downloadable from http://luqman.ca/rust-builds, it's pretty trivial to get Rust building on a $50 RK3188 ARMv7 quad-core computer with 2GB of RAM.
I've found it to be easier than cross-compiling. For floating point intrinsics to work on ARMHF, https://github.com/mozilla/rust/issues/10482#issuecomment-32097769 needs to be applied. I'll be endeavoring to get this merged up within the next week. Depending upon your platform, you may need to patch configure to support your particular set of CPU config etc. I had to manually add arm7 as a CPUTYPE. If you're stuck on cross-compiling, Luqman seems to be the domain expert. I don't know what, if any, out of build patches he uses. Delighted to see ARM getting more love! I'll endeavor to make myself of use - drop an email to the list or to me personally with specific issues as they come up! Best, -- Ian On Thu, Feb 6, 2014 at 11:30 AM, Edit Balint wrote: > Dear Rust Developers! > > My name is Edit Balint, I'm a software developer at University of Szeged, > Hungary. > We have a research project regarding Servo and Rust. > Our main goal is to cross-compile and run Rust and Servo on ARM Linux (not > Android). > We have several issues with the cross-compiling procedure. Is there any > guide how to achieve this? > > Any help would be appreciated. > > Best regards: > Edit Balint > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.sapin at exyr.org Thu Feb 6 09:06:53 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Thu, 06 Feb 2014 17:06:53 +0000 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: <52F3B889.7010008@inf.u-szeged.hu> References: <52F3B889.7010008@inf.u-szeged.hu> Message-ID: <52F3C12D.7000103@exyr.org> On 06/02/2014 16:30, Edit Balint wrote: > Dear Rust Developers! > > My name is Edit Balint, I'm a software developer at University of > Szeged, Hungary. > We have a research project regarding Servo and Rust. 
> Our main goal is to cross-compile and run Rust and Servo on ARM Linux > (not Android). > We have several issues with the cross-compiling procedure. Is there any > guide how to achieve this? Although you're not using Android, this https://github.com/mozilla/servo/wiki/Building-for-Android#wiki-build-servo suggests that you need to pass --target-triples=arm-linux-SOMETHING to the configure script, but I don't know what value of SOMETHING is relevant to you. It may be gnu, the default triple on my machine is x86_64-unknown-linux-gnu. -- Simon Sapin From explodingmind at gmail.com Thu Feb 6 09:10:25 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Thu, 6 Feb 2014 12:10:25 -0500 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: <52F3C12D.7000103@exyr.org> References: <52F3B889.7010008@inf.u-szeged.hu> <52F3C12D.7000103@exyr.org> Message-ID: You probably want arm-unknown-linux-gnueabihf or arm-unknown-linux-gnueabi. On Thu, Feb 6, 2014 at 12:06 PM, Simon Sapin wrote: > On 06/02/2014 16:30, Edit Balint wrote: > >> Dear Rust Developers! >> >> My name is Edit Balint, I'm a software developer at University of >> Szeged, Hungary. >> We have a research project regarding Servo and Rust.
> > -- > Simon Sapin > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laden at csclub.uwaterloo.ca Thu Feb 6 09:30:21 2014 From: laden at csclub.uwaterloo.ca (Luqman Aden) Date: Thu, 6 Feb 2014 12:30:21 -0500 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: <52F3B889.7010008@inf.u-szeged.hu> References: <52F3B889.7010008@inf.u-szeged.hu> Message-ID: Building a Rust cross compiler that can target arm isn't too hard. You just need the right toolchain installed. I personally use Debian with the gcc-4.7-arm-linux-gnueabi package from the Emdebian repo. (I believe Ubuntu and other distros have similar packages). From there it's just a simple matter of passing the right triple to the configure script. ./configure --target=arm-unknown-linux-gnueabi && make That'll build a rustc that can target arm as well as all the libraries. Then you can run it like so: rustc --target=arm-unknown-linux-gnueabi --linker=arm-linux-gnueabi-gcc hello.rs That'll give you a binary, hello, which will run on arm/linux. So, that's the basic gist of it. Cross compiling rustc itself is a bit more annoying right now. You basically need to cross compile LLVM, libsyntax, librustc & rustc. As for Servo, I know they cross compile to android so there seems to be at least some level of support in terms of cross compilation. In any case, if you have any more questions feel free to ping me on IRC (my nick is just Luqman). On Thu, Feb 6, 2014 at 11:30 AM, Edit Balint wrote: > Dear Rust Developers! > > My name is Edit Balint, I'm a software developer at University of Szeged, > Hungary. > We have a research project regarding Servo and Rust. > Our main goal is to cross-compile and run Rust and Servo on ARM Linux (not > Android). > We have several issues with the cross-compiling procedure. Is there any
Is there any > guide how to achieve this? > > Any help would be appreciated. > > Best regards: > Edit Balint > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at steveklabnik.com Thu Feb 6 11:03:16 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Thu, 6 Feb 2014 11:03:16 -0800 Subject: [rust-dev] Prefix on extern blocks In-Reply-To: References: Message-ID: Magnus, good to see you here! I think this idea sounds neat, but I'm not really able to judge if it's technically a good idea or not. Seems mega useful though. From eric.summers at me.com Thu Feb 6 11:15:52 2014 From: eric.summers at me.com (Eric Summers) Date: Thu, 06 Feb 2014 13:15:52 -0600 Subject: [rust-dev] Prefix on extern blocks In-Reply-To: References: Message-ID: <8D090F47-7A62-4B9F-B31A-A8DAB4515A57@me.com> I think it is probably better to let tools like rust-bindgen tackle this problem. It may be cool if there is an elegant way to build it into the language, but I think it will require additional metadata to convert the pointers into safe pointers. Opaque structs (like in your example) should generally be wrapped as classes, and simply changing a prefix wouldn't be able to solve that. Eric On Feb 6, 2014, at 1:03 PM, Steve Klabnik wrote: > Magnus, good to see you here! > > I think this idea sounds neat, but I'm not really able to judge if > it's technically a good idea or not. Seems mega useful though. 
From niko at alum.mit.edu Thu Feb 6 13:02:58 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Thu, 6 Feb 2014 16:02:58 -0500 Subject: [rust-dev] Prefix on extern blocks In-Reply-To: References: Message-ID: <20140206210258.GA25596@Mr-Bennet> On Thu, Feb 06, 2014 at 02:19:06PM +0100, Magnus Holm wrote: > Hi, > > Idea: > > #[prefix="http_parser_"] > extern { > pub fn init(parser: *http_parser, _type: enum_http_parser_type); > … > } > > This has a few niceties: > > * You don't have to repeat the prefix in every function > * You can now write http_parser::init(…) to call the function This could also be achieved with a macro, I think. From banderson at mozilla.com Thu Feb 6 16:35:48 2014 From: banderson at mozilla.com (Brian Anderson) Date: Thu, 06 Feb 2014 16:35:48 -0800 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? Message-ID: <52F42A64.9070408@mozilla.com> Hey. One of my goals for 0.10 is to make the Rust installation and upgrade experience better. My personal ambitions are to make Rust installable with a single shell command, distribute binaries, not source, and to have both nightlies and point releases. Since we're already able to create highly-compatible snapshot compilers, it should be relatively easy to extend our snapshot procedure to produce complete binaries, installable via a cross-platform shell script. This would require the least amount of effort and maintenance because we don't need to use any specific package managers or add new bots, and a single installer can work on all Linuxes. We can also attempt to package Rust with various of the most common package managers: homebrew, macports, dpkg, rpm. There are community-maintained packages for some of these already, so we don't necessarily need to redevelop from scratch if we just want to adopt one or all of them as official packages. We could also create a GUI installer for OS X, but I'm not sure how important that is. What shall we do? 
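The macro route Niko Matsakis mentions for the `#[prefix]` idea can be sketched in modern Rust syntax (the 2014-era macro system differed). This is only an illustration, not a proposal from the thread: `prefixed_extern!` is an invented name, and it relies on macro invocations being allowed in attribute values (`#[link_name = concat!(...)]`), which needs a reasonably recent compiler.

```rust
// A sketch of the "prefix on extern blocks" idea as a macro, in modern
// Rust rather than the 2014 syntax of this thread. `prefixed_extern!`
// is a hypothetical name. Each declared function is bound to the
// prefixed C symbol via #[link_name], so callers use the short name.
macro_rules! prefixed_extern {
    ($prefix:literal; $(fn $name:ident($($arg:ident: $ty:ty),*) -> $ret:ty;)*) => {
        extern "C" {
            $(
                // concat! builds the real symbol name, e.g. "ab" + "s" = "abs".
                #[link_name = concat!($prefix, stringify!($name))]
                pub fn $name($($arg: $ty),*) -> $ret;
            )*
        }
    };
}

// Demonstrate against libc: the short name `s` resolves to the symbol `abs`.
prefixed_extern!("ab"; fn s(x: i32) -> i32;);

fn main() {
    let v = unsafe { s(-5) };
    println!("{}", v); // 5
}
```

A real implementation would presumably wrap the declarations in a module so calls read `http_parser::init(...)`, as the original proposal wanted.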
From ben.striegel at gmail.com Thu Feb 6 16:40:27 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Thu, 6 Feb 2014 19:40:27 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: > We can also attempt to package Rust with various of the most common package managers: homebrew, macports, dpkg, rpm. It would be great to have these for the point releases, but do all of these allow for nightly builds? On Thu, Feb 6, 2014 at 7:35 PM, Brian Anderson wrote: > Hey. > > One of my goals for 0.10 is to make the Rust installation and upgrade > experience better. My personal ambitions are to make Rust installable with > a single shell command, distribute binaries, not source, and to have both > nightlies and point releases. > > Since we're already able to create highly-compatible snapshot compilers, > it should be relatively easy to extend our snapshot procedure to produce > complete binaries, installable via a cross-platform shell script. This > would require the least amount of effort and maintenance because we don't > need to use any specific package managers or add new bots, and a single > installer can work on all Linuxes. > > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. There community-maintained > packages for some of these already, so we don't necessarily need to > redevelop from scratch if we just want to adopt one or all of them as > official packages. We could also create a GUI installer for OS X, but I'm > not sure how important that is. > > What shall we do? > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From leebraid at gmail.com Thu Feb 6 16:41:53 2014 From: leebraid at gmail.com (Lee Braiden) Date: Fri, 07 Feb 2014 00:41:53 +0000 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: <52F42BD1.70900@gmail.com> On 07/02/14 00:35, Brian Anderson wrote: > Hey. > > One of my goals for 0.10 is to make the Rust installation and upgrade > experience better. My personal ambitions are to make Rust installable > with a single shell command, distribute binaries, not source, and to > have both nightlies and point releases. > > Since we're already able to create highly-compatible snapshot > compilers, it should be relatively easy to extend our snapshot > procedure to produce complete binaries, installable via a > cross-platform shell script. This would require the least amount of > effort and maintenance because we don't need to use any specific > package managers or add new bots, and a single installer can work on > all Linuxes. > > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. There > community-maintained packages for some of these already, so we don't > necessarily need to redevelop from scratch if we just want to adopt > one or all of them as official packages. We could also create a GUI > installer for OS X, but I'm not sure how important that is. > > What shall we do? > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev Please don't use a shell script that downloads binaries. For years, my workplace had development machines completely disconnected from the net, so offline installers were a must. They're also important for admins who want to download once and install on multiple machines. 
Basically, when you click download binaries, what you get should be just that: binaries, not another downloader to let you get the binaries. -- Lee From pnathan at vandals.uidaho.edu Thu Feb 6 16:47:12 2014 From: pnathan at vandals.uidaho.edu (Nathan, Paul (pnathan@vandals.uidaho.edu)) Date: Fri, 7 Feb 2014 00:47:12 +0000 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42BD1.70900@gmail.com> References: <52F42A64.9070408@mozilla.com>,<52F42BD1.70900@gmail.com> Message-ID: <0e3bf5a563f14f338d02c50bb3a56b5f@BLUPR04MB561.namprd04.prod.outlook.com> ________________________________________ From: Rust-dev on behalf of Lee Braiden Sent: Thursday, February 06, 2014 4:41 PM To: rust-dev at mozilla.org Subject: Re: [rust-dev] What form should the official Rust binary installers for Unixes take? On 07/02/14 00:35, Brian Anderson wrote: > Hey. > > One of my goals for 0.10 is to make the Rust installation and upgrade > experience better. My personal ambitions are to make Rust installable > with a single shell command, distribute binaries, not source, and to > have both nightlies and point releases. > > Since we're already able to create highly-compatible snapshot > compilers, it should be relatively easy to extend our snapshot > procedure to produce complete binaries, installable via a > cross-platform shell script. This would require the least amount of > effort and maintenance because we don't need to use any specific > package managers or add new bots, and a single installer can work on > all Linuxes. > > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. There > community-maintained packages for some of these already, so we don't > necessarily need to redevelop from scratch if we just want to adopt > one or all of them as official packages. We could also create a GUI > installer for OS X, but I'm not sure how important that is. > > What shall we do? 
> _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev Please don't use a shell script that downloads binaries. For years, my workplace had development machines completely disconnected from the net, so offline installers were a must. They're also important for admins who want to download once and install on multiple machines. Basically, when you click download binaries, what you get should be just that: binaries, not another downloader to let you get the binaries. -- Lee All, Agreeing with Lee about the downloading compilers. This is a *key* thing that has to go away for 1.0. The embedded systems shops often have restrictions unthinkable to people who have not done SCIF-style development. Some OSX devs will probably want a homebrew port; the others will want a GUI installer. _______________________________________________ Rust-dev mailing list Rust-dev at mozilla.org https://mail.mozilla.org/listinfo/rust-dev From j.boggiano at seld.be Thu Feb 6 17:08:20 2014 From: j.boggiano at seld.be (Jordi Boggiano) Date: Fri, 07 Feb 2014 02:08:20 +0100 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: <52F43204.5090007@seld.be> On 07/02/2014 01:35, Brian Anderson wrote: > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. There > community-maintained packages for some of these already, so we don't > necessarily need to redevelop from scratch if we just want to adopt one > or all of them as official packages. We could also create a GUI > installer for OS X, but I'm not sure how important that is. 
I don't think I have much to add to the debate, but if you do want to create many packages I'd like to make sure you are aware of https://github.com/jordansissel/fpm - I think it can greatly help. Cheers -- Jordi Boggiano @seldaek - http://nelm.io/jordi From jurily at gmail.com Thu Feb 6 19:01:36 2014 From: jurily at gmail.com (György Andrasek) Date: Fri, 07 Feb 2014 04:01:36 +0100 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: <52F44C90.4030508@gmail.com> On 02/07/2014 01:35 AM, Brian Anderson wrote: > Since we're already able to create highly-compatible snapshot compilers, > it should be relatively easy to extend our snapshot procedure to produce > complete binaries, installable via a cross-platform shell script. This > would require the least amount of effort and maintenance because we > don't need to use any specific package managers or add new bots, and a > single installer can work on all Linuxes. That way lies madness. We won't get "standard" distros with it, as they'll want to package their own anyway (Arch already has 0.9 for example), and we won't get niche distros either because the number of variables we'd have to account for is pretty much infinite. Niche distros don't even agree on what libc[0] to use, and it only gets worse from there. Linux is simply not friendly towards binaries. IMHO the only correct option for Linux is to provide a) a fully static bootstrap binary (#10807), b) a build system that observes all the standard conventions for building from source (prefix, toolchain, don't download stuff, etc.) and c) true cross-compiling. Then sit back and let people distribute binaries however they want. 
(I know this is mostly a pipe dream, but you specifically asked for *all* Linuxes :) [0]: http://en.wikipedia.org/wiki/Lilblue_Linux > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. There > community-maintained packages for some of these already, so we don't > necessarily need to redevelop from scratch if we just want to adopt one > or all of them as official packages. We could also create a GUI > installer for OS X, but I'm not sure how important that is. Again, people will be happy to solve this for us if we make it easy enough for them. Just maintain the install instructions and relevant links on the site, not the binaries themselves. From danielmicay at gmail.com Thu Feb 6 19:20:50 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Thu, 6 Feb 2014 22:20:50 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: References: <52F42A64.9070408@mozilla.com> Message-ID: On Thu, Feb 6, 2014 at 7:40 PM, Benjamin Striegel wrote: >> We can also attempt to package Rust with various of the most common >> package managers: homebrew, macports, dpkg, rpm. > > It would be great to have these for the point releases, but do all of these > allow for nightly builds? Yes, they allow for nightly builds. The version number will come from `git describe` and increment for eternity with every revision. Since the version is always greater with each revision, the package manager will consider them to be updates. It's often trivial to create a nightly package too. For example, Arch Linux has direct support for building packages with Bazaar, Mercurial, Git and Subversion sources including support for branches and tags. It's as simple as doing `source=('git://github.com/mozilla/rust.git')` and giving a tag/branch after a hash if desired. 
From danielmicay at gmail.com Thu Feb 6 20:39:26 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Thu, 6 Feb 2014 23:39:26 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: On Thu, Feb 6, 2014 at 7:35 PM, Brian Anderson wrote: > Hey. > > One of my goals for 0.10 is to make the Rust installation and upgrade > experience better. My personal ambitions are to make Rust installable with a > single shell command, distribute binaries, not source, and to have both > nightlies and point releases. > > Since we're already able to create highly-compatible snapshot compilers, it > should be relatively easy to extend our snapshot procedure to produce > complete binaries, installable via a cross-platform shell script. This would > require the least amount of effort and maintenance because we don't need to > use any specific package managers or add new bots, and a single installer > can work on all Linuxes. > > We can also attempt to package Rust with various of the most common package > managers: homebrew, macports, dpkg, rpm. There community-maintained packages > for some of these already, so we don't necessarily need to redevelop from > scratch if we just want to adopt one or all of them as official packages. We > could also create a GUI installer for OS X, but I'm not sure how important > that is. > > What shall we do? I think Rust should prefer first-party distribution packages when they are available and up-to-date. It would be okay to have a generic Linux installer as a last resort, but it's not ideal as the package manager won't be keeping it up-to-date and it won't be fully integrated with the system. A package tailored to the system can install the support for various text editors to the right places and update the mime database for Rust files. 
It also allows for linking against the system LLVM libraries when Rust is able to use the stable LLVM release without patches. I'll continue packaging the latest stable Rust release in the official Arch Linux repositories. There are nightly builds hosted on Arch's build server and maintaining it just involves a quick upload of the new PKGBUILD via ssh. It would be great to have the nightly repository hosted on , but it would involve maintaining an Arch virtual machine. FWIW, this would also be a good way to host `playpen` for executing code on the front page and documentation. From clements at brinckerhoff.org Thu Feb 6 23:06:45 2014 From: clements at brinckerhoff.org (John Clements) Date: Thu, 6 Feb 2014 23:06:45 -0800 Subject: [rust-dev] 2/25 Rust Bay Area meetup - Cap'n Proto, Macros, and Testing In-Reply-To: References: Message-ID: <37CC458B-EBA9-4B8E-A3B6-0CABEFCC7E80@brinckerhoff.org> On Feb 5, 2014, at 2:13 PM, Erick Tryzelaar wrote: > Hello rusties! > I'm happy to announce our next meetup on Tuesday, February 25th at the Mozilla San Francisco office: > > http://www.meetup.com/Rust-Bay-Area/events/156288462/ > > Here is our lineup: > > > • David Renshaw: I will discuss my work-in-progress on writing a Rust implementation of the Cap'n Proto serialization library, which is like Protobuf but faster and with a better type system, and the Cap'n Proto remote procedure call protocol, which is an advanced distributed object-capability system. I hope to highlight some of the fun puzzles I've run into as I have worked to translate the original object-oriented C++ code into idiomatic Rust. > > > • Steven Fackler: Exportable Macros ooh... I bet I can make some excuse to come up to SF this night.... Sounds great! John > > • Kevin Cantu: Testing in Rust > > As always, Mozilla will be graciously providing food and a live stream of the event. I hope you can make it! 
> > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From uzytkownik2 at gmail.com Fri Feb 7 01:16:34 2014 From: uzytkownik2 at gmail.com (Maciej Piechotka) Date: Fri, 07 Feb 2014 10:16:34 +0100 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: <1391764594.7982.0.camel@localhost> On Thu, 2014-02-06 at 16:35 -0800, Brian Anderson wrote: > Hey. > > One of my goals for 0.10 is to make the Rust installation and upgrade > experience better. My personal ambitions are to make Rust installable > with a single shell command, distribute binaries, not source, and to > have both nightlies and point releases. > > Since we're already able to create highly-compatible snapshot compilers, > it should be relatively easy to extend our snapshot procedure to produce > complete binaries, installable via a cross-platform shell script. This > would require the least amount of effort and maintenance because we > don't need to use any specific package managers or add new bots, and a > single installer can work on all Linuxes. > > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. There > community-maintained packages for some of these already, so we don't > necessarily need to redevelop from scratch if we just want to adopt one > or all of them as official packages. We could also create a GUI > installer for OS X, but I'm not sure how important that is. > > What shall we do? As a user I'd just install from repo as I do with all programs I don't actively contribute to. Sorry - scratch that - I have Rust installed via package manager right now. As far as I know at least Arch, Gentoo and Ubuntu have a culture of user packaging. 
Another example - Gnome, as far as I know, does not provide any binary packages, only source. Right now the largest problem I have installing is that the pre-compiled rust crashes if llvm is installed on Gentoo (adding to György's point probably), so to install Rust I need to uninstall llvm, install rust and then install llvm (I'm using Gentoo so installing from package == compiling). Since bootstrapping is needed anyway it'd be nice if it worked with any LLVM installed (simple tarball should work). Best regards PS. I'd be for having the build system NOT downloading any binary and instead relying on having rustc in path. Then there can be a prebuilt binary for bootstrapping. It makes life a bit harder for 'normal users who built rust' (is there anyone in this category?) but easier for packagers (both AUR and Portage separate downloading and building, which has several benefits) and administrators (adding to Lee's point). -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From corey at octayn.net Fri Feb 7 01:25:29 2014 From: corey at octayn.net (Corey Richardson) Date: Fri, 7 Feb 2014 04:25:29 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <1391764594.7982.0.camel@localhost> References: <52F42A64.9070408@mozilla.com> <1391764594.7982.0.camel@localhost> Message-ID: On Fri, Feb 7, 2014 at 4:16 AM, Maciej Piechotka wrote: > PS. I'd be for having the build system NOT downloading any binary and > instead relying on having rustc in path. > This isn't very feasible right now, since the compiler needs to target a specific Rust so it knows what it can use in the first stage (stage0). Maybe in the (far) future this can be possible. 
From info at bnoordhuis.nl Fri Feb 7 01:56:53 2014 From: info at bnoordhuis.nl (Ben Noordhuis) Date: Fri, 7 Feb 2014 10:56:53 +0100 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: On Fri, Feb 7, 2014 at 1:35 AM, Brian Anderson wrote: > Hey. > > One of my goals for 0.10 is to make the Rust installation and upgrade > experience better. My personal ambitions are to make Rust installable with a > single shell command, distribute binaries, not source, and to have both > nightlies and point releases. > > Since we're already able to create highly-compatible snapshot compilers, it > should be relatively easy to extend our snapshot procedure to produce > complete binaries, installable via a cross-platform shell script. This would > require the least amount of effort and maintenance because we don't need to > use any specific package managers or add new bots, and a single installer > can work on all Linuxes. > > We can also attempt to package Rust with various of the most common package > managers: homebrew, macports, dpkg, rpm. There community-maintained packages > for some of these already, so we don't necessarily need to redevelop from > scratch if we just want to adopt one or all of them as official packages. We > could also create a GUI installer for OS X, but I'm not sure how important > that is. > > What shall we do? Different demographic perhaps but as a data point: with node.js on OS X, the .pkg and homebrew cover 99.9% of the installed user base, macports isn't even on the radar. Linux: I'd recommend shipping binaries as tarballs with minimal dependencies. As another data point: node.js release binaries are compiled on RHEL 5 for maximum portability. Old glibc, old kernel headers, old everything - if it runs there, it runs everywhere that's newer. Seems to work well, very few bug reports. 
Package managers: perfect solution for users, somewhat unwieldy from a releng perspective. My company provides packages for multiple releases of several RHEL and Debian derivatives and that matrix becomes overwhelming fast. From jfager at gmail.com Fri Feb 7 05:16:32 2014 From: jfager at gmail.com (Jason Fager) Date: Fri, 7 Feb 2014 08:16:32 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: References: <52F42A64.9070408@mozilla.com> Message-ID: > Different demographic perhaps but as a data point: with node.js on OS > X, the .pkg and homebrew cover 99.9% of the installed user base, > macports isn't even on the radar. Checking in as someone who prefers macports. I'd guess that the dominance of homebrew and .pkg in the node community is partly because the first mention of OS X on the node homepage points to the .pkg installer and the top hit for 'install node mac' uses homebrew, and partly that node and homebrew both went through their 'cool&new' phase at around the same time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From philippe.delrieu at free.fr Fri Feb 7 12:50:12 2014 From: philippe.delrieu at free.fr (Philippe Delrieu) Date: Fri, 07 Feb 2014 21:50:12 +0100 Subject: [rust-dev] error: cannot pack type `~PtrStruct`, which does not fulfill `Send`, as a trait bounded by Send Message-ID: <52F54704.7070909@free.fr> Hello, I use the sfml lib, which contains types that have Rc<RefCell<...>> fields. I use them in a struct that implements some trait. I have some function that takes a trait as argument. 
When I compile I have the error : error: cannot pack type `~PtrStruct`, which does not fulfill `Send`, as a trait bounded by Send I do some test code to be more clear: extern mod rsfml; use std::ptr; use std::rc::Rc; use std::cell::RefCell; pub trait Trait1 { fn todo(&self); } pub struct PtrStruct { this : Rc<RefCell<~str>> } impl Trait1 for PtrStruct { fn todo(&self) {} } pub trait MonTrait { fn use_mon_trait(&self, tr: ~Trait1); } pub struct UseMonTrait; impl MonTrait for UseMonTrait { fn use_mon_trait(&self, tr: ~Trait1) { tr.todo(); } } #[main] fn main() { let ptr_struct = ~PtrStruct{ this : Rc::new(RefCell::new(~"toto"))}; let use_ptr = UseMonTrait; use_ptr.use_mon_trait(ptr_struct as ~Trait1); } 36:34 error: cannot pack type `~PtrStruct`, which does not fulfill `Send`, as a trait bounded by Send use_ptr.use_mon_trait(ptr_struct); ^~~~~~~~~~ If I replace Rc<RefCell<~str>> with ~str it works. Any idea to make Rc<RefCell<~str>> compatible with Send trait. Philippe Delrieu From ecreed at cs.washington.edu Fri Feb 7 13:54:21 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Fri, 7 Feb 2014 13:54:21 -0800 Subject: [rust-dev] error: cannot pack type `~PtrStruct`, which does not fulfill `Send`, as a trait bounded by Send In-Reply-To: <52F54704.7070909@free.fr> References: <52F54704.7070909@free.fr> Message-ID: ~Trait is really sugar for ~Trait: Send, so you're putting an implicit Send bound wherever you use ~Trait. A type meets the Send bound if it's safe to send over channels (which really means that it contains no aliasable data or references [alternatively, it owns all of its parts]). Rc can never meet the Send bound (b/c of aliasability), so what you need to do is get rid of the Send bound (not a problem since you're not actually using it). You can do this by replacing ~Trait with ~Trait: (notice the colon! you're specifying that the bound list for ~Trait is empty). I just tried this with your example and it compiles afterwards. 
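For readers on current Rust, the situation Eric describes can be reproduced with today's syntax, where the owned trait object carries no implicit Send bound (a sketch under the assumption that `~T` became `Box<T>` and `~str` became `String`; the rsfml dependency is dropped):

```rust
use std::cell::RefCell;
use std::rc::Rc;

pub trait Trait1 {
    fn todo(&self);
}

// Rc<RefCell<...>> is not Send, so PtrStruct is not Send either.
pub struct PtrStruct {
    pub this: Rc<RefCell<String>>,
}

impl Trait1 for PtrStruct {
    fn todo(&self) {}
}

// Box<dyn Trait1> has no implicit Send bound in modern Rust, so a
// non-Send type can be packed into it. Writing Box<dyn Trait1 + Send>
// instead would be rejected, mirroring the 2014-era error above.
pub fn use_mon_trait(tr: Box<dyn Trait1>) {
    tr.todo();
}

fn main() {
    let ptr_struct = PtrStruct {
        this: Rc::new(RefCell::new(String::from("toto"))),
    };
    use_mon_trait(Box::new(ptr_struct));
    println!("ok");
}
```

The default flipped between then and now: in 2014 the Send bound had to be opted out of (`~Trait:`), while today it has to be opted into (`+ Send`).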
On Fri, Feb 7, 2014 at 12:50 PM, Philippe Delrieu wrote: > Hello, > > I use sfml lib that contains type that has Rc> fields. > I use them in a struct that implements some trait. I have some function > that take trait as argument. > When I compile I have the error : error: cannot pack type `~PtrStruct`, > which does not fulfill `Send`, as a trait bounded by Send > > I do some test code to be more clear: > extern mod rsfml; > > use std::ptr; > use std::rc::Rc; > use std::cell::RefCell; > > pub trait Trait1 { > fn todo(&self); > } > > pub struct PtrStruct { > this : Rc> > } > > impl Trait1 for PtrStruct { > fn todo(&self) {} > } > > > pub trait MonTrait { > fn use_mon_trait(&self, tr: ~Trait1); > } > > pub struct UseMonTrait; > > impl MonTrait for UseMonTrait { > fn use_mon_trait(&self, tr: ~Trait1) { > tr.todo(); > } > } > > #[main] > fn main() { > let ptr_struct = ~PtrStruct{ this : Rc::new(RefCell::new(~"toto"))}; > let use_ptr = UseMonTrait; > use_ptr.use_mon_trait(ptr_struct as ~Trait1); > } > > 36:34 error: cannot pack type `~PtrStruct`, which does not fulfill `Send`, > as a trait bounded by Send > use_ptr.use_mon_trait(ptr_struct); > ^~~~~~~~~~ > > If I replace Rc> with ~str it works. > > Any idea to make Rc> compatible with Send trait. > > Philippe Delrieu > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ecreed at cs.washington.edu Fri Feb 7 14:06:54 2014 From: ecreed at cs.washington.edu (Eric Reed) Date: Fri, 7 Feb 2014 14:06:54 -0800 Subject: [rust-dev] error: cannot pack type `~PtrStruct`, which does not fulfill `Send`, as a trait bounded by Send In-Reply-To: References: <52F54704.7070909@free.fr> Message-ID: Oh, just to be clear: you should change `tr: ~Trait1' to `tr: ~Trait1: ' in the function signature of use_mon_trait and change `ptr_struct as ~Trait1' to `ptr_struct as ~Trait1: '. On Fri, Feb 7, 2014 at 1:54 PM, Eric Reed wrote: > ~Trait is really sugar for ~Trait: Send, so you're putting an implicit > Send bound wherever you use ~Trait. > A type meets the Send bound if it's safe to send over channels (which > really means that it contains no aliasable data or references > [alternatively, it owns all of its parts]). > Rc can never meet the Send bound (b/c of aliasability), so what you need > to do is get rid of the Send bound (not a problem since you're not actually > using it). > You can do this by replacing ~Trait with ~Trait: (notice the semicolon! > you're specifying that the bound list for ~Trait is empty). > I just tried this with your example and it compiles afterwards. > > > > On Fri, Feb 7, 2014 at 12:50 PM, Philippe Delrieu < > philippe.delrieu at free.fr> wrote: > >> Hello, >> >> I use sfml lib that contains type that has Rc> fields. >> I use them in a struct that implements some trait. I have some function >> that take trait as argument. 
>> When I compile I have the error : error: cannot pack type `~PtrStruct`, >> which does not fulfill `Send`, as a trait bounded by Send >> >> I do some test code to be more clear: >> extern mod rsfml; >> >> use std::ptr; >> use std::rc::Rc; >> use std::cell::RefCell; >> >> pub trait Trait1 { >> fn todo(&self); >> } >> >> pub struct PtrStruct { >> this : Rc> >> } >> >> impl Trait1 for PtrStruct { >> fn todo(&self) {} >> } >> >> >> pub trait MonTrait { >> fn use_mon_trait(&self, tr: ~Trait1); >> } >> >> pub struct UseMonTrait; >> >> impl MonTrait for UseMonTrait { >> fn use_mon_trait(&self, tr: ~Trait1) { >> tr.todo(); >> } >> } >> >> #[main] >> fn main() { >> let ptr_struct = ~PtrStruct{ this : Rc::new(RefCell::new(~"toto"))}; >> let use_ptr = UseMonTrait; >> use_ptr.use_mon_trait(ptr_struct as ~Trait1); >> } >> >> 36:34 error: cannot pack type `~PtrStruct`, which does not fulfill >> `Send`, as a trait bounded by Send >> use_ptr.use_mon_trait(ptr_struct); >> ^~~~~~~~~~ >> >> If I replace Rc> with ~str it works. >> >> Any idea to make Rc> compatible with Send trait. >> >> Philippe Delrieu >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From philippe.delrieu at free.fr Fri Feb 7 23:32:39 2014 From: philippe.delrieu at free.fr (Philippe Delrieu) Date: Sat, 08 Feb 2014 08:32:39 +0100 Subject: [rust-dev] error: cannot pack type `~PtrStruct`, which does not fulfill `Send`, as a trait bounded by Send In-Reply-To: References: <52F54704.7070909@free.fr> Message-ID: <52F5DD97.2020802@free.fr> Thank you, it compiles now. I don't plan to use channels with rsfml types, so it should work. I have some other issues, so I can't test now. 
Philippe Le 07/02/2014 23:06, Eric Reed a ?crit : > Oh, just to be clear: you should change `tr: ~Trait1' to `tr: ~Trait1: > ' in the function signature of use_mon_trait and change `ptr_struct as > ~Trait1' to `ptr_struct as ~Trait1: '. > > > On Fri, Feb 7, 2014 at 1:54 PM, Eric Reed > wrote: > > ~Trait is really sugar for ~Trait: Send, so you're putting an > implicit Send bound wherever you use ~Trait. > A type meets the Send bound if it's safe to send over channels > (which really means that it contains no aliasable data or > references [alternatively, it owns all of its parts]). > Rc can never meet the Send bound (b/c of aliasability), so what > you need to do is get rid of the Send bound (not a problem since > you're not actually using it). > You can do this by replacing ~Trait with ~Trait: (notice the > semicolon! you're specifying that the bound list for ~Trait is empty). > I just tried this with your example and it compiles afterwards. > > > > On Fri, Feb 7, 2014 at 12:50 PM, Philippe Delrieu > > wrote: > > Hello, > > I use sfml lib that contains type that has Rc> fields. > I use them in a struct that implements some trait. I have > some function that take trait as argument. 
> When I compile I have the error : error: cannot pack type > `~PtrStruct`, which does not fulfill `Send`, as a trait > bounded by Send > > I do some test code to be more clear: > extern mod rsfml; > > use std::ptr; > use std::rc::Rc; > use std::cell::RefCell; > > pub trait Trait1 { > fn todo(&self); > } > > pub struct PtrStruct { > this : Rc> > } > > impl Trait1 for PtrStruct { > fn todo(&self) {} > } > > > pub trait MonTrait { > fn use_mon_trait(&self, tr: ~Trait1); > } > > pub struct UseMonTrait; > > impl MonTrait for UseMonTrait { > fn use_mon_trait(&self, tr: ~Trait1) { > tr.todo(); > } > } > > #[main] > fn main() { > let ptr_struct = ~PtrStruct{ this : > Rc::new(RefCell::new(~"toto"))}; > let use_ptr = UseMonTrait; > use_ptr.use_mon_trait(ptr_struct as ~Trait1); > } > > 36:34 error: cannot pack type `~PtrStruct`, which does not > fulfill `Send`, as a trait bounded by Send > use_ptr.use_mon_trait(ptr_struct); > ^~~~~~~~~~ > > If I replace Rc> with ~str it works. > > Any idea to make Rc> compatible with Send trait. > > Philippe Delrieu > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From axel.viala at darnuria.eu Sat Feb 8 04:32:25 2014 From: axel.viala at darnuria.eu (Axel Viala) Date: Sat, 08 Feb 2014 13:32:25 +0100 Subject: [rust-dev] Rust meetup in Paris @MozSpace -> 25 February 2014 Message-ID: <52F623D9.4000406@darnuria.eu> Hello everybody! (Sorry for my English it's not my mother tongue...) Nical, Pnkfelix and I are organizing a Rust meetup in Paris in the Mozilla space. If you want to come well free to register here however a lot of thing would be in French : https://www.eventbrite.fr/e/billets-rust-paris-meetup-10528169037 Actually we are actually 25/50 peoples. And we are planning to make some talks and workshop(like trying to contribute to the tutorial) etc... 
Thanks for reading and if you have some questions, don't hesitate to reply :) From ptalbot at hyc.io Sat Feb 8 05:37:17 2014 From: ptalbot at hyc.io (Pierre Talbot) Date: Sat, 08 Feb 2014 14:37:17 +0100 Subject: [rust-dev] Compile-time function evaluation in Rust In-Reply-To: <52E933BF.2010807@aim.com> References: <52E82BE7.3010502@hyc.io> <52E843A4.3000207@hyc.io> <20140129164418.GB4929@Mr-Bennet> <52E933BF.2010807@aim.com> Message-ID: <52F6330D.9060508@hyc.io> On 01/29/2014 06:00 PM, SiegeLord wrote: > On 01/29/2014 11:44 AM, Niko Matsakis wrote: >> On Tue, Jan 28, 2014 at 07:01:44PM -0500, comex wrote: >>> Actually, Rust already has procedural macros as of recently. I was >>> wondering whether that could be combined with the proposed new system. >> >> I haven't looked in detail at the procedural macro support that was >> recently added, but off hand I think I favor that approach. That is, >> I'd rather compile a Rust module, link it dynamically, and run it as >> normal, versus defining some subset of Rust that the compiler can >> execute. The latter seems like it'll be difficult to define, >> implement, and understand. Our experience with effect systems and >> purity has not been particularly good, and I think staged compilation >> is easier to explain and free from the twin hazards of "this library >> function is pure but not marked pure" (when using explicit >> declaration) or "this library function is accidentally pure" (when >> using inference). >> > > I was under the impression from some time ago that this was going to > be the way CTFE is implemented in Rust. Having tried CTFE in D, I was > not impressed by the nebulous definition of the constant language used > there, it was never clear ahead of time what will work and what won't > (although maybe the problem won't be as big in Rust, as Rust is a > smaller language). Additionally, it was just plain slow (you are > essentially creating a very slow scripting language without JIT). 
> > It seems to me (judging at the size of the loadable procedural macro > commit size) that using staged compilation approach will be easier to > implement and be more powerful at the cost of, perhaps, less > convenient usage. > > -SL > I didn't consider procedural macros and wasn't aware that they were in Rust. I can't find any resources; could someone point me to some documentation? Cheers, Pierre Talbot. From dguenther9 at gmail.com Sat Feb 8 06:36:29 2014 From: dguenther9 at gmail.com (Derek Guenther) Date: Sat, 8 Feb 2014 08:36:29 -0600 Subject: [rust-dev] Compile-time function evaluation in Rust In-Reply-To: <52F6330D.9060508@hyc.io> References: <52E82BE7.3010502@hyc.io> <52E843A4.3000207@hyc.io> <20140129164418.GB4929@Mr-Bennet> <52E933BF.2010807@aim.com> <52F6330D.9060508@hyc.io> Message-ID: Here's the pull request for loadable syntax extensions: https://github.com/mozilla/rust/pull/11151 If you look through the tests, there are a few more examples of them, like this: https://github.com/mozilla/rust/blob/master/src/test/auxiliary/macro_crate_test.rs Once this pull request lands, it'll be a practical example of a loadable procedural macro: https://github.com/mozilla/rust/pull/12034 Hope this helps! Derek On Sat, Feb 8, 2014 at 7:37 AM, Pierre Talbot wrote: > On 01/29/2014 06:00 PM, SiegeLord wrote: >> >> On 01/29/2014 11:44 AM, Niko Matsakis wrote: >>> >>> On Tue, Jan 28, 2014 at 07:01:44PM -0500, comex wrote: >>>> >>>> Actually, Rust already has procedural macros as of recently. I was >>>> wondering whether that could be combined with the proposed new system. >>> >>> >>> I haven't looked in detail at the procedural macro support that was >>> recently added, but off hand I think I favor that approach. That is, >>> I'd rather compile a Rust module, link it dynamically, and run it as >>> normal, versus defining some subset of Rust that the compiler can >>> execute. The latter seems like it'll be difficult to define, >>> implement, and understand. 
Our experience with effect systems and >>> purity has not been particularly good, and I think staged compilation >>> is easier to explain and free from the twin hazards of "this library >>> function is pure but not marked pure" (when using explicit >>> declaration) or "this library function is accidentally pure" (when >>> using inference). >>> >> >> I was under the impression from some time ago that this was going to be >> the way CTFE is implemented in Rust. Having tried CTFE in D, I was not >> impressed by the nebulous definition of the constant language used there, it >> was never clear ahead of time what will work and what won't (although maybe >> the problem won't be as big in Rust, as Rust is a smaller language). >> Additionally, it was just plain slow (you are essentially creating a very >> slow scripting language without JIT). >> >> It seems to me (judging at the size of the loadable procedural macro >> commit size) that using staged compilation approach will be easier to >> implement and be more powerful at the cost of, perhaps, less convenient >> usage. >> >> -SL >> > I didn't consider procedural macro and wasn't aware that it was in Rust. I > can't find any resources, could someone points me out some documentations? > > Cheers, > Pierre Talbot. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From philippe.delrieu at free.fr Sat Feb 8 10:21:29 2014 From: philippe.delrieu at free.fr (Philippe Delrieu) Date: Sat, 08 Feb 2014 19:21:29 +0100 Subject: [rust-dev] How to use dynamic polymorphism with collection Message-ID: <52F675A9.3000301@free.fr> Hello, I can't find a solution to a polymorphic problem. I construct a system to compute a value but I want it to be generic so it doesn't have to know on what type it computes. So I use generic and it works fine. Now the system store the object on what it computes in a collection ([]). 
The object are stored them computed. I've done a sample code : pub trait Base { fn do_base(&self); } struct TestBase; impl Base for TestBase { fn do_base(&self) { println!("ici"); } } trait GenerciFn { fn do_generic(&self, base: &T); } struct DoGenericFn; impl GenerciFn for DoGenericFn { fn do_generic(&self, base: &T) { base.do_base(); } } struct ToTestStr { vec_gen: ~[~TestBase], } impl ToTestStr { fn testgencall(&self, gen: &T) { for base in self.vec_gen.iter() { //let test = base as &~TestBase; gen.do_generic(&**base); } } } #[main] fn main() { let base = TestBase; let test = ToTestStr {vec_gen: ~[~base],}; let gen = DoGenericFn; test.testgencall(&gen); } This code work but I would like to replace the vec_gen: ~[~TestBase], with vec_gen: ~[~Base]. I didn't find a solution. I'am not shure it's possible because it's sort of dynamic polymorphism that it's very hard to do at compile time. How the compiler know that my collection contains TestBase. I think I don't use the right way to do this in Rust. Perhaps I'am too used to dynamic polymorphic language like Java, C#, .... What I want is that my lib doesn't know the real Base type use but the user's lib knows and tell by adding the right type to a collection. Philippe Delrieu From rexlen at gmail.com Sat Feb 8 15:06:51 2014 From: rexlen at gmail.com (Renato Lenzi) Date: Sun, 9 Feb 2014 00:06:51 +0100 Subject: [rust-dev] user input Message-ID: I would like to manage user input for example by storing it in a string. I found this solution: use std::io::buffered::BufferedReader; use std::io::stdin; fn main() { let mut stdin = BufferedReader::new(stdin()); let mut s1 = stdin.read_line().unwrap_or(~"nothing"); print(s1); } It works but it seems (to me) a bit verbose, heavy... is there a cheaper way to do this simple task? Thx. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alex at crichton.co Sat Feb 8 15:35:39 2014 From: alex at crichton.co (Alex Crichton) Date: Sat, 8 Feb 2014 15:35:39 -0800 Subject: [rust-dev] user input In-Reply-To: References: Message-ID: We do indeed want to make common tasks like this fairly lightweight, but we also strive to require that the program handle possible error cases. Currently, the code you have shows well what one would expect when reading a line of input. On today's master, you might be able to shorten it slightly to: use std::io::{stdin, BufferedReader}; fn main() { let mut stdin = BufferedReader::new(stdin()); for line in stdin.lines() { println!("{}", line); } } I'm curious though what you think the heavy/verbose aspects of this are? I like common patterns having shortcuts here and there! On Sat, Feb 8, 2014 at 3:06 PM, Renato Lenzi wrote: > I would like to manage user input for example by storing it in a string. I > found this solution: > > use std::io::buffered::BufferedReader; > use std::io::stdin; > > fn main() > { > let mut stdin = BufferedReader::new(stdin()); > let mut s1 = stdin.read_line().unwrap_or(~"nothing"); > print(s1); > } > > It works but it seems (to me) a bit verbose, heavy... is there a cheaper way > to do this simple task? > > Thx. 
> > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From marcianx at gmail.com Sat Feb 8 15:48:42 2014 From: marcianx at gmail.com (Ashish Myles) Date: Sat, 8 Feb 2014 18:48:42 -0500 Subject: [rust-dev] How to use dynamic polymorphism with collection In-Reply-To: <52F675A9.3000301@free.fr> References: <52F675A9.3000301@free.fr> Message-ID: On Sat, Feb 8, 2014 at 1:21 PM, Philippe Delrieu wrote: > pub trait Base { > fn do_base(&self); > } > > struct TestBase; > > impl Base for TestBase { > fn do_base(&self) { > println!("ici"); > } > } > > trait GenerciFn { > fn do_generic(&self, base: &T); > } > > struct DoGenericFn; > > impl GenerciFn for DoGenericFn { > fn do_generic(&self, base: &T) { > base.do_base(); > } > } > > struct ToTestStr { > vec_gen: ~[~TestBase], > } > > impl ToTestStr { > fn testgencall(&self, gen: &T) { > for base in self.vec_gen.iter() { > //let test = base as &~TestBase; > gen.do_generic(&**base); > } > } > } > > #[main] > fn main() { > let base = TestBase; > let test = ToTestStr {vec_gen: ~[~base],}; > let gen = DoGenericFn; > test.testgencall(&gen); > It took me a few attempts to get the for loop right, but here you go. pub trait Base { fn do_base(&self); } struct TestBase; impl Base for TestBase { fn do_base(&self) { println!("ici"); } } trait GenericFn { fn do_generic(&self, base: &Base); } struct DoGenericFn; impl GenericFn for DoGenericFn { fn do_generic(&self, base: &Base) { base.do_base(); } } struct ToTestStr { vec_gen: ~[~Base], } impl ToTestStr { fn testgencall(&self, gen: &T) { for ref base in self.vec_gen.iter() { gen.do_generic(**base); } } } #[main] fn main() { let testbase = TestBase; let test = ToTestStr {vec_gen: ~[~testbase as ~Base],}; let gen = DoGenericFn; test.testgencall(&gen); } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From leebraid at gmail.com Sat Feb 8 15:50:20 2014 From: leebraid at gmail.com (Lee Braiden) Date: Sat, 08 Feb 2014 23:50:20 +0000 Subject: [rust-dev] user input In-Reply-To: References: Message-ID: <52F6C2BC.6000103@gmail.com> On 08/02/14 23:35, Alex Crichton wrote: > I'm curious thought what you think is the heavy/verbose aspects of > this? I like common patterns having shortcuts here and there! When reading the original post, it did occur to me that there should probably be a readln() equivalent of println(), if only as a special case to keep very simple programs short, or to help people "step up" from Hello, World. I was concerned that it might interfere with the ability to create a BufferedReader around stdin, but if BufferedReader is a self-contained object/struct/trait, then I guess it should "just work". -- Lee From marcianx at gmail.com Sat Feb 8 15:56:22 2014 From: marcianx at gmail.com (Ashish Myles) Date: Sat, 8 Feb 2014 18:56:22 -0500 Subject: [rust-dev] How to use dynamic polymorphism with collection In-Reply-To: References: <52F675A9.3000301@free.fr> Message-ID: On Sat, Feb 8, 2014 at 6:48 PM, Ashish Myles wrote: > > On Sat, Feb 8, 2014 at 1:21 PM, Philippe Delrieu > wrote: > >> pub trait Base { >> fn do_base(&self); >> } >> >> struct TestBase; >> >> impl Base for TestBase { >> fn do_base(&self) { >> println!("ici"); >> } >> } >> >> trait GenerciFn { >> fn do_generic(&self, base: &T); >> } >> >> struct DoGenericFn; >> >> impl GenerciFn for DoGenericFn { >> fn do_generic(&self, base: &T) { >> base.do_base(); >> } >> } >> >> struct ToTestStr { >> vec_gen: ~[~TestBase], >> } >> >> impl ToTestStr { >> fn testgencall(&self, gen: &T) { >> for base in self.vec_gen.iter() { >> //let test = base as &~TestBase; >> gen.do_generic(&**base); >> } >> } >> } >> >> #[main] >> fn main() { >> let base = TestBase; >> let test = ToTestStr {vec_gen: ~[~base],}; >> let gen = DoGenericFn; >> test.testgencall(&gen); >> > > > It took me a few attempts to get the for 
loop right, but here you go. > > > pub trait Base { > fn do_base(&self); > } > > struct TestBase; > > impl Base for TestBase { > fn do_base(&self) { > println!("ici"); > } > } > > trait GenericFn { > fn do_generic(&self, base: &Base); > } > > struct DoGenericFn; > > impl GenericFn for DoGenericFn { > fn do_generic(&self, base: &Base) { > base.do_base(); > } > } > > struct ToTestStr { > vec_gen: ~[~Base], > } > > impl ToTestStr { > fn testgencall(&self, gen: &T) { > for ref base in self.vec_gen.iter() { > gen.do_generic(**base); > } > } > } > > #[main] > fn main() { > let testbase = TestBase; > let test = ToTestStr {vec_gen: ~[~testbase as ~Base],}; > > let gen = DoGenericFn; > test.testgencall(&gen); > } > > Also, for a little more runtime polymorphism, you could change testgencall's declaration to fn testgencall(&self, gen: &GenericFn) { ... } -------------- next part -------------- An HTML attachment was scrubbed... URL: From rexlen at gmail.com Sat Feb 8 16:09:21 2014 From: rexlen at gmail.com (Renato Lenzi) Date: Sun, 9 Feb 2014 01:09:21 +0100 Subject: [rust-dev] user input In-Reply-To: References: Message-ID: I believe that reading a string from console should be considered one of the simplest task to perform..... Ok i do not pretend a sintax like s1 = input() ala Python but perhaps somehting like string s1 = Console.Readline(); as in C# mode would be sufficient for a basic input control... sure, it's more readable... but actually the thing that puzzles me is that you cannot write something like: let mut s1 = stdin.read_line(); cause the compiler complains (why?)....it's a matter of taste, of course. Regards On Sun, Feb 9, 2014 at 12:35 AM, Alex Crichton wrote: > We do indeed want to make common tasks like this fairly lightweight, > but we also strive to require that the program handle possible error > cases. Currently, the code you have shows well what one would expect > when reading a line of input. 
On today's master, you might be able to > shorten it slightly to: > > use std::io::{stdin, BufferedReader}; > > fn main() { > let mut stdin = BufferedReader::new(stdin()); > for line in stdin.lines() { > println!("{}", line); > } > } > > I'm curious thought what you think is the heavy/verbose aspects of > this? I like common patterns having shortcuts here and there! > > On Sat, Feb 8, 2014 at 3:06 PM, Renato Lenzi wrote: > > I would like to manage user input for example by storing it in a string. > I > > found this solution: > > > > use std::io::buffered::BufferedReader; > > use std::io::stdin; > > > > fn main() > > { > > let mut stdin = BufferedReader::new(stdin()); > > let mut s1 = stdin.read_line().unwrap_or(~"nothing"); > > print(s1); > > } > > > > It works but it seems (to me) a bit verbose, heavy... is there a cheaper > way > > to do this simple task? > > > > Thx. > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From com.liigo at gmail.com Sat Feb 8 16:44:25 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Sun, 9 Feb 2014 08:44:25 +0800 Subject: [rust-dev] user input In-Reply-To: References: Message-ID: On Feb 9, 2014, 7:35 AM, "Alex Crichton" wrote: > > We do indeed want to make common tasks like this fairly lightweight, > but we also strive to require that the program handle possible error > cases. Currently, the code you have shows well what one would expect > when reading a line of input. On today's master, you might be able to > shorten it slightly to: > > use std::io::{stdin, BufferedReader}; > > fn main() { > let mut stdin = BufferedReader::new(stdin()); > for line in stdin.lines() { > println!("{}", line); > } > } > > I'm curious thought what you think is the heavy/verbose aspects of > this? I like common patterns having shortcuts here and there! 
> This is not a common pattern for stdin. Programs often need to process something as soon as the user presses the return key. So reading one line is more useful than reading multiple lines, at least for stdin. I agree that we need stdin.readln or read_line. > On Sat, Feb 8, 2014 at 3:06 PM, Renato Lenzi wrote: > > I would like to manage user input for example by storing it in a string. I > > found this solution: > > > > use std::io::buffered::BufferedReader; > > use std::io::stdin; > > > > fn main() > > { > > let mut stdin = BufferedReader::new(stdin()); > > let mut s1 = stdin.read_line().unwrap_or(~"nothing"); > > print(s1); > > } > > > > It works but it seems (to me) a bit verbose, heavy... is there a cheaper way > > to do this simple task? > > > > Thx. > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbau.pp at gmail.com Sat Feb 8 16:48:27 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Sun, 09 Feb 2014 11:48:27 +1100 Subject: [rust-dev] user input In-Reply-To: References: Message-ID: <52F6D05B.5010102@gmail.com> There is read_line: http://static.rust-lang.org/doc/master/std/io/trait.Buffer.html#method.read_line use std::io::{stdin, BufferedReader}; fn main() { let mut stdin = BufferedReader::new(stdin()); let line = stdin.read_line().unwrap(); println!("{}", line); } Huon On 09/02/14 11:44, Liigo Zhuang wrote: > > > On Feb 9, 2014, 7:35 AM, "Alex Crichton" > wrote: > > > > We do indeed want to make common tasks like this fairly lightweight, > > but we also strive to require that the program handle possible error > > cases. Currently, the code you have shows well what one would expect > > when reading a line of input. 
On today's master, you might be able to > > shorten it slightly to: > > > > use std::io::{stdin, BufferedReader}; > > > > fn main() { > > let mut stdin = BufferedReader::new(stdin()); > > for line in stdin.lines() { > > println!("{}", line); > > } > > } > > > > I'm curious thought what you think is the heavy/verbose aspects of > > this? I like common patterns having shortcuts here and there! > > > > This is not a common pattern for stdin. Programs often need process > something when user press return key, immediately. So read one line is > more useful than read multiple lines, at least for stdin. I agree to > need stdin.readln or read_line. > > > On Sat, Feb 8, 2014 at 3:06 PM, Renato Lenzi > wrote: > > > I would like to manage user input for example by storing it in a > string. I > > > found this solution: > > > > > > use std::io::buffered::BufferedReader; > > > use std::io::stdin; > > > > > > fn main() > > > { > > > let mut stdin = BufferedReader::new(stdin()); > > > let mut s1 = stdin.read_line().unwrap_or(~"nothing"); > > > print(s1); > > > } > > > > > > It works but it seems (to me) a bit verbose, heavy... is there a > cheaper way > > > to do this simple task? > > > > > > Thx. > > > > > > _______________________________________________ > > > Rust-dev mailing list > > > Rust-dev at mozilla.org > > > https://mail.mozilla.org/listinfo/rust-dev > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smcarthur at mozilla.com Sat Feb 8 17:23:12 2014 From: smcarthur at mozilla.com (Sean McArthur) Date: Sat, 8 Feb 2014 17:23:12 -0800 Subject: [rust-dev] user input In-Reply-To: <52F6D05B.5010102@gmail.com> References: <52F6D05B.5010102@gmail.com> Message-ID: let in = readln!() ? macro_rules! readln( () => ({ let mut stdin = ::std::io::BufferedReader::new(::std::io::stdin()); stdin.read_line().unwrap() }) ) On Sat, Feb 8, 2014 at 4:48 PM, Huon Wilson wrote: > There is read_line: > http://static.rust-lang.org/doc/master/std/io/trait.Buffer.html#method.read_line > > > use std::io::{stdin, BufferedReader}; > > fn main() { > let mut stdin = BufferedReader::new(stdin()); > let line = stdin.read_line().unwrap(); > println!("{}", line); > } > > > > Huon > > > On 09/02/14 11:44, Liigo Zhuang wrote: > > > 2014?2?9? ??7:35? "Alex Crichton" ??? > > > > We do indeed want to make common tasks like this fairly lightweight, > > but we also strive to require that the program handle possible error > > cases. Currently, the code you have shows well what one would expect > > when reading a line of input. On today's master, you might be able to > > shorten it slightly to: > > > > use std::io::{stdin, BufferedReader}; > > > > fn main() { > > let mut stdin = BufferedReader::new(stdin()); > > for line in stdin.lines() { > > println!("{}", line); > > } > > } > > > > I'm curious thought what you think is the heavy/verbose aspects of > > this? I like common patterns having shortcuts here and there! > > > > This is not a common pattern for stdin. Programs often need process > something when user press return key, immediately. So read one line is more > useful than read multiple lines, at least for stdin. I agree to need > stdin.readln or read_line. > > > On Sat, Feb 8, 2014 at 3:06 PM, Renato Lenzi wrote: > > > I would like to manage user input for example by storing it in a > string. 
I > > > found this solution: > > > > > > use std::io::buffered::BufferedReader; > > > use std::io::stdin; > > > > > > fn main() > > > { > > > let mut stdin = BufferedReader::new(stdin()); > > > let mut s1 = stdin.read_line().unwrap_or(~"nothing"); > > > print(s1); > > > } > > > > > > It works but it seems (to me) a bit verbose, heavy... is there a > cheaper way > > > to do this simple task? > > > > > > Thx. > > > > > > _______________________________________________ > > > Rust-dev mailing list > > > Rust-dev at mozilla.org > > > https://mail.mozilla.org/listinfo/rust-dev > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > _______________________________________________ > Rust-dev mailing listRust-dev at mozilla.orghttps://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nical.silva at gmail.com Sat Feb 8 17:30:11 2014 From: nical.silva at gmail.com (Nicolas Silva) Date: Sun, 9 Feb 2014 02:30:11 +0100 Subject: [rust-dev] Rust meetup in Paris @MozSpace -> 25 February 2014 In-Reply-To: <52F623D9.4000406@darnuria.eu> References: <52F623D9.4000406@darnuria.eu> Message-ID: I don't want to discourage English speakers so I think we should do it in English and optionally switch to French depending on the audience. I have been to a few hacker meetups in Paris where the language of choice for presentations was English (even though 90% of the attendees were French) and it works well. On Sat, Feb 8, 2014 at 1:32 PM, Axel Viala wrote: > Hello everybody! > > (Sorry for my English it's not my mother tongue...) > > Nical, Pnkfelix and I are organizing a Rust meetup in Paris in the Mozilla > space. 
> > If you want to come well free to register here however a lot of thing > would be in French : > https://www.eventbrite.fr/e/billets-rust-paris-meetup-10528169037 > > Actually we are actually 25/50 peoples. > > And we are planning to make some talks and workshop(like trying to > contribute to the tutorial) etc... > > > Thanks for reading and if you have some questions, don't hesitate to reply > :) > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcwalton at mozilla.com Sat Feb 8 17:40:45 2014 From: pcwalton at mozilla.com (Patrick Walton) Date: Sat, 08 Feb 2014 17:40:45 -0800 Subject: [rust-dev] user input In-Reply-To: References: Message-ID: <52F6DC9D.4040906@mozilla.com> On 2/8/14 3:35 PM, Alex Crichton wrote: > We do indeed want to make common tasks like this fairly lightweight, > but we also strive to require that the program handle possible error > cases. Currently, the code you have shows well what one would expect > when reading a line of input. On today's master, you might be able to > shorten it slightly to: > > use std::io::{stdin, BufferedReader}; > > fn main() { > let mut stdin = BufferedReader::new(stdin()); > for line in stdin.lines() { > println!("{}", line); > } > } > > I'm curious thought what you think is the heavy/verbose aspects of > this? I like common patterns having shortcuts here and there! Is there any way we can get rid of the need to create a buffered reader? It feels too enterprisey. Patrick From mahmutbulut0 at gmail.com Sun Feb 9 01:29:02 2014 From: mahmutbulut0 at gmail.com (Mahmut Bulut) Date: Sun, 9 Feb 2014 11:29:02 +0200 Subject: [rust-dev] fork-join parallelism library Message-ID: Hi Rust people, We are two senior Computer Engineering students from Turkey. We want to contribute to project in the parallelism way(call it data parallelism). 
So we want to start with a brand new library like 'libgreen'. On the way to doing this, I think we should determine: the name of the library; the people who would love to help us (calling mentors or contributors; if you ask "is it needed?", imho yes, for the perfection of the software and the community, and I am sure that contributors will help us); and knowledge base suggestions (especially books that we can read and cite in the project report; contributors would also be mentioned in the report). -- Mahmut Bulut -------------- next part -------------- An HTML attachment was scrubbed... URL: From rexlen at gmail.com Sun Feb 9 03:15:25 2014 From: rexlen at gmail.com (Renato Lenzi) Date: Sun, 9 Feb 2014 12:15:25 +0100 Subject: [rust-dev] Fwd: user input In-Reply-To: References: <52F6DC9D.4040906@mozilla.com> Message-ID: Still talking about read & write, I noticed another interesting thing: use std::io::buffered::BufferedReader; use std::io::stdin; fn main() { print!("Insert your name: "); let mut stdin = BufferedReader::new(stdin()); let s1 = stdin.read_line().unwrap_or(~"nothing"); print!("Welcome, {}", s1); } When I run this simple code the output "Insert your name" doesn't appear on the screen... only after typing and entering a string does the whole output jump out... am I missing some "flush" (a la Fantom) or similar? I am using Rust 0.9 on W7. On Sun, Feb 9, 2014 at 2:40 AM, Patrick Walton wrote: > On 2/8/14 3:35 PM, Alex Crichton wrote: >> We do indeed want to make common tasks like this fairly lightweight, >> but we also strive to require that the program handle possible error >> cases. Currently, the code you have shows well what one would expect >> when reading a line of input. 
On today's master, you might be able to >> shorten it slightly to: >> >> use std::io::{stdin, BufferedReader}; >> >> fn main() { >> let mut stdin = BufferedReader::new(stdin()); >> for line in stdin.lines() { >> println!("{}", line); >> } >> } >> >> I'm curious thought what you think is the heavy/verbose aspects of >> this? I like common patterns having shortcuts here and there! >> > > Is there any way we can get rid of the need to create a buffered reader? > It feels too enterprisey. > > Patrick > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.monrocq at gmail.com Sun Feb 9 03:26:28 2014 From: matthieu.monrocq at gmail.com (Matthieu Monrocq) Date: Sun, 9 Feb 2014 12:26:28 +0100 Subject: [rust-dev] Fwd: user input In-Reply-To: References: <52F6DC9D.4040906@mozilla.com> Message-ID: On Sun, Feb 9, 2014 at 12:15 PM, Renato Lenzi wrote: > > > Always talking about read & write i noticed another interesting thing: > > use std::io::buffered::BufferedReader; > use std::io::stdin; > > fn main() > { > print!("Insert your name: "); > let mut stdin = BufferedReader::new(stdin()); > let s1 = stdin.read_line().unwrap_or(~"nothing"); > print!("Welcome, {}", s1); > } > > when i run this simple code the output "Insert your name" doesn't appear > on the screen... only after typing and entering a string the whole output > jumps out... am i missing some "flush" (ala Fantom) or similar? I am using > Rust 0.9 on W7. > Ah, that's interesting. In most languages whenever you ask for user input (read on stdin) it automatically triggers a flush on stdout and stderr to avoid this uncomfortable situation. I suppose it would not be too difficult to incorporate this in Rust. -- Matthieu. 
> > > On Sun, Feb 9, 2014 at 2:40 AM, Patrick Walton wrote: > >> On 2/8/14 3:35 PM, Alex Crichton wrote: >> >>> We do indeed want to make common tasks like this fairly lightweight, >>> but we also strive to require that the program handle possible error >>> cases. Currently, the code you have shows well what one would expect >>> when reading a line of input. On today's master, you might be able to >>> shorten it slightly to: >>> >>> use std::io::{stdin, BufferedReader}; >>> >>> fn main() { >>> let mut stdin = BufferedReader::new(stdin()); >>> for line in stdin.lines() { >>> println!("{}", line); >>> } >>> } >>> >>> I'm curious though what you think are the heavy/verbose aspects of >>> this? I like common patterns having shortcuts here and there! >>> >> >> Is there any way we can get rid of the need to create a buffered reader? >> It feels too enterprisey. >> >> Patrick >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From axel.viala at darnuria.eu Sun Feb 9 04:02:24 2014 From: axel.viala at darnuria.eu (Axel Viala) Date: Sun, 09 Feb 2014 13:02:24 +0100 Subject: [rust-dev] Rust meetup in Paris @MozSpace -> 25 February 2014 In-Reply-To: References: <52F623D9.4000406@darnuria.eu> Message-ID: <52F76E50.6010801@darnuria.eu> I totally agree, Nical. So we are 40 at this time. I have begun a draft for the summary; don't hesitate to contribute: -> https://etherpad.mozilla.org/meetupRustParis01 > If you want to come, feel free to register here; however, a lot of things > would be in French: > https://www.eventbrite.fr/e/billets-rust-paris-meetup-10528169037 Presentations shall be in English.
:) And if you're an English speaker, don't hesitate to talk with the French attendees; they will switch to English. French people are not so harsh. ;p On 02/09/2014 02:30 AM, Nicolas Silva wrote: > I don't want to discourage English speakers so I think we should do it > in English and optionally switch to French depending on the audience. > I have been to a few hacker meetups in Paris where the language of > choice for presentations was English (even though 90% of the attendees > were French) and it works well. > > > On Sat, Feb 8, 2014 at 1:32 PM, Axel Viala > wrote: > > Hello everybody! > > (Sorry for my English, it's not my mother tongue...) > > Nical, Pnkfelix and I are organizing a Rust meetup in Paris in the > Mozilla space. > > If you want to come, feel free to register here; however, a lot of > things would be in French: > https://www.eventbrite.fr/e/billets-rust-paris-meetup-10528169037 > > We are currently at 25/50 people. > > And we are planning to have some talks and workshops (like trying to > contribute to the tutorial), etc... > > > Thanks for reading, and if you have some questions, don't hesitate > to reply :) > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.sapin at exyr.org Sun Feb 9 05:36:23 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Sun, 09 Feb 2014 13:36:23 +0000 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: <52F78457.7000800@exyr.org> On 07/02/2014 00:35, Brian Anderson wrote: > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. In my experience with WeasyPrint, this only works if the person maintaining one of these packages uses it personally.
(Scratch your own itch.) This probably excludes most contributors, as they will have a git clone built from source to work with. Alternatively, this may be viable if these packages can be *entirely* automated as part of the normal build/release system, so that they don't really need maintenance. But I don't know if that's possible. > There are community-maintained packages for some of these already, so we > don't necessarily need to redevelop from scratch if we just want to > adopt one or all of them as official packages. We could also create a > GUI installer for OS X, but I'm not sure how important that is. -- Simon Sapin From danielmicay at gmail.com Sun Feb 9 07:39:34 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Sun, 9 Feb 2014 10:39:34 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F78457.7000800@exyr.org> References: <52F42A64.9070408@mozilla.com> <52F78457.7000800@exyr.org> Message-ID: On Sun, Feb 9, 2014 at 8:36 AM, Simon Sapin wrote: > On 07/02/2014 00:35, Brian Anderson wrote: >> >> We can also attempt to package Rust with various of the most common >> package managers: homebrew, macports, dpkg, rpm. > > > In my experience with WeasyPrint, this only works if the person maintaining > one of these packages uses it personally. (Scratch your own itch.) This > probably excludes most contributors, as they will have a git clone built > from source to work with. > > Alternatively, this may be viable if these packages can be *entirely* > automated as part of the normal build/release system, so that they don't > really need maintenance. But I don't know if that's possible. I certainly use my nightly Arch package even though I usually have a build or two of local branches around. It's very convenient to always have a working install of master that's less than a day old. It's built automatically and in theory doesn't require any attention. Rust's Makefile does love to break though...
From uzytkownik2 at gmail.com Sun Feb 9 07:49:25 2014 From: uzytkownik2 at gmail.com (Maciej Piechotka) Date: Sun, 09 Feb 2014 16:49:25 +0100 Subject: [rust-dev] user input In-Reply-To: References: <52F6D05B.5010102@gmail.com> Message-ID: <1391960965.8787.5.camel@localhost> On Sat, 2014-02-08 at 17:23 -0800, Sean McArthur wrote: > let in = readln!() ? > > macro_rules! readln( > () => ({ > let mut stdin > = ::std::io::BufferedReader::new(::std::io::stdin()); > stdin.read_line().unwrap() > }) > ) > Unless I read the source incorrectly, that won't work if the input is not line-buffered. Filling the buffer can consume more than the length of one line: the buffer can be as large as 64 KiB. You then read a single line and discard all of the remaining buffered data on exit. In many circumstances it might work, as by default stdin is line-buffered (at least in C), but the program might start misbehaving when it is switched from terminal input to pipe input. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From philippe.delrieu at free.fr Sun Feb 9 08:25:06 2014 From: philippe.delrieu at free.fr (Philippe Delrieu) Date: Sun, 09 Feb 2014 17:25:06 +0100 Subject: [rust-dev] How to use dynamic polymorphism with collection In-Reply-To: References: <52F675A9.3000301@free.fr> Message-ID: <52F7ABE2.20304@free.fr> Thank you. My problem is more complex than the example I gave. Your answer helped me reorganize my code. I use a lib that has generic methods; that's why I put generics in the example. I removed them and changed the way I pass parameters. I didn't solve everything, but I think I'm progressing. If I don't find the answer, I'll post a new example.
Philippe On 09/02/2014 00:56, Ashish Myles wrote: > On Sat, Feb 8, 2014 at 6:48 PM, Ashish Myles > wrote: > > > On Sat, Feb 8, 2014 at 1:21 PM, Philippe Delrieu > > wrote: > > pub trait Base { > fn do_base(&self); > } > > struct TestBase; > > impl Base for TestBase { > fn do_base(&self) { > println!("ici"); > } > } > > trait GenerciFn { > fn do_generic(&self, base: &T); > } > > struct DoGenericFn; > > impl GenerciFn for DoGenericFn { > fn do_generic(&self, base: &T) { > base.do_base(); > } > } > > struct ToTestStr { > vec_gen: ~[~TestBase], > } > > impl ToTestStr { > fn testgencall(&self, gen: &T) { > for base in self.vec_gen.iter() { > //let test = base as &~TestBase; > gen.do_generic(&**base); > } > } > } > > #[main] > fn main() { > let base = TestBase; > let test = ToTestStr {vec_gen: ~[~base],}; > let gen = DoGenericFn; > test.testgencall(&gen); > > > > It took me a few attempts to get the for loop right, but here you go. > > > pub trait Base { > fn do_base(&self); > } > > struct TestBase; > > impl Base for TestBase { > fn do_base(&self) { > println!("ici"); > } > } > > trait GenericFn { > fn do_generic(&self, base: &Base); > } > > struct DoGenericFn; > > impl GenericFn for DoGenericFn { > fn do_generic(&self, base: &Base) { > base.do_base(); > } > } > > struct ToTestStr { > vec_gen: ~[~Base], > } > > impl ToTestStr { > fn testgencall(&self, gen: &T) { > for ref base in self.vec_gen.iter() { > gen.do_generic(**base); > } > } > } > > #[main] > fn main() { > let testbase = TestBase; > let test = ToTestStr {vec_gen: ~[~testbase as ~Base],}; > > let gen = DoGenericFn; > test.testgencall(&gen); > } > > > Also, for a little more runtime polymorphism, you could change > testgencall's declaration to > fn testgencall(&self, gen: &GenericFn) { > ... > } -------------- next part -------------- An HTML attachment was scrubbed...
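[Editorial note for later readers: Ashish's fix, taking a `&Base` trait object instead of a generic `T`, maps onto post-1.0 Rust roughly as below. This is a sketch, not the original code: `~Base` becomes `Box<dyn Base>`, and `do_base` returns the string instead of printing it so the behavior is easy to check.]

```rust
pub trait Base {
    fn do_base(&self) -> &'static str;
}

struct TestBase;

impl Base for TestBase {
    fn do_base(&self) -> &'static str {
        "ici"
    }
}

// The dispatcher takes a trait object (&dyn Base), not a generic T,
// so one impl works for every concrete Base stored in the vector.
trait GenericFn {
    fn do_generic(&self, base: &dyn Base) -> &'static str;
}

struct DoGenericFn;

impl GenericFn for DoGenericFn {
    fn do_generic(&self, base: &dyn Base) -> &'static str {
        base.do_base()
    }
}

struct ToTestStr {
    // ~[~Base] in 2014 syntax: an owned vector of owned trait objects.
    vec_gen: Vec<Box<dyn Base>>,
}

impl ToTestStr {
    // The dispatcher itself can also be a trait object, as Ashish notes.
    fn testgencall(&self, gen: &dyn GenericFn) -> Vec<&'static str> {
        self.vec_gen.iter().map(|base| gen.do_generic(&**base)).collect()
    }
}

fn main() {
    let test = ToTestStr { vec_gen: vec![Box::new(TestBase)] };
    assert_eq!(test.testgencall(&DoGenericFn), vec!["ici"]);
}
```

The `~testbase as ~Base` cast in the original corresponds to `Box::new(TestBase)` coercing to `Box<dyn Base>` here.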
URL: From alex at crichton.co Sun Feb 9 11:14:04 2014 From: alex at crichton.co (Alex Crichton) Date: Sun, 9 Feb 2014 11:14:04 -0800 Subject: [rust-dev] Fwd: user input In-Reply-To: References: <52F6DC9D.4040906@mozilla.com> Message-ID: > Is there any way we can get rid of the need to create a buffered reader? It feels too enterprisey. There's really no way to efficiently define a `read_line` method on a reader that doesn't have an underlying buffer. For that reason, I think that we'll always require that the stream be buffered somehow. Despite that, we could return a buffered stdin by default. We could also have task-local stdin handles which hide the ability to be buffered. This will not play nicely at all with multiple tasks reading stdin, but I don't think the current solution plays very nicely so I'd be fine glossing over that use case. > Ah, that's interesting. In most languages whenever you ask for user input (read on stdin) it automatically triggers a flush on stdout and stderr That's a good point that I hadn't thought of. If we go towards a task-local stdin then we could make the read methods on the local stdin handle flush the local output handles, which would probably solve this problem. On Sun, Feb 9, 2014 at 3:26 AM, Matthieu Monrocq wrote: > > > > On Sun, Feb 9, 2014 at 12:15 PM, Renato Lenzi wrote: >> >> >> >> Always talking about read & write i noticed another interesting thing: >> >> use std::io::buffered::BufferedReader; >> use std::io::stdin; >> >> fn main() >> { >> print!("Insert your name: "); >> let mut stdin = BufferedReader::new(stdin()); >> let s1 = stdin.read_line().unwrap_or(~"nothing"); >> print!("Welcome, {}", s1); >> } >> >> when i run this simple code the output "Insert your name" doesn't appear >> on the screen... only after typing and entering a string the whole output >> jumps out... am i missing some "flush" (ala Fantom) or similar? I am using >> Rust 0.9 on W7. > > > Ah, that's interesting. 
In most languages whenever you ask for user input > (read on stdin) it automatically triggers a flush on stdout and stderr to > avoid this uncomfortable situation. > > I suppose it would not be took difficult to incorporate this in Rust. > > -- Matthieu. > >> >> >> >> On Sun, Feb 9, 2014 at 2:40 AM, Patrick Walton >> wrote: >>> >>> On 2/8/14 3:35 PM, Alex Crichton wrote: >>>> >>>> We do indeed want to make common tasks like this fairly lightweight, >>>> but we also strive to require that the program handle possible error >>>> cases. Currently, the code you have shows well what one would expect >>>> when reading a line of input. On today's master, you might be able to >>>> shorten it slightly to: >>>> >>>> use std::io::{stdin, BufferedReader}; >>>> >>>> fn main() { >>>> let mut stdin = BufferedReader::new(stdin()); >>>> for line in stdin.lines() { >>>> println!("{}", line); >>>> } >>>> } >>>> >>>> I'm curious thought what you think is the heavy/verbose aspects of >>>> this? I like common patterns having shortcuts here and there! >>> >>> >>> Is there any way we can get rid of the need to create a buffered reader? >>> It feels too enterprisey. >>> >>> Patrick >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From pwalton at mozilla.com Sun Feb 9 11:21:39 2014 From: pwalton at mozilla.com (Patrick Walton) Date: Sun, 09 Feb 2014 11:21:39 -0800 Subject: [rust-dev] Fwd: user input In-Reply-To: References: <52F6DC9D.4040906@mozilla.com> Message-ID: Yeah, I had similar thoughts around putting an auto-buffered stdin in TLS. 
I think it's a good idea if for no other reason than first impressions. Patrick Alex Crichton wrote: >> Is there any way we can get rid of the need to create a buffered >reader? It feels too enterprisey. > >There's really no way to efficiently define a `read_line` method on a >reader that doesn't have an underlying buffer. For that reason, I >think that we'll always require that the stream be buffered somehow. > >Despite that, we could return a buffered stdin by default. We could >also have task-local stdin handles which hide the ability to be >buffered. This will not play nicely at all with multiple tasks reading >stdin, but I don't think the current solution plays very nicely so I'd >be fine glossing over that use case. > >> Ah, that's interesting. In most languages whenever you ask for user >input (read on stdin) it automatically triggers a flush on stdout and >stderr > >That's a good point that I hadn't thought of. If we go towards a >task-local stdin then we could make the read methods on the local >stdin handle flush the local output handles, which would probably >solve this problem. > > >On Sun, Feb 9, 2014 at 3:26 AM, Matthieu Monrocq > wrote: >> >> >> >> On Sun, Feb 9, 2014 at 12:15 PM, Renato Lenzi >wrote: >>> >>> >>> >>> Always talking about read & write i noticed another interesting >thing: >>> >>> use std::io::buffered::BufferedReader; >>> use std::io::stdin; >>> >>> fn main() >>> { >>> print!("Insert your name: "); >>> let mut stdin = BufferedReader::new(stdin()); >>> let s1 = stdin.read_line().unwrap_or(~"nothing"); >>> print!("Welcome, {}", s1); >>> } >>> >>> when i run this simple code the output "Insert your name" doesn't >appear >>> on the screen... only after typing and entering a string the whole >output >>> jumps out... am i missing some "flush" (ala Fantom) or similar? I am >using >>> Rust 0.9 on W7. >> >> >> Ah, that's interesting. 
In most languages whenever you ask for user >input >> (read on stdin) it automatically triggers a flush on stdout and >stderr to >> avoid this uncomfortable situation. >> >> I suppose it would not be took difficult to incorporate this in Rust. >> >> -- Matthieu. >> >>> >>> >>> >>> On Sun, Feb 9, 2014 at 2:40 AM, Patrick Walton > >>> wrote: >>>> >>>> On 2/8/14 3:35 PM, Alex Crichton wrote: >>>>> >>>>> We do indeed want to make common tasks like this fairly >lightweight, >>>>> but we also strive to require that the program handle possible >error >>>>> cases. Currently, the code you have shows well what one would >expect >>>>> when reading a line of input. On today's master, you might be able >to >>>>> shorten it slightly to: >>>>> >>>>> use std::io::{stdin, BufferedReader}; >>>>> >>>>> fn main() { >>>>> let mut stdin = BufferedReader::new(stdin()); >>>>> for line in stdin.lines() { >>>>> println!("{}", line); >>>>> } >>>>> } >>>>> >>>>> I'm curious thought what you think is the heavy/verbose aspects of >>>>> this? I like common patterns having shortcuts here and there! >>>> >>>> >>>> Is there any way we can get rid of the need to create a buffered >reader? >>>> It feels too enterprisey. >>>> >>>> Patrick >>>> >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>> >>> >>> >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >_______________________________________________ >Rust-dev mailing list >Rust-dev at mozilla.org >https://mail.mozilla.org/listinfo/rust-dev -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. 
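[Editorial note for later readers: Maciej's warning earlier in this thread, that a throwaway buffered reader can swallow more input than the single line it returns, is easy to demonstrate with an in-memory reader. A sketch in modern Rust, where today's `BufReader` plays the role of the 2014 `BufferedReader`:]

```rust
use std::io::{BufRead, BufReader, Cursor, Read};

fn main() {
    // Two lines arrive at once, as with piped (not line-buffered) input.
    let mut source = Cursor::new(String::from("first\nsecond\n"));

    // A throwaway buffered reader pulls everything available into its
    // internal buffer while serving a single read_line call...
    let mut line = String::new();
    BufReader::new(&mut source).read_line(&mut line).unwrap();
    assert_eq!(line, "first\n");

    // ...and that buffer is discarded with the reader, so the second
    // line is gone: the underlying source is already at end-of-file.
    let mut rest = String::new();
    source.read_to_string(&mut rest).unwrap();
    assert_eq!(rest, "");
}
```

This is why a fresh buffered reader per `readln!()` call misbehaves on pipe input even though it often appears to work on a line-buffered terminal.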
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mak at issuu.com Sun Feb 9 13:15:29 2014 From: mak at issuu.com (Martin Koch) Date: Sun, 9 Feb 2014 22:15:29 +0100 Subject: [rust-dev] Fwd: Problems building rust on OSX In-Reply-To: References: Message-ID: Hi List I'm trying to get rust to compile, but I'm apparently running into this bug: https://github.com/mozilla/rust/issues/11162 So my question is: How do I manually download and use this snapshot: rust-stage0-2014-01-20-b6400f9-macos-x86_64-6458d3b46a951da62c20dd5b587d44333402e30b.tar.bz2 Thanks, /Martin Koch -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Sun Feb 9 13:30:24 2014 From: alex at crichton.co (Alex Crichton) Date: Sun, 9 Feb 2014 13:30:24 -0800 Subject: [rust-dev] Fwd: Problems building rust on OSX In-Reply-To: References: Message-ID: This problem has been fixed on master, so I would recommend using master or uninstalling LLVM temporarily from the system (a non-standard gcc in the path may also mess with compilation) On Sun, Feb 9, 2014 at 1:15 PM, Martin Koch wrote: > Hi List > > I'm trying to get rust to compile, but I'm apparently running into this bug: > > https://github.com/mozilla/rust/issues/11162 > > So my question is: How do I manually download and use this snapshot: > > rust-stage0-2014-01-20-b6400f9-macos-x86_64-6458d3b46a951da62c20dd5b587d44333402e30b.tar.bz2 > > Thanks, > > /Martin Koch > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From mak at issuu.com Sun Feb 9 13:56:13 2014 From: mak at issuu.com (Martin Koch) Date: Sun, 9 Feb 2014 22:56:13 +0100 Subject: [rust-dev] Fwd: Problems building rust on OSX In-Reply-To: References: Message-ID: Thanks for your reply. I built by cloning from github, so I am on master. Also, I don't have llvm installed, so that must come from the rust build somehow? 
GCC is > gcc --version i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) On Sun, Feb 9, 2014 at 10:30 PM, Alex Crichton wrote: > This problem has been fixed on master, so I would recommend using > master or uninstalling LLVM temporarily from the system (a > non-standard gcc in the path may also mess with compilation) > > On Sun, Feb 9, 2014 at 1:15 PM, Martin Koch wrote: > > Hi List > > > > I'm trying to get rust to compile, but I'm apparently running into this > bug: > > > > https://github.com/mozilla/rust/issues/11162 > > > > So my question is: How do I manually download and use this snapshot: > > > > > rust-stage0-2014-01-20-b6400f9-macos-x86_64-6458d3b46a951da62c20dd5b587d44333402e30b.tar.bz2 > > > > Thanks, > > > > /Martin Koch > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mak at issuu.com Sun Feb 9 13:12:41 2014 From: mak at issuu.com (Martin Koch) Date: Sun, 9 Feb 2014 22:12:41 +0100 Subject: [rust-dev] Problems building rust on OSX Message-ID: Hi List I'm trying to get rust to compile, but I'm apparently running into this bug: https://github.com/mozilla/rust/issues/11162 So my question is: How do I manually download and use this snapshot: rust-stage0-2014-01-20-b6400f9-macos-x86_64-6458d3b46a951da62c20dd5b587d44333402e30b.tar.bz2 Thanks, /Martin Koch -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan at xeberon.net Sun Feb 9 23:56:17 2014 From: gaetan at xeberon.net (Gaetan) Date: Mon, 10 Feb 2014 08:56:17 +0100 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? 
In-Reply-To: References: <52F42A64.9070408@mozilla.com> <52F78457.7000800@exyr.org> Message-ID: You'll need to create many binary packages: Ubuntu (10.04, 10.10, 11.04, 11.10, 12.04, 12.10, 13.04, 13.10), Debian, Homebrew, Windows, ... That is a huge amount of work. On 9 Feb 2014 16:40, "Daniel Micay" wrote: > On Sun, Feb 9, 2014 at 8:36 AM, Simon Sapin wrote: > > On 07/02/2014 00:35, Brian Anderson wrote: > >> > >> We can also attempt to package Rust with various of the most common > >> package managers: homebrew, macports, dpkg, rpm. > > > > > > In my experience with WeasyPrint, this only works if the person > maintaining > > one of these packages uses it personally. (Scratch your own itch.) This > > probably excludes most contributors, as they will have a git clone built > > from source to work with. > > > > Alternatively, this may be viable if these packages can be *entirely* > > automated as part of the normal build/release system, so that they don't > > really need maintenance. But I don't know if that's possible. > > I certainly use my nightly Arch package even though I usually have a > build or two of local branches around. It's very convenient to always > have a working install of master that's less than a day old. > > It's built automatically and in theory doesn't require any attention. > Rust's Makefile does love to break though... > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.ronacher at active-4.com Mon Feb 10 03:40:09 2014 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Mon, 10 Feb 2014 11:40:09 +0000 Subject: [rust-dev] Chaining Results and Options Message-ID: <52F8BA99.7010709@active-4.com> Hi, I was playing around with the new IO system a lot and got some very high-level feedback on that.
Currently the IoResult objects implement a few traits to pass through to the included success value and to dispatch method calls. That's nice, but it means that from looking at the code you can easily miss that a result is involved, and if you add methods you need to add manual proxy methods on the result. At the end of the day, the only thing you actually need to pass through is the error. So I would propose a new operator "->" that acts as a "resolve deref" operator. It would operate on the "Some" part of an Option and the "Ok" part of a result and pass through the errors unchanged: So essentially this: let rv = match expr() { Ok(tmp) => tmp.method(), err => err, }; Would be equivalent to: let rv = expr->method(); Likewise for options: let rv = match expr { Some(tmp) => tmp.method(), None => None, } Would likewise be equivalent to: let rv = expr->method(); As a result you could only ever call this on things that return results/options themselves. Annoyingly enough, this also means that the results need to be compatible, which is still a problem. The example there would be an IO trait that is implemented by another system that also has its own error cases. Case in point: SSL wrapping that wants to fail with SSL errors in addition to IO errors. I fail to understand at the moment how library authors are supposed to deal with this. Thoughts on this? Am I missing something entirely? Regards, Armin From s.b.maximov at gmail.com Mon Feb 10 03:45:58 2014 From: s.b.maximov at gmail.com (Sergei Maximov) Date: Mon, 10 Feb 2014 22:45:58 +1100 Subject: [rust-dev] Chaining Results and Options In-Reply-To: <52F8BA99.7010709@active-4.com> References: <52F8BA99.7010709@active-4.com> Message-ID: <52F8BBF6.3050209@gmail.com> It looks very similar to Haskell's monadic bind operator (>>=) at first glance. 02/10/2014 10:40 PM, Armin Ronacher wrote: > Hi, > > I was playing around with the new IO system a lot and got some very > high level feedback on that.
Currently the IoResult objects implement > a few traits to pass through to the included success value and to > dispatch method calls. > > That's nice but it means that from looking at the code you can easily > miss that a result is involved and if you add methods you need to add > manual proxy methods on the result. > > At the end of the day the only thing you actually need to pass through > is the error. So I would propose a new operator "->" that acts as a > "resolve deref" operator. It would be operating on the "Some" part of > an Option and the "Ok" part of a result and pass through the errors > unchanged: > > So essentially this: > > let rv = match expr() { > Ok(tmp) => tmp.method(), > err => err, > }; > > Would be equivalent to: > > let rv = expr->method(); > > Likewise for options: > > let rv = match expr { > Some(tmp) => tmp.method(), > None => None, > } > > Would likewise be equivalent to: > > let rv = expr->method(); > > As a result you could only ever call this on things that return > results/options themselves. > > Annoyingly enough this also means that the results need to be > compatible which is still a problem. The example there would be an IO > trait that is implemented by another system that also has its own > error cases. Case in point: SSL wrapping that wants to fail with SSL > errors in addition to IO errors. I fail to understand at the moment > how library authors are supposed to deal with this. > > Thoughts on this? Am I missing something entirely? > > > Regards, > Armin > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- WBR, S. 
Maximov From dbau.pp at gmail.com Mon Feb 10 03:50:14 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Mon, 10 Feb 2014 22:50:14 +1100 Subject: [rust-dev] Chaining Results and Options In-Reply-To: <52F8BBF6.3050209@gmail.com> References: <52F8BA99.7010709@active-4.com> <52F8BBF6.3050209@gmail.com> Message-ID: <52F8BCF6.4030008@gmail.com> It's actually Haskell's fmap, which we have in the form of .map for both Option[1] and Result[2], e.g. the proposed expr->method() is the same as expr.map(|x| x.method()) (which is still quite verbose). Monadic bind comes in the form of the .and_then methods (which both Option and Result have). [1]: http://static.rust-lang.org/doc/master/std/option/enum.Option.html#method.map [2]: http://static.rust-lang.org/doc/master/std/result/enum.Result.html#method.map Huon On 10/02/14 22:45, Sergei Maximov wrote: > It looks very similar to Haskell's monadic bind operator (>>=) at > first glance. > > 02/10/2014 10:40 PM, Armin Ronacher ?????: >> Hi, >> >> I was playing around with the new IO system a lot and got some very >> high level feedback on that. Currently the IoResult objects implement >> a few traits to pass through to the included success value and to >> dispatch method calls. >> >> That's nice but it means that from looking at the code you can easily >> miss that a result is involved and if you add methods you need to add >> manual proxy methods on the result. >> >> At the end of the day the only thing you actually need to pass >> through is the error. So I would propose a new operator "->" that >> acts as a "resolve deref" operator. 
It would be operating on the >> "Some" part of an Option and the "Ok" part of a result and pass >> through the errors unchanged: >> >> So essentially this: >> >> let rv = match expr() { >> Ok(tmp) => tmp.method(), >> err => err, >> }; >> >> Would be equivalent to: >> >> let rv = expr->method(); >> >> Likewise for options: >> >> let rv = match expr { >> Some(tmp) => tmp.method(), >> None => None, >> } >> >> Would likewise be equivalent to: >> >> let rv = expr->method(); >> >> As a result you could only ever call this on things that return >> results/options themselves. >> >> Annoyingly enough this also means that the results need to be >> compatible which is still a problem. The example there would be an IO >> trait that is implemented by another system that also has its own >> error cases. Case in point: SSL wrapping that wants to fail with SSL >> errors in addition to IO errors. I fail to understand at the moment >> how library authors are supposed to deal with this. >> >> Thoughts on this? Am I missing something entirely? >> >> >> Regards, >> Armin >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > From s.b.maximov at gmail.com Mon Feb 10 04:00:05 2014 From: s.b.maximov at gmail.com (Sergei Maximov) Date: Mon, 10 Feb 2014 23:00:05 +1100 Subject: [rust-dev] Chaining Results and Options In-Reply-To: <52F8BCF6.4030008@gmail.com> References: <52F8BA99.7010709@active-4.com> <52F8BBF6.3050209@gmail.com> <52F8BCF6.4030008@gmail.com> Message-ID: <52F8BF45.7050903@gmail.com> I believe it depends on what return type `.method` has, but I got your point. 02/10/2014 10:50 PM, Huon Wilson ?????: > It's actually Haskell's fmap, which we have in the form of .map for > both Option[1] and Result[2], e.g. the proposed expr->method() is the > same as expr.map(|x| x.method()) (which is still quite verbose). 
> > > Monadic bind comes in the form of the .and_then methods (which both > Option and Result have). > > > [1]: > http://static.rust-lang.org/doc/master/std/option/enum.Option.html#method.map > [2]: > http://static.rust-lang.org/doc/master/std/result/enum.Result.html#method.map > > > Huon > > > On 10/02/14 22:45, Sergei Maximov wrote: >> It looks very similar to Haskell's monadic bind operator (>>=) at >> first glance. >> >> 02/10/2014 10:40 PM, Armin Ronacher ?????: >>> Hi, >>> >>> I was playing around with the new IO system a lot and got some very >>> high level feedback on that. Currently the IoResult objects >>> implement a few traits to pass through to the included success value >>> and to dispatch method calls. >>> >>> That's nice but it means that from looking at the code you can >>> easily miss that a result is involved and if you add methods you >>> need to add manual proxy methods on the result. >>> >>> At the end of the day the only thing you actually need to pass >>> through is the error. So I would propose a new operator "->" that >>> acts as a "resolve deref" operator. It would be operating on the >>> "Some" part of an Option and the "Ok" part of a result and pass >>> through the errors unchanged: >>> >>> So essentially this: >>> >>> let rv = match expr() { >>> Ok(tmp) => tmp.method(), >>> err => err, >>> }; >>> >>> Would be equivalent to: >>> >>> let rv = expr->method(); >>> >>> Likewise for options: >>> >>> let rv = match expr { >>> Some(tmp) => tmp.method(), >>> None => None, >>> } >>> >>> Would likewise be equivalent to: >>> >>> let rv = expr->method(); >>> >>> As a result you could only ever call this on things that return >>> results/options themselves. >>> >>> Annoyingly enough this also means that the results need to be >>> compatible which is still a problem. The example there would be an >>> IO trait that is implemented by another system that also has its own >>> error cases. 
Case in point: SSL wrapping that wants to fail with SSL >>> errors in addition to IO errors. I fail to understand at the moment >>> how library authors are supposed to deal with this. >>> >>> Thoughts on this? Am I missing something entirely? >>> >>> >>> Regards, >>> Armin >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- WBR, S. Maximov From dbau.pp at gmail.com Mon Feb 10 04:03:11 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Mon, 10 Feb 2014 23:03:11 +1100 Subject: [rust-dev] Chaining Results and Options In-Reply-To: <52F8BF45.7050903@gmail.com> References: <52F8BA99.7010709@active-4.com> <52F8BBF6.3050209@gmail.com> <52F8BCF6.4030008@gmail.com> <52F8BF45.7050903@gmail.com> Message-ID: <52F8BFFF.1010806@gmail.com> Ah, true enough. I think you were correct, and I misread the original email. `swap(&mut map, &mut and_then)` and fmap <-> monadic bind in my response. Huon On 10/02/14 23:00, Sergei Maximov wrote: > I believe it depends on what return type `.method` has, but I got your > point. > > 02/10/2014 10:50 PM, Huon Wilson wrote: >> It's actually Haskell's fmap, which we have in the form of .map for >> both Option[1] and Result[2], e.g. the proposed expr->method() is the >> same as expr.map(|x| x.method()) (which is still quite verbose). >> >> >> Monadic bind comes in the form of the .and_then methods (which both >> Option and Result have). >> >> >> [1]: >> http://static.rust-lang.org/doc/master/std/option/enum.Option.html#method.map >> [2]: >> http://static.rust-lang.org/doc/master/std/result/enum.Result.html#method.map >> >> >> Huon >> >> >> On 10/02/14 22:45, Sergei Maximov wrote: >>> It looks very similar to Haskell's monadic bind operator (>>=) at >>> first glance.
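The SSL-plus-IO error incompatibility raised here is the problem Rust eventually settled with `From` conversions; a sketch using hypothetical IoError/SslError stand-ins and the post-1.0 `?` operator (none of which existed at the time of the thread):

```rust
// Hypothetical error types standing in for the IO and SSL errors discussed.
#[derive(Debug, PartialEq)]
enum IoError { UnexpectedEof }

#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum SslError { HandshakeFailed }

// A wrapper enum both error types convert into, so results chain freely.
#[derive(Debug, PartialEq)]
enum TlsStreamError { Io(IoError), Ssl(SslError) }

impl From<IoError> for TlsStreamError {
    fn from(e: IoError) -> TlsStreamError { TlsStreamError::Io(e) }
}
impl From<SslError> for TlsStreamError {
    fn from(e: SslError) -> TlsStreamError { TlsStreamError::Ssl(e) }
}

fn read_raw() -> Result<Vec<u8>, IoError> { Err(IoError::UnexpectedEof) }

fn read_tls() -> Result<Vec<u8>, TlsStreamError> {
    // `?` converts the IoError into TlsStreamError via the From impl above,
    // so the SSL layer can also surface its own errors in the same type.
    let bytes = read_raw()?;
    Ok(bytes)
}

fn main() {
    assert_eq!(read_tls(), Err(TlsStreamError::Io(IoError::UnexpectedEof)));
}
```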
>>> >>> 02/10/2014 10:40 PM, Armin Ronacher wrote: >>>> Hi, >>>> >>>> I was playing around with the new IO system a lot and got some very >>>> high level feedback on that. Currently the IoResult objects >>>> implement a few traits to pass through to the included success >>>> value and to dispatch method calls. >>>> >>>> That's nice but it means that from looking at the code you can >>>> easily miss that a result is involved and if you add methods you >>>> need to add manual proxy methods on the result. >>>> >>>> At the end of the day the only thing you actually need to pass >>>> through is the error. So I would propose a new operator "->" that >>>> acts as a "resolve deref" operator. It would be operating on the >>>> "Some" part of an Option and the "Ok" part of a result and pass >>>> through the errors unchanged: >>>> >>>> So essentially this: >>>> >>>> let rv = match expr() { >>>> Ok(tmp) => tmp.method(), >>>> err => err, >>>> }; >>>> >>>> Would be equivalent to: >>>> >>>> let rv = expr->method(); >>>> >>>> Likewise for options: >>>> >>>> let rv = match expr { >>>> Some(tmp) => tmp.method(), >>>> None => None, >>>> } >>>> >>>> Would likewise be equivalent to: >>>> >>>> let rv = expr->method(); >>>> >>>> As a result you could only ever call this on things that return >>>> results/options themselves. >>>> >>>> Annoyingly enough this also means that the results need to be >>>> compatible which is still a problem. The example there would be an >>>> IO trait that is implemented by another system that also has its >>>> own error cases. Case in point: SSL wrapping that wants to fail >>>> with SSL errors in addition to IO errors. I fail to understand at >>>> the moment how library authors are supposed to deal with this. >>>> >>>> Thoughts on this? Am I missing something entirely?
>>>> >>>> >>>> Regards, >>>> Armin >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > From pnkfelix at mozilla.com Mon Feb 10 04:33:38 2014 From: pnkfelix at mozilla.com (Felix S. Klock II) Date: Mon, 10 Feb 2014 13:33:38 +0100 Subject: [rust-dev] Fwd: Problems building rust on OSX In-Reply-To: References: Message-ID: <52F8C722.20800@mozilla.com> Martin (cc'ing rust-dev)- I recommend you file a fresh bug with a transcript of your own build attempt. I infer you are pointing us to issue #11162 because of some similarity in the log output you see between that and your own build issue, but issue #11162 is fundamentally related to a local LLVM install (at least according to its current title) and has been verified as fixed by others, so I believe you are probably better off making a fresh bug (and perhaps linking to #11162 from it). Cheers, -Felix On 09/02/2014 22:56, Martin Koch wrote: > Thanks for your reply. > > I built by cloning from github, so I am on master. Also, I don't have > llvm installed, so that must come from the rust build somehow? > > GCC is > > > gcc --version > i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. 
> build 5658) (LLVM build 2336.11.00) > > > > On Sun, Feb 9, 2014 at 10:30 PM, Alex Crichton > wrote: > > This problem has been fixed on master, so I would recommend using > master or uninstalling LLVM temporarily from the system (a > non-standard gcc in the path may also mess with compilation) > > On Sun, Feb 9, 2014 at 1:15 PM, Martin Koch > wrote: > > Hi List > > > > I'm trying to get rust to compile, but I'm apparently running > into this bug: > > > > https://github.com/mozilla/rust/issues/11162 > > > > So my question is: How do I manually download and use this snapshot: > > > > > rust-stage0-2014-01-20-b6400f9-macos-x86_64-6458d3b46a951da62c20dd5b587d44333402e30b.tar.bz2 > > > > Thanks, > > > > /Martin Koch > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- irc: pnkfelix on irc.mozilla.org email: {fklock, pnkfelix}@mozilla.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.ronacher at active-4.com Mon Feb 10 04:40:55 2014 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Mon, 10 Feb 2014 12:40:55 +0000 Subject: [rust-dev] Chaining Results and Options In-Reply-To: <52F8BCF6.4030008@gmail.com> References: <52F8BA99.7010709@active-4.com> <52F8BBF6.3050209@gmail.com> <52F8BCF6.4030008@gmail.com> Message-ID: <52F8C8D7.5010301@active-4.com> Hi, On 10/02/2014 11:50, Huon Wilson wrote: > It's actually Haskell's fmap, which we have in the form of .map for both > Option[1] and Result[2], e.g. the proposed expr->method() is the same as > expr.map(|x| x.method()) (which is still quite verbose). 
The return value of .method() is actually a Result/Option so it would be more similar to .and_then: foo->bar() // would be the same as foo.and_then(|x| x.bar()); Regards, Armin From mak at issuu.com Mon Feb 10 04:54:29 2014 From: mak at issuu.com (Martin Koch) Date: Mon, 10 Feb 2014 13:54:29 +0100 Subject: [rust-dev] Fwd: Problems building rust on OSX In-Reply-To: <52F8C722.20800@mozilla.com> References: <52F8C722.20800@mozilla.com> Message-ID: Thanks, Felix. I came up with a simpler solution: brew install rust :) - That works just fine (at least I now have a running rust 0.9 compiler). /Martin On Mon, Feb 10, 2014 at 1:33 PM, Felix S. Klock II wrote: > Martin (cc'ing rust-dev)- > > I recommend you file a fresh bug with a transcript of your own build > attempt. > > I infer you are pointing us to issue #11162 because of some similarity in > the log output you see between that and your own build issue, but issue > #11162 is fundamentally related to a local LLVM install (at least according > to its current title) and has been verified as fixed by others, so I > believe you are probably better off making a fresh bug (and perhaps linking > to #11162 from it). > > Cheers, > -Felix > > > On 09/02/2014 22:56, Martin Koch wrote: > > Thanks for your reply. > > I built by cloning from github, so I am on master. Also, I don't have > llvm installed, so that must come from the rust build somehow? > > GCC is > > > gcc --version > i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc.
build > 5658) (LLVM build 2336.11.00) > > > > On Sun, Feb 9, 2014 at 10:30 PM, Alex Crichton wrote: > >> This problem has been fixed on master, so I would recommend using >> master or uninstalling LLVM temporarily from the system (a >> non-standard gcc in the path may also mess with compilation) >> >> On Sun, Feb 9, 2014 at 1:15 PM, Martin Koch wrote: >> > Hi List >> > >> > I'm trying to get rust to compile, but I'm apparently running into this >> bug: >> > >> > https://github.com/mozilla/rust/issues/11162 >> > >> > So my question is: How do I manually download and use this snapshot: >> > >> > >> rust-stage0-2014-01-20-b6400f9-macos-x86_64-6458d3b46a951da62c20dd5b587d44333402e30b.tar.bz2 >> > >> > Thanks, >> > >> > /Martin Koch >> > >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > >> > > > > _______________________________________________ > Rust-dev mailing listRust-dev at mozilla.orghttps://mail.mozilla.org/listinfo/rust-dev > > > > -- > irc: pnkfelix on irc.mozilla.org > email: {fklock, pnkfelix}@mozilla.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hatahet at gmail.com Mon Feb 10 12:04:24 2014 From: hatahet at gmail.com (Ziad Hatahet) Date: Mon, 10 Feb 2014 12:04:24 -0800 Subject: [rust-dev] Chaining Results and Options In-Reply-To: <52F8C8D7.5010301@active-4.com> References: <52F8BA99.7010709@active-4.com> <52F8BBF6.3050209@gmail.com> <52F8BCF6.4030008@gmail.com> <52F8C8D7.5010301@active-4.com> Message-ID: Isn't this proposal a subset of having `do` syntax like Haskell does? I thought that was being blocked on HKTs. -- Ziad On Mon, Feb 10, 2014 at 4:40 AM, Armin Ronacher wrote: > Hi, > > > On 10/02/2014 11:50, Huon Wilson wrote: > >> It's actually Haskell's fmap, which we have in the form of .map for both >> Option[1] and Result[2], e.g. 
the proposed expr->method() is the same as >> expr.map(|x| x.method()) (which is still quite verbose). >> > The return value of .method() is actually a Result/Option so it would be > more similar to .and_then: > > foo->bar() > > // would be the same as > > foo.and_then(|x| x.bar()); > > > > Regards, > Armin > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smcarthur at mozilla.com Mon Feb 10 13:12:11 2014 From: smcarthur at mozilla.com (Sean McArthur) Date: Mon, 10 Feb 2014 13:12:11 -0800 Subject: [rust-dev] Chaining Results and Options In-Reply-To: References: <52F8BA99.7010709@active-4.com> <52F8BBF6.3050209@gmail.com> <52F8BCF6.4030008@gmail.com> <52F8C8D7.5010301@active-4.com> Message-ID: This could be handled by a macro. opt_method!(tmp, method) On Mon, Feb 10, 2014 at 12:04 PM, Ziad Hatahet wrote: > Isn't this proposal a subset of having `do` syntax like Haskell does? I > thought that was being blocked on HKTs. > > -- > Ziad > > > On Mon, Feb 10, 2014 at 4:40 AM, Armin Ronacher < > armin.ronacher at active-4.com> wrote: > >> Hi, >> >> >> On 10/02/2014 11:50, Huon Wilson wrote: >> >>> It's actually Haskell's fmap, which we have in the form of .map for both >>> Option[1] and Result[2], e.g. the proposed expr->method() is the same as >>> expr.map(|x| x.method()) (which is still quite verbose). 
>>> >> The return value of .method() is actually a Result/Option so it would be >> more similar to .and_then: >> >> foo->bar() >> >> // would be the same as >> >> foo.and_then(|x| x.bar()); >> >> >> >> Regards, >> Armin >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.ronacher at active-4.com Mon Feb 10 15:06:47 2014 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Mon, 10 Feb 2014 23:06:47 +0000 Subject: [rust-dev] Chaining Results and Options In-Reply-To: <52F8BA99.7010709@active-4.com> References: <52F8BA99.7010709@active-4.com> Message-ID: <52F95B87.6020801@active-4.com> Hi, Actually on that note I want to put more emphasis on this part: On 10/02/2014 11:40, Armin Ronacher wrote: > Annoyingly enough this also means that the results need to be compatible > which is still a problem. The example there would be an IO trait that is > implemented by another system that also has its own error cases. Case in > point: SSL wrapping that wants to fail with SSL errors in addition to IO > errors. I fail to understand at the moment how library authors are supposed > to deal with this. I played around with some general helper methods on Result (such as and_then) to see how nice it can be built but the fact that the error cases need to be compatible with each other is a major pain point. In Python and other languages with exceptions you just introduce new runtime exceptions and that's the end of it. In C you have usually one sentinel value that indicates failure and then various modes to check what failure means (for instance thread local error information, error information on a context struct etc.) 
In Rust exceptions do not exist, and we basically just removed the mechanism for stashing error information away somewhere else (in the form of conditions). I really feel like there is a tool missing in the box. Regards, Armin From banderson at mozilla.com Mon Feb 10 17:12:19 2014 From: banderson at mozilla.com (Brian Anderson) Date: Mon, 10 Feb 2014 17:12:19 -0800 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F42A64.9070408@mozilla.com> References: <52F42A64.9070408@mozilla.com> Message-ID: <52F978F3.4070705@mozilla.com> Thanks for the replies, everyone. Here are my current takeaways: * Don't create Linux distro-specific packages, let the various communities deal with it * Don't create a networked installer So here's what I'm thinking we do now. These are the install methods we would be promoting on the home page: * Mac: .pkg file * Linux: standalone, cross-distro installer * Windows: use the installer we've already got I'm worried that, if we keep out of the packaging business, but those packages end up being people's preferred way to get Rust, then the web page would be advocating the worst ways to get Rust. It seems like we'll need to put a link to 'alternate installation methods' on the homepage to link people to homebrew, macports, ubuntu, arch, etc. packages, emphasizing that they are unsupported. On 02/06/2014 04:35 PM, Brian Anderson wrote: > Hey. > > One of my goals for 0.10 is to make the Rust installation and upgrade > experience better. My personal ambitions are to make Rust installable > with a single shell command, distribute binaries, not source, and to > have both nightlies and point releases. > > Since we're already able to create highly-compatible snapshot > compilers, it should be relatively easy to extend our snapshot > procedure to produce complete binaries, installable via a > cross-platform shell script.
This would require the least amount of > effort and maintenance because we don't need to use any specific > package managers or add new bots, and a single installer can work on > all Linuxes. > > We can also attempt to package Rust with various of the most common > package managers: homebrew, macports, dpkg, rpm. There are > community-maintained packages for some of these already, so we don't > necessarily need to redevelop from scratch if we just want to adopt > one or all of them as official packages. We could also create a GUI > installer for OS X, but I'm not sure how important that is. > > What shall we do? From rust-dev at tomlee.co Tue Feb 11 01:01:30 2014 From: rust-dev at tomlee.co (Tom Lee) Date: Tue, 11 Feb 2014 01:01:30 -0800 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52F978F3.4070705@mozilla.com> References: <52F42A64.9070408@mozilla.com> <52F978F3.4070705@mozilla.com> Message-ID: Hey Brian, Not sure I understand the last paragraph of your email (do you or do you not want to encourage distro-specific installation? :)), but my two cents: I do some packaging work for the Debian project & expressed some interest in helping out with the Rust packaging for Debian. Last I heard, there were a couple of blockers there preventing us from including Rust as a first-class citizen (and some of those issues also impacted the packaging for Fedora). These were non-trivial at the time, but perhaps that's changed. I've cced Luca Bruno in since I think he's been keeping a closer eye on the details. Relevant link on the wiki: https://github.com/mozilla/rust/wiki/Note-packaging At a glance, most of those tickets still seem to be open. Pushing to address these issues in Rust and upstream would go a long way to getting first-class support for Rust in Debian and Fedora -- and in turn Ubuntu, RHEL, et al. Helping us work through some of the thornier issues would be a huge help.
I'm sure you'd see a lot of support from the community wrt making this happen, but frankly many of the issues involved seem to be the sort of thing where we need some guidance from members on the core team and/or a strong push upstream to projects like libuv and llvm. In the absence of a first-class package for Debian, I'd personally prefer a sane build from source before a custom installer on Linux (something that Rust does a pretty good job of as of the time of writing this). Cheers, Tom On Mon, Feb 10, 2014 at 5:12 PM, Brian Anderson wrote: > Thanks for the replies, everyone. Here are my current takeaways: > > * Don't create Linux distro-specific packages, let the various communities > deal with it > * Don't create a networked installer > > So here's what I'm thinking we do now. These are the install methods we > would be promoting on the home page: > > * Mac: .pkg file > * Linux: standalone, cross-distro installer > * Windows: use the installer we've already got > > I'm worried that, if we keep out of the packaging business, but those > packages end up being peoples' preferred way to get Rust, then the web page > would be advocating the worst ways to get Rust. It seems like we'll need to > put a link to 'alternate installation methods' on the homepage to link > people to homebrew, macports, ubuntu, arch, etc. packages, emphasizing that > they are unsupported. > > > > > On 02/06/2014 04:35 PM, Brian Anderson wrote: > >> Hey. >> >> One of my goals for 0.10 is to make the Rust installation and upgrade >> experience better. My personal ambitions are to make Rust installable with >> a single shell command, distribute binaries, not source, and to have both >> nightlies and point releases. >> >> Since we're already able to create highly-compatible snapshot compilers, >> it should be relatively easy to extend our snapshot procedure to produce >> complete binaries, installable via a cross-platform shell script. 
This >> would require the least amount of effort and maintenance because we don't >> need to use any specific package managers or add new bots, and a single >> installer can work on all Linuxes. >> >> We can also attempt to package Rust with various of the most common >> package managers: homebrew, macports, dpkg, rpm. There are community-maintained >> packages for some of these already, so we don't necessarily need to >> redevelop from scratch if we just want to adopt one or all of them as >> official packages. We could also create a GUI installer for OS X, but I'm >> not sure how important that is. >> >> What shall we do? >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- Tom Lee / http://tomlee.co / @tglee -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko at alum.mit.edu Tue Feb 11 03:50:45 2014 From: niko at alum.mit.edu (Niko Matsakis) Date: Tue, 11 Feb 2014 06:50:45 -0500 Subject: [rust-dev] fork-join parallelism library In-Reply-To: References: Message-ID: <20140211115045.GD31013@Mr-Bennet> Hi Mahmut, More important than the name of the library, I think, is the shape of the API that it offers. There is some prior work in Servo and also I have been tinkering with some designs. I was hoping to write a new blog post with my latest thoughts in the next day or so but you can read up on an older design here: http://smallcultfollowing.com/babysteps/blog/2013/06/11/data-parallelism-in-rust/ Niko On Sun, Feb 09, 2014 at 11:29:02AM +0200, Mahmut Bulut wrote: > Hi Rust people, > > We are two senior Computer Engineering students from Turkey. We want to contribute to project in the parallelism way(call it data parallelism). So we want to start with brand new library like 'libgreen'. On the way of doing this I think we should determine: > > The name of the library > > People would love to help us.
(calling mentors or contributors, if you want to say "is it needed?" imho yes for perfection of software and community and I am sure that contributors will help us) > > The knowledge base suggestions (especially the books that we can read and cite in project report, also contributors would be mentioned in report) > > -- Mahmut Bulut > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From mahmutbulut0 at gmail.com Tue Feb 11 04:36:53 2014 From: mahmutbulut0 at gmail.com (Mahmut Bulut) Date: Tue, 11 Feb 2014 14:36:53 +0200 Subject: [rust-dev] fork-join parallelism library In-Reply-To: <20140211115045.GD31013@Mr-Bennet> References: <20140211115045.GD31013@Mr-Bennet> Message-ID: I borrowed the code of pcwalton's work queue on servo: https://github.com/mozilla/servo/blob/5ca55bb996b2a447ff05c09aa0a8d87e80e75ee5/src/components/util/workqueue.rs But I am confused about how it will be used syntactically: it could be like green and use `green:start`, or it could depend on the system's processor count to run in the background in every task. It might not be in every new task; maybe we can use a task pool for it, something like Java's ForkJoinPool. For now I don't know what to do or where to start. -- Mahmut Bulut From: Niko Matsakis Reply: Niko Matsakis niko at alum.mit.edu Date: 11 Feb 2014 at 13:50:51 To: Mahmut Bulut mahmutbulut0 at gmail.com Subject: Re: [rust-dev] fork-join parallelism library Hi Mahmut, More important than the name of the library, I think, is the shape of the API that it offers. There is some prior work in Servo and also I have been tinkering with some designs.
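The fork-join shape under discussion (split the work, run the parts concurrently, join the results) can be sketched with modern `std::thread::scope`, which postdates this thread; the work-stealing queue borrowed from Servo is a production-grade generalization of the same idea:

```rust
use std::thread;

// Minimal fork-join sketch: sum a slice by summing each half on its own
// thread, then combining the partial results at the join point.
fn parallel_sum(data: &[u64]) -> u64 {
    let mid = data.len() / 2;
    let (left, right) = data.split_at(mid);
    thread::scope(|s| {
        // Fork: each half runs concurrently, borrowing from the parent scope.
        let l = s.spawn(|| left.iter().sum::<u64>());
        let r = s.spawn(|| right.iter().sum::<u64>());
        // Join: both threads are guaranteed to finish inside the scope.
        l.join().unwrap() + r.join().unwrap()
    })
}

fn main() {
    let data: Vec<u64> = (1..=100).collect();
    assert_eq!(parallel_sum(&data), 5050);
}
```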
I was hoping to write a new blog post with my latest thoughts in the next day or so but you can read up on an older design here: http://smallcultfollowing.com/babysteps/blog/2013/06/11/data-parallelism-in-rust/ Niko On Sun, Feb 09, 2014 at 11:29:02AM +0200, Mahmut Bulut wrote: > Hi Rust people, > > We are two senior Computer Engineering students from Turkey. We want to contribute to project in the parallelism way(call it data parallelism). So we want to start with brand new library like 'libgreen'. On the way of doing this I think we should determine: > > The name of the library > > People would love to help us. (calling mentors or contributors, if you want to say "is it needed?" imho yes for perfection of software and community and I am sure that contributors will help us) > > The knowledge base suggestions (especially the books that we can read and cite in project report, also contributors would be mentioned in report) > > -- Mahmut Bulut > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From edbalint at inf.u-szeged.hu Tue Feb 11 08:25:07 2014 From: edbalint at inf.u-szeged.hu (Edit Balint) Date: Tue, 11 Feb 2014 17:25:07 +0100 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: <52F3C12D.7000103@exyr.org> References: <52F3B889.7010008@inf.u-szeged.hu> <52F3C12D.7000103@exyr.org> Message-ID: <52FA4EE3.5000306@inf.u-szeged.hu> Dear Rust Developers, Thank you for your answers. Cross-compiling Rust HelloWorld to ARM was working fine. Passing --target-triples=arm-unknown-linux-gnueabihf to the servo configure script wasn't working. The result servo binary was x86-64. Today I'm going to try to compile Rust on ARM platform, reusing Luqman's compiled binaries as a snapshot.
Best Regards, Edit On 2014-02-06 18:06, Simon Sapin wrote: > On 06/02/2014 16:30, Edit Balint wrote: >> Dear Rust Developers! >> >> My name is Edit Balint, I'm a software developer at University of >> Szeged, Hungary. >> We have a research project regarding Servo and Rust. >> Our main goal is to cross-compile and run Rust and Servo on ARM Linux >> (not Android). >> We have several issues with the cross-compiling procedure. Is there any >> guide how to achieve this? > > Although you're not using Android, this > > https://github.com/mozilla/servo/wiki/Building-for-Android#wiki-build-servo > > > suggests that you need to pass --target-triples=arm-linux-SOMETHING to > the configure script, but I don't know what value of SOMETHING is > relevant to you. It may be gnu, the default triple on my machine is > x86_64-unknown-linux-gnu. > From lucab at debian.org Tue Feb 11 10:18:50 2014 From: lucab at debian.org (Luca BRUNO) Date: Tue, 11 Feb 2014 19:18:50 +0100 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: References: <52F42A64.9070408@mozilla.com> <52F978F3.4070705@mozilla.com> Message-ID: <20140211191850.5a098ae1@debian.org> Tom Lee wrote: > I've cced Luca Bruno in since I think he's been keeping a closer eye > on the details. Thanks for copying, I was already lurking the thread. For the sake of completeness, I've been in talk with several Mozillers and Debian guys in the last few days to try to sort out all the details, hopefully in time for 0.10. I'm currently waiting for some final quotable feedback, I'll try to post all the details after that. > Relevant link on the wiki: > https://github.com/mozilla/rust/wiki/Note-packaging > > At a glance, most of those tickets still seem to be open. > Helping us work through some of the thornier issues would be a huge > help. In fact, rust devs have been very collaborative and most of the issues are mostly fixed or being addressed.
rpath should be ok, libuv has been upstreamed and there is a plan for llvm before 1.0. Bootstrapping will stay as is and hopefully be accepted (with some workaround processes) also in Debian. I can't speak for Fedora/RH, but looks like things are consistently improving on this front. And many third-party repositories already exist for those who prefer the bleeding-edge :) Cheers, Luca -- .''`. | ~<[ Luca BRUNO ~ (kaeso) ]>~ : :' : | Email: lucab (AT) debian.org ~ Debian Developer `. `'` | GPG Key ID: 0x3BFB9FB3 ~ Free Software supporter `- | HAM-radio callsign: IZ1WGT ~ Networking sorcerer -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From robert at octarineparrot.com Tue Feb 11 12:45:14 2014 From: robert at octarineparrot.com (Robert Clipsham) Date: Tue, 11 Feb 2014 20:45:14 +0000 Subject: [rust-dev] Raw socket API Message-ID: Hi all, I've been working on adding support for raw sockets to Rust ( https://github.com/mozilla/rust/pull/11410), and was looking for input on the API. I am considering something like the following: enum Address { IpAddress(IpAddr), MacAddress(u8, u8, u8, u8, u8, u8) } // OSI Layer to work at, eg to put arbitrary packets onto the network use DataLinkProtocol, // to implement an IP stack on top of ethernet use NetworkProtocol(EthernetDataLinkProtocol(IpEthernetProtocol)) etc enum Protocol { DataLinkProtocol, NetworkProtocol(DataLinkProto), TransportProtocol(NetworkProto) } enum DataLinkProto { EthernetDataLinkProtocol(EthernetProto), OtherDataLinkProtocol(uint) } impl RawSocket { pub fn new(protocol: Protocol) -> IoResult {} // Address would be filled when implementing network/transport layer protocols pub fn recvfrom(&mut self, buf: &mut [u8]) -> IoResult<(uint, Option
<Address>)> {} // Address only required for network/transport pub fn sendto(&mut self, buf: &[u8], dst: Option<Address>
) -> IoResult {} } This gives a nice enum-based way to create a socket (just specify what level you want to work at), and provides a way to work with protocols which haven't been included with the enums (using OtherDataLinkProtocol() for example). Does this seem sufficient? Is there anything I've missed/is there a nicer way I could express this? Thanks, Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Tue Feb 11 16:58:58 2014 From: banderson at mozilla.com (Brian Anderson) Date: Tue, 11 Feb 2014 16:58:58 -0800 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: References: <52F42A64.9070408@mozilla.com> <52F978F3.4070705@mozilla.com> Message-ID: <52FAC752.4050903@mozilla.com> On 02/11/2014 01:01 AM, Tom Lee wrote: > Hey Brian, > > Not sure I understand the last paragraph of your email (do you or do > you not want to encourage distro-specific installation? :)) I'm still not sure. I want people to be able to install Rust easily and I want those sources to be reliable. From danielmicay at gmail.com Tue Feb 11 17:04:06 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Tue, 11 Feb 2014 20:04:06 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: <52FAC752.4050903@mozilla.com> References: <52F42A64.9070408@mozilla.com> <52F978F3.4070705@mozilla.com> <52FAC752.4050903@mozilla.com> Message-ID: On Tue, Feb 11, 2014 at 7:58 PM, Brian Anderson wrote: > On 02/11/2014 01:01 AM, Tom Lee wrote: >> >> Hey Brian, >> >> Not sure I understand the last paragraph of your email (do you or do you >> not want to encourage distro-specific installation? :)) > > > I'm still not sure. I want people to be able to install Rust easily and I > want those sources to be reliable. 
I don't think Rust should endorse third party binary builds, but official distribution packages aren't third party as the distribution is already trusted by the user. From flaper87 at gmail.com Wed Feb 12 01:13:03 2014 From: flaper87 at gmail.com (Flaper87) Date: Wed, 12 Feb 2014 10:13:03 +0100 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: References: <52F42A64.9070408@mozilla.com> <52F978F3.4070705@mozilla.com> <52FAC752.4050903@mozilla.com> Message-ID: 2014-02-12 2:04 GMT+01:00 Daniel Micay : > On Tue, Feb 11, 2014 at 7:58 PM, Brian Anderson > wrote: > > On 02/11/2014 01:01 AM, Tom Lee wrote: > >> > >> Hey Brian, > >> > >> Not sure I understand the last paragraph of your email (do you or do you > >> not want to encourage distro-specific installation? :)) > > > > > > I'm still not sure. I want people to be able to install Rust easily and I > > want those sources to be reliable. > > I don't think Rust should endorse third party binary builds, but > official distribution packages aren't third party as the distribution > is already trusted by the user. > There's some value in not being opinionated when it comes to supporting distros. I think Rust should have 1 official distribution package (binaries) and let distros' package maintainers take care of the rest. Distro users will most likely prefer and trust their own package manager / maintainer. What is important, though, is that Rust doesn't block distros on building their packages. This has been raised in this thread already, though. Cheers, Fla. -- Flavio (@flaper87) Percoco http://www.flaper87.com http://github.com/FlaPer87 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From edbalint at inf.u-szeged.hu Wed Feb 12 02:09:52 2014 From: edbalint at inf.u-szeged.hu (Edit Balint) Date: Wed, 12 Feb 2014 11:09:52 +0100 Subject: [rust-dev] Rust (Servo) Cross-Compile to ARM In-Reply-To: <52FA4EE3.5000306@inf.u-szeged.hu> References: <52F3B889.7010008@inf.u-szeged.hu> <52F3C12D.7000103@exyr.org> <52FA4EE3.5000306@inf.u-szeged.hu> Message-ID: <52FB4870.9060804@inf.u-szeged.hu> Dear Rust Developers, Yesterday I tried to compile Rust on a Panda board with Ubuntu 12.04. I reused Luqman's Rust binary (armhf), but I got the following error: compile_and_link: arm-unknown-linux-gnu/stage0/lib/rustc/arm-unknown-linux-gnu/lib/libstd.so arm-unknown-linux-gnu/stage0/bin/rustc: error while loading shared libraries: libgreen-83b1c0e5-0.9.so: cannot open shared object file: No such file or directory make: *** [arm-unknown-linux-gnu/stage0/lib/rustc/arm-unknown-linux-gnu/lib/libstd.so] Error 127 Then I copied this lib from Luqman's Rust libs to this folder: arm-unknown-linux-gnu/stage0/lib/. After that a new error came: cp: arm-unknown-linux-gnu/stage0/lib/rustc/arm-unknown-linux-gnu/lib/librustrt.a compile_and_link: arm-unknown-linux-gnu/stage0/lib/rustc/arm-unknown-linux-gnu/lib/libstd.so /home/ebalint/Work/rust_servo_version_cross/rust/src/libstd/managed.rs:36:26: 36:29 error: found `mut` in ident position /home/ebalint/Work/rust_servo_version_cross/rust/src/libstd/managed.rs:36 pub fn mut_ptr_eq(a: @mut T, b: @mut T) -> bool { ^~~ /home/ebalint/Work/rust_servo_version_cross/rust/src/libstd/managed.rs:36:30: 36:31 error: expected `,` but found `T` /home/ebalint/Work/rust_servo_version_cross/rust/src/libstd/managed.rs:36 pub fn mut_ptr_eq(a: @mut T, b: @mut T) -> bool { ^ task 'rustc' failed at 'explicit failure', /scratch/laden/rust/src/libsyntax/diagnostic.rs:41 task '
' failed at 'explicit failure', /scratch/laden/rust/src/librustc/lib.rs:448 make: *** [arm-unknown-linux-gnu/stage0/lib/rustc/arm-unknown-linux-gnu/lib/libstd.so] Error 101 Can you tell me how you did the compilation on your RK3188? Thank you, Best Regards: Edit On 2014-02-11 17:25, Edit Balint wrote: > Dear Rust Developers, > Thank you for your answers. > > Cross-compiling Rust HelloWorld to ARM was working fine. > > Passing --target-triples=arm-unknown-linux-gnueabihf to the servo > configure script wasn't working. The resulting servo binary was x86-64. > > Today I'm going to try to compile Rust on ARM platform, reusing > Luqman's compiled binaries as a snapshot. > > Best Regards, > > Edit > > On 2014-02-06 18:06, Simon Sapin wrote: >> On 06/02/2014 16:30, Edit Balint wrote: >>> Dear Rust Developers! >>> >>> My name is Edit Balint, I'm a software developer at University of >>> Szeged, Hungary. >>> We have a research project regarding Servo and Rust. >>> Our main goal is to cross-compile and run Rust and Servo on ARM Linux >>> (not Android). >>> We have several issues with the cross-compiling procedure. Is there any >>> guide on how to achieve this? >> >> Although you're not using Android, this >> >> https://github.com/mozilla/servo/wiki/Building-for-Android#wiki-build-servo >> >> >> suggests that you need to pass --target-triples=arm-linux-SOMETHING >> to the configure script, but I don't know what value of SOMETHING is >> relevant to you. It may be gnu, the default triple on my machine is >> x86_64-unknown-linux-gnu. >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From ben.striegel at gmail.com Wed Feb 12 15:27:01 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Wed, 12 Feb 2014 18:27:01 -0500 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? 
In-Reply-To: <20140211191850.5a098ae1@debian.org> References: <52F42A64.9070408@mozilla.com> <52F978F3.4070705@mozilla.com> <20140211191850.5a098ae1@debian.org> Message-ID: > there is a plan for llvm before 1.0. What does this refer to? Last I checked we were certain to still be using a custom LLVM as of 1.0. On Tue, Feb 11, 2014 at 1:18 PM, Luca BRUNO wrote: > Tom Lee wrote: > > > I've cced Luca Bruno in since I think he's been keeping a closer eye > > on the details. > > Thanks for copying, I was already lurking in the thread. For the sake of > completeness, I've been in talks with several Mozillers and Debian guys > in the last few days to try to sort out all the details, hopefully in > time for 0.10. I'm currently waiting for some final quotable feedback, > I'll try to post all the details after that. > > > Relevant link on the wiki: > > https://github.com/mozilla/rust/wiki/Note-packaging > > > > At a glance, most of those tickets still seem to be open. > > Helping us work through some of the thornier issues would be a huge > > help. > > In fact, rust devs have been very collaborative and most of the issues > are mostly fixed or being addressed. rpath should be ok, libuv has been > upstreamed and there is a plan for llvm before 1.0. > Bootstrapping will stay as is and hopefully be accepted (with some > workaround processes) also in Debian. > > I can't speak for Fedora/RH, but it looks like things are consistently > improving on this front. And many third-party repositories already > exist for those who prefer the bleeding-edge :) > > Cheers, Luca > > -- > .''`. | ~<[ Luca BRUNO ~ (kaeso) ]>~ > : :' : | Email: lucab (AT) debian.org ~ Debian Developer > `. 
`'` | GPG Key ID: 0x3BFB9FB3 ~ Free Software supporter > `- | HAM-radio callsign: IZ1WGT ~ Networking sorcerer > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.sapin at exyr.org Wed Feb 12 15:47:10 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Wed, 12 Feb 2014 23:47:10 +0000 Subject: [rust-dev] What form should the official Rust binary installers for Unixes take? In-Reply-To: References: <52F42A64.9070408@mozilla.com> <52F978F3.4070705@mozilla.com> <20140211191850.5a098ae1@debian.org> Message-ID: <52FC07FE.6060401@exyr.org> On 12/02/2014 23:27, Benjamin Striegel wrote: > > there is a plan for llvm before 1.0. > > What does this refer to? Last I checked we were certain to still be > using a custom LLVM as of 1.0. Apparently using upstream LLVM is desired, but not a blocker for 1.0: https://github.com/mozilla/rust/issues/4259#issuecomment-34094922 https://github.com/mozilla/rust/wiki/Meeting-weekly-2014-02-04#wiki-llvm -- Simon Sapin From erick.tryzelaar at gmail.com Wed Feb 12 16:58:11 2014 From: erick.tryzelaar at gmail.com (Erick Tryzelaar) Date: Wed, 12 Feb 2014 16:58:11 -0800 Subject: [rust-dev] Call for Presenters at the March (and future) Bay Area Rust meetup Message-ID: Good afternoon Rusties, I'm looking for presenters for the March and beyond Bay Area Rust meetups. We are happy to take: * Tutorials * Demos * Academic research that would be of interest to our community * Community organization proposals * Cake recipes * And more We can even handle remote presentations if you have access to a web cam and the internet. If you're interested, please send Brian Anderson or myself a proposal of what you'd like to talk about. Thanks! -Erick -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From com.liigo at gmail.com Thu Feb 13 04:12:35 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Thu, 13 Feb 2014 20:12:35 +0800 Subject: [rust-dev] Help: type `std::comm::Chan` does not implement any method in scope named `clone` Message-ID: Hi Rusties, When I try to compile tmp.rs, I got the error: ``` tmp.rs:8:10: 8:19 error: type `std::comm::Chan<A>` does not implement any method in scope named `clone` tmp.rs:8 let _ = c.clone(); ^~~~~~~~~ ``` But I don't know what to do. Please help me. Thank you. tmp.rs: ``` #[deriving(Clone)] pub struct A { dummy: uint, } pub fn main() { let (p, c) = Chan::<A>::new(); let _ = c.clone(); } ``` -- by *Liigo*, http://blog.csdn.net/liigo/ Google+ https://plus.google.com/105597640837742873343/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Thu Feb 13 04:17:20 2014 From: alex at crichton.co (Alex Crichton) Date: Thu, 13 Feb 2014 06:17:20 -0600 Subject: [rust-dev] Help: type `std::comm::Chan` does not implement any method in scope named `clone` In-Reply-To: References: Message-ID: What version of the compiler are you using? The clone-able Chan only very recently landed, so you'll need a very up-to-date compiler to get the change. On Thu, Feb 13, 2014 at 6:12 AM, Liigo Zhuang wrote: > Hi Rusties, > > When I try to compile tmp.rs, I got the error: > > ``` > tmp.rs:8:10: 8:19 error: type `std::comm::Chan<A>` does not implement any > method in scope named `clone` > tmp.rs:8 let _ = c.clone(); > ^~~~~~~~~ > ``` > > But I don't know what to do. Please help me. Thank you. 
> > tmp.rs: > ``` > #[deriving(Clone)] > pub struct A { > dummy: uint, > } > > pub fn main() { > let (p, c) = Chan::<A>::new(); > let _ = c.clone(); > } > ``` > > -- > by Liigo, http://blog.csdn.net/liigo/ > Google+ https://plus.google.com/105597640837742873343/ > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From com.liigo at gmail.com Thu Feb 13 05:17:43 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Thu, 13 Feb 2014 21:17:43 +0800 Subject: [rust-dev] Help: type `std::comm::Chan` does not implement any method in scope named `clone` In-Reply-To: References: Message-ID: I compiled the latest rustc from source yesterday. On Feb 13, 2014, 8:17 PM, "Alex Crichton" wrote: > What version of the compiler are you using? The clone-able Chan only > very recently landed, so you'll need a very up-to-date compiler to get > the change. > > On Thu, Feb 13, 2014 at 6:12 AM, Liigo Zhuang wrote: > > Hi Rusties, > > > > When I try to compile tmp.rs, I got the error: > > > > ``` > > tmp.rs:8:10: 8:19 error: type `std::comm::Chan<A>` does not implement > any > > method in scope named `clone` > > tmp.rs:8 let _ = c.clone(); > > ^~~~~~~~~ > > ``` > > > > But I don't know what to do. Please help me. Thank you. > > > > tmp.rs: > > ``` > > #[deriving(Clone)] > > pub struct A { > > dummy: uint, > > } > > > > pub fn main() { > > let (p, c) = Chan::<A>::new(); > > let _ = c.clone(); > > } > > ``` > > > > -- > > by Liigo, http://blog.csdn.net/liigo/ > > Google+ https://plus.google.com/105597640837742873343/ > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simon.sapin at exyr.org Thu Feb 13 07:05:48 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Thu, 13 Feb 2014 15:05:48 +0000 Subject: [rust-dev] =?windows-1252?q?RFC=3A=A0Conventions_for_=22well-beha?= =?windows-1252?q?ved=22_iterators?= Message-ID: <52FCDF4C.2080607@exyr.org> Hi, The Rust documentation currently makes iterators behavior undefined after .next() has returned None once. http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html > The Iterator protocol does not define behavior after None is > returned. A concrete Iterator implementation may choose to behave > however it wishes, either by returning None infinitely, or by doing > something else. http://static.rust-lang.org/doc/master/guide-container.html > In general, you cannot rely on the behavior of the next() method > after it has returned None. Some iterators may return None forever. > Others may behave differently. This is unfortunate. Code that accepts any iterator as input and does with it anything more complicated than a single 'for' loop will have to be defensive in order to not fall into undefined behavior. The type system can not enforce anything about this, but I'd like that we consider having conventions about "well-behaved" iterators. --- Proposal: 0. An iterator is said to be "well-behaved" if, after its .next() method has returned None once, any subsequent call also returns None. 1. Iterators *should* be well-behaved. 2. Iterators in libstd and other libraries distributed with rustc *must* be well-behaved. (I.e. not being well-behaved is a bug.) 3. When accepting an iterator as input, it's ok to assume it's well-behaved. 4. For iterator adaptors in particular, 3. means that 1. and 2. only apply for well-behaved input. (So that, eg. std::iter::Map can stay as straightforward as it is, and does not need to be coded defensively.) --- Does the general idea sound like something y'all want? I'm not overly attached to the details. 
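[Editor's note] Rule 0 can be made concrete with a small sketch. This uses later Rust syntax (associated types) rather than the 0.10-era `Iterator<A>` trait, and `Flaky` is a hypothetical example type, not anything from libstd:

```rust
// A deliberately ill-behaved iterator: it yields an item, reports
// exhaustion with None, and then "resurrects" with more items.
struct Flaky {
    n: u32,
}

impl Iterator for Flaky {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.n += 1;
        match self.n {
            1 => Some(1),
            2 => None,     // first exhaustion
            _ => Some(99), // comes back to life: violates rule 0
        }
    }
}

fn main() {
    let mut raw = Flaky { n: 0 };
    assert_eq!(raw.next(), Some(1));
    assert_eq!(raw.next(), None);
    assert_eq!(raw.next(), Some(99)); // not well-behaved

    // The libstd `fuse` adaptor restores the guarantee externally:
    // after the first None, it returns None forever.
    let mut fused = Flaky { n: 0 }.fuse();
    assert_eq!(fused.next(), Some(1));
    assert_eq!(fused.next(), None);
    assert_eq!(fused.next(), None); // pinned to None from now on
}
```

Under the proposed convention, `Flaky` violates rule 0, and wrapping it in `fuse()` is what lets a consumer rely on rule 3 anyway.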
-- Simon Sapin From o.renaud at gmx.fr Thu Feb 13 08:09:07 2014 From: o.renaud at gmx.fr (Olivier Renaud) Date: Thu, 13 Feb 2014 17:09:07 +0100 Subject: [rust-dev] =?iso-8859-1?q?RFC=3A=A0Conventions_for_=22well-behave?= =?iso-8859-1?q?d=22_iterators?= Message-ID: <20140213160907.58060@gmx.com> As you said, the type system can not enforce this rule, that's why the documentation has no choice but to say the behavior is undefined. If the code you write relies on None being returned forever, then you should use the Fuse iterator adaptor, which wraps an existing iterator and enforces this behavior: http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html#method.fuse http://static.rust-lang.org/doc/master/guide-container.html#iterator-adaptors ----- Original Message ----- From: Simon Sapin Sent: 13.02.14 16:05 To: rust-dev at mozilla.org Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators Hi, The Rust documentation currently makes iterators behavior undefined after .next() has returned None once. http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html > The Iterator protocol does not define behavior after None is > returned. A concrete Iterator implementation may choose to behave > however it wishes, either by returning None infinitely, or by doing > something else. http://static.rust-lang.org/doc/master/guide-container.html > In general, you cannot rely on the behavior of the next() method > after it has returned None. Some iterators may return None forever. > Others may behave differently. This is unfortunate. Code that accepts any iterator as input and does with it anything more complicated than a single 'for' loop will have to be defensive in order to not fall into undefined behavior. The type system can not enforce anything about this, but I'd like that we consider having conventions about "well-behaved" iterators. --- Proposal: 0. 
An iterator is said to be "well-behaved" if, after its .next() method has returned None once, any subsequent call also returns None. 1. Iterators *should* be well-behaved. 2. Iterators in libstd and other libraries distributed with rustc *must* be well-behaved. (I.e. not being well-behaved is a bug.) 3. When accepting an iterator as input, it's ok to assume it's well-behaved. 4. For iterator adaptors in particular, 3. means that 1. and 2. only apply for well-behaved input. (So that, eg. std::iter::Map can stay as straightforward as it is, and does not need to be coded defensively.) --- Does the general idea sound like something y'all want? I'm not overly attached to the details. -- Simon Sapin From simon.sapin at exyr.org Thu Feb 13 08:33:13 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Thu, 13 Feb 2014 16:33:13 +0000 Subject: [rust-dev] =?utf-8?q?RFC=3A=C2=A0Conventions_for_=22well-behaved?= =?utf-8?q?=22_iterators?= In-Reply-To: <20140213160907.58060@gmx.com> References: <20140213160907.58060@gmx.com> Message-ID: <52FCF3C9.8020908@exyr.org> On 13/02/2014 16:09, Olivier Renaud wrote: > As you said, the type system can not enforce this rule, that's why > the documentation has no choice but to say the behavior is undefined. Just because we can not make them automatically-enforced rules doesn't mean we can't have conventions. I'm suggesting that we adopt a convention that iterators *should* be well-behaved. 
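[Editor's note] A `fuse`-style wrapper is also tiny to hand-roll, which shows how cheap it is to concentrate the defensive flag in one adaptor instead of in every adaptor. This sketch uses later Rust syntax; `Fused` and `Blinker` are hypothetical names, not the libstd types:

```rust
// A minimal fuse-style adaptor: one boolean flag, checked once per call.
struct Fused<I> {
    inner: I,
    done: bool,
}

impl<I: Iterator> Iterator for Fused<I> {
    type Item = I::Item;
    fn next(&mut self) -> Option<I::Item> {
        if self.done {
            return None; // stay exhausted forever
        }
        let item = self.inner.next();
        if item.is_none() {
            self.done = true; // latch on the first None
        }
        item
    }
}

// An ill-behaved iterator to wrap: it alternates Some and None.
struct Blinker {
    on: bool,
}

impl Iterator for Blinker {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        self.on = !self.on;
        if self.on { Some(1) } else { None }
    }
}

fn main() {
    let mut w = Fused { inner: Blinker { on: false }, done: false };
    assert_eq!(w.next(), Some(1));
    assert_eq!(w.next(), None);
    assert_eq!(w.next(), None); // the flag keeps it exhausted
}
```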
> If the code you write relies on None being returned forever, then you > should use the Fuse iterator adaptor, that wraps an existing iterator > and enforces this behavior: > > http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html#method.fuse > http://static.rust-lang.org/doc/master/guide-container.html#iterator-adaptors -- Simon Sapin From alex at crichton.co Thu Feb 13 09:35:43 2014 From: alex at crichton.co (Alex Crichton) Date: Thu, 13 Feb 2014 12:35:43 -0500 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: <52FCDF4C.2080607@exyr.org> References: <52FCDF4C.2080607@exyr.org> Message-ID: For reference, this topic was discussed last August as well: https://mail.mozilla.org/pipermail/rust-dev/2013-August/005113.html On Thu, Feb 13, 2014 at 10:05 AM, Simon Sapin wrote: > Hi, > > The Rust documentation currently makes iterators behavior undefined after > .next() has returned None once. > > http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html >> >> The Iterator protocol does not define behavior after None is >> returned. A concrete Iterator implementation may choose to behave >> however it wishes, either by returning None infinitely, or by doing >> something else. > > > http://static.rust-lang.org/doc/master/guide-container.html >> >> In general, you cannot rely on the behavior of the next() method >> after it has returned None. Some iterators may return None forever. >> Others may behave differently. > > > > This is unfortunate. Code that accepts any iterator as input and does with > it anything more complicated than a single 'for' loop will have to be > defensive in order to not fall into undefined behavior. > > The type system can not enforce anything about this, but I'd like that we > consider having conventions about "well-behaved" iterators. > > --- > > Proposal: > > 0. An iterator is said to be "well-behaved" if, after its .next() method has > returned None once, any subsequent call also returns None. 
> > 1. Iterators *should* be well-behaved. > > 2. Iterators in libstd and other libraries distributed with rustc *must* be > well-behaved. (I.e. not being well-behaved is a bug.) > > 3. When accepting an iterator as input, it's ok to assume it's well-behaved. > > 4. For iterator adaptors in particular, 3. means that 1. and 2. only apply > for well-behaved input. (So that, eg. std::iter::Map can stay as > straightforward as it is, and does not need to be coded defensively.) > > --- > > Does the general idea sound like something y'all want? I'm not overly > attached to the details. > > -- > Simon Sapin > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From alex at crichton.co Thu Feb 13 09:37:55 2014 From: alex at crichton.co (Alex Crichton) Date: Thu, 13 Feb 2014 12:37:55 -0500 Subject: [rust-dev] Help: type `std::comm::Chan` does not implement any method in scope named `clone` In-Reply-To: References: Message-ID: Can you supply the output of `rustc -v`? The snippet compiles ok for me off master. On Thu, Feb 13, 2014 at 8:17 AM, Liigo Zhuang wrote: > I compiled the latest rustc from source yesterday. > > On Feb 13, 2014, 8:17 PM, "Alex Crichton" wrote: > >> What version of the compiler are you using? The clone-able Chan only >> very recently landed, so you'll need a very up-to-date compiler to get >> the change. >> >> On Thu, Feb 13, 2014 at 6:12 AM, Liigo Zhuang wrote: >> > Hi Rusties, >> > >> > When I try to compile tmp.rs, I got the error: >> > >> > ``` >> > tmp.rs:8:10: 8:19 error: type `std::comm::Chan<A>` does not implement >> > any >> > method in scope named `clone` >> > tmp.rs:8 let _ = c.clone(); >> > ^~~~~~~~~ >> > ``` >> > >> > But I don't know what to do. Please help me. Thank you. 
>> > tmp.rs: >> > ``` >> > #[deriving(Clone)] >> > pub struct A { >> > dummy: uint, >> > } >> > >> > pub fn main() { >> > let (p, c) = Chan::<A>::new(); >> > let _ = c.clone(); >> > } >> > ``` >> > >> > -- >> > by Liigo, http://blog.csdn.net/liigo/ >> > Google+ https://plus.google.com/105597640837742873343/ >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > From glaebhoerl at gmail.com Thu Feb 13 11:13:13 2014 From: glaebhoerl at gmail.com (=?ISO-8859-1?Q?G=E1bor_Lehel?=) Date: Thu, 13 Feb 2014 20:13:13 +0100 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: <20140213160907.58060@gmx.com> References: <20140213160907.58060@gmx.com> Message-ID: On Thu, Feb 13, 2014 at 5:09 PM, Olivier Renaud wrote: > As you said, the type system can not enforce this rule, that's why > the documentation has no choice but to say the behavior is undefined. > This is not strictly true. If instead of fn next(&mut self) -> Option<A>; we had something like fn next(self) -> Option<(Self, A)>; then access to exhausted iterators would be ruled out at the type level. (But it's more cumbersome to work with and is currently incompatible with trait objects.) > > If the code you write relies on None being returned forever, then you > should use the Fuse iterator adaptor, which wraps an existing iterator > and enforces this behavior: > > > http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html#method.fuse > > http://static.rust-lang.org/doc/master/guide-container.html#iterator-adaptors > > ----- Original Message ----- > From: Simon Sapin > Sent: 13.02.14 16:05 > To: rust-dev at mozilla.org > Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators > Hi, > > The Rust documentation currently makes iterators behavior undefined > after .next() has returned None once. 
> > http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html > > The Iterator protocol does not define behavior after None is > > returned. A concrete Iterator implementation may choose to behave > > however it wishes, either by returning None infinitely, or by doing > > something else. > > http://static.rust-lang.org/doc/master/guide-container.html > > In general, you cannot rely on the behavior of the next() method > > after it has returned None. Some iterators may return None forever. > > Others may behave differently. > > > This is unfortunate. Code that accepts any iterator as input and does > with it anything more complicated than a single 'for' loop will have to > be defensive in order to not fall into undefined behavior. > > The type system can not enforce anything about this, but I'd like that > we consider having conventions about "well-behaved" iterators. > > --- > > Proposal: > > 0. An iterator is said to be "well-behaved" if, after its .next() method > has returned None once, any subsequent call also returns None. > > 1. Iterators *should* be well-behaved. > > 2. Iterators in libstd and other libraries distributed with rustc *must* > be well-behaved. (I.e. not being well-behaved is a bug.) > > 3. When accepting an iterator as input, it's ok to assume it's > well-behaved. > > 4. For iterator adaptors in particular, 3. means that 1. and 2. only > apply for well-behaved input. (So that, eg. std::iter::Map can stay as > straightforward as it is, and does not need to be coded defensively.) > > --- > > Does the general idea sound like something y'all want? I'm not overly > attached to the details. > > -- > Simon Sapin > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
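[Editor's note] The by-value `next` signature discussed above can be sketched concretely. `Countdown` is a hypothetical example type, and this is an illustration of the idea on an inherent method, not the trait Rust actually uses:

```rust
// A consuming iteration protocol: `next` takes `self` by value and only
// hands the iterator back while items remain. Once it returns None, no
// handle survives, so an exhausted iterator cannot be advanced again.
struct Countdown(u32);

impl Countdown {
    fn next(self) -> Option<(Countdown, u32)> {
        if self.0 == 0 {
            None // the iterator is consumed here, permanently
        } else {
            Some((Countdown(self.0 - 1), self.0))
        }
    }
}

fn main() {
    let mut it = Countdown(3);
    let mut total = 0;
    loop {
        match it.next() {
            Some((rest, v)) => {
                total += v;
                it = rest; // thread the iterator through
            }
            None => break, // `it` has been moved; no further calls possible
        }
    }
    assert_eq!(total, 6);
}
```

The type system rules out "call `next` after None" by ownership alone, which is exactly the trade-off against convenience and trait objects noted in the thread.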
URL: From danielmicay at gmail.com Thu Feb 13 11:28:14 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Thu, 13 Feb 2014 14:28:14 -0500 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: References: <20140213160907.58060@gmx.com> Message-ID: On Thu, Feb 13, 2014 at 2:13 PM, Gábor Lehel wrote: > > (But it's more cumbersome to work with and is currently incompatible with > trait objects.) Iterators are already mostly incompatible with trait objects since all the adaptors take by-value self. From danielmicay at gmail.com Thu Feb 13 11:56:21 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Thu, 13 Feb 2014 14:56:21 -0500 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: <52FCDF4C.2080607@exyr.org> References: <52FCDF4C.2080607@exyr.org> Message-ID: On Thu, Feb 13, 2014 at 10:05 AM, Simon Sapin wrote: > Hi, > > The Rust documentation currently makes iterators behavior undefined after > .next() has returned None once. > > http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html >> >> The Iterator protocol does not define behavior after None is >> returned. A concrete Iterator implementation may choose to behave >> however it wishes, either by returning None infinitely, or by doing >> something else. > > > http://static.rust-lang.org/doc/master/guide-container.html >> >> In general, you cannot rely on the behavior of the next() method >> after it has returned None. Some iterators may return None forever. >> Others may behave differently. > > > > This is unfortunate. Code that accepts any iterator as input and does with > it anything more complicated than a single 'for' loop will have to be > defensive in order to not fall into undefined behavior. > > The type system can not enforce anything about this, but I'd like that we > consider having conventions about "well-behaved" iterators. > > --- > > Proposal: > > 0.
An iterator is said to be "well-behaved" if, after its .next() method has > returned None once, any subsequent call also returns None. > > 1. Iterators *should* be well-behaved. > > 2. Iterators in libstd and other libraries distributed with rustc *must* be > well-behaved. (I.e. not being well-behaved is a bug.) > > 3. When accepting an iterator as input, it's ok to assume it's well-behaved. > > 4. For iterator adaptors in particular, 3. means that 1. and 2. only apply > for well-behaved input. (So that, eg. std::iter::Map can stay as > straightforward as it is, and does not need to be coded defensively.) > > --- > > Does the general idea sound like something y'all want? I'm not overly > attached to the details. > > -- > Simon Sapin Enforcing this invariant makes many adaptors more complex. For example, the `filter` adaptor would need to maintain a boolean flag and branch on it. I'm fine with the current solution of a `fuse` adaptor because it moves all of the responsibility to a single location, and user-defined adaptors don't need to get this right. From kevin at sb.org Thu Feb 13 12:08:19 2014 From: kevin at sb.org (Kevin Ballard) Date: Thu, 13 Feb 2014 12:08:19 -0800 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: References: <52FCDF4C.2080607@exyr.org> Message-ID: <3A173949-381F-4477-B1BD-D9FB501710AD@sb.org> On Feb 13, 2014, at 11:56 AM, Daniel Micay wrote: > On Thu, Feb 13, 2014 at 10:05 AM, Simon Sapin wrote: >> Hi, >> >> The Rust documentation currently makes iterators behavior undefined after >> .next() has returned None once. >> >> http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html >>> >>> The Iterator protocol does not define behavior after None is >>> returned. A concrete Iterator implementation may choose to behave >>> however it wishes, either by returning None infinitely, or by doing >>> something else. 
>> >> >> http://static.rust-lang.org/doc/master/guide-container.html >>> >>> In general, you cannot rely on the behavior of the next() method >>> after it has returned None. Some iterators may return None forever. >>> Others may behave differently. >> >> >> >> This is unfortunate. Code that accepts any iterator as input and does with >> it anything more complicated than a single 'for' loop will have to be >> defensive in order to not fall into undefined behavior. >> >> The type system can not enforce anything about this, but I'd like that we >> consider having conventions about "well-behaved" iterators. >> >> --- >> >> Proposal: >> >> 0. An iterator is said to be "well-behaved" if, after its .next() method has >> returned None once, any subsequent call also returns None. >> >> 1. Iterators *should* be well-behaved. >> >> 2. Iterators in libstd and other libraries distributed with rustc *must* be >> well-behaved. (I.e. not being well-behaved is a bug.) >> >> 3. When accepting an iterator as input, it's ok to assume it's well-behaved. >> >> 4. For iterator adaptors in particular, 3. means that 1. and 2. only apply >> for well-behaved input. (So that, eg. std::iter::Map can stay as >> straightforward as it is, and does not need to be coded defensively.) >> >> --- >> >> Does the general idea sound like something y'all want? I'm not overly >> attached to the details. >> >> -- >> Simon Sapin > > Enforcing this invariant makes many adaptors more complex. For > example, the `filter` adaptor would need to maintain a boolean flag > and branch on it. I'm fine with the current solution of a `fuse` > adaptor because it moves all of the responsibility to a single > location, and user-defined adaptors don't need to get this right. This was the main reasoning behind the current logic. 
The vast majority of users of iterators don't care about next() behavior after the iterator has returned None, so there was no need to make the iterator adaptors track extra state in the general case. Any client who does need it can just call `.fuse()` to get a Fuse adaptor that adds the necessary checks. -Kevin -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From simon.sapin at exyr.org Thu Feb 13 15:32:13 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Thu, 13 Feb 2014 23:32:13 +0000 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: References: <52FCDF4C.2080607@exyr.org> Message-ID: <52FD55FD.7020800@exyr.org> On 13/02/2014 19:56, Daniel Micay wrote: > Enforcing this invariant makes many adaptors more complex. For > example, the `filter` adaptor would need to maintain a boolean flag > and branch on it. I'm fine with the current solution of a `fuse` > adaptor because it moves all of the responsibility to a single > location, and user-defined adaptors don't need to get this right. My entire email was about specifically *not* enforcing anything, only convention. Filter is in the exact same category as Map: it's only expected to be well-behaved when its input is. -- Simon Sapin From erick.tryzelaar at gmail.com Thu Feb 13 15:33:23 2014 From: erick.tryzelaar at gmail.com (Erick Tryzelaar) Date: Thu, 13 Feb 2014 15:33:23 -0800 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: References: <20140213160907.58060@gmx.com> Message-ID: On Thursday, February 13, 2014, Gábor Lehel > wrote: > > > This is not strictly true. > > If instead of > > fn next(&mut self) -> Option<A>; > > we had something like > > fn next(self) -> Option<(Self, A)>; > > then access to exhausted iterators would be ruled out at the type level. 
> > (But it's more cumbersome to work with and is currently incompatible with > trait objects.) > This is an appealing option. If it is really this simple to close this undefined behavior, I think we should consider it. Are there any other downsides? Does it optimize down to the same code as our current iterators? -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielmicay at gmail.com Thu Feb 13 15:35:29 2014 From: danielmicay at gmail.com (Daniel Micay) Date: Thu, 13 Feb 2014 18:35:29 -0500 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: References: <20140213160907.58060@gmx.com> Message-ID: On Thu, Feb 13, 2014 at 6:33 PM, Erick Tryzelaar wrote: > On Thursday, February 13, 2014, Gábor Lehel wrote: >> >> >> >> This is not strictly true. >> >> If instead of >> >> fn next(&mut self) -> Option<A>; >> >> we had something like >> >> fn next(self) -> Option<(Self, A)>; >> >> then access to exhausted iterators would be ruled out at the type level. >> >> (But it's more cumbersome to work with and is currently incompatible with >> trait objects.) > > This is an appealing option. If it is really this simple to close this > undefined behavior, I think we should consider it. Are there any other > downsides? Does it optimize down to the same code as our current iterators? It's certainly not as convenient and would only work if all iterators were marked as `NoPod`. From com.liigo at gmail.com Thu Feb 13 19:58:02 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Fri, 14 Feb 2014 11:58:02 +0800 Subject: [rust-dev] Help: type `std::comm::Chan` does not implement any method in scope named `clone` In-Reply-To: References: Message-ID: rustc -v: ``` rustc 0.10-pre (a102aef 2014-02-12 08:41:19 +0800) host: x86_64-unknown-linux-gnu ``` The most recent rustc, I just recompiled from mozilla/rust/master several minutes ago. 
2014-02-14 1:37 GMT+08:00 Alex Crichton :
> Can you supply the output of `rustc -v`? The snippet compiles ok for
> me off master.
>
> On Thu, Feb 13, 2014 at 8:17 AM, Liigo Zhuang wrote:
> > I compiled the latest rustc from source yesterday.
> >
> > On Feb 13, 2014, 8:17 PM, "Alex Crichton" wrote:
> >
> >> What version of the compiler are you using? The clone-able Chan only
> >> very recently landed, so you'll need a very up-to-date compiler to get
> >> the change.
> >>
> >> On Thu, Feb 13, 2014 at 6:12 AM, Liigo Zhuang wrote:
> >> > Hi Rusties,
> >> >
> >> > When trying to compile tmp.rs, I got the error:
> >> >
> >> > ```
> >> > tmp.rs:8:10: 8:19 error: type `std::comm::Chan<A>` does not implement any
> >> > method in scope named `clone`
> >> > tmp.rs:8     let _ = c.clone();
> >> >              ^~~~~~~~~
> >> > ```
> >> >
> >> > But I don't know what to do. Please help me. Thank you.
> >> >
> >> > tmp.rs:
> >> > ```
> >> > #[deriving(Clone)]
> >> > pub struct A {
> >> >     dummy: uint,
> >> > }
> >> >
> >> > pub fn main() {
> >> >     let (p, c) = Chan::<A>::new();
> >> >     let _ = c.clone();
> >> > }
> >> > ```
> >> >
> >> > --
> >> > by Liigo, http://blog.csdn.net/liigo/
> >> > Google+ https://plus.google.com/105597640837742873343/
> >> >
> >> > _______________________________________________
> >> > Rust-dev mailing list
> >> > Rust-dev at mozilla.org
> >> > https://mail.mozilla.org/listinfo/rust-dev
> >> >
--
by *Liigo*, http://blog.csdn.net/liigo/
Google+ https://plus.google.com/105597640837742873343/
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From com.liigo at gmail.com Thu Feb 13 20:02:42 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Fri, 14 Feb 2014 12:02:42 +0800 Subject: [rust-dev] Help: type `std::comm::Chan` does not implement any method in scope named `clone` In-Reply-To: References: Message-ID:
I'm very sorry. I forgot `make install`. Now it works OK.
2014-02-14 11:58 GMT+08:00 Liigo Zhuang :
> rustc -v:
> ```
> rustc 0.10-pre (a102aef 2014-02-12 08:41:19 +0800)
> host: x86_64-unknown-linux-gnu
> ```
>
> The most recent rustc, I just recompiled from mozilla/rust/master several
> minutes ago.
> The same compile error occurred.
>
>
> 2014-02-14 1:37 GMT+08:00 Alex Crichton :
>
>> Can you supply the output of `rustc -v`? The snippet compiles ok for
>> me off master.
>>
>> On Thu, Feb 13, 2014 at 8:17 AM, Liigo Zhuang wrote:
>> > I compiled the latest rustc from source yesterday.
>> >
>> > On Feb 13, 2014, 8:17 PM, "Alex Crichton" wrote:
>> >
>> >> What version of the compiler are you using? The clone-able Chan only
>> >> very recently landed, so you'll need a very up-to-date compiler to get
>> >> the change.
>> >>
>> >> On Thu, Feb 13, 2014 at 6:12 AM, Liigo Zhuang wrote:
>> >> > Hi Rusties,
>> >> >
>> >> > When trying to compile tmp.rs, I got the error:
>> >> >
>> >> > ```
>> >> > tmp.rs:8:10: 8:19 error: type `std::comm::Chan<A>` does not implement
>> >> > any method in scope named `clone`
>> >> > tmp.rs:8     let _ = c.clone();
>> >> >              ^~~~~~~~~
>> >> > ```
>> >> >
>> >> > But I don't know what to do. Please help me. Thank you.
>> >> >
>> >> > tmp.rs:
>> >> > ```
>> >> > #[deriving(Clone)]
>> >> > pub struct A {
>> >> >     dummy: uint,
>> >> > }
>> >> >
>> >> > pub fn main() {
>> >> >     let (p, c) = Chan::<A>::new();
>> >> >     let _ = c.clone();
>> >> > }
>> >> > ```
>> >> >
>> >> > --
>> >> > by Liigo, http://blog.csdn.net/liigo/
>> >> > Google+ https://plus.google.com/105597640837742873343/
>> >> >
>> >> > _______________________________________________
>> >> > Rust-dev mailing list
>> >> > Rust-dev at mozilla.org
>> >> > https://mail.mozilla.org/listinfo/rust-dev
>> >> >
>
> --
> by *Liigo*, http://blog.csdn.net/liigo/
> Google+ https://plus.google.com/105597640837742873343/
--
by *Liigo*, http://blog.csdn.net/liigo/
Google+ https://plus.google.com/105597640837742873343/
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From glaebhoerl at gmail.com Fri Feb 14 02:46:44 2014 From: glaebhoerl at gmail.com (Gábor Lehel) Date: Fri, 14 Feb 2014 11:46:44 +0100 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: References: <20140213160907.58060@gmx.com> Message-ID:
On Fri, Feb 14, 2014 at 12:35 AM, Daniel Micay wrote:
> On Thu, Feb 13, 2014 at 6:33 PM, Erick Tryzelaar wrote:
> > On Thursday, February 13, 2014, Gábor Lehel wrote:
> >> This is not strictly true.
> >>
> >> If instead of
> >>
> >> fn next(&mut self) -> Option<A>;
> >>
> >> we had something like
> >>
> >> fn next(self) -> Option<(Self, A)>;
> >>
> >> then access to exhausted iterators would be ruled out at the type level.
> >>
> >> (But it's more cumbersome to work with and is currently incompatible with
> >> trait objects.)
> >
> > This is an appealing option. If it is really this simple to close this
> > undefined behavior, I think we should consider it. Are there any other
> > downsides? Does it optimize down to the same code as our current iterators?
> It's certainly not as convenient and would only work if all iterators
> were marked as `NoPod`.

Even if it were `Pod` (i.e. copyable), the state of the old copy would be left unchanged by the call, so I don't think this is a problem.

You could also recover the behavior of the existing `Fuse` adapter (call it any number of times, exhaustion checked at runtime) by wrapping it in an `Option` like so:

fn next_fused<A, I: Iterator<A>>(opt_iter: &mut Option<I>) -> Option<A> {
    opt_iter.take().and_then(|iter| {
        iter.next().map(|(next_iter, result)| {
            *opt_iter = Some(next_iter);
            result
        })
    })
}

Dunno about performance. Lots of copies/moves with this scheme, so it seems possible that it might be slower.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From dbau.pp at gmail.com Fri Feb 14 03:03:59 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Fri, 14 Feb 2014 22:03:59 +1100 Subject: [rust-dev] RFC: Conventions for "well-behaved" iterators In-Reply-To: References: <20140213160907.58060@gmx.com> Message-ID: <52FDF81F.6010703@gmail.com>
Taking `self` would make it much harder to thread iterators statefully through other function calls, and also make using Iterator trait objects harder/impossible, since `next` requires `~self` until #10672 is fixed, which is unacceptable, and mentions `Self` in the return value, making it uncallable through something with the type erased.

Huon

On 14/02/14 21:46, Gábor Lehel wrote:
> On Fri, Feb 14, 2014 at 12:35 AM, Daniel Micay wrote:
> > On Thu, Feb 13, 2014 at 6:33 PM, Erick Tryzelaar wrote:
> > On Thursday, February 13, 2014, Gábor Lehel wrote:
> >> This is not strictly true.
> >>
> >> If instead of
> >>
> >> fn next(&mut self) -> Option<A>;
> >>
> >> we had something like
> >>
> >> fn next(self) -> Option<(Self, A)>;
> >>
> >> then access to exhausted iterators would be ruled out at the type level.
> >>
> >> (But it's more cumbersome to work with and is currently
> >> incompatible with trait objects.)
> >
> > This is an appealing option. If it is really this simple to close this
> > undefined behavior, I think we should consider it. Are there any other
> > downsides? Does it optimize down to the same code as our current iterators?
>
> It's certainly not as convenient and would only work if all iterators
> were marked as `NoPod`.
>
> Even if it were `Pod` (i.e. copyable), the state of the old copy would
> be left unchanged by the call, so I don't think this is a problem.
>
> You could also recover the behavior of the existing `Fuse` adapter
> (call it any number of times, exhaustion checked at runtime) by
> wrapping it in an `Option` like so:
>
> fn next_fused<A, I: Iterator<A>>(opt_iter: &mut Option<I>) -> Option<A> {
>     opt_iter.take().and_then(|iter| {
>         iter.next().map(|(next_iter, result)| {
>             *opt_iter = Some(next_iter);
>             result
>         })
>     })
> }
>
> Dunno about performance. Lots of copies/moves with this scheme, so it
> seems possible that it might be slower.
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From explodingmind at gmail.com Fri Feb 14 06:48:30 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Fri, 14 Feb 2014 09:48:30 -0500 Subject: [rust-dev] [rustc-f039d10] A newer kernel is required to run this binary. (__kernel_cmpxchg64 helper) Message-ID:
Hey All,

I'm attempting to run rustc on a 3.0.36 kernel. Within the last few weeks, it started complaining about __kernel_cmpxchg64. Unfortunately, like many, the systems on which I'd like to use Rust are beyond my control, so simply upgrading the kernel's not an especially viable option.

Anyone know the root cause of this issue?

Thanks!
--
Ian
-------------- next part -------------- An HTML attachment was scrubbed...
URL:
From alex at crichton.co Fri Feb 14 07:20:20 2014 From: alex at crichton.co (Alex Crichton) Date: Fri, 14 Feb 2014 10:20:20 -0500 Subject: [rust-dev] [rustc-f039d10] A newer kernel is required to run this binary. (__kernel_cmpxchg64 helper) In-Reply-To: References: Message-ID:
Are you targeting a platform other than x86? I recently added support for 64-bit atomics on all platforms, and without the right cpu or target feature set LLVM will lower them to intrinsic calls, and it's possible that you're missing an intrinsic somewhere.

On Fri, Feb 14, 2014 at 9:48 AM, Ian Daniher wrote:
> Hey All,
>
> I'm attempting to run rustc on a 3.0.36 kernel. Within the last few weeks,
> it started complaining about __kernel_cmpxchg64. Unfortunately, like many,
> the systems on which I'd like to use Rust are beyond my control, so simply
> upgrading the kernel's not an especially viable option.
>
> Anyone know the root cause of this issue?
>
> Thanks!
> --
> Ian
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>
From explodingmind at gmail.com Fri Feb 14 07:31:56 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Fri, 14 Feb 2014 07:31:56 -0800 (PST) Subject: [rust-dev] [rustc-f039d10] A newer kernel is required to run this binary. (__kernel_cmpxchg64 helper) In-Reply-To: References: Message-ID: <1392391916533.adccc8f3@Nodemailer>
Targeting ARM hard float, v7 CPU. Any ideas how to go about addressing this?
--
From My Tiny Glowing Screen

On Fri, Feb 14, 2014 at 10:20 AM, Alex Crichton wrote:
> Are you targeting a platform other than x86? I recently added support
> for 64-bit atomics on all platforms, and without the right cpu or
> target feature set LLVM will lower them to intrinsic calls, and it's
> possible that you're missing an intrinsic somewhere.
> On Fri, Feb 14, 2014 at 9:48 AM, Ian Daniher wrote:
>> Hey All,
>>
>> I'm attempting to run rustc on a 3.0.36 kernel.
Within the last few weeks, >> it started complaining about __kernel_cmpxchg64. Unfortunately, like many, >> the systems on which I'd like to use Rust are beyond my control, so simply >> upgrading the kernel's not an especially viable option. >> >> Anyone know the root cause of this issue? >> >> Thanks! >> -- >> Ian >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Fri Feb 14 07:34:46 2014 From: alex at crichton.co (Alex Crichton) Date: Fri, 14 Feb 2014 10:34:46 -0500 Subject: [rust-dev] [rustc-f039d10] A newer kernel is required to run this binary. (__kernel_cmpxchg64 helper) In-Reply-To: <1392391916533.adccc8f3@Nodemailer> References: <1392391916533.adccc8f3@Nodemailer> Message-ID: For android, we provide the +v7 feature by default in order to allow LLVM to lower these 64-bit atomics to actual instructions. If you compile with `-C target-feature=+v7` then it shouldn't make a function call to __kernel_cmpxchg64. On Fri, Feb 14, 2014 at 10:31 AM, Ian Daniher wrote: > Targetting ARM hard float, v7 CPU. > > Any ideas how to go about addressing this? > -- > From My Tiny Glowing Screen > > > On Fri, Feb 14, 2014 at 10:20 AM, Alex Crichton wrote: >> >> Are you targeting a platform other than x86? I recently added support >> for 64-bit atomics on all platforms, and without the right cpu or >> target feature set LLVM will lower them to intrinsic calls, and it's >> possible that you're missing an intrinsic somewhere. >> >> On Fri, Feb 14, 2014 at 9:48 AM, Ian Daniher >> wrote: >> > Hey All, >> > >> > I'm attempting to run rustc on a 3.0.36 kernel. Within the last few >> > weeks, >> > it started complaining about __kernel_cmpxchg64. 
Unfortunately, like >> > many, >> > the systems on which I'd like to use Rust are beyond my control, so >> > simply >> > upgrading the kernel's not an especially viable option. >> > >> > Anyone know the root cause of this issue? >> > >> > Thanks! >> > -- >> > Ian >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > > > From explodingmind at gmail.com Fri Feb 14 08:29:12 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Fri, 14 Feb 2014 11:29:12 -0500 Subject: [rust-dev] [Boston] Rust talk on Monday Message-ID: Hello fellow Rustafarians! Monday the 17th, at 6pm, at 1000 Olin Way, Needham, MA, there will be free pizza for anyone who shows up. I'll be giving a ~30m talk on my work using Rust for software radio demodulation, decoding, and parsing, focusing on some neat datastructure and architectural decisions I've made. The focus of the talk will be on https://github.com/ade-ma/LibRedio, my preliminary work on a sort of "gnuradio-lite" my capstone team is using for making low-cost automation tools. Contact me with questions, comments, and RSVPs! Best, -- Ian Daniher -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at metajack.im Fri Feb 14 09:01:28 2014 From: jack at metajack.im (Jack Moffitt) Date: Fri, 14 Feb 2014 10:01:28 -0700 Subject: [rust-dev] [Boston] Rust talk on Monday In-Reply-To: References: Message-ID: This sounds pretty interesting. Will it be recorded at all? jack. On Fri, Feb 14, 2014 at 9:29 AM, Ian Daniher wrote: > Hello fellow Rustafarians! > > Monday the 17th, at 6pm, at 1000 Olin Way, Needham, MA, there will be free > pizza for anyone who shows up. > > I'll be giving a ~30m talk on my work using Rust for software radio > demodulation, decoding, and parsing, focusing on some neat datastructure and > architectural decisions I've made. 
> > The focus of the talk will be on https://github.com/ade-ma/LibRedio, my > preliminary work on a sort of "gnuradio-lite" my capstone team is using for > making low-cost automation tools. > > Contact me with questions, comments, and RSVPs! > > Best, > -- > Ian Daniher > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From explodingmind at gmail.com Fri Feb 14 09:18:14 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Fri, 14 Feb 2014 12:18:14 -0500 Subject: [rust-dev] [Boston] Rust talk on Monday In-Reply-To: References: Message-ID: I can make sure it's recorded & put on youtube. On Fri, Feb 14, 2014 at 12:01 PM, Jack Moffitt wrote: > This sounds pretty interesting. Will it be recorded at all? > > jack. > > On Fri, Feb 14, 2014 at 9:29 AM, Ian Daniher > wrote: > > Hello fellow Rustafarians! > > > > Monday the 17th, at 6pm, at 1000 Olin Way, Needham, MA, there will be > free > > pizza for anyone who shows up. > > > > I'll be giving a ~30m talk on my work using Rust for software radio > > demodulation, decoding, and parsing, focusing on some neat datastructure > and > > architectural decisions I've made. > > > > The focus of the talk will be on https://github.com/ade-ma/LibRedio, my > > preliminary work on a sort of "gnuradio-lite" my capstone team is using > for > > making low-cost automation tools. > > > > Contact me with questions, comments, and RSVPs! > > > > Best, > > -- > > Ian Daniher > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From arielb1 at mail.tau.ac.il Fri Feb 14 10:00:44 2014 From: arielb1 at mail.tau.ac.il (Ariel Ben-Yehuda) Date: Fri, 14 Feb 2014 20:00:44 +0200 Subject: [rust-dev] RFC: Struct Self-Burrows Message-ID:
Hi,

Currently, Rust has C#-like implicit variance - actually it's even more implicit than C#, being inferred automatically - which is nice in being boilerplate-light. However, in some cases more explicit variance would be desired, as in this example:

Say Library A has a mutable AContext, and Library B, which depends on Library A, has a BContext which depends on the AContext. We could type this as follows:

struct AContext { priv state: RefCell<AInternalState>, ... }
fn create_acontext() -> AContext;

struct BContext<'a> { acontext: &'a AContext, ... }
fn create_bcontext<'a>(&'a AContext) -> BContext<'a>;

And use it like:

let acontext = create_acontext();
let bcontext = create_bcontext(&acontext);

(Note that acontext: ~AContext won't work, because Library B objects may have a reference to a subobject of the AContext, and besides, the user may like to use the AContext themselves.)

However, this forces the user of Library B to interact with its dependencies, and if Library B depends on half a dozen other libraries and the user creates ten contexts in his tests, we have a problem. We would like to create a function that creates a BContext and an AContext together - however, someone needs to manage the ownership of the AContext and make sure it goes with the BContext. A potential solution is to use Rc, but this creates Rc pollution when the lifetime is static - now objects with a pointer to part of AInternalState need to use an Rc. Note that the lifetime of the inner AContext is completely static here - it goes with the BContext.
We could have something like this:

struct BContextHolder { priv acontext: ~AContext, ctx: BContext<'something> }

fn create_bcontext() -> BContextHolder {
    let actx = ~create_acontext();
    BContextHolder { acontext: actx, ctx: create_bcontext(&actx) }
}

However, this isn't legal Rust - we can't find a good lifetime for 'something - and besides, acontext isn't borrowed so people can free it.

I think I found a way to solve this - allow struct fields to have borrows into *other fields of the same struct*. A (bad) syntax for this is:

struct BContextHolder { priv &'a acontext: ~AContext, ctx: BContext<'a> }

Which means that the field acontext is borrowed in the lifetime 'a, which is also the lifetime of ctx. I didn't formalise this, but from where I looked we essentially need to:
1) Make sure that people accessing acontext see it as borrowed
2) Make sure you can't just change acontext
3) Make sure the borrow relationships are not strengthened when structuring/destructuring.

This post is getting tl;dr already so I'll end it here.
--
Ariel Ben-Yehuda
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From mcguire at crsr.net Fri Feb 14 11:14:08 2014 From: mcguire at crsr.net (Tommy M. McGuire) Date: Fri, 14 Feb 2014 13:14:08 -0600 Subject: [rust-dev] Pointer to trait method? Message-ID: <52FE6B00.10906@crsr.net>
Suppose I have a trait over a type T:

trait A<T> {
    fn fun(t: T); // <- [1]
}

Is there a way to get a pointer to the implementation of fun for a specific T? How would I go about calling it?

(Yeah, I know I'm kinda far off the reservation.)

[1] Not necessarily an instance method, but that would be useful, too.

--
Tommy M.
McGuire
mcguire at crsr.net
From arielb1 at mail.tau.ac.il Fri Feb 14 11:26:01 2014 From: arielb1 at mail.tau.ac.il (Ariel Ben-Yehuda) Date: Fri, 14 Feb 2014 21:26:01 +0200 Subject: [rust-dev] RFC: Struct Self-Borrows Message-ID:
[/Burrow/Borrow/ damn spellcheck]

More thoughts on this:

The syntax I used earlier (&'a NAME) has a potential problem that it would make the pointer borrowed, rather than the pointee. I don't really understand the rules of borrowed owned ptrs: can you write through them? can others have live aliases? (If unique ptrs work internally like Rc<>s then that part's fine [~T -> &'a ~T -> &'a T], the latter by an automatic .borrow()) -- but I'm not really interested in declaration syntax here.

By the way, are ~ptrs special to the borrow checker in any way other than autoref/deref (By the looks of it DST will make slices not special to the borrow checker and special @s were turned into non-special Rc's -- is the same thing being done for ~s)?

Informal semantics: structs can have fields borrowed to a lifetime. That binds the lifetime (if you use the same name twice you get an error). If a field is borrowed to a lifetime:
a) When constructing the struct, a borrowed value can be used as long as you put all references into the right places in the new struct
b) Dually, when destructuring the struct, the parts end up borrowed in the same relationship they were in the struct.
c) It can only be accessed borrowed to that lifetime.
d) It can't be set

The latter 2 behave in the same way as a borrowed unique pointer in a stack frame -- you can say that this proposal makes stack frames less special by allowing ordinary structs to root borrows. This has the disadvantage of making lifetimes non-totally-ordered. Is that too bad?
--
Ariel Ben-Yehuda
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From damienradtke at gmail.com Fri Feb 14 12:22:29 2014 From: damienradtke at gmail.com (Damien Radtke) Date: Fri, 14 Feb 2014 14:22:29 -0600 Subject: [rust-dev] Need help implementing some complex parent-child task behavior. Message-ID: I'm trying to write what is essentially a card game simulator in Rust, but I'm running into a bit of a roadblock with Rust's memory management. The gist of what I want to accomplish is: 1. In the program's main loop, iterate over several "players" and call their "play" method in turn. 2. Each "play" method should be able to send requests back to the parent in order to take certain actions, who will validate that the action is possible and update the player's state accordingly. The problem I'm running into is that, in order to let a player "play" and have the game validate actions for them, I would need to run each player in their own task, (I considered implementing it as each function call indicating a request for action [e.g. by returning Some(action), or None when finished] and calling it repeatedly until none are taken, but this makes the implementation for each player needlessly complex) but this makes for some tricky situations. My current implementation uses a DuplexStream to communicate back and forth, the child sending requests to the parent and the parent sending responses, but then I run into the issue of how to inform the child of their current state, but don't let them modify it outside of sending action requests. Ideally I'd like to be able to create an (unsafe) immutable pointer to the state held by the parent as mutable, but that gives me a "values differ in mutability" error. 
Other approaches so far have failed as well; Arcs don't work because I need to have one-sided mutability; standard borrowed pointers don't work because the child and parent need to access it at the same time (though only the parent should be able to modify it, ensuring its safety); even copying the state doesn't work because the child then needs to update its local state with a new copy sent by the parent, which is also prone to mutability-related errors. Any tips on how to accomplish something like this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at sb.org Fri Feb 14 12:47:03 2014 From: kevin at sb.org (Kevin Ballard) Date: Fri, 14 Feb 2014 12:47:03 -0800 Subject: [rust-dev] Need help implementing some complex parent-child task behavior. In-Reply-To: References: Message-ID: <2EC9B5EE-48F7-4248-BDEE-2B3715F9D4A5@sb.org> What if the state's fields are private, and in a different module than the players, but exposes getters to query the state? Then the players can't modify it, but if the component that processes the actions has visibility into the state's fields, it can modify them just fine. -Kevin On Feb 14, 2014, at 12:22 PM, Damien Radtke wrote: > I'm trying to write what is essentially a card game simulator in Rust, but I'm running into a bit of a roadblock with Rust's memory management. The gist of what I want to accomplish is: > > 1. In the program's main loop, iterate over several "players" and call their "play" method in turn. > 2. Each "play" method should be able to send requests back to the parent in order to take certain actions, who will validate that the action is possible and update the player's state accordingly. > > The problem I'm running into is that, in order to let a player "play" and have the game validate actions for them, I would need to run each player in their own task, (I considered implementing it as each function call indicating a request for action [e.g. 
by returning Some(action), or None when finished] and calling it repeatedly until none are taken, but this makes the implementation for each player needlessly complex) but this makes for some tricky situations. > > My current implementation uses a DuplexStream to communicate back and forth, the child sending requests to the parent and the parent sending responses, but then I run into the issue of how to inform the child of their current state, but don't let them modify it outside of sending action requests. > > Ideally I'd like to be able to create an (unsafe) immutable pointer to the state held by the parent as mutable, but that gives me a "values differ in mutability" error. Other approaches so far have failed as well; Arcs don't work because I need to have one-sided mutability; standard borrowed pointers don't work because the child and parent need to access it at the same time (though only the parent should be able to modify it, ensuring its safety); even copying the state doesn't work because the child then needs to update its local state with a new copy sent by the parent, which is also prone to mutability-related errors. > > Any tips on how to accomplish something like this? > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From ben.striegel at gmail.com Fri Feb 14 12:54:03 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Fri, 14 Feb 2014 15:54:03 -0500 Subject: [rust-dev] [Boston] Rust talk on Monday In-Reply-To: References: Message-ID: Though if you don't record it, you can offer to give the presentation remotely at the monthly Bay Area Rust Meetup. :) On Fri, Feb 14, 2014 at 12:18 PM, Ian Daniher wrote: > I can make sure it's recorded & put on youtube. 
> > > On Fri, Feb 14, 2014 at 12:01 PM, Jack Moffitt wrote: > >> This sounds pretty interesting. Will it be recorded at all? >> >> jack. >> >> On Fri, Feb 14, 2014 at 9:29 AM, Ian Daniher >> wrote: >> > Hello fellow Rustafarians! >> > >> > Monday the 17th, at 6pm, at 1000 Olin Way, Needham, MA, there will be >> free >> > pizza for anyone who shows up. >> > >> > I'll be giving a ~30m talk on my work using Rust for software radio >> > demodulation, decoding, and parsing, focusing on some neat >> datastructure and >> > architectural decisions I've made. >> > >> > The focus of the talk will be on https://github.com/ade-ma/LibRedio, my >> > preliminary work on a sort of "gnuradio-lite" my capstone team is using >> for >> > making low-cost automation tools. >> > >> > Contact me with questions, comments, and RSVPs! >> > >> > Best, >> > -- >> > Ian Daniher >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From damienradtke at gmail.com Fri Feb 14 12:59:33 2014 From: damienradtke at gmail.com (Damien Radtke) Date: Fri, 14 Feb 2014 14:59:33 -0600 Subject: [rust-dev] Need help implementing some complex parent-child task behavior. In-Reply-To: <2EC9B5EE-48F7-4248-BDEE-2B3715F9D4A5@sb.org> References: <2EC9B5EE-48F7-4248-BDEE-2B3715F9D4A5@sb.org> Message-ID: Unfortunately, the type that maintains the state apparently doesn't fulfill Send, which confuses me because it's a struct that consists of a string, function pointer, and a few dynamically-sized vectors. Which of these types makes the struct as a whole violate Send? 
On Fri, Feb 14, 2014 at 2:47 PM, Kevin Ballard wrote: > What if the state's fields are private, and in a different module than the > players, but exposes getters to query the state? Then the players can't > modify it, but if the component that processes the actions has visibility > into the state's fields, it can modify them just fine. > > -Kevin > > On Feb 14, 2014, at 12:22 PM, Damien Radtke > wrote: > > > I'm trying to write what is essentially a card game simulator in Rust, > but I'm running into a bit of a roadblock with Rust's memory management. > The gist of what I want to accomplish is: > > > > 1. In the program's main loop, iterate over several "players" and call > their "play" method in turn. > > 2. Each "play" method should be able to send requests back to the parent > in order to take certain actions, who will validate that the action is > possible and update the player's state accordingly. > > > > The problem I'm running into is that, in order to let a player "play" > and have the game validate actions for them, I would need to run each > player in their own task, (I considered implementing it as each function > call indicating a request for action [e.g. by returning Some(action), or > None when finished] and calling it repeatedly until none are taken, but > this makes the implementation for each player needlessly complex) but this > makes for some tricky situations. > > > > My current implementation uses a DuplexStream to communicate back and > forth, the child sending requests to the parent and the parent sending > responses, but then I run into the issue of how to inform the child of > their current state, but don't let them modify it outside of sending action > requests. > > > > Ideally I'd like to be able to create an (unsafe) immutable pointer to > the state held by the parent as mutable, but that gives me a "values differ > in mutability" error. 
Other approaches so far have failed as well; Arcs > don't work because I need to have one-sided mutability; standard borrowed > pointers don't work because the child and parent need to access it at the > same time (though only the parent should be able to modify it, ensuring its > safety); even copying the state doesn't work because the child then needs > to update its local state with a new copy sent by the parent, which is also > prone to mutability-related errors. > > > > Any tips on how to accomplish something like this? > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erick.tryzelaar at gmail.com Fri Feb 14 13:03:29 2014 From: erick.tryzelaar at gmail.com (Erick Tryzelaar) Date: Fri, 14 Feb 2014 13:03:29 -0800 Subject: [rust-dev] [Boston] Rust talk on Monday In-Reply-To: References: Message-ID: Yep, we'd be happy to VTC you in for this presentation in the next meetup! On Fri, Feb 14, 2014 at 12:54 PM, Benjamin Striegel wrote: > Though if you don't record it, you can offer to give the presentation > remotely at the monthly Bay Area Rust Meetup. :) > > > On Fri, Feb 14, 2014 at 12:18 PM, Ian Daniher wrote: > >> I can make sure it's recorded & put on youtube. >> >> >> On Fri, Feb 14, 2014 at 12:01 PM, Jack Moffitt wrote: >> >>> This sounds pretty interesting. Will it be recorded at all? >>> >>> jack. >>> >>> On Fri, Feb 14, 2014 at 9:29 AM, Ian Daniher >>> wrote: >>> > Hello fellow Rustafarians! >>> > >>> > Monday the 17th, at 6pm, at 1000 Olin Way, Needham, MA, there will be >>> free >>> > pizza for anyone who shows up. >>> > >>> > I'll be giving a ~30m talk on my work using Rust for software radio >>> > demodulation, decoding, and parsing, focusing on some neat >>> datastructure and >>> > architectural decisions I've made. 
>>> > >>> > The focus of the talk will be on https://github.com/ade-ma/LibRedio, >>> my >>> > preliminary work on a sort of "gnuradio-lite" my capstone team is >>> using for >>> > making low-cost automation tools. >>> > >>> > Contact me with questions, comments, and RSVPs! >>> > >>> > Best, >>> > -- >>> > Ian Daniher >>> > >>> > _______________________________________________ >>> > Rust-dev mailing list >>> > Rust-dev at mozilla.org >>> > https://mail.mozilla.org/listinfo/rust-dev >>> > >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Fri Feb 14 13:08:43 2014 From: alex at crichton.co (Alex Crichton) Date: Fri, 14 Feb 2014 16:08:43 -0500 Subject: [rust-dev] Pointer to trait method? In-Reply-To: <52FE6B00.10906@crsr.net> References: <52FE6B00.10906@crsr.net> Message-ID: You'll always need a concrete type in order to get the trait method for that type. Something like this may work for you though: trait A { fn foo(Self); } impl A for int { fn foo(a: int) {} } fn main() { let f: fn(int) = A::foo; } On Fri, Feb 14, 2014 at 2:14 PM, Tommy M. McGuire wrote: > Suppose I have a trait T: > > trait A { > fn fun(t:T); // <- [1] > } > > Is there a way to get a pointer to the implementation of fun for a > specific T? How would I go about calling it? > > (Yeah, I know I'm kinda far off the reservation.) > > [1] Not necessarily an instance method, but that would be useful, too. > > > -- > Tommy M. 
McGuire > mcguire at crsr.net > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From kevin at sb.org Fri Feb 14 13:19:09 2014 From: kevin at sb.org (Kevin Ballard) Date: Fri, 14 Feb 2014 13:19:09 -0800 Subject: [rust-dev] Need help implementing some complex parent-child task behavior. In-Reply-To: References: <2EC9B5EE-48F7-4248-BDEE-2B3715F9D4A5@sb.org> Message-ID: Depends. If the string or the vectors are & instead of ~, that would do it. Also, if the element type of the vector does not fulfill Send. Oh, and the function pointer is a function pointer, not a closure, right? -Kevin On Feb 14, 2014, at 12:59 PM, Damien Radtke wrote: > Unfortunately, the type that maintains the state apparently doesn't fulfill Send, which confuses me because it's a struct that consists of a string, function pointer, and a few dynamically-sized vectors. Which of these types makes the struct as a whole violate Send? > > > On Fri, Feb 14, 2014 at 2:47 PM, Kevin Ballard wrote: > What if the state's fields are private, and in a different module than the players, but exposes getters to query the state? Then the players can't modify it, but if the component that processes the actions has visibility into the state's fields, it can modify them just fine. > > -Kevin > > On Feb 14, 2014, at 12:22 PM, Damien Radtke wrote: > > > I'm trying to write what is essentially a card game simulator in Rust, but I'm running into a bit of a roadblock with Rust's memory management. The gist of what I want to accomplish is: > > > > 1. In the program's main loop, iterate over several "players" and call their "play" method in turn. > > 2. Each "play" method should be able to send requests back to the parent in order to take certain actions, who will validate that the action is possible and update the player's state accordingly. 
> > > > The problem I'm running into is that, in order to let a player "play" and have the game validate actions for them, I would need to run each player in their own task, (I considered implementing it as each function call indicating a request for action [e.g. by returning Some(action), or None when finished] and calling it repeatedly until none are taken, but this makes the implementation for each player needlessly complex) but this makes for some tricky situations. > > > > My current implementation uses a DuplexStream to communicate back and forth, the child sending requests to the parent and the parent sending responses, but then I run into the issue of how to inform the child of their current state, but don't let them modify it outside of sending action requests. > > > > Ideally I'd like to be able to create an (unsafe) immutable pointer to the state held by the parent as mutable, but that gives me a "values differ in mutability" error. Other approaches so far have failed as well; Arcs don't work because I need to have one-sided mutability; standard borrowed pointers don't work because the child and parent need to access it at the same time (though only the parent should be able to modify it, ensuring its safety); even copying the state doesn't work because the child then needs to update its local state with a new copy sent by the parent, which is also prone to mutability-related errors. > > > > Any tips on how to accomplish something like this? > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From damienradtke at gmail.com Fri Feb 14 14:12:09 2014 From: damienradtke at gmail.com (Damien Radtke) Date: Fri, 14 Feb 2014 16:12:09 -0600 Subject: [rust-dev] Need help implementing some complex parent-child task behavior. In-Reply-To: References: <2EC9B5EE-48F7-4248-BDEE-2B3715F9D4A5@sb.org> Message-ID: The function pointer is indeed a function pointer and all of the strings and vectors are ~, but the vector type is &'static. They're meant to hold references to card definitions, which is more efficient than passing around the cards themselves. I tried modifying the vectors to hold ~-strings instead, but it still didn't work. Looks like I'll need to do more research on Send. On Fri, Feb 14, 2014 at 3:19 PM, Kevin Ballard wrote: > Depends. If the string or the vectors are & instead of ~, that would do > it. Also, if the element type of the vector does not fulfill Send. Oh, and > the function pointer is a function pointer, not a closure, right? > > -Kevin > > On Feb 14, 2014, at 12:59 PM, Damien Radtke > wrote: > > Unfortunately, the type that maintains the state apparently doesn't > fulfill Send, which confuses me because it's a struct that consists of a > string, function pointer, and a few dynamically-sized vectors. Which of > these types makes the struct as a whole violate Send? > > > On Fri, Feb 14, 2014 at 2:47 PM, Kevin Ballard wrote: > >> What if the state's fields are private, and in a different module than >> the players, but exposes getters to query the state? Then the players can't >> modify it, but if the component that processes the actions has visibility >> into the state's fields, it can modify them just fine. >> >> -Kevin >> >> On Feb 14, 2014, at 12:22 PM, Damien Radtke >> wrote: >> >> > I'm trying to write what is essentially a card game simulator in Rust, >> but I'm running into a bit of a roadblock with Rust's memory management. 
>> The gist of what I want to accomplish is: >> > >> > 1. In the program's main loop, iterate over several "players" and call >> their "play" method in turn. >> > 2. Each "play" method should be able to send requests back to the >> parent in order to take certain actions, who will validate that the action >> is possible and update the player's state accordingly. >> > >> > The problem I'm running into is that, in order to let a player "play" >> and have the game validate actions for them, I would need to run each >> player in their own task, (I considered implementing it as each function >> call indicating a request for action [e.g. by returning Some(action), or >> None when finished] and calling it repeatedly until none are taken, but >> this makes the implementation for each player needlessly complex) but this >> makes for some tricky situations. >> > >> > My current implementation uses a DuplexStream to communicate back and >> forth, the child sending requests to the parent and the parent sending >> responses, but then I run into the issue of how to inform the child of >> their current state, but don't let them modify it outside of sending action >> requests. >> > >> > Ideally I'd like to be able to create an (unsafe) immutable pointer to >> the state held by the parent as mutable, but that gives me a "values differ >> in mutability" error. Other approaches so far have failed as well; Arcs >> don't work because I need to have one-sided mutability; standard borrowed >> pointers don't work because the child and parent need to access it at the >> same time (though only the parent should be able to modify it, ensuring its >> safety); even copying the state doesn't work because the child then needs >> to update its local state with a new copy sent by the parent, which is also >> prone to mutability-related errors. >> > >> > Any tips on how to accomplish something like this? 
>> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From damienradtke at gmail.com Fri Feb 14 14:18:38 2014 From: damienradtke at gmail.com (Damien Radtke) Date: Fri, 14 Feb 2014 16:18:38 -0600 Subject: [rust-dev] Need help implementing some complex parent-child task behavior. In-Reply-To: References: <2EC9B5EE-48F7-4248-BDEE-2B3715F9D4A5@sb.org> Message-ID: Ah, I think the problem is that I'm trying to create a new task within a loop over an iterator, so each value is an &-ptr and is therefore causing it to fail... On Fri, Feb 14, 2014 at 4:12 PM, Damien Radtke wrote: > The function pointer is indeed a function pointer and all of the strings > and vectors are ~, but the vector type is &'static. They're meant to hold > references to card definitions, which is more efficient than passing around > the cards themselves. I tried modifying the vectors to hold ~-strings > instead, but it still didn't work. > > Looks like I'll need to do more research on Send. > > > On Fri, Feb 14, 2014 at 3:19 PM, Kevin Ballard wrote: > >> Depends. If the string or the vectors are & instead of ~, that would do >> it. Also, if the element type of the vector does not fulfill Send. Oh, and >> the function pointer is a function pointer, not a closure, right? >> >> -Kevin >> >> On Feb 14, 2014, at 12:59 PM, Damien Radtke >> wrote: >> >> Unfortunately, the type that maintains the state apparently doesn't >> fulfill Send, which confuses me because it's a struct that consists of a >> string, function pointer, and a few dynamically-sized vectors. Which of >> these types makes the struct as a whole violate Send? >> >> >> On Fri, Feb 14, 2014 at 2:47 PM, Kevin Ballard wrote: >> >>> What if the state's fields are private, and in a different module than >>> the players, but exposes getters to query the state? 
Then the players can't >>> modify it, but if the component that processes the actions has visibility >>> into the state's fields, it can modify them just fine. >>> >>> -Kevin >>> >>> On Feb 14, 2014, at 12:22 PM, Damien Radtke >>> wrote: >>> >>> > I'm trying to write what is essentially a card game simulator in Rust, >>> but I'm running into a bit of a roadblock with Rust's memory management. >>> The gist of what I want to accomplish is: >>> > >>> > 1. In the program's main loop, iterate over several "players" and call >>> their "play" method in turn. >>> > 2. Each "play" method should be able to send requests back to the >>> parent in order to take certain actions, who will validate that the action >>> is possible and update the player's state accordingly. >>> > >>> > The problem I'm running into is that, in order to let a player "play" >>> and have the game validate actions for them, I would need to run each >>> player in their own task, (I considered implementing it as each function >>> call indicating a request for action [e.g. by returning Some(action), or >>> None when finished] and calling it repeatedly until none are taken, but >>> this makes the implementation for each player needlessly complex) but this >>> makes for some tricky situations. >>> > >>> > My current implementation uses a DuplexStream to communicate back and >>> forth, the child sending requests to the parent and the parent sending >>> responses, but then I run into the issue of how to inform the child of >>> their current state, but don't let them modify it outside of sending action >>> requests. >>> > >>> > Ideally I'd like to be able to create an (unsafe) immutable pointer to >>> the state held by the parent as mutable, but that gives me a "values differ >>> in mutability" error. 
Other approaches so far have failed as well; Arcs >>> don't work because I need to have one-sided mutability; standard borrowed >>> pointers don't work because the child and parent need to access it at the >>> same time (though only the parent should be able to modify it, ensuring its >>> safety); even copying the state doesn't work because the child then needs >>> to update its local state with a new copy sent by the parent, which is also >>> prone to mutability-related errors. >>> > >>> > Any tips on how to accomplish something like this? >>> > _______________________________________________ >>> > Rust-dev mailing list >>> > Rust-dev at mozilla.org >>> > https://mail.mozilla.org/listinfo/rust-dev >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arielb1 at mail.tau.ac.il Fri Feb 14 04:50:49 2014 From: arielb1 at mail.tau.ac.il (Ariel Ben-Yehuda) Date: Fri, 14 Feb 2014 14:50:49 +0200 Subject: [rust-dev] RFC: Struct Self-Borrows Message-ID: Hi, Currently, Rust has C#-like implicit variance - actually it's even more implicit than C#, being inferred automatically - which is nice in being boilerplate-light. However, in some cases more explicit variance would be desired, as in this example: Say Library A has a mutable AContext, and Library B, which depends on Library A, has a BContext which depends on the AContext. We could type this as follows: struct AContext { priv state: RefCell, ... } fn create_acontext() -> AContext; struct BContext<'a> { acontext: &'a AContext, ... } fn create_bcontext<'a>(&'a AContext) -> BContext<'a>; And use it like: let acontext = create_acontext(); let bcontext = create_bcontext_(&acontext); (Note that acontext: ~AContext won't work, because Library B objects may have a reference to a subobject of the AContext, and besides, the user may like to use the AContext themselves.) However, this forces the user of Library B to interact with its dependencies, and if Library B depends on half a dozen other libraries and the user creates ten contexts in his tests, we have a problem. We would like to create a function that creates a BContext and an AContext together - however, someone needs to manage the ownership of the AContext and make sure it goes with the BContext. A potential solution is to use Rc, but this creates Rc pollution when the lifetime is static - now objects with a pointer to part of AInternalState need to use an Rc. Note that the lifetime of the inner AContext is completely static here - it goes with the BContext. We could have something like this: struct BContextHolder { priv acontext: ~AContext, ctx: BContext<'something> } fn create_bcontext() -> BContextHolder { let actx = ~create_acontext(); BContextHolder { acontext: actx, ctx: create_bcontext_(&actx) } } However, this isn't legal Rust - we can't find a good lifetime for 'something - and besides, acontext isn't borrowed, so people can free it. I think I found a way to solve this - allow struct fields to have borrows into *other fields of the same struct*. A (bad) syntax for this is struct BContextHolder { priv &'a acontext: ~AContext, ctx: BContext<'a> } Which means that the field acontext is borrowed for the lifetime 'a, which is also the lifetime of ctx. I didn't formalise this, but from where I looked we essentially need to: 1) Make sure that people accessing acontext see it as borrowed 2) Make sure you can't just change acontext 3) Make sure the borrow relationships are not strengthened when structuring/destructuring.
This post is getting tl;dr already so I'll end it here. -- Ariel Ben-Yehuda -------------- next part -------------- An HTML attachment was scrubbed... URL: From com.liigo at gmail.com Fri Feb 14 17:16:23 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Sat, 15 Feb 2014 09:16:23 +0800 Subject: [rust-dev] Help, what's the meaning of `unknown` in `x86_64-unknown-linux-gnu`? Message-ID: Hello Rusties: I'm using Debian 7.4 Linux, not "unknown linux" obviously. And I don't know the meaning of `-gnu`. On Windows, it is `x86-pc-mingw32`, which is quite meaningful and easy to understand. Thank you. -- by *Liigo*, http://blog.csdn.net/liigo/ Google+ https://plus.google.com/105597640837742873343/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From com.liigo at gmail.com Fri Feb 14 17:31:30 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Sat, 15 Feb 2014 09:31:30 +0800 Subject: [rust-dev] Help, what's the meaning of `unknown` in `x86_64-unknown-linux-gnu`? In-Reply-To: References: Message-ID: 2014-02-15 9:26 GMT+08:00 Lee Braiden : > Unknown-linux presumably means generic linux, and GNU you should probably > learn about, for your own good, at gnu.org, especially gnu.org/philosophy. > > Hint: much of what people think of as "Linux" is actually part of GNU, or > using GNU. > If so, why not `x86_64-gnu-linux`? -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Fri Feb 14 17:41:36 2014 From: banderson at mozilla.com (Brian Anderson) Date: Fri, 14 Feb 2014 17:41:36 -0800 Subject: [rust-dev] Help, what's the meaning of `unknown` in `x86_64-unknown-linux-gnu`?
In-Reply-To: References: Message-ID: <52FEC5D0.3020109@mozilla.com> This may be the most canonical description of target triples (autoconf config names): https://sourceware.org/autobook/autobook/autobook_17.html Triples are just a short way of identifying a compilation target, and their naming is mostly out of our hands, established by historical precedent. The individual components of the triple mean very little - it's generally the entire string used to identify a platform. "unknown" is a common vendor name where there's no obvious vendor, shows up a lot in linux triples, though `x86_64-pc-linux-gnu` is also common; "-gnu" probably means the target has a GNU userspace. On 02/14/2014 05:16 PM, Liigo Zhuang wrote: > Hello Rusties: > > I'm using Debian 7.4 Linux, not "unknown linux" obviously. > And I don't know the meaning of `-gnu`. > > On Windows, that it `x86-pc-mingw32`, which is quite meaningful to > understand. > > Thank you. > > -- > by *Liigo*, http://blog.csdn.net/liigo/ > Google+ https://plus.google.com/105597640837742873343/ > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From com.liigo at gmail.com Fri Feb 14 17:44:54 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Sat, 15 Feb 2014 09:44:54 +0800 Subject: [rust-dev] Help, what's the meaning of `unknown` in `x86_64-unknown-linux-gnu`? In-Reply-To: References: Message-ID: Thank you. But I don't think "unknown" is a meaningful word here, it says nothing. You didn't send to rust-dev at mozilla.org, so no other people can receive your email except me. 2014-02-15 9:37 GMT+08:00 Lee Braiden : > The elements of these "triplets" (each of the parts separated by dashes) > have a specific order and meaning, so they can't just be randomly rephrased > on a per-combination basis. 
> > They're not meant to be pretty English, but to encode information in a > semi-readable format. > On 15 Feb 2014 01:31, "Liigo Zhuang" wrote: > >> 2014-02-15 9:26 GMT+08:00 Lee Braiden : >> >>> Unknown-linux presumably means generic linux, and GNU you should >>> probably learn about, fir your own good, at gnu.org especially >>> gnu.org/philosophy. >>> >>> Hint: much of what people think of as "Linux" is actually part of GNU, >>> or using GNU. >>> >> If so, why not `x86_64-gnu-linux`? >> > -- by *Liigo*, http://blog.csdn.net/liigo/ Google+ https://plus.google.com/105597640837742873343/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From com.liigo at gmail.com Fri Feb 14 17:49:33 2014 From: com.liigo at gmail.com (Liigo Zhuang) Date: Sat, 15 Feb 2014 09:49:33 +0800 Subject: [rust-dev] Help, what's the meaning of `unknown` in `x86_64-unknown-linux-gnu`? In-Reply-To: <52FEC5D0.3020109@mozilla.com> References: <52FEC5D0.3020109@mozilla.com> Message-ID: Nice answer. Thank you! 2014-02-15 9:41 GMT+08:00 Brian Anderson : > This may be the most canonical description of target triples (autoconf > config names): https://sourceware.org/autobook/autobook/autobook_17.html > > Triples are just a short way of identifying a compilation target, and > their naming is mostly out of our hands, established by historical > precedent. The individual components of the triple mean very little - it's > generally the entire string used to identify a platform. "unknown" is a > common vendor name where there's no obvious vendor, shows up a lot in linux > triples, though `x86_64-pc-linux-gnu` is also common; "-gnu" probably means > the target has a GNU userspace. > > > On 02/14/2014 05:16 PM, Liigo Zhuang wrote: > > Hello Rusties: > > I'm using Debian 7.4 Linux, not "unknown linux" obviously. > And I don't know the meaning of `-gnu`. > > On Windows, that it `x86-pc-mingw32`, which is quite meaningful to > understand. > > Thank you. 
> > -- > by *Liigo*, http://blog.csdn.net/liigo/ > Google+ https://plus.google.com/105597640837742873343/ > > > _______________________________________________ > Rust-dev mailing listRust-dev at mozilla.orghttps://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -- by *Liigo*, http://blog.csdn.net/liigo/ Google+ https://plus.google.com/105597640837742873343/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From explodingmind at gmail.com Fri Feb 14 20:30:42 2014 From: explodingmind at gmail.com (Ian Daniher) Date: Fri, 14 Feb 2014 23:30:42 -0500 Subject: [rust-dev] [Boston] Rust talk on Monday In-Reply-To: References: Message-ID: 02/17. 6pm. Parking at 42.292987, -71.265391. Map at http://itdaniher.com/static/rustOlin0217.png. -- Ian On Fri, Feb 14, 2014 at 11:29 AM, Ian Daniher wrote: > Hello fellow Rustafarians! > > Monday the 17th, at 6pm, at 1000 Olin Way, Needham, MA, there will be free > pizza for anyone who shows up. > > I'll be giving a ~30m talk on my work using Rust for software radio > demodulation, decoding, and parsing, focusing on some neat datastructure > and architectural decisions I've made. > > The focus of the talk will be on https://github.com/ade-ma/LibRedio, my > preliminary work on a sort of "gnuradio-lite" my capstone team is using for > making low-cost automation tools. > > Contact me with questions, comments, and RSVPs! > > Best, > -- > Ian Daniher > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svetoslav at neykov.name Sat Feb 15 05:36:45 2014 From: svetoslav at neykov.name (Svetoslav Neykov) Date: Sat, 15 Feb 2014 15:36:45 +0200 Subject: [rust-dev] [PATCH] Add stack overflow check for ARM Thumb instruction set. 
Message-ID: <1392471405-18189-1-git-send-email-svetoslav@neykov.name> Hi, I am working on getting Rust to directly compile code for bare metal ARM devices working in Thumb mode. I created a patch for LLVM to emit the appropriate function prologue. Since I couldn't find instructions on how to submit the change for review and inclusion in Rust's copy of LLVM, I am sending it here on the dev mailing list. Besides the mechanical differences between the ARM and Thumb functions, because of the different instruction sets, there is a difference in how the stack limit is located. The ARM version uses hardware which isn't available on the lower-end Thumb processors (namely the system co-processor and MMU), therefore I am looking for the stack limit at a predefined location in memory - STACK_LIMIT. It is the responsibility of the wrapping runtime to manage this location with the correct value. It can vary from a simple constant defined by the linker to a variable actively managed by an RTOS implementation. (thanks to whitequark for discussing the possible approaches) There is an old pull request for Rust which was the precursor to this change, located at https://github.com/mozilla/rust/pull/10942. Once the patch is accepted I will try to update it to the latest changes in the repository. Here is the patch itself: =============================================================================== Add stack overflow check for ARM Thumb instruction set. The code assumes that the stack limit will be located at the address labeled STACK_LIMIT.
--- lib/Target/ARM/ARMFrameLowering.cpp | 184 +++++++++++++++++++++++++++++++++++- lib/Target/ARM/ARMFrameLowering.h | 2 + 2 files changed, 185 insertions(+), 1 deletion(-) diff --git a/lib/Target/ARM/ARMFrameLowering.cpp b/lib/Target/ARM/ARMFrameLowering.cpp index bdf0480..c286228 100644 --- a/lib/Target/ARM/ARMFrameLowering.cpp +++ b/lib/Target/ARM/ARMFrameLowering.cpp @@ -14,6 +14,7 @@ #include "ARMFrameLowering.h" #include "ARMBaseInstrInfo.h" #include "ARMBaseRegisterInfo.h" +#include "ARMConstantPoolValue.h" #include "ARMInstrInfo.h" #include "ARMMachineFunctionInfo.h" #include "ARMTargetMachine.h" @@ -1481,10 +1482,20 @@ static uint32_t AlignToARMConstant(uint32_t Value) { // stack limit. static const uint64_t kSplitStackAvailable = 256; +void +ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { + const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); + if(ST->isThumb()) { + adjustForSegmentedStacksThumb(MF); + } else { + adjustForSegmentedStacksARM(MF); + } +} + // Adjust function prologue to enable split stack. // Only support android and linux. void -ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { +ARMFrameLowering::adjustForSegmentedStacksARM(MachineFunction &MF) const { const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); // Doesn't support vararg function. @@ -1697,3 +1708,174 @@ ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { MF.verify(); #endif } + +void +ARMFrameLowering::adjustForSegmentedStacksThumb(MachineFunction &MF) const { +// const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); + + // Doesn't support vararg function. + if (MF.getFunction()->isVarArg()) + report_fatal_error("Segmented stacks do not support vararg functions."); + + MachineBasicBlock &prologueMBB = MF.front(); + MachineFrameInfo* MFI = MF.getFrameInfo(); + const ARMBaseInstrInfo &TII = *TM.getInstrInfo(); + ARMFunctionInfo* ARMFI = MF.getInfo(); + DebugLoc DL; + + // Use R4 and R5 as scratch register. 
+ // We should save R4 and R5 before use it and restore before + // leave the function. + unsigned ScratchReg0 = ARM::R4; + unsigned ScratchReg1 = ARM::R5; + uint64_t AlignedStackSize; + + MachineBasicBlock* prevStackMBB = MF.CreateMachineBasicBlock(); + MachineBasicBlock* postStackMBB = MF.CreateMachineBasicBlock(); + MachineBasicBlock* allocMBB = MF.CreateMachineBasicBlock(); + MachineBasicBlock* getMBB = MF.CreateMachineBasicBlock(); + MachineBasicBlock* mcrMBB = MF.CreateMachineBasicBlock(); + MachineBasicBlock* magicMBB = MF.CreateMachineBasicBlock(); + + for (MachineBasicBlock::livein_iterator i = prologueMBB.livein_begin(), + e = prologueMBB.livein_end(); i != e; ++i) { + allocMBB->addLiveIn(*i); + getMBB->addLiveIn(*i); + magicMBB->addLiveIn(*i); + mcrMBB->addLiveIn(*i); + prevStackMBB->addLiveIn(*i); + postStackMBB->addLiveIn(*i); + } + + MF.push_front(postStackMBB); + MF.push_front(allocMBB); + MF.push_front(getMBB); + MF.push_front(magicMBB); + MF.push_front(mcrMBB); + MF.push_front(prevStackMBB); + + // The required stack size that is aligend to ARM constant critarion. + uint64_t StackSize = MFI->getStackSize(); + + AlignedStackSize = AlignToARMConstant(StackSize); + + // When the frame size is less than 256 we just compare the stack + // boundary directly to the value of the stack pointer, per gcc. + bool CompareStackPointer = AlignedStackSize < kSplitStackAvailable; + + // We will use two of callee save registers as scratch register so we + // need to save those registers into stack frame before use it. + // We will use SR0 to hold stack limit and SR1 to stack size requested. + // and arguments for __morestack(). 
+ // SR0: Scratch Register #0 + // SR1: Scratch Register #1 + // push {SR0, SR1} + AddDefaultPred(BuildMI(prevStackMBB, DL, TII.get(ARM::tPUSH))) + .addReg(ScratchReg0) + .addReg(ScratchReg1); + + if (CompareStackPointer) { + // mov SR1, sp + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tMOVr), ScratchReg1) + .addReg(ARM::SP)); + } else { + // sub SR1, sp, #StackSize + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tSUBi8), ScratchReg1) + .addReg(ARM::SP).addImm(AlignedStackSize)); + } + + unsigned PCLabelId = ARMFI->createPICLabelUId(); + ARMConstantPoolValue *NewCPV = ARMConstantPoolSymbol:: + Create(MF.getFunction()->getContext(), "STACK_LIMIT", PCLabelId, 0); + MachineConstantPool *MCP = MF.getConstantPool(); + unsigned CPI = MCP->getConstantPoolIndex(NewCPV, MF.getAlignment()); + + //ldr SR0, [pc, offset(STACK_LIMIT)] + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRpci), ScratchReg0) + .addConstantPoolIndex(CPI)); + + //ldr SR0, [SR0] + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRi), ScratchReg0) + .addReg(ScratchReg0) + .addImm(0)); + + // Compare stack limit with stack size requested. + // cmp SR0, SR1 + AddDefaultPred(BuildMI(getMBB, DL, TII.get(ARM::tCMPr)) + .addReg(ScratchReg0) + .addReg(ScratchReg1)); + + // This jump is taken if StackLimit < SP - stack required. + BuildMI(getMBB, DL, TII.get(ARM::tBcc)) + .addMBB(postStackMBB) + .addImm(ARMCC::LO) + .addReg(ARM::CPSR); + + + // Calling __morestack(StackSize, Size of stack arguments). + // __morestack knows that the stack size requested is in SR0(r4) + // and amount size of stack arguments is in SR1(r5). + + // Pass first argument for the __morestack by Scratch Register #0. + // The amount size of stack required + AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg0)) + .addImm(AlignedStackSize)); + // Pass second argument for the __morestack by Scratch Register #1. + // The amount size of stack consumed to save function arguments. 
+ AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg1)) + .addImm(AlignToARMConstant(ARMFI->getArgumentStackSize()))); + + // push {lr} - Save return address of this function. + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPUSH))) + .addReg(ARM::LR); + + // Call __morestack(). + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tBL))) + .addExternalSymbol("__morestack"); + + // Restore return address of this original function. + // pop {SR0} + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP))) + .addReg(ScratchReg0); + + // mov lr, SR0 + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::LR) + .addReg(ScratchReg0)); + + // Restore SR0 and SR1 in case of __morestack() was called. + // __morestack() will skip postStackMBB block so we need to restore + // scratch registers from here. + // pop {SR0, SR1} + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP))) + .addReg(ScratchReg0) + .addReg(ScratchReg1); + + // Return from this function. + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::PC) + .addReg(ARM::LR)); + + // Restore SR0 and SR1 in case of __morestack() was not called. 
+  // pop {SR0, SR1}
+  AddDefaultPred(BuildMI(postStackMBB, DL, TII.get(ARM::tPOP)))
+      .addReg(ScratchReg0)
+      .addReg(ScratchReg1);
+
+  // Organize the MBB lists.
+  postStackMBB->addSuccessor(&prologueMBB);
+
+  allocMBB->addSuccessor(postStackMBB);
+
+  getMBB->addSuccessor(postStackMBB);
+  getMBB->addSuccessor(allocMBB);
+
+  magicMBB->addSuccessor(getMBB);
+
+  mcrMBB->addSuccessor(getMBB);
+  mcrMBB->addSuccessor(magicMBB);
+
+  prevStackMBB->addSuccessor(mcrMBB);
+
+#ifdef XDEBUG
+  MF.verify();
+#endif
+}
diff --git a/lib/Target/ARM/ARMFrameLowering.h b/lib/Target/ARM/ARMFrameLowering.h
index 16b477a..0cb8e5a 100644
--- a/lib/Target/ARM/ARMFrameLowering.h
+++ b/lib/Target/ARM/ARMFrameLowering.h
@@ -62,6 +62,8 @@ public:
                                    RegScavenger *RS) const;
 
   void adjustForSegmentedStacks(MachineFunction &MF) const;
+  void adjustForSegmentedStacksThumb(MachineFunction &MF) const;
+  void adjustForSegmentedStacksARM(MachineFunction &MF) const;
 
 private:
   void emitPushInst(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
-- 
1.8.3.2

From alex at crichton.co  Sat Feb 15 15:15:36 2014
From: alex at crichton.co (Alex Crichton)
Date: Sat, 15 Feb 2014 18:15:36 -0500
Subject: [rust-dev] [PATCH] Add stack overflow check for ARM Thumb instruction set.
In-Reply-To: <1392471405-18189-1-git-send-email-svetoslav@neykov.name>
References: <1392471405-18189-1-git-send-email-svetoslav@neykov.name>
Message-ID: 

For LLVM patches, we prefer that you first attempt to upstream the
patch to LLVM before we push it to our local fork. This normally
entails emailing the llvm-commits mailing list. Once this upstream
attempt has been made, you can open a PR against the rust-lang/llvm
repo on github.

This looks pretty awesome though, nice work!


On Sat, Feb 15, 2014 at 8:36 AM, Svetoslav Neykov wrote:
> Hi,
>
> I am working on getting Rust to directly compile code for bare metal ARM
> devices working in Thumb mode. I created a patch for LLVM to emit
> the appropriate function prologue.
Since I couldn't find instructions on how
> to submit the change for review and inclusion in Rust's copy of LLVM I
> am sending it here on the dev mailing list.
>
> Besides the mechanical differences between the ARM and Thumb functions,
> because of the different instruction sets, there is a difference in how the
> stack limit is located. The ARM version uses hardware which isn't available
> on the lower-end Thumb processors (namely the system co-processor and MMU),
> therefore I am looking for the stack limit at a predefined location in
> memory - STACK_LIMIT. It is the responsibility of the wrapping runtime
> to manage this location with the correct value. It can vary from a simple
> constant defined by the linker to a variable actively managed by an RTOS
> implementation.
> (thanks to whitequark for discussing the possible approaches)
>
> There is an old pull request for Rust which was the precursor to this change
> located at https://github.com/mozilla/rust/pull/10942. Once the patch is
> accepted I will try to update it to the latest changes in the repository.
>
> Here is the patch itself:
> ===============================================================================
>
> Add stack overflow check for ARM Thumb instruction set.
>
> The code assumes that the stack limit will be located at the
> address labeled STACK_LIMIT.
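[The decision the emitted prologue makes can be modeled in ordinary code. The following Rust sketch is illustrative only: `stack_check_passes` is a made-up name, `stack_limit` stands for the word stored at the STACK_LIMIT address, and the fast-path constant mirrors `kSplitStackAvailable` from the patch.]

```rust
// Model of the split-stack check the Thumb prologue emits (illustrative).
const K_SPLIT_STACK_AVAILABLE: usize = 256;

// Returns true when the function may run on the current stack, i.e. when
// the prologue's conditional branch skips the __morestack call.
fn stack_check_passes(sp: usize, aligned_frame_size: usize, stack_limit: usize) -> bool {
    // Small frames compare the stack pointer directly (the
    // CompareStackPointer path: `mov SR1, sp`); larger frames subtract
    // the aligned frame size first (`sub SR1, sp, #StackSize`).
    let probe = if aligned_frame_size < K_SPLIT_STACK_AVAILABLE {
        sp
    } else {
        sp - aligned_frame_size
    };
    // cmp SR0, SR1 ; bcc <postStackMBB> -- LO is an *unsigned* lower-than,
    // so the branch is taken when STACK_LIMIT < SP - stack required.
    stack_limit < probe
}

fn main() {
    // Plenty of headroom: the limit sits well below sp - frame size.
    assert!(stack_check_passes(0x2000_0000, 4096, 0x1000_0000));
    // Not enough headroom: falls through to the __morestack call.
    assert!(!stack_check_passes(0x2000_0000, 4096, 0x2000_0000));
    // A frame under 256 bytes compares sp itself against the limit.
    assert!(stack_check_passes(0x1000, 64, 0x800));
}
```

Note that `LO` being an unsigned comparison is what makes this correct for addresses anywhere in the 32-bit space.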
> --- > lib/Target/ARM/ARMFrameLowering.cpp | 184 +++++++++++++++++++++++++++++++++++- > lib/Target/ARM/ARMFrameLowering.h | 2 + > 2 files changed, 185 insertions(+), 1 deletion(-) > > diff --git a/lib/Target/ARM/ARMFrameLowering.cpp b/lib/Target/ARM/ARMFrameLowering.cpp > index bdf0480..c286228 100644 > --- a/lib/Target/ARM/ARMFrameLowering.cpp > +++ b/lib/Target/ARM/ARMFrameLowering.cpp > @@ -14,6 +14,7 @@ > #include "ARMFrameLowering.h" > #include "ARMBaseInstrInfo.h" > #include "ARMBaseRegisterInfo.h" > +#include "ARMConstantPoolValue.h" > #include "ARMInstrInfo.h" > #include "ARMMachineFunctionInfo.h" > #include "ARMTargetMachine.h" > @@ -1481,10 +1482,20 @@ static uint32_t AlignToARMConstant(uint32_t Value) { > // stack limit. > static const uint64_t kSplitStackAvailable = 256; > > +void > +ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { > + const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); > + if(ST->isThumb()) { > + adjustForSegmentedStacksThumb(MF); > + } else { > + adjustForSegmentedStacksARM(MF); > + } > +} > + > // Adjust function prologue to enable split stack. > // Only support android and linux. > void > -ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { > +ARMFrameLowering::adjustForSegmentedStacksARM(MachineFunction &MF) const { > const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); > > // Doesn't support vararg function. > @@ -1697,3 +1708,174 @@ ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { > MF.verify(); > #endif > } > + > +void > +ARMFrameLowering::adjustForSegmentedStacksThumb(MachineFunction &MF) const { > +// const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); > + > + // Doesn't support vararg function. 
> + if (MF.getFunction()->isVarArg()) > + report_fatal_error("Segmented stacks do not support vararg functions."); > + > + MachineBasicBlock &prologueMBB = MF.front(); > + MachineFrameInfo* MFI = MF.getFrameInfo(); > + const ARMBaseInstrInfo &TII = *TM.getInstrInfo(); > + ARMFunctionInfo* ARMFI = MF.getInfo(); > + DebugLoc DL; > + > + // Use R4 and R5 as scratch register. > + // We should save R4 and R5 before use it and restore before > + // leave the function. > + unsigned ScratchReg0 = ARM::R4; > + unsigned ScratchReg1 = ARM::R5; > + uint64_t AlignedStackSize; > + > + MachineBasicBlock* prevStackMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* postStackMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* allocMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* getMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* mcrMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* magicMBB = MF.CreateMachineBasicBlock(); > + > + for (MachineBasicBlock::livein_iterator i = prologueMBB.livein_begin(), > + e = prologueMBB.livein_end(); i != e; ++i) { > + allocMBB->addLiveIn(*i); > + getMBB->addLiveIn(*i); > + magicMBB->addLiveIn(*i); > + mcrMBB->addLiveIn(*i); > + prevStackMBB->addLiveIn(*i); > + postStackMBB->addLiveIn(*i); > + } > + > + MF.push_front(postStackMBB); > + MF.push_front(allocMBB); > + MF.push_front(getMBB); > + MF.push_front(magicMBB); > + MF.push_front(mcrMBB); > + MF.push_front(prevStackMBB); > + > + // The required stack size that is aligend to ARM constant critarion. > + uint64_t StackSize = MFI->getStackSize(); > + > + AlignedStackSize = AlignToARMConstant(StackSize); > + > + // When the frame size is less than 256 we just compare the stack > + // boundary directly to the value of the stack pointer, per gcc. > + bool CompareStackPointer = AlignedStackSize < kSplitStackAvailable; > + > + // We will use two of callee save registers as scratch register so we > + // need to save those registers into stack frame before use it. 
> + // We will use SR0 to hold stack limit and SR1 to stack size requested. > + // and arguments for __morestack(). > + // SR0: Scratch Register #0 > + // SR1: Scratch Register #1 > + // push {SR0, SR1} > + AddDefaultPred(BuildMI(prevStackMBB, DL, TII.get(ARM::tPUSH))) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1); > + > + if (CompareStackPointer) { > + // mov SR1, sp > + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tMOVr), ScratchReg1) > + .addReg(ARM::SP)); > + } else { > + // sub SR1, sp, #StackSize > + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tSUBi8), ScratchReg1) > + .addReg(ARM::SP).addImm(AlignedStackSize)); > + } > + > + unsigned PCLabelId = ARMFI->createPICLabelUId(); > + ARMConstantPoolValue *NewCPV = ARMConstantPoolSymbol:: > + Create(MF.getFunction()->getContext(), "STACK_LIMIT", PCLabelId, 0); > + MachineConstantPool *MCP = MF.getConstantPool(); > + unsigned CPI = MCP->getConstantPoolIndex(NewCPV, MF.getAlignment()); > + > + //ldr SR0, [pc, offset(STACK_LIMIT)] > + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRpci), ScratchReg0) > + .addConstantPoolIndex(CPI)); > + > + //ldr SR0, [SR0] > + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRi), ScratchReg0) > + .addReg(ScratchReg0) > + .addImm(0)); > + > + // Compare stack limit with stack size requested. > + // cmp SR0, SR1 > + AddDefaultPred(BuildMI(getMBB, DL, TII.get(ARM::tCMPr)) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1)); > + > + // This jump is taken if StackLimit < SP - stack required. > + BuildMI(getMBB, DL, TII.get(ARM::tBcc)) > + .addMBB(postStackMBB) > + .addImm(ARMCC::LO) > + .addReg(ARM::CPSR); > + > + > + // Calling __morestack(StackSize, Size of stack arguments). > + // __morestack knows that the stack size requested is in SR0(r4) > + // and amount size of stack arguments is in SR1(r5). > + > + // Pass first argument for the __morestack by Scratch Register #0. 
> + // The amount size of stack required > + AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg0)) > + .addImm(AlignedStackSize)); > + // Pass second argument for the __morestack by Scratch Register #1. > + // The amount size of stack consumed to save function arguments. > + AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg1)) > + .addImm(AlignToARMConstant(ARMFI->getArgumentStackSize()))); > + > + // push {lr} - Save return address of this function. > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPUSH))) > + .addReg(ARM::LR); > + > + // Call __morestack(). > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tBL))) > + .addExternalSymbol("__morestack"); > + > + // Restore return address of this original function. > + // pop {SR0} > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP))) > + .addReg(ScratchReg0); > + > + // mov lr, SR0 > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::LR) > + .addReg(ScratchReg0)); > + > + // Restore SR0 and SR1 in case of __morestack() was called. > + // __morestack() will skip postStackMBB block so we need to restore > + // scratch registers from here. > + // pop {SR0, SR1} > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP))) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1); > + > + // Return from this function. > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::PC) > + .addReg(ARM::LR)); > + > + // Restore SR0 and SR1 in case of __morestack() was not called. 
> + // pop {SR0, SR1} > + AddDefaultPred(BuildMI(postStackMBB, DL, TII.get(ARM::tPOP))) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1); > + > + // Organizing MBB lists > + postStackMBB->addSuccessor(&prologueMBB); > + > + allocMBB->addSuccessor(postStackMBB); > + > + getMBB->addSuccessor(postStackMBB); > + getMBB->addSuccessor(allocMBB); > + > + magicMBB->addSuccessor(getMBB); > + > + mcrMBB->addSuccessor(getMBB); > + mcrMBB->addSuccessor(magicMBB); > + > + prevStackMBB->addSuccessor(mcrMBB); > + > +#ifdef XDEBUG > + MF.verify(); > +#endif > +} > diff --git a/lib/Target/ARM/ARMFrameLowering.h b/lib/Target/ARM/ARMFrameLowering.h > index 16b477a..0cb8e5a 100644 > --- a/lib/Target/ARM/ARMFrameLowering.h > +++ b/lib/Target/ARM/ARMFrameLowering.h > @@ -62,6 +62,8 @@ public: > RegScavenger *RS) const; > > void adjustForSegmentedStacks(MachineFunction &MF) const; > + void adjustForSegmentedStacksThumb(MachineFunction &MF) const; > + void adjustForSegmentedStacksARM(MachineFunction &MF) const; > > private: > void emitPushInst(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI, > -- > 1.8.3.2 > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From svetoslav at neykov.name Sun Feb 16 01:08:25 2014 From: svetoslav at neykov.name (Svetoslav Neykov) Date: Sun, 16 Feb 2014 11:08:25 +0200 Subject: [rust-dev] [PATCH] Add stack overflow check for ARM Thumb instruction set. In-Reply-To: References: <1392471405-18189-1-git-send-email-svetoslav@neykov.name> Message-ID: <000001cf2af6$a8c4e0b0$fa4ea210$@neykov.name> I don't find any of the ARM split stack changes in the LLVM tree, just a single patch to the llvm-commits a year ago with no followup. (http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130318/168838 .html) Since my changes depend on the ARM changes it doesn't make sense to try to merge them before the previous changes are accepted. 
I guess I should use the rust-llvm-2014-02-11 branch as the base for my PR? Svetoslav. -----Original Message----- From: alexc605 at gmail.com [mailto:alexc605 at gmail.com] On Behalf Of Alex Crichton Sent: Sunday, February 16, 2014 1:16 AM To: Svetoslav Neykov Cc: rust-dev at mozilla.org Subject: Re: [rust-dev] [PATCH] Add stack overflow check for ARM Thumb instruction set. For LLVM patches, we prefer if you have first attempted to upstream the patch with LLVM before we push it to our local fork. This normally entails emailing the llvm-commits mailing list. Once this upstream attempt has been made, you can open a PR against the rust-lang/llvm repo on github. This looks pretty awesome though, nice work! On Sat, Feb 15, 2014 at 8:36 AM, Svetoslav Neykov wrote: > Hi, > > I am working on getting Rust to directly compile code for bare metal ARM > devices working in Thumb mode. I created a patch for LLVM to emit > the appropriate function prologue. Since I couldn't find instructions on how > to submit the change for review and inclusion in the Rust's copy of LLVM I > am sending it here on the dev mailing list. > > Besides the mechanincal differences between the ARM and Thumb functions, > because of the different instruction sets, there is difference in how the > stack limit is located. The ARM version uses hardware which isn't available > on the lower-end Thumb processors (namely system co-processor and MMU) > therefore I am looking for the stack limit at a predefined location in > memory - STACK_LIMIT. It is the responsibility of the wrapping runtime > to manage this location with the correct value. It can vary from a simple > constant defined by the linker to actively managed variable by a RTOS > implementation. > (thanks to whitequark for discussing the possible approaches) > > There is an old pull request for Rust which was the precursor to this change > located at https://github.com/mozilla/rust/pull/10942. 
Once the patch is > accepted I will try to update it to the latest changes in the repository. > > Here is the patch itself: > ============================================================================ === > > Add stack overflow check for ARM Thumb instruction set. > > The code assumes that the stack limit will be located at the > address labeled STACK_LIMIT. > --- > lib/Target/ARM/ARMFrameLowering.cpp | 184 +++++++++++++++++++++++++++++++++++- > lib/Target/ARM/ARMFrameLowering.h | 2 + > 2 files changed, 185 insertions(+), 1 deletion(-) > > diff --git a/lib/Target/ARM/ARMFrameLowering.cpp b/lib/Target/ARM/ARMFrameLowering.cpp > index bdf0480..c286228 100644 > --- a/lib/Target/ARM/ARMFrameLowering.cpp > +++ b/lib/Target/ARM/ARMFrameLowering.cpp > @@ -14,6 +14,7 @@ > #include "ARMFrameLowering.h" > #include "ARMBaseInstrInfo.h" > #include "ARMBaseRegisterInfo.h" > +#include "ARMConstantPoolValue.h" > #include "ARMInstrInfo.h" > #include "ARMMachineFunctionInfo.h" > #include "ARMTargetMachine.h" > @@ -1481,10 +1482,20 @@ static uint32_t AlignToARMConstant(uint32_t Value) { > // stack limit. > static const uint64_t kSplitStackAvailable = 256; > > +void > +ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { > + const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); > + if(ST->isThumb()) { > + adjustForSegmentedStacksThumb(MF); > + } else { > + adjustForSegmentedStacksARM(MF); > + } > +} > + > // Adjust function prologue to enable split stack. > // Only support android and linux. > void > -ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { > +ARMFrameLowering::adjustForSegmentedStacksARM(MachineFunction &MF) const { > const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); > > // Doesn't support vararg function. 
> @@ -1697,3 +1708,174 @@ ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { > MF.verify(); > #endif > } > + > +void > +ARMFrameLowering::adjustForSegmentedStacksThumb(MachineFunction &MF) const { > +// const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); > + > + // Doesn't support vararg function. > + if (MF.getFunction()->isVarArg()) > + report_fatal_error("Segmented stacks do not support vararg functions."); > + > + MachineBasicBlock &prologueMBB = MF.front(); > + MachineFrameInfo* MFI = MF.getFrameInfo(); > + const ARMBaseInstrInfo &TII = *TM.getInstrInfo(); > + ARMFunctionInfo* ARMFI = MF.getInfo(); > + DebugLoc DL; > + > + // Use R4 and R5 as scratch register. > + // We should save R4 and R5 before use it and restore before > + // leave the function. > + unsigned ScratchReg0 = ARM::R4; > + unsigned ScratchReg1 = ARM::R5; > + uint64_t AlignedStackSize; > + > + MachineBasicBlock* prevStackMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* postStackMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* allocMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* getMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* mcrMBB = MF.CreateMachineBasicBlock(); > + MachineBasicBlock* magicMBB = MF.CreateMachineBasicBlock(); > + > + for (MachineBasicBlock::livein_iterator i = prologueMBB.livein_begin(), > + e = prologueMBB.livein_end(); i != e; ++i) { > + allocMBB->addLiveIn(*i); > + getMBB->addLiveIn(*i); > + magicMBB->addLiveIn(*i); > + mcrMBB->addLiveIn(*i); > + prevStackMBB->addLiveIn(*i); > + postStackMBB->addLiveIn(*i); > + } > + > + MF.push_front(postStackMBB); > + MF.push_front(allocMBB); > + MF.push_front(getMBB); > + MF.push_front(magicMBB); > + MF.push_front(mcrMBB); > + MF.push_front(prevStackMBB); > + > + // The required stack size that is aligend to ARM constant critarion. 
> + uint64_t StackSize = MFI->getStackSize(); > + > + AlignedStackSize = AlignToARMConstant(StackSize); > + > + // When the frame size is less than 256 we just compare the stack > + // boundary directly to the value of the stack pointer, per gcc. > + bool CompareStackPointer = AlignedStackSize < kSplitStackAvailable; > + > + // We will use two of callee save registers as scratch register so we > + // need to save those registers into stack frame before use it. > + // We will use SR0 to hold stack limit and SR1 to stack size requested. > + // and arguments for __morestack(). > + // SR0: Scratch Register #0 > + // SR1: Scratch Register #1 > + // push {SR0, SR1} > + AddDefaultPred(BuildMI(prevStackMBB, DL, TII.get(ARM::tPUSH))) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1); > + > + if (CompareStackPointer) { > + // mov SR1, sp > + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tMOVr), ScratchReg1) > + .addReg(ARM::SP)); > + } else { > + // sub SR1, sp, #StackSize > + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tSUBi8), ScratchReg1) > + .addReg(ARM::SP).addImm(AlignedStackSize)); > + } > + > + unsigned PCLabelId = ARMFI->createPICLabelUId(); > + ARMConstantPoolValue *NewCPV = ARMConstantPoolSymbol:: > + Create(MF.getFunction()->getContext(), "STACK_LIMIT", PCLabelId, 0); > + MachineConstantPool *MCP = MF.getConstantPool(); > + unsigned CPI = MCP->getConstantPoolIndex(NewCPV, MF.getAlignment()); > + > + //ldr SR0, [pc, offset(STACK_LIMIT)] > + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRpci), ScratchReg0) > + .addConstantPoolIndex(CPI)); > + > + //ldr SR0, [SR0] > + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRi), ScratchReg0) > + .addReg(ScratchReg0) > + .addImm(0)); > + > + // Compare stack limit with stack size requested. > + // cmp SR0, SR1 > + AddDefaultPred(BuildMI(getMBB, DL, TII.get(ARM::tCMPr)) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1)); > + > + // This jump is taken if StackLimit < SP - stack required. 
> + BuildMI(getMBB, DL, TII.get(ARM::tBcc)) > + .addMBB(postStackMBB) > + .addImm(ARMCC::LO) > + .addReg(ARM::CPSR); > + > + > + // Calling __morestack(StackSize, Size of stack arguments). > + // __morestack knows that the stack size requested is in SR0(r4) > + // and amount size of stack arguments is in SR1(r5). > + > + // Pass first argument for the __morestack by Scratch Register #0. > + // The amount size of stack required > + AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg0)) > + .addImm(AlignedStackSize)); > + // Pass second argument for the __morestack by Scratch Register #1. > + // The amount size of stack consumed to save function arguments. > + AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg1)) > + .addImm(AlignToARMConstant(ARMFI->getArgumentStackSize()))); > + > + // push {lr} - Save return address of this function. > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPUSH))) > + .addReg(ARM::LR); > + > + // Call __morestack(). > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tBL))) > + .addExternalSymbol("__morestack"); > + > + // Restore return address of this original function. > + // pop {SR0} > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP))) > + .addReg(ScratchReg0); > + > + // mov lr, SR0 > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::LR) > + .addReg(ScratchReg0)); > + > + // Restore SR0 and SR1 in case of __morestack() was called. > + // __morestack() will skip postStackMBB block so we need to restore > + // scratch registers from here. > + // pop {SR0, SR1} > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP))) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1); > + > + // Return from this function. > + AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::PC) > + .addReg(ARM::LR)); > + > + // Restore SR0 and SR1 in case of __morestack() was not called. 
> + // pop {SR0, SR1} > + AddDefaultPred(BuildMI(postStackMBB, DL, TII.get(ARM::tPOP))) > + .addReg(ScratchReg0) > + .addReg(ScratchReg1); > + > + // Organizing MBB lists > + postStackMBB->addSuccessor(&prologueMBB); > + > + allocMBB->addSuccessor(postStackMBB); > + > + getMBB->addSuccessor(postStackMBB); > + getMBB->addSuccessor(allocMBB); > + > + magicMBB->addSuccessor(getMBB); > + > + mcrMBB->addSuccessor(getMBB); > + mcrMBB->addSuccessor(magicMBB); > + > + prevStackMBB->addSuccessor(mcrMBB); > + > +#ifdef XDEBUG > + MF.verify(); > +#endif > +} > diff --git a/lib/Target/ARM/ARMFrameLowering.h b/lib/Target/ARM/ARMFrameLowering.h > index 16b477a..0cb8e5a 100644 > --- a/lib/Target/ARM/ARMFrameLowering.h > +++ b/lib/Target/ARM/ARMFrameLowering.h > @@ -62,6 +62,8 @@ public: > RegScavenger *RS) const; > > void adjustForSegmentedStacks(MachineFunction &MF) const; > + void adjustForSegmentedStacksThumb(MachineFunction &MF) const; > + void adjustForSegmentedStacksARM(MachineFunction &MF) const; > > private: > void emitPushInst(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI, > -- > 1.8.3.2 > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From alex at crichton.co Sun Feb 16 17:32:40 2014 From: alex at crichton.co (Alex Crichton) Date: Sun, 16 Feb 2014 20:32:40 -0500 Subject: [rust-dev] [PATCH] Add stack overflow check for ARM Thumb instruction set. In-Reply-To: <000001cf2af6$a8c4e0b0$fa4ea210$@neykov.name> References: <1392471405-18189-1-git-send-email-svetoslav@neykov.name> <000001cf2af6$a8c4e0b0$fa4ea210$@neykov.name> Message-ID: Yes, if you use rust-llvm-2014-02-11 as the base of the PR I can merge it in and update the LLVM that rust is using. On Sun, Feb 16, 2014 at 4:08 AM, Svetoslav Neykov wrote: > I don't find any of the ARM split stack changes in the LLVM tree, just a > single > patch to the llvm-commits a year ago with no followup. 
> (http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130318/168838 > .html) > Since my changes depend on the ARM changes it doesn't make sense to try to > merge > them before the previous changes are accepted. > > I guess I should use the rust-llvm-2014-02-11 branch as the base for my PR? > > Svetoslav. > > > -----Original Message----- > From: alexc605 at gmail.com [mailto:alexc605 at gmail.com] On Behalf Of Alex > Crichton > Sent: Sunday, February 16, 2014 1:16 AM > To: Svetoslav Neykov > Cc: rust-dev at mozilla.org > Subject: Re: [rust-dev] [PATCH] Add stack overflow check for ARM Thumb > instruction set. > > For LLVM patches, we prefer if you have first attempted to upstream > the patch with LLVM before we push it to our local fork. This normally > entails emailing the llvm-commits mailing list. Once this upstream > attempt has been made, you can open a PR against the rust-lang/llvm > repo on github. > > This looks pretty awesome though, nice work! > > > On Sat, Feb 15, 2014 at 8:36 AM, Svetoslav Neykov > wrote: >> Hi, >> >> I am working on getting Rust to directly compile code for bare metal ARM >> devices working in Thumb mode. I created a patch for LLVM to emit >> the appropriate function prologue. Since I couldn't find instructions on > how >> to submit the change for review and inclusion in the Rust's copy of LLVM I >> am sending it here on the dev mailing list. >> >> Besides the mechanincal differences between the ARM and Thumb functions, >> because of the different instruction sets, there is difference in how the >> stack limit is located. The ARM version uses hardware which isn't > available >> on the lower-end Thumb processors (namely system co-processor and MMU) >> therefore I am looking for the stack limit at a predefined location in >> memory - STACK_LIMIT. It is the responsibility of the wrapping runtime >> to manage this location with the correct value. 
It can vary from a simple >> constant defined by the linker to actively managed variable by a RTOS >> implementation. >> (thanks to whitequark for discussing the possible approaches) >> >> There is an old pull request for Rust which was the precursor to this > change >> located at https://github.com/mozilla/rust/pull/10942. Once the patch is >> accepted I will try to update it to the latest changes in the repository. >> >> Here is the patch itself: >> > ============================================================================ > === >> >> Add stack overflow check for ARM Thumb instruction set. >> >> The code assumes that the stack limit will be located at the >> address labeled STACK_LIMIT. >> --- >> lib/Target/ARM/ARMFrameLowering.cpp | 184 > +++++++++++++++++++++++++++++++++++- >> lib/Target/ARM/ARMFrameLowering.h | 2 + >> 2 files changed, 185 insertions(+), 1 deletion(-) >> >> diff --git a/lib/Target/ARM/ARMFrameLowering.cpp > b/lib/Target/ARM/ARMFrameLowering.cpp >> index bdf0480..c286228 100644 >> --- a/lib/Target/ARM/ARMFrameLowering.cpp >> +++ b/lib/Target/ARM/ARMFrameLowering.cpp >> @@ -14,6 +14,7 @@ >> #include "ARMFrameLowering.h" >> #include "ARMBaseInstrInfo.h" >> #include "ARMBaseRegisterInfo.h" >> +#include "ARMConstantPoolValue.h" >> #include "ARMInstrInfo.h" >> #include "ARMMachineFunctionInfo.h" >> #include "ARMTargetMachine.h" >> @@ -1481,10 +1482,20 @@ static uint32_t AlignToARMConstant(uint32_t Value) > { >> // stack limit. >> static const uint64_t kSplitStackAvailable = 256; >> >> +void >> +ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { >> + const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); >> + if(ST->isThumb()) { >> + adjustForSegmentedStacksThumb(MF); >> + } else { >> + adjustForSegmentedStacksARM(MF); >> + } >> +} >> + >> // Adjust function prologue to enable split stack. >> // Only support android and linux. 
>> void >> -ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { >> +ARMFrameLowering::adjustForSegmentedStacksARM(MachineFunction &MF) const > { >> const ARMSubtarget *ST = &MF.getTarget().getSubtarget(); >> >> // Doesn't support vararg function. >> @@ -1697,3 +1708,174 @@ > ARMFrameLowering::adjustForSegmentedStacks(MachineFunction &MF) const { >> MF.verify(); >> #endif >> } >> + >> +void >> +ARMFrameLowering::adjustForSegmentedStacksThumb(MachineFunction &MF) > const { >> +// const ARMSubtarget *ST = > &MF.getTarget().getSubtarget(); >> + >> + // Doesn't support vararg function. >> + if (MF.getFunction()->isVarArg()) >> + report_fatal_error("Segmented stacks do not support vararg > functions."); >> + >> + MachineBasicBlock &prologueMBB = MF.front(); >> + MachineFrameInfo* MFI = MF.getFrameInfo(); >> + const ARMBaseInstrInfo &TII = *TM.getInstrInfo(); >> + ARMFunctionInfo* ARMFI = MF.getInfo(); >> + DebugLoc DL; >> + >> + // Use R4 and R5 as scratch register. >> + // We should save R4 and R5 before use it and restore before >> + // leave the function. 
>> + unsigned ScratchReg0 = ARM::R4; >> + unsigned ScratchReg1 = ARM::R5; >> + uint64_t AlignedStackSize; >> + >> + MachineBasicBlock* prevStackMBB = MF.CreateMachineBasicBlock(); >> + MachineBasicBlock* postStackMBB = MF.CreateMachineBasicBlock(); >> + MachineBasicBlock* allocMBB = MF.CreateMachineBasicBlock(); >> + MachineBasicBlock* getMBB = MF.CreateMachineBasicBlock(); >> + MachineBasicBlock* mcrMBB = MF.CreateMachineBasicBlock(); >> + MachineBasicBlock* magicMBB = MF.CreateMachineBasicBlock(); >> + >> + for (MachineBasicBlock::livein_iterator i = prologueMBB.livein_begin(), >> + e = prologueMBB.livein_end(); i != e; ++i) { >> + allocMBB->addLiveIn(*i); >> + getMBB->addLiveIn(*i); >> + magicMBB->addLiveIn(*i); >> + mcrMBB->addLiveIn(*i); >> + prevStackMBB->addLiveIn(*i); >> + postStackMBB->addLiveIn(*i); >> + } >> + >> + MF.push_front(postStackMBB); >> + MF.push_front(allocMBB); >> + MF.push_front(getMBB); >> + MF.push_front(magicMBB); >> + MF.push_front(mcrMBB); >> + MF.push_front(prevStackMBB); >> + >> + // The required stack size that is aligend to ARM constant critarion. >> + uint64_t StackSize = MFI->getStackSize(); >> + >> + AlignedStackSize = AlignToARMConstant(StackSize); >> + >> + // When the frame size is less than 256 we just compare the stack >> + // boundary directly to the value of the stack pointer, per gcc. >> + bool CompareStackPointer = AlignedStackSize < kSplitStackAvailable; >> + >> + // We will use two of callee save registers as scratch register so we >> + // need to save those registers into stack frame before use it. >> + // We will use SR0 to hold stack limit and SR1 to stack size requested. >> + // and arguments for __morestack(). 
>> + // SR0: Scratch Register #0 >> + // SR1: Scratch Register #1 >> + // push {SR0, SR1} >> + AddDefaultPred(BuildMI(prevStackMBB, DL, TII.get(ARM::tPUSH))) >> + .addReg(ScratchReg0) >> + .addReg(ScratchReg1); >> + >> + if (CompareStackPointer) { >> + // mov SR1, sp >> + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tMOVr), ScratchReg1) >> + .addReg(ARM::SP)); >> + } else { >> + // sub SR1, sp, #StackSize >> + AddDefaultPred(BuildMI(mcrMBB, DL, TII.get(ARM::tSUBi8), ScratchReg1) >> + .addReg(ARM::SP).addImm(AlignedStackSize)); >> + } >> + >> + unsigned PCLabelId = ARMFI->createPICLabelUId(); >> + ARMConstantPoolValue *NewCPV = ARMConstantPoolSymbol:: >> + Create(MF.getFunction()->getContext(), "STACK_LIMIT", PCLabelId, 0); >> + MachineConstantPool *MCP = MF.getConstantPool(); >> + unsigned CPI = MCP->getConstantPoolIndex(NewCPV, MF.getAlignment()); >> + >> + //ldr SR0, [pc, offset(STACK_LIMIT)] >> + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRpci), > ScratchReg0) >> + .addConstantPoolIndex(CPI)); >> + >> + //ldr SR0, [SR0] >> + AddDefaultPred(BuildMI(magicMBB, DL, TII.get(ARM::tLDRi), ScratchReg0) >> + .addReg(ScratchReg0) >> + .addImm(0)); >> + >> + // Compare stack limit with stack size requested. >> + // cmp SR0, SR1 >> + AddDefaultPred(BuildMI(getMBB, DL, TII.get(ARM::tCMPr)) >> + .addReg(ScratchReg0) >> + .addReg(ScratchReg1)); >> + >> + // This jump is taken if StackLimit < SP - stack required. >> + BuildMI(getMBB, DL, TII.get(ARM::tBcc)) >> + .addMBB(postStackMBB) >> + .addImm(ARMCC::LO) >> + .addReg(ARM::CPSR); >> + >> + >> + // Calling __morestack(StackSize, Size of stack arguments). >> + // __morestack knows that the stack size requested is in SR0(r4) >> + // and amount size of stack arguments is in SR1(r5). >> + >> + // Pass first argument for the __morestack by Scratch Register #0. 
>> +  // The amount of stack required.
>> +  AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg0))
>> +    .addImm(AlignedStackSize));
>> +  // Pass the second argument to __morestack in Scratch Register #1:
>> +  // the amount of stack consumed saving the function arguments.
>> +  AddDefaultPred(AddDefaultCC(BuildMI(allocMBB, DL, TII.get(ARM::tMOVi8), ScratchReg1))
>> +    .addImm(AlignToARMConstant(ARMFI->getArgumentStackSize())));
>> +
>> +  // push {lr} - Save the return address of this function.
>> +  AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPUSH)))
>> +    .addReg(ARM::LR);
>> +
>> +  // Call __morestack().
>> +  AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tBL)))
>> +    .addExternalSymbol("__morestack");
>> +
>> +  // Restore the return address of the original function.
>> +  // pop {SR0}
>> +  AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP)))
>> +    .addReg(ScratchReg0);
>> +
>> +  // mov lr, SR0
>> +  AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::LR)
>> +    .addReg(ScratchReg0));
>> +
>> +  // Restore SR0 and SR1 in case __morestack() was called.
>> +  // __morestack() will skip the postStackMBB block, so we need to restore
>> +  // the scratch registers from here.
>> +  // pop {SR0, SR1}
>> +  AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tPOP)))
>> +    .addReg(ScratchReg0)
>> +    .addReg(ScratchReg1);
>> +
>> +  // Return from this function.
>> +  AddDefaultPred(BuildMI(allocMBB, DL, TII.get(ARM::tMOVr), ARM::PC)
>> +    .addReg(ARM::LR));
>> +
>> +  // Restore SR0 and SR1 in case __morestack() was not called.
>> +  // pop {SR0, SR1}
>> +  AddDefaultPred(BuildMI(postStackMBB, DL, TII.get(ARM::tPOP)))
>> +    .addReg(ScratchReg0)
>> +    .addReg(ScratchReg1);
>> +
>> +  // Organize the MBB lists.
>> +  postStackMBB->addSuccessor(&prologueMBB);
>> +
>> +  allocMBB->addSuccessor(postStackMBB);
>> +
>> +  getMBB->addSuccessor(postStackMBB);
>> +  getMBB->addSuccessor(allocMBB);
>> +
>> +  magicMBB->addSuccessor(getMBB);
>> +
>> +  mcrMBB->addSuccessor(getMBB);
>> +  mcrMBB->addSuccessor(magicMBB);
>> +
>> +  prevStackMBB->addSuccessor(mcrMBB);
>> +
>> +#ifdef XDEBUG
>> +  MF.verify();
>> +#endif
>> +}
>> diff --git a/lib/Target/ARM/ARMFrameLowering.h b/lib/Target/ARM/ARMFrameLowering.h
>> index 16b477a..0cb8e5a 100644
>> --- a/lib/Target/ARM/ARMFrameLowering.h
>> +++ b/lib/Target/ARM/ARMFrameLowering.h
>> @@ -62,6 +62,8 @@ public:
>>                                    RegScavenger *RS) const;
>>
>>   void adjustForSegmentedStacks(MachineFunction &MF) const;
>> + void adjustForSegmentedStacksThumb(MachineFunction &MF) const;
>> + void adjustForSegmentedStacksARM(MachineFunction &MF) const;
>>
>> private:
>>   void emitPushInst(MachineBasicBlock &MBB, MachineBasicBlock::iterator MI,
>> --
>> 1.8.3.2
>>
>> _______________________________________________
>> Rust-dev mailing list
>> Rust-dev at mozilla.org
>> https://mail.mozilla.org/listinfo/rust-dev
>

From jayhawk at cs.ucsc.edu Mon Feb 17 13:06:20 2014 From: jayhawk at cs.ucsc.edu (Noah Watkins) Date: Mon, 17 Feb 2014 13:06:20 -0800 Subject: [rust-dev] idiomatic conversion from Option<&str> to *libc::c_char Message-ID: When calling a native function that takes a char* string or NULL for the default behavior I treat the option like this

pub fn read_file(&self, path: Option<&str>) {
    let c_path = match path {
        Some(ref path) => path.to_c_str().with_ref(|x| x),
        None => ptr::null()
    };
    unsafe {
        native_read_file(self.ch, c_path) as int
    };
}

Is this the best way to handle the situation? `path.to_c_str().with_ref(|x| x)` seems a bit verbose.
On the other hand, `unwrap` says it forgets the ownership which I'm assuming means that the buffer won't be freed. -Noah From damienradtke at gmail.com Mon Feb 17 13:06:52 2014 From: damienradtke at gmail.com (Damien Radtke) Date: Mon, 17 Feb 2014 15:06:52 -0600 Subject: [rust-dev] Need help implementing some complex parent-child task behavior. In-Reply-To: References: <2EC9B5EE-48F7-4248-BDEE-2B3715F9D4A5@sb.org> Message-ID: So I managed to implement a solution without needing a child task, using Kevin's initial idea of a struct with private fields and public getters. The idea had occurred to me but I wasn't sure exactly how to implement it; needless to say, I think I was overcomplicating things. Thanks for the help, Kevin. On Fri, Feb 14, 2014 at 4:18 PM, Damien Radtke wrote: > Ah, I think the problem is that I'm trying to create a new task within a > loop over an iterator, so each value is an &-ptr and is therefore causing > it to fail... > > > On Fri, Feb 14, 2014 at 4:12 PM, Damien Radtke wrote: > >> The function pointer is indeed a function pointer and all of the strings >> and vectors are ~, but the vector type is &'static. They're meant to hold >> references to card definitions, which is more efficient than passing around >> the cards themselves. I tried modifying the vectors to hold ~-strings >> instead, but it still didn't work. >> >> Looks like I'll need to do more research on Send. >> >> >> On Fri, Feb 14, 2014 at 3:19 PM, Kevin Ballard wrote: >> >>> Depends. If the string or the vectors are & instead of ~, that would do >>> it. Also, if the element type of the vector does not fulfill Send. Oh, and >>> the function pointer is a function pointer, not a closure, right? 
>>> >>> -Kevin >>> >>> On Feb 14, 2014, at 12:59 PM, Damien Radtke >>> wrote: >>> >>> Unfortunately, the type that maintains the state apparently doesn't >>> fulfill Send, which confuses me because it's a struct that consists of a >>> string, function pointer, and a few dynamically-sized vectors. Which of >>> these types makes the struct as a whole violate Send? >>> >>> >>> On Fri, Feb 14, 2014 at 2:47 PM, Kevin Ballard wrote: >>> >>>> What if the state's fields are private, and in a different module than >>>> the players, but exposes getters to query the state? Then the players can't >>>> modify it, but if the component that processes the actions has visibility >>>> into the state's fields, it can modify them just fine. >>>> >>>> -Kevin >>>> >>>> On Feb 14, 2014, at 12:22 PM, Damien Radtke >>>> wrote: >>>> >>>> > I'm trying to write what is essentially a card game simulator in >>>> Rust, but I'm running into a bit of a roadblock with Rust's memory >>>> management. The gist of what I want to accomplish is: >>>> > >>>> > 1. In the program's main loop, iterate over several "players" and >>>> call their "play" method in turn. >>>> > 2. Each "play" method should be able to send requests back to the >>>> parent in order to take certain actions, who will validate that the action >>>> is possible and update the player's state accordingly. >>>> > >>>> > The problem I'm running into is that, in order to let a player "play" >>>> and have the game validate actions for them, I would need to run each >>>> player in their own task, (I considered implementing it as each function >>>> call indicating a request for action [e.g. by returning Some(action), or >>>> None when finished] and calling it repeatedly until none are taken, but >>>> this makes the implementation for each player needlessly complex) but this >>>> makes for some tricky situations. 
>>>> > >>>> > My current implementation uses a DuplexStream to communicate back and >>>> forth, the child sending requests to the parent and the parent sending >>>> responses, but then I run into the issue of how to inform the child of >>>> their current state, but don't let them modify it outside of sending action >>>> requests. >>>> > >>>> > Ideally I'd like to be able to create an (unsafe) immutable pointer >>>> to the state held by the parent as mutable, but that gives me a "values >>>> differ in mutability" error. Other approaches so far have failed as well; >>>> Arcs don't work because I need to have one-sided mutability; standard >>>> borrowed pointers don't work because the child and parent need to access it >>>> at the same time (though only the parent should be able to modify it, >>>> ensuring its safety); even copying the state doesn't work because the child >>>> then needs to update its local state with a new copy sent by the parent, >>>> which is also prone to mutability-related errors. >>>> > >>>> > Any tips on how to accomplish something like this? >>>> > _______________________________________________ >>>> > Rust-dev mailing list >>>> > Rust-dev at mozilla.org >>>> > https://mail.mozilla.org/listinfo/rust-dev >>>> >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at sb.org Mon Feb 17 13:10:13 2014 From: kevin at sb.org (Kevin Ballard) Date: Mon, 17 Feb 2014 13:10:13 -0800 Subject: [rust-dev] idiomatic conversion from Option<&str> to *libc::c_char In-Reply-To: References: Message-ID: <0CDE64CD-7FA5-45A3-A020-5630EF0D34F3@sb.org> No, this is likely to crash. `.to_c_str()` constructs a CString, which you are then promptly throwing away. So your subsequent access to `c_path` is actually accessing freed memory. 
Try something like this:

pub fn read_file(&self, path: Option<&str>) {
    let path = path.map(|s| s.to_c_str());
    let c_path = path.map_or(ptr::null(), |p| p.with_ref(|x| x));
    unsafe {
        native_read_file(self.ch, c_path) as int
    }
}

This variant will keep the CString alive inside the `path` variable, which will mean that your `c_path` pointer is still valid until you return from the function.

-Kevin

On Feb 17, 2014, at 1:06 PM, Noah Watkins wrote:

> When calling a native function that takes a char* string or NULL for
> the default behavior I treat the option like this
>
> pub fn read_file(&self, path: Option<&str>) {
>     let c_path = match path {
>         Some(ref path) => path.to_c_str().with_ref(|x| x),
>         None => ptr::null()
>     };
>     unsafe {
>         native_read_file(self.ch, c_path) as int
>     };
> }
>
> Is this the best way to handle the situation?
> `path.to_c_str().with_ref(|x| x)` seems a bit verbose. On the other
> hand, `unwrap` says it forgets the ownership which I'm assuming means
> that the buffer won't be freed.
-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From nick at ncameron.org Mon Feb 17 13:50:42 2014 From: nick at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 10:50:42 +1300 Subject: [rust-dev] issue numbers in commit messages Message-ID: How would people feel about a requirement for all commit messages to have an issue number in them? And could we make bors enforce that? The reason is that GitHub is very bad at being able to trace back a commit to the issue it fixes (sometimes it manages, but not always). Not being able to find the discussion around a commit is extremely annoying. Cheers, Nick -------------- next part -------------- An HTML attachment was scrubbed...
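[Editor's note: Kevin's rule of thumb — bind the owned C string to a variable first, then borrow the raw pointer from it — carries over to modern Rust, where `std::ffi::CString` replaced `to_c_str()`. A minimal sketch under that assumption; `native_read_file` is a hypothetical stub standing in for the real native call:]

```rust
use std::ffi::CString;
use std::os::raw::c_char;
use std::ptr;

// Hypothetical stand-in for a native call such as native_read_file:
// it only reports whether it was handed a real path or NULL.
unsafe fn native_read_file(path: *const c_char) -> bool {
    !path.is_null()
}

fn read_file(path: Option<&str>) -> bool {
    // Bind the owned CString first so it outlives the raw pointer.
    let owned = path.map(|s| CString::new(s).expect("path has no interior NUL"));
    // Borrow with as_ref(); calling map_or on the Option<CString> by value
    // would move and drop the CString before the pointer is used.
    let c_path: *const c_char = owned.as_ref().map_or(ptr::null(), |c| c.as_ptr());
    unsafe { native_read_file(c_path) }
}

fn main() {
    assert!(read_file(Some("/tmp/data")));
    assert!(!read_file(None));
    println!("ok");
}
```

[The `as_ref()` is the load-bearing part: it keeps ownership in `owned`, which stays in scope for the whole call, which is exactly the lifetime guarantee Kevin describes.]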
URL: From steve at steveklabnik.com Mon Feb 17 14:39:02 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 17 Feb 2014 14:39:02 -0800 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: Why not make bors simply add the issue number in when it makes the actual merge commit? From palmercox at gmail.com Mon Feb 17 14:43:04 2014 From: palmercox at gmail.com (Palmer Cox) Date: Mon, 17 Feb 2014 17:43:04 -0500 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: I believe that bors never does a fast-forward merge and that the merge commits always contain the pull number. So, if you have a particular commit and you want to find the issue that it was part of, I believe you can always look through its children until you find a commit by "bors" which should have a commit message like: "auto merge of #12313 : bjz/rust/tuple, r=huonw" which contains the issue number.

Let's say that the commit you are interested in is "6f39eb1". I think if you run the command:

git log --author "bors" --ancestry-path 6f39eb1..origin/master

And look at the commit at the very bottom of the list, that will be the merge commit that you are interested in.

I'm not a git expert - there may be a better way to do that.

-Palmer Cox

On Mon, Feb 17, 2014 at 4:50 PM, Nick Cameron wrote: > How would people feel about a requirement for all commit messages to have > an issue number in them? And could we make bors enforce that? > > The reason is that GitHub is very bad at being able to trace back a commit > to the issue it fixes (sometimes it manages, but not always). Not being > able to find the discussion around a commit is extremely annoying. > > Cheers, Nick > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
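[Editor's note: Palmer's recipe can be exercised end-to-end on a throwaway repository. A sketch under assumptions — the bors-style merge message and branch name are invented, and the `--author "bors"` filter is omitted because the toy repo has a single committer:]

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name test
git config user.email test@example.com
git commit -q --allow-empty -m "initial"
main=$(git symbolic-ref --short HEAD)

# A feature branch with the commit we will later want to trace back.
git checkout -q -b fix-branch
git commit -q --allow-empty -m "some interesting commit"
sha=$(git rev-parse HEAD)

# bors-style non-fast-forward merge whose subject carries the PR number.
git checkout -q "$main"
git merge -q --no-ff -m "auto merge of #12313 : user/repo/fix-branch, r=reviewer" fix-branch

# The oldest commit on the ancestry path from $sha to the branch tip is
# the merge commit that brought $sha in — Palmer's "very bottom of the list".
merge_msg=$(git log --format=%s --ancestry-path "$sha".."$main" | tail -n 1)
echo "$merge_msg"
# → auto merge of #12313 : user/repo/fix-branch, r=reviewer
```

[`--ancestry-path` restricts the log to commits that are both descendants of `$sha` and ancestors of the tip, which is why the merge commit is guaranteed to appear on the path.]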
URL: From gaetan at xeberon.net Mon Feb 17 15:08:11 2014 From: gaetan at xeberon.net (Gaetan) Date: Tue, 18 Feb 2014 00:08:11 +0100 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: It is generally a good practice to embed the tracking-tool reference inside the git commit message, so finding this information is straightforward and you can jump from any commit to the GitHub issue instantaneously. We use Gerrit with a "tracked-on" field in the last paragraph of the commit message (which is supported by Gerrit) in order to link each commit to our tracking tool.

GitHub seems to support only a "#123" substring in the commit message. Beware: it will automatically close the issue if the string "Fix #123" is found in the first line, and a single commit can refer to more than one GitHub issue.

My 2 cents,

----- Gaetan

2014-02-17 23:43 GMT+01:00 Palmer Cox : > I believe that bors never does a fast-forward merge and that the merge > commits always contain the pull number. So, if you have a particular commit > and you want to find the issue that it was part of, I believe you can > always look through its children until you find a commit by "bors" > which should have a commit message like: "auto merge of #12313 : > bjz/rust/tuple, r=huonw" which contains the issue number. > > Let's say that the commit you are interested in is "6f39eb1". I think if > you run the command: > > git log --author "bors" --ancestry-path 6f39eb1..origin/master > > And look at the commit at the very bottom of the list, that will be the > merge commit that you are interested in. > > I'm not a git expert - there may be a better way to do that. > > -Palmer Cox > > > > On Mon, Feb 17, 2014 at 4:50 PM, Nick Cameron wrote: > >> How would people feel about a requirement for all commit messages to have >> an issue number in them? And could we make bors enforce that?
>> >> The reason is that GitHub is very bad at being able to trace back a >> commit to the issue it fixes (sometimes it manages, but not always). Not >> being able to find the discussion around a commit is extremely annoying. >> >> Cheers, Nick >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bytbox at gmail.com Mon Feb 17 15:16:34 2014 From: bytbox at gmail.com (Scott Lawrence) Date: Mon, 17 Feb 2014 18:16:34 -0500 (EST) Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: Maybe I'm misunderstanding? This would require that all commits be specifically associated with an issue. I don't have actual stats, but briefly skimming recent commits and looking at the issue tracker, a lot of commits can't be reasonably associated with an issue. This requirement would either force people to create fake issues for each commit, or to reference tangentially-related or overly-broad issues in commit messages, neither of which is very useful.

Referencing any conversation that leads to or influences a commit is a good idea, but something this inflexible doesn't seem right.

My 1.5¢.

On Tue, 18 Feb 2014, Nick Cameron wrote: > How would people feel about a requirement for all commit messages to have > an issue number in them? And could we make bors enforce that? > > The reason is that GitHub is very bad at being able to trace back a commit > to the issue it fixes (sometimes it manages, but not always). Not being > able to find the discussion around a commit is extremely annoying.
> > Cheers, Nick > -- Scott Lawrence From lists at ncameron.org Mon Feb 17 15:42:28 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 12:42:28 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: I really just want to avoid any kind of Git-fu. I would like to see the commit on GitHub and click a link. Or see a commit in git log, and copy paste something to my browser. Anything more complex than that is distracting. (thanks for the solution to my specific problem though - that is helpful!) On Tue, Feb 18, 2014 at 11:43 AM, Palmer Cox wrote: > I believe that bors never does a fast forward merge and that the merge > commits always contain the pull number. So, if you have a particular commit > and you want to find the issue that it was part of, I believe you can > always look look through its children until you find a commit by "bors" > which should have a commit message like: "auto merge of #12313 : > bjz/rust/tuple, r=huonw" which contains the issue number. > > Let says that the commit you are interested in is "6f39eb1". I think if > you run the command: > > git log --author "bors" --ancestry-path 6f39eb1..origin/master > > And look at the commit at the very bottom of the list, that will be the > merge commit that you are interested in. > > I'm not a git expert - there may be a better way to do that. > > -Palmer Cox > > > > On Mon, Feb 17, 2014 at 4:50 PM, Nick Cameron wrote: > >> How would people feel about a requirement for all commit messages to have >> an issue number in them? And could we make bors enforce that? >> >> The reason is that GitHub is very bad at being able to trace back a >> commit to the issue it fixes (sometimes it manages, but not always). Not >> being able to find the discussion around a commit is extremely annoying. 
>> >> Cheers, Nick >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ncameron.org Mon Feb 17 15:50:38 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 12:50:38 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: At worst you could just use the issue number for the PR. But I think all non-trivial commits _should_ have an issue associated. For really tiny commits we could allow "no issue" or '#0' in the message. Just so long as the author is being explicit, I think that is OK. On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence wrote: > Maybe I'm misunderstanding? This would require that all commits be > specifically associated with an issue. I don't have actual stats, but > briefly skimming recent commits and looking at the issue tracker, a lot of > commits can't be reasonably associated with an issue. This requirement > would either force people to create fake issues for each commit, or to > reference tangentially-related or overly-broad issues in commit messages, > neither of which is very useful. > > Referencing any conversation that leads to or influences a commit is a > good idea, but something this inflexible doesn't seem right. > > My 1.5?. > > > On Tue, 18 Feb 2014, Nick Cameron wrote: > > How would people feel about a requirement for all commit messages to have >> an issue number in them? And could we make bors enforce that? >> >> The reason is that GitHub is very bad at being able to trace back a commit >> to the issue it fixes (sometimes it manages, but not always). Not being >> able to find the discussion around a commit is extremely annoying. >> >> Cheers, Nick >> >> > -- > Scott Lawrence -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin at sb.org Mon Feb 17 15:53:40 2014 From: kevin at sb.org (Kevin Ballard) Date: Mon, 17 Feb 2014 15:53:40 -0800 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> This is not going to work in the slightest. Most PRs don't have an associated issue. The pull request is the issue. And that's perfectly fine. There's no need to file an issue separate from the PR itself. Requiring a referenced issue for every single commit would be extremely cumbersome, serve no real purpose aside from aiding an unwillingness to learn how source control works, and would probably slow down the rate of development of Rust. -Kevin On Feb 17, 2014, at 3:50 PM, Nick Cameron wrote: > At worst you could just use the issue number for the PR. But I think all non-trivial commits _should_ have an issue associated. For really tiny commits we could allow "no issue" or '#0' in the message. Just so long as the author is being explicit, I think that is OK. > > > On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence wrote: > Maybe I'm misunderstanding? This would require that all commits be specifically associated with an issue. I don't have actual stats, but briefly skimming recent commits and looking at the issue tracker, a lot of commits can't be reasonably associated with an issue. This requirement would either force people to create fake issues for each commit, or to reference tangentially-related or overly-broad issues in commit messages, neither of which is very useful. > > Referencing any conversation that leads to or influences a commit is a good idea, but something this inflexible doesn't seem right. > > My 1.5?. > > > On Tue, 18 Feb 2014, Nick Cameron wrote: > > How would people feel about a requirement for all commit messages to have > an issue number in them? And could we make bors enforce that? 
> > The reason is that GitHub is very bad at being able to trace back a commit > to the issue it fixes (sometimes it manages, but not always). Not being > able to find the discussion around a commit is extremely annoying. > > Cheers, Nick > > > -- > Scott Lawrence > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From nick at ncameron.org Mon Feb 17 15:40:22 2014 From: nick at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 12:40:22 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: This is a nice solution, I like it. People who know about bors - is it possible to make it work? On Tue, Feb 18, 2014 at 11:39 AM, Steve Klabnik wrote: > Why not make bors simply add the issue number in when it makes the > actual merge commit? > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Mon Feb 17 16:06:04 2014 From: corey at octayn.net (Corey Richardson) Date: Mon, 17 Feb 2014 19:06:04 -0500 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: Bors already mentions the pull request that he merged, and any commits that close or work on issues usually mention that explicitly in their commit message. What more do you want? On Mon, Feb 17, 2014 at 6:40 PM, Nick Cameron wrote: > This is a nice solution, I like it. > > People who know about bors - is it possible to make it work? 
> > > On Tue, Feb 18, 2014 at 11:39 AM, Steve Klabnik > wrote: >> >> Why not make bors simply add the issue number in when it makes the >> actual merge commit? >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From lists at ncameron.org Mon Feb 17 16:14:36 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 13:14:36 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: For _all_ commits to mention the issue explicitly, not just usually On Tue, Feb 18, 2014 at 1:06 PM, Corey Richardson wrote: > Bors already mentions the pull request that he merged, and any commits > that close or work on issues usually mention that explicitly in their > commit message. What more do you want? > > On Mon, Feb 17, 2014 at 6:40 PM, Nick Cameron wrote: > > This is a nice solution, I like it. > > > > People who know about bors - is it possible to make it work? > > > > > > On Tue, Feb 18, 2014 at 11:39 AM, Steve Klabnik > > wrote: > >> > >> Why not make bors simply add the issue number in when it makes the > >> actual merge commit? > >> _______________________________________________ > >> Rust-dev mailing list > >> Rust-dev at mozilla.org > >> https://mail.mozilla.org/listinfo/rust-dev > > > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
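[Editor's note: the policy being debated — every commit message must name an issue ("#1234") or explicitly opt out with "no issue" — is small enough to sketch as a check a bot could run. This is a hypothetical illustration of the proposed rule, not anything bors actually implemented:]

```rust
// Hypothetical check a merge bot could run on each commit message:
// accept it if it references an issue ("#1234") or explicitly opts
// out with "no issue", as proposed in this thread.
fn mentions_issue(msg: &str) -> bool {
    if msg.to_lowercase().contains("no issue") {
        return true;
    }
    // Look for a '#' immediately followed by at least one digit.
    let bytes = msg.as_bytes();
    bytes.iter().enumerate().any(|(i, &b)| {
        b == b'#' && bytes.get(i + 1).map_or(false, |c| c.is_ascii_digit())
    })
}

fn main() {
    assert!(mentions_issue("auto merge of #12313 : bjz/rust/tuple, r=huonw"));
    assert!(mentions_issue("tidy cleanup (no issue)"));
    assert!(!mentions_issue("fix typo in tutorial"));
    println!("ok");
}
```

[Scott's objection maps directly onto the third assertion: plenty of legitimate commits would fail the check unless the "no issue" escape hatch is allowed.]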
URL: From steve at steveklabnik.com Mon Feb 17 16:37:23 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Mon, 17 Feb 2014 16:37:23 -0800 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: Yeah, I'm not into modifying every single commit; I basically only want what bors (apparently...) already does. From lists at ncameron.org Mon Feb 17 17:54:13 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 14:54:13 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> References: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> Message-ID: Whether we need issues for PRs is a separate discussion. There has to be _something_ for every commit - either a PR or an issue, at the least there needs to be an r+ somewhere. I would like to see who reviewed something so I can ping someone with questions other than the author (if they are offline). Any discussion is likely to be useful. So the question is how to find that, when necessary. GitHub sometimes fails to point to the info. And when it does, you do not know if you are missing more info. For the price of 6 characters in the commit message (or "no issue"), we know with certainty where to find that info and that we are not missing other potentially useful info. This would not slow down development in any way. Note that this is orthogonal to use of version control - you still need to know Git in order to get the commit message - it is about how one can go easily from a commit message to meta-data about a commit. On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard wrote: > This is not going to work in the slightest. > > Most PRs don't have an associated issue. The pull request *is* the issue. > And that's perfectly fine. There's no need to file an issue separate from > the PR itself.
Requiring a referenced issue for every single commit would > be extremely cumbersome, serve no real purpose aside from aiding an > unwillingness to learn how source control works, and would probably slow > down the rate of development of Rust. > > -Kevin > > On Feb 17, 2014, at 3:50 PM, Nick Cameron wrote: > > At worst you could just use the issue number for the PR. But I think all > non-trivial commits _should_ have an issue associated. For really tiny > commits we could allow "no issue" or '#0' in the message. Just so long as > the author is being explicit, I think that is OK. > > > On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence wrote: > >> Maybe I'm misunderstanding? This would require that all commits be >> specifically associated with an issue. I don't have actual stats, but >> briefly skimming recent commits and looking at the issue tracker, a lot of >> commits can't be reasonably associated with an issue. This requirement >> would either force people to create fake issues for each commit, or to >> reference tangentially-related or overly-broad issues in commit messages, >> neither of which is very useful. >> >> Referencing any conversation that leads to or influences a commit is a >> good idea, but something this inflexible doesn't seem right. >> >> My 1.5?. >> >> >> On Tue, 18 Feb 2014, Nick Cameron wrote: >> >> How would people feel about a requirement for all commit messages to have >>> an issue number in them? And could we make bors enforce that? >>> >>> The reason is that GitHub is very bad at being able to trace back a >>> commit >>> to the issue it fixes (sometimes it manages, but not always). Not being >>> able to find the discussion around a commit is extremely annoying. 
>>> >>> Cheers, Nick >>> >>> >> -- >> Scott Lawrence > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ncameron.org Mon Feb 17 17:57:57 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 14:57:57 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: Adding a few chars to a commit is not onerous and it is useful. You may not want it now, but perhaps you would if you had it to use. _I_ certainly want it, and I think others would find it useful if it was there to use. On Tue, Feb 18, 2014 at 1:37 PM, Steve Klabnik wrote: > Yeah, I'm not into modifying every single commit, I basically only > want what bors already (apparently....) already does. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Mon Feb 17 18:02:44 2014 From: corey at octayn.net (Corey Richardson) Date: Mon, 17 Feb 2014 21:02:44 -0500 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> Message-ID: https://github.com/mozilla/rust/commit/25147b2644ed569f16f22dc02d10a0a9b7b97c7e seems to provide all of the information you are asking for? It includes the text of the PR description, the PR number, the name of the branch, and who reviewed it. I agree with your premise but I'm not sure I agree that the current situation isn't adequate. But I wouldn't be opposed to such a change. On Mon, Feb 17, 2014 at 8:54 PM, Nick Cameron wrote: > Whether we need issues for PRs is a separate discussion. There has to be > _something_ for every commit - either a PR or an issue, at the least there > needs to be an r+ somewhere. 
I would like to see who reviewed something so I > can ping someone with questions other than the author (if they are offline). > Any discussion is likely to be useful. > > So the question is how to find that, when necessary. GitHub sometimes fails > to point to the info. And when it does, you do not know if you are missing > more info. For the price of 6 characters in the commit message (or "no > issue"), we know with certainty where to find that info and that we are not > missing other potentially useful info. This would not slow down development > in any way. > > Note that this is orthogonal to use of version control - you still need to > know Git in order to get the commit message - it is about how one can go > easily from a commit message to meta-data about a commit. > > > On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard wrote: >> >> This is not going to work in the slightest. >> >> Most PRs don't have an associated issue. The pull request is the issue. >> And that's perfectly fine. There's no need to file an issue separate from >> the PR itself. Requiring a referenced issue for every single commit would be >> extremely cumbersome, serve no real purpose aside from aiding an >> unwillingness to learn how source control works, and would probably slow >> down the rate of development of Rust. >> >> -Kevin >> >> On Feb 17, 2014, at 3:50 PM, Nick Cameron wrote: >> >> At worst you could just use the issue number for the PR. But I think all >> non-trivial commits _should_ have an issue associated. For really tiny >> commits we could allow "no issue" or '#0' in the message. Just so long as >> the author is being explicit, I think that is OK. >> >> >> On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence wrote: >>> >>> Maybe I'm misunderstanding? This would require that all commits be >>> specifically associated with an issue. 
I don't have actual stats, but >>> briefly skimming recent commits and looking at the issue tracker, a lot of >>> commits can't be reasonably associated with an issue. This requirement would >>> either force people to create fake issues for each commit, or to reference >>> tangentially-related or overly-broad issues in commit messages, neither of >>> which is very useful. >>> >>> Referencing any conversation that leads to or influences a commit is a >>> good idea, but something this inflexible doesn't seem right. >>> >>> My 1.5¢. >>> >>> >>> On Tue, 18 Feb 2014, Nick Cameron wrote: >>> >>>> How would people feel about a requirement for all commit messages to >>>> have >>>> an issue number in them? And could we make bors enforce that? >>>> >>>> The reason is that GitHub is very bad at being able to trace back a >>>> commit >>>> to the issue it fixes (sometimes it manages, but not always). Not being >>>> able to find the discussion around a commit is extremely annoying. >>>> >>>> Cheers, Nick >>>> >>> >>> -- >>> Scott Lawrence >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From bytbox at gmail.com Mon Feb 17 18:02:05 2014 From: bytbox at gmail.com (Scott Lawrence) Date: Mon, 17 Feb 2014 21:02:05 -0500 (EST) Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: What about having bors place the hash of each commit merged into the auto-merge message? Then finding the PR, and any closed issues, consists of backwards-searching in git-log. (Having bors modify commit messages would probably cause major problems with hashes changing.) On Tue, 18 Feb 2014, Nick Cameron wrote: > Adding a few chars to a commit is not onerous and it is useful. 
You may not > want it now, but perhaps you would if you had it to use. _I_ certainly want > it, and I think others would find it useful if it was there to use. > > > On Tue, Feb 18, 2014 at 1:37 PM, Steve Klabnik wrote: > >> Yeah, I'm not into modifying every single commit, I basically only >> want what bors already (apparently....) already does. >> > -- Scott Lawrence From lists at ncameron.org Mon Feb 17 18:17:27 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 15:17:27 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> Message-ID: Right, that is exactly what I want to see, just on every commit. For example, https://github.com/mozilla/rust/commit/a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2. has none of that info and I can't see any way to get it (without the kind of Git-fu suggested earlier). (Well, I can actually see that r=nikomatsakis from the comments at the bottom, but I can't see how that r+ came about, whether there was any discussion, whether there was an issue where this was discussed or not, etc.). On Tue, Feb 18, 2014 at 3:02 PM, Corey Richardson wrote: > > https://github.com/mozilla/rust/commit/25147b2644ed569f16f22dc02d10a0a9b7b97c7e > seems to provide all of the information you are asking for? It > includes the text of the PR description, the PR number, the name of > the branch, and who reviewed it. I agree with your premise but I'm not > sure I agree that the current situation isn't adequate. But I wouldn't > be opposed to such a change. > > On Mon, Feb 17, 2014 at 8:54 PM, Nick Cameron wrote: > > Whether we need issues for PRs is a separate discussion. There has to be > > _something_ for every commit - either a PR or an issue, at the least > there > > needs to be an r+ somewhere. I would like to see who reviewed something > so I > > can ping someone with questions other than the author (if they are > offline). 
> > Any discussion is likely to be useful. > > > > So the question is how to find that, when necessary. GitHub sometimes > fails > > to point to the info. And when it does, you do not know if you are > missing > > more info. For the price of 6 characters in the commit message (or "no > > issue"), we know with certainty where to find that info and that we are > not > > missing other potentially useful info. This would not slow down > development > > in any way. > > > > Note that this is orthogonal to use of version control - you still need > to > > know Git in order to get the commit message - it is about how one can go > > easily from a commit message to meta-data about a commit. > > > > > > On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard wrote: > >> > >> This is not going to work in the slightest. > >> > >> Most PRs don't have an associated issue. The pull request is the issue. > >> And that's perfectly fine. There's no need to file an issue separate > from > >> the PR itself. Requiring a referenced issue for every single commit > would be > >> extremely cumbersome, serve no real purpose aside from aiding an > >> unwillingness to learn how source control works, and would probably slow > >> down the rate of development of Rust. > >> > >> -Kevin > >> > >> On Feb 17, 2014, at 3:50 PM, Nick Cameron wrote: > >> > >> At worst you could just use the issue number for the PR. But I think all > >> non-trivial commits _should_ have an issue associated. For really tiny > >> commits we could allow "no issue" or '#0' in the message. Just so long > as > >> the author is being explicit, I think that is OK. > >> > >> > >> On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence > wrote: > >>> > >>> Maybe I'm misunderstanding? This would require that all commits be > >>> specifically associated with an issue. I don't have actual stats, but > >>> briefly skimming recent commits and looking at the issue tracker, a > lot of > >>> commits can't be reasonably associated with an issue. 
This requirement > would >>> either force people to create fake issues for each commit, or to > reference >>> tangentially-related or overly-broad issues in commit messages, > neither of >>> which is very useful. > >>> > >>> Referencing any conversation that leads to or influences a commit is a > >>> good idea, but something this inflexible doesn't seem right. > >>> > >>> My 1.5¢. > >>> > >>> > >>> On Tue, 18 Feb 2014, Nick Cameron wrote: > >>> > >>>> How would people feel about a requirement for all commit messages to > >>>> have > >>>> an issue number in them? And could we make bors enforce that? > >>>> > >>>> The reason is that GitHub is very bad at being able to trace back a > >>>> commit > >>>> to the issue it fixes (sometimes it manages, but not always). Not > being > >>>> able to find the discussion around a commit is extremely annoying. > >>>> > >>>> Cheers, Nick > >>>> > >>> > >>> -- > >>> Scott Lawrence > >> > >> > >> _______________________________________________ > >> Rust-dev mailing list > >> Rust-dev at mozilla.org > >> https://mail.mozilla.org/listinfo/rust-dev > >> > >> > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From palmercox at gmail.com Mon Feb 17 18:22:01 2014 From: palmercox at gmail.com (Palmer Cox) Date: Mon, 17 Feb 2014 21:22:01 -0500 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: The PR# and who reviewed it is already available in the merge commit and it's already possible to take any arbitrary commit and to see which merge commit merged it into master. So, I don't see any benefit in changing anything about the merge commit. Unless I'm missing something, this isn't a question of information not being available; it's a question of that information being inconvenient to get to. 
I think having bors rewrite the commit messages would be somewhat problematic since it would change all the hashes. So, I think the only solution would be to manually put the issue number into the messages. However, many PRs aren't related to issues. So, if some large percentage of commits are just annotated with "no issue" or the like, it seems to really impact the utility of this change. Thus, I think it would really have to be the PR# instead of an issue # since every commit is related to a PR. However, I think it isn't a zero impact procedure. I always write the changes I want to merge before opening the PR. So, when I'm making my changes, I don't know what the eventual PR# is going to be. Only after I open the PR with the commits already created, I find out the PR#. So, then I'd have to rewrite all of the commit messages and force push back into the branch to get the numbers right. It's not the worst thing in the world, but it is an extra few steps. So, I strongly agree that the current procedure for finding the github discussion is fairly unpleasant and I very much wish that Github had a button that would take me to the PR that merged it. However, I don't think there is a 100% consistent, zero impact workaround for that missing feature in Github. My vote would be to leave things as they are. A little scripting could improve the situation quite a bit, although it still won't be as nice as being able to click on a link in Github. -Palmer Cox On Mon, Feb 17, 2014 at 9:02 PM, Scott Lawrence wrote: > What about having bors place the hash of each commit merged into the > auto-merge message? Then finding the PR, and any closed issues, consists of > backwards-searching in git-log. (Having bors modify commit messages would > probably cause major problems with hashes changing.) > > > On Tue, 18 Feb 2014, Nick Cameron wrote: > > Adding a few chars to a commit is not onerous and it is useful. You may >> not >> want it now, but perhaps you would if you had it to use. 
_I_ certainly >> want >> it, and I think others would find it useful if it was there to use. >> >> >> On Tue, Feb 18, 2014 at 1:37 PM, Steve Klabnik > >wrote: >> >> Yeah, I'm not into modifying every single commit, I basically only >>> want what bors already (apparently....) already does. >>> >>> >> > -- > Scott Lawrence > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ncameron.org Mon Feb 17 18:28:46 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 15:28:46 +1300 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: You are right, it is about convenient access to the info, not the lack of info. What is problematic about bors rewriting commit messages and changing hashes? My workflow is to always work on throw away branches and merge back into master. Is it common to work on master and merge back on top of your PR? Or are there other problems with changing the hash? On Tue, Feb 18, 2014 at 3:22 PM, Palmer Cox wrote: > The PR# and who reviewed it is already available in the merge commit and > its already possible to take any arbitrary commit and to see which merge > commit merged it into master. So, I don't see any benefit in changing > anything about the merge commit. Unless I'm missing something, this isn't a > question of information not being available; its a question of that > information being inconvenient to get to. I think having bors rewrite the > commit messages would be somewhat problematic since it would change all the > hashes. So, I think the only solution would be to manually put the issue > number into the messages. However, many PRs aren't related to issues. So, > if some large percentages of commits are just annotated with "no issue" or > the like, it seems to really impact the utility of this change. 
Thus, I > think it would really have to be the PR# instead of an issue # since every > commit is related to a PR. However, I think it isn't a zero impact > procedure. I always right the changes I want to merge before opening the > PR. So, when I'm making my changes, I don't know what the eventual PR# is > going to be. Only after I open the PR with the commits already created, I > find out the PR#. So, then I'd have to rewrite all of the commit messages > and force push back into the branch to get the numbers right. Its not the > worst thing in the world, but it is an extra few steps. > > So, I strongly agree that the current procedure for finding the github > discussion is fairly unpleasant and I very much wish that Github had a > button that would take me to the PR that merged it. However, I don't think > there is a 100% consistent, zero impact workaround for that missing feature > in Github. > > My vote would be to leave things as they are. A little scripting could > improve the situation quite a bit, although it still won't be as nice as > being able to click on a link in Github. > > -Palmer Cox > > > > > > > > On Mon, Feb 17, 2014 at 9:02 PM, Scott Lawrence wrote: > >> What about having bors place the hash of each commit merged into the >> auto-merge message? Then finding the PR, and any closed issues, consists of >> backwards-searching in git-log. (Having bors modify commit messages would >> probably cause major problems with hashes changing.) >> >> >> On Tue, 18 Feb 2014, Nick Cameron wrote: >> >> Adding a few chars to a commit is not onerous and it is useful. You may >>> not >>> want it now, but perhaps you would if you had it to use. _I_ certainly >>> want >>> it, and I think others would find it useful if it was there to use. >>> >>> >>> On Tue, Feb 18, 2014 at 1:37 PM, Steve Klabnik >> >wrote: >>> >>> Yeah, I'm not into modifying every single commit, I basically only >>>> want what bors already (apparently....) already does. 
>>>> >>>> >>> >> -- >> Scott Lawrence >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ncameron.org Mon Feb 17 19:01:46 2014 From: lists at ncameron.org (Nick Cameron) Date: Tue, 18 Feb 2014 16:01:46 +1300 Subject: [rust-dev] [RFC] Proposal for associated/'static' method syntax Message-ID: I'm looking at addressing #8888. Here is a possible solution ( https://github.com/mozilla/rust/issues/12358) - a variation on past proposals. I'd appreciate any comments. Thanks, Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From palmercox at gmail.com Mon Feb 17 19:34:11 2014 From: palmercox at gmail.com (Palmer Cox) Date: Mon, 17 Feb 2014 22:34:11 -0500 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: If bors rewrites the commit messages, it means that if someone approves commit ABC, what actually gets merged will be commit XYZ. This seems potentially confusing to me and might also make it more difficult to start with a reviewed commit on Github, such as https://github.com/gentlefolk/rust/commit/37bf97a0f9cc764a19dfcff21d62384b2445dcbc, and then track back to the actually merged commit in the history. I'm also not 100% sure, but I think git might have some issues with it as well. If I do my work on a throwaway branch, after merging, will git know that the changes in that branch were merged? Or, will git require me to do a git branch -D to delete that branch? Are there other projects that rewrite commit messages before merging? It seems to me that the ideal case would be for Github to add a link on the commit view page back to the PR that merged that commit. 
I'd be concerned that if Github adds support for such a feature in the future that it might not work if we've re-written all of the commit messages in the meantime. -Palmer Cox On Mon, Feb 17, 2014 at 9:28 PM, Nick Cameron wrote: > You are right, it is about convenient access to the info, not the lack of > info. > > What is problematic about bors rewriting commit messages and changing > hashes? My workflow is to always work on throw away branches and merge back > into master. Is it common to work on master and merge back on top of your > PR? Or are there other problems with changing the hash? > > > On Tue, Feb 18, 2014 at 3:22 PM, Palmer Cox wrote: > >> The PR# and who reviewed it is already available in the merge commit and >> its already possible to take any arbitrary commit and to see which merge >> commit merged it into master. So, I don't see any benefit in changing >> anything about the merge commit. Unless I'm missing something, this isn't a >> question of information not being available; its a question of that >> information being inconvenient to get to. I think having bors rewrite the >> commit messages would be somewhat problematic since it would change all the >> hashes. So, I think the only solution would be to manually put the issue >> number into the messages. However, many PRs aren't related to issues. So, >> if some large percentages of commits are just annotated with "no issue" or >> the like, it seems to really impact the utility of this change. Thus, I >> think it would really have to be the PR# instead of an issue # since every >> commit is related to a PR. However, I think it isn't a zero impact >> procedure. I always right the changes I want to merge before opening the >> PR. So, when I'm making my changes, I don't know what the eventual PR# is >> going to be. Only after I open the PR with the commits already created, I >> find out the PR#. 
So, then I'd have to rewrite all of the commit messages >> and force push back into the branch to get the numbers right. Its not the >> worst thing in the world, but it is an extra few steps. >> >> So, I strongly agree that the current procedure for finding the github >> discussion is fairly unpleasant and I very much wish that Github had a >> button that would take me to the PR that merged it. However, I don't think >> there is a 100% consistent, zero impact workaround for that missing feature >> in Github. >> >> My vote would be to leave things as they are. A little scripting could >> improve the situation quite a bit, although it still won't be as nice as >> being able to click on a link in Github. >> >> -Palmer Cox >> >> >> >> >> >> >> >> On Mon, Feb 17, 2014 at 9:02 PM, Scott Lawrence wrote: >> >>> What about having bors place the hash of each commit merged into the >>> auto-merge message? Then finding the PR, and any closed issues, consists of >>> backwards-searching in git-log. (Having bors modify commit messages would >>> probably cause major problems with hashes changing.) >>> >>> >>> On Tue, 18 Feb 2014, Nick Cameron wrote: >>> >>> Adding a few chars to a commit is not onerous and it is useful. You may >>>> not >>>> want it now, but perhaps you would if you had it to use. _I_ certainly >>>> want >>>> it, and I think others would find it useful if it was there to use. >>>> >>>> >>>> On Tue, Feb 18, 2014 at 1:37 PM, Steve Klabnik >>> >wrote: >>>> >>>> Yeah, I'm not into modifying every single commit, I basically only >>>>> want what bors already (apparently....) already does. >>>>> >>>>> >>>> >>> -- >>> Scott Lawrence >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thadguidry at gmail.com Mon Feb 17 20:24:11 2014 From: thadguidry at gmail.com (Thad Guidry) Date: Mon, 17 Feb 2014 22:24:11 -0600 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: Use a better graphical Git client ? (instead of Github itself) I personally just do my reviews through "Gitk" from my Git Bash install on Windows. Screenshot: Gitk.PNG On Linux, you might be better suited with other graphical Git clients, just for your own browsing of auto-merges, etc. Here's a dated article (2012) that covers some of them: http://www.maketecheasier.com/6-useful-graphical-git-client-for-linux/ On Mon, Feb 17, 2014 at 9:34 PM, Palmer Cox wrote: > If bors rewrites the commit messages, it means that if someone approves > commit ABC, what actually gets merged will be commit XYZ. This seems > potentially confusing to me and might also make it more difficult to start > with a reviewed commit on Github, such as > https://github.com/gentlefolk/rust/commit/37bf97a0f9cc764a19dfcff21d62384b2445dcbc, > and then track back to the actually merged commit in the history. > > I'm also not 100% sure, but I think git might have some issues with it as > well. If I do my work on a throwaway branch, after merging, will git know > that the changes in that branch were merged? Or, will git require me to do > a git branch -D to delete that branch? Are there other projects that > rewrite commit messages before merging? > > It seems to me that the ideal case would be for Github to add a link on > the commit view page back to the PR that merged that commit. I'd be > concerned that if Github adds support for such a feature in the future > that it might not work if we've re-written all of the commit messages in > the meantime. > > -Palmer Cox > > > > > On Mon, Feb 17, 2014 at 9:28 PM, Nick Cameron wrote: > >> You are right, it is about convenient access to the info, not the lack of >> info. 
>> >> What is problematic about bors rewriting commit messages and changing >> hashes? My workflow is to always work on throw away branches and merge back >> into master. Is it common to work on master and merge back on top of your >> PR? Or are there other problems with changing the hash? >> >> >> On Tue, Feb 18, 2014 at 3:22 PM, Palmer Cox wrote: >> >>> The PR# and who reviewed it is already available in the merge commit and >>> its already possible to take any arbitrary commit and to see which merge >>> commit merged it into master. So, I don't see any benefit in changing >>> anything about the merge commit. Unless I'm missing something, this isn't a >>> question of information not being available; its a question of that >>> information being inconvenient to get to. I think having bors rewrite the >>> commit messages would be somewhat problematic since it would change all the >>> hashes. So, I think the only solution would be to manually put the issue >>> number into the messages. However, many PRs aren't related to issues. So, >>> if some large percentages of commits are just annotated with "no issue" or >>> the like, it seems to really impact the utility of this change. Thus, I >>> think it would really have to be the PR# instead of an issue # since every >>> commit is related to a PR. However, I think it isn't a zero impact >>> procedure. I always right the changes I want to merge before opening the >>> PR. So, when I'm making my changes, I don't know what the eventual PR# is >>> going to be. Only after I open the PR with the commits already created, I >>> find out the PR#. So, then I'd have to rewrite all of the commit messages >>> and force push back into the branch to get the numbers right. Its not the >>> worst thing in the world, but it is an extra few steps. 
>>> >>> So, I strongly agree that the current procedure for finding the github >>> discussion is fairly unpleasant and I very much wish that Github had a >>> button that would take me to the PR that merged it. However, I don't think >>> there is a 100% consistent, zero impact workaround for that missing feature >>> in Github. >>> >>> My vote would be to leave things as they are. A little scripting could >>> improve the situation quite a bit, although it still won't be as nice as >>> being able to click on a link in Github. >>> >>> -Palmer Cox >>> >>> >>> >>> >>> >>> >>> >>> On Mon, Feb 17, 2014 at 9:02 PM, Scott Lawrence wrote: >>> >>>> What about having bors place the hash of each commit merged into the >>>> auto-merge message? Then finding the PR, and any closed issues, consists of >>>> backwards-searching in git-log. (Having bors modify commit messages would >>>> probably cause major problems with hashes changing.) >>>> >>>> >>>> On Tue, 18 Feb 2014, Nick Cameron wrote: >>>> >>>> Adding a few chars to a commit is not onerous and it is useful. You >>>>> may not >>>>> want it now, but perhaps you would if you had it to use. _I_ certainly >>>>> want >>>>> it, and I think others would find it useful if it was there to use. >>>>> >>>>> >>>>> On Tue, Feb 18, 2014 at 1:37 PM, Steve Klabnik >>>> >wrote: >>>>> >>>>> Yeah, I'm not into modifying every single commit, I basically only >>>>>> want what bors already (apparently....) already does. >>>>>> >>>>>> >>>>> >>>> -- >>>> Scott Lawrence >>>> >>>> _______________________________________________ >>>> Rust-dev mailing list >>>> Rust-dev at mozilla.org >>>> https://mail.mozilla.org/listinfo/rust-dev >>>> >>> >>> >> > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -- -Thad +ThadGuidry Thad on LinkedIn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From me at chrismorgan.info Mon Feb 17 21:11:24 2014 From: me at chrismorgan.info (Chris Morgan) Date: Tue, 18 Feb 2014 16:11:24 +1100 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: > You are right, it is about convenient access to the info, not the lack of > info. I often wish I could conveniently find this information--far too often it's hard to identify the PR when a breaking feature came in. Often I end up waiting for Corey to publish TWiR, with all the relevant issue numbers in it... but this makes it hard for me to include references in commit messages in my own repositories. More often than not, it's an issue in the clarity and distinctness of the pull requests rather than the code, but often finding the changeset is easy and the PR hard. > What is problematic about bors rewriting commit messages and changing hashes? > My workflow is to always work on throw away branches and merge back into > master. Is it common to work on master and merge back on top of your PR? Or > are there other problems with changing the hash? It is not uncommon to base one's work upon another branch (one's own or another's), depending on that landing before the aforementioned work can land, or alternatively landing the depended-upon feature as part of said work. What is at present approximately automatic there would become a manual rebasing-- certainly not insurmountable, but an added inconvenience. From simon.sapin at exyr.org Mon Feb 17 23:15:36 2014 From: simon.sapin at exyr.org (Simon Sapin) Date: Tue, 18 Feb 2014 07:15:36 +0000 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: <53030898.8050100@exyr.org> On 17/02/2014 23:50, Nick Cameron wrote: > At worst you could just use the issue number for the PR. In order to get a PR number you need to have commits to submit, with already-composed messages. > But I think all non-trivial commits _should_ have an issue associated. GitHub PRs *are* issues. 
Requiring *another* issue when someone has code to submit already is not useful IMO. -- Simon Sapin From rustphil at phildawes.net Tue Feb 18 00:16:22 2014 From: rustphil at phildawes.net (Phil Dawes) Date: Tue, 18 Feb 2014 08:16:22 +0000 Subject: [rust-dev] reader.lines() swallows io errors Message-ID: Hello everyone, I was cutting and pasting the following example from the std lib docs: http://static.rust-lang.org/doc/master/std/io/index.html Iterate over the lines of a file

use std::io::BufferedReader;
use std::io::File;

let path = Path::new("message.txt");
let mut file = BufferedReader::new(File::open(&path));
for line in file.lines() {
    print!("{}", line);
}

.. and I noticed that file.lines() swallows io errors. Given that this code will probably be copied a bunch by people new to the language (including me!) I was thinking it might be worth adding a comment to point this out or changing to remove the source of bugs. (BTW, thanks for Rust - I'm enjoying following the language and hope to use it as a safer replacement for C++ for latency sensitive code.) Cheers, Phil -------------- next part -------------- An HTML attachment was scrubbed... URL: From someone at mearie.org Tue Feb 18 03:02:16 2014 From: someone at mearie.org (Kang Seonghoon) Date: Tue, 18 Feb 2014 20:02:16 +0900 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: Message-ID: I think the following documentation describes this behavior pretty well. http://static.rust-lang.org/doc/master/std/io/trait.Buffer.html#method.lines http://static.rust-lang.org/doc/master/std/io/struct.Lines.html As the documentation puts it, this behavior is intentional, as it would be annoying for casual use otherwise. 
2014-02-18 17:16 GMT+09:00 Phil Dawes : > Hello everyone, > > I was cutting and pasting the following example from the std lib docs: > > http://static.rust-lang.org/doc/master/std/io/index.html > Iterate over the lines of a file > > use std::io::BufferedReader; > use std::io::File; > > let path = Path::new("message.txt"); > let mut file = BufferedReader::new(File::open(&path)); > for line in file.lines() { > print!("{}", line); > } > > .. and I noticed that file.lines() swallows io errors. Given that this code > will probably be copied a bunch by people new to the language (including > me!) I was thinking it might be worth adding a comment to point this out or > changing to remove the source of bugs. > > (BTW, thanks for Rust - I'm enjoying following the language and hope to use > it as a safer replacement for C++ for latency sensitive code.) > > Cheers, > > Phil > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -- -- Kang Seonghoon | Software Engineer, iPlateia Inc. | http://mearie.org/ -- Opinions expressed in this email do not necessarily represent the views of my employer. 
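[A note for anyone copying this example today: the `BufferedReader`/`File` API discussed in this thread no longer exists. In modern Rust, the equivalent iterator, `BufRead::lines()`, yields `io::Result<String>` rather than bare strings, so a failed read is surfaced to the caller instead of being swallowed. A minimal self-contained sketch of that pattern, using an in-memory `Cursor` in place of a file so it runs anywhere:]

```rust
use std::io::{BufRead, Cursor};

fn main() {
    // Cursor<&str> implements BufRead, standing in for a buffered file reader.
    let reader = Cursor::new("first line\nsecond line\n");
    for line in reader.lines() {
        // Each item is io::Result<String>: errors are surfaced, not swallowed.
        match line {
            Ok(text) => println!("{}", text),
            Err(e) => eprintln!("read error: {}", e),
        }
    }
}
```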
-- From dbau.pp at gmail.com Tue Feb 18 05:06:01 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Wed, 19 Feb 2014 00:06:01 +1100 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> Message-ID: <53035AB9.9060509@gmail.com> I wrote a quick & crappy script that automates going from commit -> PR:

#!/bin/sh
if [ $# -eq 0 ]; then
    echo 'Usage: which-pr COMMIT'
    exit 0
fi
git log master ^$1 --ancestry-path --oneline --merges | \
    tail -1 | \
    sed 's@.*#\([0-9]*\) : .*@http://github.com/mozilla/rust/pull/\1@'

Putting this in your path gives:

$ which-pr 6555b04
http://github.com/mozilla/rust/pull/12345
$ which-pr a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2
http://github.com/mozilla/rust/pull/12162

Of course, I'm sure there are corner cases that don't work, and it's definitely not as usable as something directly encoded in the commit. Huon On 18/02/14 13:17, Nick Cameron wrote: > Right, that is exactly what I want to see, just on every commit. For > example, > https://github.com/mozilla/rust/commit/a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2. > has none of that info and I can't see any way to get it (without the > kind of Git-fu suggested earlier). (Well, I can actually see that > r=nikomatsakis from the comments at the bottom, but I can't see how > that r+ came about, whether there was any discussion, whether there > was an issue where this was discussed or not, etc.). > > > On Tue, Feb 18, 2014 at 3:02 PM, Corey Richardson > wrote: > > https://github.com/mozilla/rust/commit/25147b2644ed569f16f22dc02d10a0a9b7b97c7e > seems to provide all of the information you are asking for? It > includes the text of the PR description, the PR number, the name of > the branch, and who reviewed it. I agree with your premise but I'm not > sure I agree that the current situation isn't adequate. But I wouldn't > be opposed to such a change. 
> > On Mon, Feb 17, 2014 at 8:54 PM, Nick Cameron > wrote: > > Whether we need issues for PRs is a separate discussion. There > has to be > > _something_ for every commit - either a PR or an issue, at the > least there > > needs to be an r+ somewhere. I would like to see who reviewed > something so I > > can ping someone with questions other than the author (if they > are offline). > > Any discussion is likely to be useful. > > > > So the question is how to find that, when necessary. GitHub > sometimes fails > > to point to the info. And when it does, you do not know if you > are missing > > more info. For the price of 6 characters in the commit message > (or "no > > issue"), we know with certainty where to find that info and that > we are not > > missing other potentially useful info. This would not slow down > development > > in any way. > > > > Note that this is orthogonal to use of version control - you > still need to > > know Git in order to get the commit message - it is about how > one can go > > easily from a commit message to meta-data about a commit. > > > > > > On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard > wrote: > >> > >> This is not going to work in the slightest. > >> > >> Most PRs don't have an associated issue. The pull request is > the issue. > >> And that's perfectly fine. There's no need to file an issue > separate from > >> the PR itself. Requiring a referenced issue for every single > commit would be > >> extremely cumbersome, serve no real purpose aside from aiding an > >> unwillingness to learn how source control works, and would > probably slow > >> down the rate of development of Rust. > >> > >> -Kevin > >> > >> On Feb 17, 2014, at 3:50 PM, Nick Cameron > wrote: > >> > >> At worst you could just use the issue number for the PR. But I > think all > >> non-trivial commits _should_ have an issue associated. For > really tiny > >> commits we could allow "no issue" or '#0' in the message. 
Just > so long as > >> the author is being explicit, I think that is OK. > >> > >> > >> On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence > > wrote: > >>> > >>> Maybe I'm misunderstanding? This would require that all commits be > >>> specifically associated with an issue. I don't have actual > stats, but > >>> briefly skimming recent commits and looking at the issue > tracker, a lot of > >>> commits can't be reasonably associated with an issue. This > requirement would > >>> either force people to create fake issues for each commit, or > to reference > >>> tangentially-related or overly-broad issues in commit > messages, neither of > >>> which is very useful. > >>> > >>> Referencing any conversation that leads to or influences a > commit is a > >>> good idea, but something this inflexible doesn't seem right. > >>> > >>> My 1.5?. > >>> > >>> > >>> On Tue, 18 Feb 2014, Nick Cameron wrote: > >>> > >>>> How would people feel about a requirement for all commit > messages to > >>>> have > >>>> an issue number in them? And could we make bors enforce that? > >>>> > >>>> The reason is that GitHub is very bad at being able to trace > back a > >>>> commit > >>>> to the issue it fixes (sometimes it manages, but not always). > Not being > >>>> able to find the discussion around a commit is extremely > annoying. > >>>> > >>>> Cheers, Nick > >>>> > >>> > >>> -- > >>> Scott Lawrence > >> > >> > >> _______________________________________________ > >> Rust-dev mailing list > >> Rust-dev at mozilla.org > >> https://mail.mozilla.org/listinfo/rust-dev > >> > >> > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... 
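[Editor's note: the `which-pr` shell script earlier in this thread extracts the PR number from a bors merge-commit subject with sed. The same extraction can be sketched in Rust; the `pr_url` helper and the exact subject format "auto merge of #NNNN : user/branch" are illustrative assumptions, not an actual tool from the thread:]

```rust
// Hypothetical Rust counterpart to the `which-pr` sed one-liner:
// pull the pull-request number out of a bors merge-commit subject
// such as "auto merge of #12345 : user/branch, r=reviewer".
fn pr_url(subject: &str) -> Option<String> {
    let start = subject.find('#')? + 1;
    let digits: String = subject[start..]
        .chars()
        .take_while(|c| c.is_ascii_digit())
        .collect();
    if digits.is_empty() {
        None
    } else {
        Some(format!("http://github.com/mozilla/rust/pull/{}", digits))
    }
}
```

Like the sed version, this relies entirely on the merge-commit subject following the bors convention; any other subject yields `None`.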
URL: From fredy at fredy.gr Tue Feb 18 13:52:45 2014 From: fredy at fredy.gr (Alfredos (fredy) Damkalis) Date: Tue, 18 Feb 2014 23:52:45 +0200 Subject: [rust-dev] lib: Datetime library Message-ID: <5303D62D.9050306@fredy.gr> Hi everyone, I am new to Rust and interested in writing a datetime library. I have already read most of the linked documents and code gathered by Luis de Bethencourt and others on the wiki page [1]. I have also read the thread [2] where Luis offered his help on writing this library. I have talked to Luis, and unfortunately he is busy these days, so I have offered to continue his work. My search for existing datetime library designs led me to JSR 310 [3], which was also mentioned in the previous thread [2]. This specification is in final draft state, and it seems to be the most complete treatment of datetime libraries out there. You can get a quick look at its basic ideas from a recent article [4] in Java Magazine. I am also aware of Ted Horst's work [5], where the last commits look like maintenance work. I am not sure whether he is going to expand his library; unfortunately, I haven't had the chance to talk to him. So I would like to know whether anyone else is working on this, and to read your comments on the JSR 310 choice. Thank you, fredy [1] https://github.com/mozilla/rust/wiki/Lib-datetime [2] https://mail.mozilla.org/pipermail/rust-dev/2013-September/005528.html [3] https://jcp.org/en/jsr/detail?id=310 [4] http://www.oracle.com/technetwork/articles/java/jf14-date-time-2125367.html [5] https://github.com/tedhorst/rust_datetime From banderson at mozilla.com Tue Feb 18 17:40:26 2014 From: banderson at mozilla.com (Brian Anderson) Date: Tue, 18 Feb 2014 17:40:26 -0800 Subject: [rust-dev] RFC: About the library stabilization process Message-ID: <53040B8A.5070807@mozilla.com> Hey there. I'd like to start the long process of stabilizing the libraries, and this is the opening salvo. 
This process and the tooling to support it have been percolating on the issue tracker for a while, but this is a summary of how I expect it to work. Assuming everybody feels good about it, we'll start trying to make some simple API's stable starting later this week or next.

# What is the stability index and stability attributes?

The stability index is a way of tracking, at the item level, which library features are safe to use backwards-compatibly. The intent is that, between feature gates and stability attributes, the checks for stability catch all backwards-incompatible uses of library features.

The stability index of any particular item can be manually applied with stability attributes, like `#[unstable]`.

These definitions are taken directly from the node.js documentation. node.js additionally defines the 'locked' and 'frozen' levels, but I don't think we need them yet.

* Stability: 0 - Deprecated

  This feature is known to be problematic, and changes are planned. Do not rely on it. Use of the feature may cause warnings. Backwards compatibility should not be expected.

* Stability: 1 - Experimental

  This feature was introduced recently, and may change or be removed in future versions. Please try it out and provide feedback. If it addresses a use-case that is important to you, tell the node core team.

* Stability: 2 - Unstable

  The API is in the process of settling, but has not yet had sufficient real-world testing to be considered stable. Backwards-compatibility will be maintained if reasonable.

* Stability: 3 - Stable

  The API has proven satisfactory, but cleanup in the underlying code may cause minor changes. Backwards-compatibility is guaranteed.

Crucially, once something becomes 'stable' its interface can no longer change outside of extenuating circumstances - reviewers will need to be vigilant about this.

All items may have a stability index: crates, modules, structs, enums, typedefs, fns, traits, impls, extern blocks; extern statics and fns, methods (of inherent impls only). 
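[Editor's note: of the attribute set proposed here, only `#[deprecated]` ended up usable in ordinary crates in modern Rust; `#[stable]` and `#[unstable]` became internal to the standard library. It still illustrates the item-level marking described above. A minimal sketch, with illustrative function names:]

```rust
// Item-level stability marking: the attribute travels with the item,
// and the compiler warns at every use site, matching "Use of the
// feature may cause warnings" in the Deprecated level above.
#[deprecated(note = "problematic; use `checked_add` instead")]
pub fn add(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

pub fn checked_add(a: u32, b: u32) -> Option<u32> {
    a.checked_add(b)
}

// A caller can still use the deprecated item (with a warning), or
// silence the lint for a scope, analogous to opting out of a
// stability check:
#[allow(deprecated)]
pub fn legacy_sum(a: u32, b: u32) -> u32 {
    add(a, b)
}
```

Because the attribute is attached to the item rather than the crate, each use site is checked individually, which is the property the stability lint described below depends on.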
Implementations of traits may have their own stability index, but their methods have the same stability as the trait's. # How is the stability index determined and checked? First, if the node has a stability attribute then it has that stability index. Second, the AST is traversed and stability index is propagated downward to any indexable node that isn't explicitly tagged. Reexported items maintain the stability they had in their original location. By default all nodes are *stable* - library authors have to opt-in to stability index tracking. This may end up being the wrong default and we'll want to revisit. During compilation the stabilization lint does at least the following checks: * All components of all paths, in all syntactic positions are checked, including in * use statements * trait implementation and inheritance * type parameter bounds * Casts to traits - checks the trait impl * Method calls - checks the method stability Note that not all of this is implemented, and we won't have complete tool support to start with. # What's the process for promoting libraries to stable? For 1.0 we're mostly concerned with promoting large portions of std to stable; most of the other libraries can be experimental or unstable. It's going to be a lengthy process, and it's going to require some iteration to figure out how it works best. The process 'leader' for a particular module will post a stabilization RFC to the mailing list. Within, she will state the API's under discussion, offer an overview of their functionality, the patterns used, related API's and the patterns they use, and finally offer specific suggestions about how the API needs to be improved or not before it's final. If she can confidently recommend that some API's can be tagged stable as-is then that helps everybody. 
After a week of discussion she will summarize the consensus, tag anything as stable that already has agreement, file and nominate issues for the remaining, and ensure that *somebody makes the changes*. During this process we don't necessarily need to arrive at a plan to stabilize everything that comes up; we just need to get the most crucial features stable, and make continual progress. We'll start by establishing a stability baseline, tagging most everything experimental or unstable, then proceed to the very simplest modules, like 'mem', 'ptr', 'cast', 'raw'. From dbau.pp at gmail.com Tue Feb 18 18:54:21 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Wed, 19 Feb 2014 13:54:21 +1100 Subject: [rust-dev] RFC: About the library stabilization process In-Reply-To: <53040B8A.5070807@mozilla.com> References: <53040B8A.5070807@mozilla.com> Message-ID: <53041CDD.9070603@gmail.com> There are some docs for these attributes: http://static.rust-lang.org/doc/master/rust.html#stability (which may need to be updated as we formalise exactly what each one means, and so on.) And, FWIW, the default currently implemented is unmarked nodes are unstable: that is, putting #[deny(unstable)] on an item will emit errors at the uses of functions etc. that lack an explicit stability attribute. Huon On 19/02/14 12:40, Brian Anderson wrote: > Hey there. > > I'd like to start the long process of stabilizing the libraries, and > this is the opening salvo. This process and the tooling to support it > has been percolating on the issue tracker for a while, but this is a > summary of how I expect it to work. Assuming everybody feels good > about it, we'll start trying to make some simple API's stable starting > later this week or next. > > > # What is the stability index and stability attributes? > > The stability index is a way of tracking, at the item level, which > library features are safe to use backwards-compatibly. 
The intent is > that the checks for stability catch all backwards-incompatible uses of > library features. Between feature gates and stability > > The stability index of any particular item can be manually applied > with stability attributes, like `#[unstable]`. > > These definitions are taken directly from the node.js documentation. > node.js additionally defines the 'locked' and 'frozen' levels, but I > don't think we need them yet. > > * Stability: 0 - Deprecated > > This feature is known to be problematic, and changes are > planned. Do not rely on it. Use of the feature may cause > warnings. Backwards > compatibility should not be expected. > > * Stability: 1 - Experimental > > This feature was introduced recently, and may change > or be removed in future versions. Please try it out and provide > feedback. > If it addresses a use-case that is important to you, tell the node > core team. > > * Stability: 2 - Unstable > > The API is in the process of settling, but has not yet had > sufficient real-world testing to be considered stable. > Backwards-compatibility > will be maintained if reasonable. > > * Stability: 3 - Stable > > The API has proven satisfactory, but cleanup in the underlying > code may cause minor changes. Backwards-compatibility is guaranteed. > > Crucially, once something becomes 'stable' its interface can no longer > change outside of extenuating circumstances - reviewers will need to > be vigilant about this. > > All items may have a stability index: crates, modules, structs, enums, > typedefs, fns, traits, impls, extern blocks; > extern statics and fns, methods (of inherent impls only). > > Implementations of traits may have their own stability index, but > their methods have the same stability as the trait's. > > > # How is the stability index determined and checked? > > First, if the node has a stability attribute then it has that > stability index. 
> > Second, the AST is traversed and stability index is propagated > downward to any indexable node that isn't explicitly tagged. > > Reexported items maintain the stability they had in their original > location. > > By default all nodes are *stable* - library authors have to opt-in to > stability index tracking. This may end up being the wrong default and > we'll want to revisit. > > During compilation the stabilization lint does at least the following > checks: > > * All components of all paths, in all syntactic positions are checked, > including in > * use statements > * trait implementation and inheritance > * type parameter bounds > * Casts to traits - checks the trait impl > * Method calls - checks the method stability > > Note that not all of this is implemented, and we won't have complete > tool support to start with. > > > # What's the process for promoting libraries to stable? > > For 1.0 we're mostly concerned with promoting large portions of std to > stable; most of the other libraries can be experimental or unstable. > It's going to be a lengthy process, and it's going to require some > iteration to figure out how it works best. > > The process 'leader' for a particular module will post a stabilization > RFC to the mailing list. Within, she will state the API's under > discussion, offer an overview of their functionality, the patterns > used, related API's and the patterns they use, and finally offer > specific suggestions about how the API needs to be improved or not > before it's final. If she can confidently recommend that some API's > can be tagged stable as-is then that helps everybody. > > After a week of discussion she will summarize the consensus, tag > anything as stable that already has agreement, file and nominate > issues for the remaining, and ensure that *somebody makes the changes*. 
> > During this process we don't necessarily need to arrive at a plan to > stabilize everything that comes up; we just need to get the most > crucial features stable, and make continual progress. > > We'll start by establishing a stability baseline, tagging most > everything experimental or unstable, then proceed to the very simplest > modules, like 'mem', 'ptr', 'cast', 'raw'. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From ben.striegel at gmail.com Tue Feb 18 19:38:02 2014 From: ben.striegel at gmail.com (Benjamin Striegel) Date: Tue, 18 Feb 2014 22:38:02 -0500 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: <53035AB9.9060509@gmail.com> References: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> <53035AB9.9060509@gmail.com> Message-ID: Having read this week's meeting notes on this topic: > we'll get bors to warn people about not putting the issue number in commit messages Can anyone elaborate on what this will entail? By "commit message" do you mean the honest-to-god git commit message, or the Github PR message, or both? What form will the warning take, and how easy will it be to ignore it in order to accommodate one-off contributors submitting typo fixes? 
On Tue, Feb 18, 2014 at 8:06 AM, Huon Wilson wrote: > I wrote a quick & crappy script that automates going from commit -> PR: > > #!/bin/sh > > if [ $# -eq 0 ]; then > echo 'Usage: which-pr COMMIT' > exit 0 > fi > > git log master ^$1 --ancestry-path --oneline --merges | \ > tail -1 | \ > sed 's at .*#\([0-9]*\) : .*@http://github.com/mozilla/rust/pull/\1@' > > Putting this in your path gives: > > $ which-pr 6555b04 > http://github.com/mozilla/rust/pull/12345 > > $ which-pr a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2 > http://github.com/mozilla/rust/pull/12162 > > Of course, I'm sure there are corner cases that don't work, and it's > definitely not as usable as something directly encoded in the commit. > > > Huon > > > > On 18/02/14 13:17, Nick Cameron wrote: > > Right, that is exactly what I want to see, just on every commit. For > example, > https://github.com/mozilla/rust/commit/a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2. > has none of that info and I can't see any way to get it (without the kind > of Git-fu suggested earlier). (Well, I can actually see that r=nikomatsakis > from the comments at the bottom, but I can't see how that r+ came about, > whether there was any discussion, whether there was an issue where this was > discussed or not, etc.). > > > On Tue, Feb 18, 2014 at 3:02 PM, Corey Richardson wrote: > >> >> https://github.com/mozilla/rust/commit/25147b2644ed569f16f22dc02d10a0a9b7b97c7e >> seems to provide all of the information you are asking for? It >> includes the text of the PR description, the PR number, the name of >> the branch, and who reviewed it. I agree with your premise but I'm not >> sure I agree that the current situation isn't adequate. But I wouldn't >> be opposed to such a change. >> >> On Mon, Feb 17, 2014 at 8:54 PM, Nick Cameron wrote: >> > Whether we need issues for PRs is a separate discussion. There has to be >> > _something_ for every commit - either a PR or an issue, at the least >> there >> > needs to be an r+ somewhere. 
I would like to see who reviewed something >> so I >> > can ping someone with questions other than the author (if they are >> offline). >> > Any discussion is likely to be useful. >> > >> > So the question is how to find that, when necessary. GitHub sometimes >> fails >> > to point to the info. And when it does, you do not know if you are >> missing >> > more info. For the price of 6 characters in the commit message (or "no >> > issue"), we know with certainty where to find that info and that we are >> not >> > missing other potentially useful info. This would not slow down >> development >> > in any way. >> > >> > Note that this is orthogonal to use of version control - you still need >> to >> > know Git in order to get the commit message - it is about how one can go >> > easily from a commit message to meta-data about a commit. >> > >> > >> > On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard wrote: >> >> >> >> This is not going to work in the slightest. >> >> >> >> Most PRs don't have an associated issue. The pull request is the issue. >> >> And that's perfectly fine. There's no need to file an issue separate >> from >> >> the PR itself. Requiring a referenced issue for every single commit >> would be >> >> extremely cumbersome, serve no real purpose aside from aiding an >> >> unwillingness to learn how source control works, and would probably >> slow >> >> down the rate of development of Rust. >> >> >> >> -Kevin >> >> >> >> On Feb 17, 2014, at 3:50 PM, Nick Cameron wrote: >> >> >> >> At worst you could just use the issue number for the PR. But I think >> all >> >> non-trivial commits _should_ have an issue associated. For really tiny >> >> commits we could allow "no issue" or '#0' in the message. Just so long >> as >> >> the author is being explicit, I think that is OK. >> >> >> >> >> >> On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence >> wrote: >> >>> >> >>> Maybe I'm misunderstanding? 
This would require that all commits be >> >>> specifically associated with an issue. I don't have actual stats, but >> >>> briefly skimming recent commits and looking at the issue tracker, a >> lot of >> >>> commits can't be reasonably associated with an issue. This >> requirement would >> >>> either force people to create fake issues for each commit, or to >> reference >> >>> tangentially-related or overly-broad issues in commit messages, >> neither of >> >>> which is very useful. >> >>> >> >>> Referencing any conversation that leads to or influences a commit is a >> >>> good idea, but something this inflexible doesn't seem right. >> >>> >> >>> My 1.5?. >> >>> >> >>> >> >>> On Tue, 18 Feb 2014, Nick Cameron wrote: >> >>> >> >>>> How would people feel about a requirement for all commit messages to >> >>>> have >> >>>> an issue number in them? And could we make bors enforce that? >> >>>> >> >>>> The reason is that GitHub is very bad at being able to trace back a >> >>>> commit >> >>>> to the issue it fixes (sometimes it manages, but not always). Not >> being >> >>>> able to find the discussion around a commit is extremely annoying. >> >>>> >> >>>> Cheers, Nick >> >>>> >> >>> >> >>> -- >> >>> Scott Lawrence >> >> >> >> >> >> _______________________________________________ >> >> Rust-dev mailing list >> >> Rust-dev at mozilla.org >> >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> >> > >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > >> > > > > _______________________________________________ > Rust-dev mailing listRust-dev at mozilla.orghttps://mail.mozilla.org/listinfo/rust-dev > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jack at metajack.im Tue Feb 18 19:56:28 2014 From: jack at metajack.im (Jack Moffitt) Date: Tue, 18 Feb 2014 20:56:28 -0700 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: <2C23D4D4-6C27-40F3-9325-34BA05E29F56@sb.org> <53035AB9.9060509@gmail.com> Message-ID: > Can anyone elaborate on what this will entail? By "commit message" do you > mean the honest-to-god git commit message, or the Github PR message, or > both? What form will the warning take, and how easy will it be to ignore it > in order to accommodate one-off contributors submitting typo fixes? Servo uses a bot called highfive[1] which adds warnings to PRs. I think this is what everyone had in mind. A good example of this is here: https://github.com/mozilla/servo/pull/1709 It also detects if you changed any unsafe code. Aside from the value of warning the contributor, reviewers are immediately alerted that the review may need extra attention, etc. jack. [1] Its name comes from the fact that it welcomes new contributors, i.e., giving them a high five for helping out. Josh Matthews created and maintains it, and a small army of other bots. From bascule at gmail.com Tue Feb 18 22:12:40 2014 From: bascule at gmail.com (Tony Arcieri) Date: Tue, 18 Feb 2014 22:12:40 -0800 Subject: [rust-dev] issue numbers in commit messages In-Reply-To: References: Message-ID: On Mon, Feb 17, 2014 at 7:34 PM, Palmer Cox wrote: > If bors rewrites the commit messages, it means that if someone approves > commit ABC, what actually gets merged will be commit XYZ. This seems > potentially confusing to me and might also make it more difficult to start > with a reviewed commit on Github, such as > https://github.com/gentlefolk/rust/commit/37bf97a0f9cc764a19dfcff21d62384b2445dcbc, > and then track back to the actually merged commit in the history. > You can use git-notes to annotate commits without changing the commit hash. 
These are reflected in Github's UI too: https://www.kernel.org/pub/software/scm/git/docs/git-notes.html -- Tony Arcieri -------------- next part -------------- An HTML attachment was scrubbed... URL: From rustphil at phildawes.net Tue Feb 18 23:52:04 2014 From: rustphil at phildawes.net (Phil Dawes) Date: Wed, 19 Feb 2014 07:52:04 +0000 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: Message-ID: Is that not a big problem for production code? I think I'd prefer the default case to be to crash the task than deal with a logic bug. The existence of library functions that swallow errors makes reviewing code and reasoning about failure cases a lot more difficult. On Tue, Feb 18, 2014 at 11:02 AM, Kang Seonghoon wrote: > I think the following documentations describe this behavior pretty well. > > > http://static.rust-lang.org/doc/master/std/io/trait.Buffer.html#method.lines > http://static.rust-lang.org/doc/master/std/io/struct.Lines.html > > As the documentation puts, this behavior is intentional as it would be > annoying for casual uses otherwise. > > 2014-02-18 17:16 GMT+09:00 Phil Dawes : > > Hello everyone, > > > > I was cutting and pasting the following example from the std lib docs: > > > > http://static.rust-lang.org/doc/master/std/io/index.html > > Iterate over the lines of a file > > > > use std::io::BufferedReader; > > use std::io::File; > > > > let path = Path::new("message.txt"); > > let mut file = BufferedReader::new(File::open(&path)); > > for line in file.lines() { > > print!("{}", line); > > } > > > > .. and I noticed that file.lines() swallows io errors. Given that this > code > > will probably be copied a bunch by people new to the language (including > > me!) I was thinking it might be worth adding a comment to point this out > or > > changing to remove the source of bugs. 
> > > > (BTW, thanks for Rust - I'm enjoying following the language and hope to > use > > it as a safer replacement for C++ for latency sensitive code.) > > > > Cheers, > > > > Phil > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > > > -- > -- Kang Seonghoon | Software Engineer, iPlateia Inc. | http://mearie.org/ > -- Opinions expressed in this email do not necessarily represent the > views of my employer. > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkuehn at cmu.edu Wed Feb 19 00:36:16 2014 From: tkuehn at cmu.edu (Tim Kuehn) Date: Wed, 19 Feb 2014 00:36:16 -0800 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: Message-ID: On Tue, Feb 18, 2014 at 11:52 PM, Phil Dawes wrote: > Is that not a big problem for production code? I think I'd prefer the > default case to be to crash the task than deal with a logic bug. > > The existence of library functions that swallow errors makes reviewing > code and reasoning about failure cases a lot more difficult. > There are other methods that allow one to read lines and handle error cases. `Lines` is a convenience method by design. > On Tue, Feb 18, 2014 at 11:02 AM, Kang Seonghoon wrote: > >> I think the following documentations describe this behavior pretty well. >> >> >> http://static.rust-lang.org/doc/master/std/io/trait.Buffer.html#method.lines >> http://static.rust-lang.org/doc/master/std/io/struct.Lines.html >> >> As the documentation puts, this behavior is intentional as it would be >> annoying for casual uses otherwise. 
>> >> 2014-02-18 17:16 GMT+09:00 Phil Dawes : >> > Hello everyone, >> > >> > I was cutting and pasting the following example from the std lib docs: >> > >> > http://static.rust-lang.org/doc/master/std/io/index.html >> > Iterate over the lines of a file >> > >> > use std::io::BufferedReader; >> > use std::io::File; >> > >> > let path = Path::new("message.txt"); >> > let mut file = BufferedReader::new(File::open(&path)); >> > for line in file.lines() { >> > print!("{}", line); >> > } >> > >> > .. and I noticed that file.lines() swallows io errors. Given that this >> code >> > will probably be copied a bunch by people new to the language (including >> > me!) I was thinking it might be worth adding a comment to point this >> out or >> > changing to remove the source of bugs. >> > >> > (BTW, thanks for Rust - I'm enjoying following the language and hope to >> use >> > it as a safer replacement for C++ for latency sensitive code.) >> > >> > Cheers, >> > >> > Phil >> > >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > >> >> >> >> -- >> -- Kang Seonghoon | Software Engineer, iPlateia Inc. | http://mearie.org/ >> -- Opinions expressed in this email do not necessarily represent the >> views of my employer. >> -- >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rustphil at phildawes.net Wed Feb 19 01:14:26 2014 From: rustphil at phildawes.net (Phil Dawes) Date: Wed, 19 Feb 2014 09:14:26 +0000 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: Message-ID: I understand, but it seems like a bad tradeoff to me in a language with safety as a primary feature. 
'.lines()' looks like the way to do line iteration in rust, so people will use it without thinking especially as it is part of the std.io introduction. On Wed, Feb 19, 2014 at 8:36 AM, Tim Kuehn wrote: > > > > On Tue, Feb 18, 2014 at 11:52 PM, Phil Dawes wrote: > >> Is that not a big problem for production code? I think I'd prefer the >> default case to be to crash the task than deal with a logic bug. >> >> The existence of library functions that swallow errors makes reviewing >> code and reasoning about failure cases a lot more difficult. >> > There are other methods that allow one to read lines and handle error > cases. `Lines` is a convenience method by design. > > >> On Tue, Feb 18, 2014 at 11:02 AM, Kang Seonghoon wrote: >> >>> I think the following documentations describe this behavior pretty well. >>> >>> >>> http://static.rust-lang.org/doc/master/std/io/trait.Buffer.html#method.lines >>> http://static.rust-lang.org/doc/master/std/io/struct.Lines.html >>> >>> As the documentation puts, this behavior is intentional as it would be >>> annoying for casual uses otherwise. >>> >>> 2014-02-18 17:16 GMT+09:00 Phil Dawes : >>> > Hello everyone, >>> > >>> > I was cutting and pasting the following example from the std lib docs: >>> > >>> > http://static.rust-lang.org/doc/master/std/io/index.html >>> > Iterate over the lines of a file >>> > >>> > use std::io::BufferedReader; >>> > use std::io::File; >>> > >>> > let path = Path::new("message.txt"); >>> > let mut file = BufferedReader::new(File::open(&path)); >>> > for line in file.lines() { >>> > print!("{}", line); >>> > } >>> > >>> > .. and I noticed that file.lines() swallows io errors. Given that this >>> code >>> > will probably be copied a bunch by people new to the language >>> (including >>> > me!) I was thinking it might be worth adding a comment to point this >>> out or >>> > changing to remove the source of bugs. 
>>> > >>> > (BTW, thanks for Rust - I'm enjoying following the language and hope >>> to use >>> > it as a safer replacement for C++ for latency sensitive code.) >>> > >>> > Cheers, >>> > >>> > Phil >>> > >>> > >>> > _______________________________________________ >>> > Rust-dev mailing list >>> > Rust-dev at mozilla.org >>> > https://mail.mozilla.org/listinfo/rust-dev >>> > >>> >>> >>> >>> -- >>> -- Kang Seonghoon | Software Engineer, iPlateia Inc. | >>> http://mearie.org/ >>> -- Opinions expressed in this email do not necessarily represent the >>> views of my employer. >>> -- >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From armin.ronacher at active-4.com Wed Feb 19 01:28:35 2014 From: armin.ronacher at active-4.com (Armin Ronacher) Date: Wed, 19 Feb 2014 15:28:35 +0600 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: Message-ID: <53047943.4060605@active-4.com> Hi, It could probably be improved by having a Reader::operate method which wraps some code in a closure and will return a result that is a failure if any of the wrapped IO operations failed: use std::io::BufferedReader; use std::io::File; let path = Path::new("message.txt"); let mut file = BufferedReader::new(File::open(&path)); let nil_result = file.operate(|| { for line in file.lines() { print!("{}", line); } Ok(()) }).unwrap(); Regards, Armin From leebraid at gmail.com Wed Feb 19 01:32:46 2014 From: leebraid at gmail.com (Lee Braiden) Date: Wed, 19 Feb 2014 09:32:46 +0000 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: Message-ID: <53047A3E.4020006@gmail.com> On 19/02/14 09:14, Phil Dawes wrote: > I understand, but it seems like a bad tradeoff to me in a language > with safety as a primary feature. 
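[Editor's note: the `Reader::operate` method Armin proposes above never existed in the library. The idea can be sketched in modern Rust; the `with_lines` and `collect_lines` helper names are invented for illustration:]

```rust
use std::io::{self, BufRead, Cursor};

// Invented helper in the spirit of the proposed `operate`: drive the
// line iterator inside the function and return the first I/O error,
// so the closure body never sees a failed read.
fn with_lines<R: BufRead>(reader: R, mut f: impl FnMut(&str)) -> io::Result<()> {
    for line in reader.lines() {
        f(&line?); // stop and report rather than silently ending
    }
    Ok(())
}

// Example use: collect lines from an in-memory reader.
fn collect_lines(text: &str) -> io::Result<Vec<String>> {
    let mut out = Vec::new();
    with_lines(Cursor::new(text.to_owned()), |l| out.push(l.to_string()))?;
    Ok(out)
}
```

The closure stays oblivious to errors, while the caller gets a single `io::Result` to inspect, which is the trade-off the `operate` proposal is after.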
> '.lines()' looks like the way to do line iteration in rust, so people > will use it without thinking especially as it is part of the std.io > introduction. > I agree. IO is certainly an area where you want to know that ALL errors are propagated upwards. Even with the best of intentions, it's very easy to write IO functions that handle 20 cases, across 5 levels of abstraction, and inadvertently not handle all failure conditions for one particular case. It may not even matter for the purposes of the original code, but can later bite you pretty hard, when you come to add functionality, and find that something is missing from your API to handle the more general case correctly. At the moment, Rust would have to get a red mark on this table: http://en.wikipedia.org/wiki/Comparison_of_programming_languages#Failsafe_I.2FO_and_system_calls Which would be quite sad, considering its goals. IMO, this should be acknowledged as the simple oversight that it is, and fixed. -- Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From dbau.pp at gmail.com Wed Feb 19 01:45:15 2014 From: dbau.pp at gmail.com (Huon Wilson) Date: Wed, 19 Feb 2014 20:45:15 +1100 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: <53047A3E.4020006@gmail.com> References: <53047A3E.4020006@gmail.com> Message-ID: <53047D2B.1020305@gmail.com> It would need a red mark because *one* convenience method doesn't provide the full error handling suite?? Surely Rust at least gets "Some". That said, the failing-reader and/or `operate` techniques sound like they might be nice, on first blush. (I'll note that `operate` is very similar to the "conditions" we used to use for all IO errors.) Huon On 19/02/14 20:32, Lee Braiden wrote: > On 19/02/14 09:14, Phil Dawes wrote: >> I understand, but it seems like a bad tradeoff to me in a language >> with safety as a primary feature. 
>> '.lines()' looks like the way to do line iteration in rust, so people >> will use it without thinking especially as it is part of the std.io >> introduction. >> > > I agree. IO is certainly an area where you want to know that ALL > errors are propagated upwards. Even with the best of intentions, it's > very easy to write IO functions that handle 20 cases, across 5 levels > of abstraction, and inadvertently not handle all failure conditions > for one particular case. It may not even matter for the purposes of > the original code, but can later bite you pretty hard, when you come > to add functionality, and find that something is missing from your API > to handle the more general case correctly. > > > At the moment, Rust would have to get a red mark on this table: > > http://en.wikipedia.org/wiki/Comparison_of_programming_languages#Failsafe_I.2FO_and_system_calls > > Which would be quite sad, considering its goals. > > > IMO, this should be acknowledged as the simple oversight that it is, > and fixed. > > > -- > Lee > > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jurily at gmail.com Wed Feb 19 02:37:15 2014 From: jurily at gmail.com (=?UTF-8?Q?Gy=C3=B6rgy_Andrasek?=) Date: Wed, 19 Feb 2014 10:37:15 +0000 Subject: [rust-dev] RFC: About the library stabilization process In-Reply-To: <53040B8A.5070807@mozilla.com> References: <53040B8A.5070807@mozilla.com> Message-ID: On Wed, Feb 19, 2014 at 1:40 AM, Brian Anderson wrote: > Backwards-compatibility is guaranteed. Does that include ABI compatibility? > Second, the AST is traversed and stability index is propagated downward to any indexable node that isn't explicitly tagged. Should it be an error to use lower stability internally? > By default all nodes are *stable* - library authors have to opt-in to stability index tracking. 
This may end up being the wrong default and we'll want to revisit. Oh dear god no. `stable` should be *earned* over time, otherwise it's meaningless. The compiler should treat untagged code as `unstable`, `experimental` or a special `untagged` stability and accept that level by default. > For 1.0 we're mostly concerned with promoting large portions of std to stable Requesting permission to spam the issue tracker with minor annoyances we definitely don't want to live with forever. The C FFI <-> idiomatic Rust bridge is especially painful. From mneumann at ntecs.de Wed Feb 19 03:31:03 2014 From: mneumann at ntecs.de (Michael Neumann) Date: Wed, 19 Feb 2014 12:31:03 +0100 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: Message-ID: <530495F7.7080106@ntecs.de> Am 19.02.2014 08:52, schrieb Phil Dawes: > Is that not a big problem for production code? I think I'd prefer the > default case to be to crash the task than deal with a logic bug. > > The existence of library functions that swallow errors makes reviewing > code and reasoning about failure cases a lot more difficult. This is why I proposed a FailureReader: https://github.com/mozilla/rust/issues/12368 Regards, Michael From daniel.fath7 at gmail.com Wed Feb 19 05:43:40 2014 From: daniel.fath7 at gmail.com (Daniel Fath) Date: Wed, 19 Feb 2014 14:43:40 +0100 Subject: [rust-dev] Rust-dev Digest, Vol 44, Issue 70 In-Reply-To: References: Message-ID: > Hi everyone, > So I would like to know if anyone else working on this and to read your > comments on the JSR 310 choice. I was interested but day job, master thesis and my own XML parser got in the way :( If you are starting I'd love to join and help you, but I have a LOT of reading on my plate. From what I've gathered you best start from ISO-8601 add the Olson time database and basically build from there. 
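[Editorial note on the FailureReader proposal above (issue #12368): the idea — crash the task on an I/O error rather than silently swallow it — can be sketched in modern Rust as an iterator adaptor. All names here are invented for illustration; this is not the API proposed in the issue.]

```rust
// Editorial sketch (modern Rust) of the FailureReader idea: panic on
// the first read error instead of letting the caller ignore it.
// The names `PanickingLines`/`panicking_lines` are hypothetical.
use std::io::{self, BufRead};

/// Wraps a line iterator of `io::Result<String>` and panics on the
/// first error, matching "crash the task rather than swallow errors".
struct PanickingLines<I> {
    inner: I,
}

impl<I: Iterator<Item = io::Result<String>>> Iterator for PanickingLines<I> {
    type Item = String;
    fn next(&mut self) -> Option<String> {
        self.inner
            .next()
            .map(|r| r.expect("I/O error while reading line"))
    }
}

fn panicking_lines<R: BufRead>(reader: R) -> PanickingLines<io::Lines<R>> {
    PanickingLines { inner: reader.lines() }
}

fn main() {
    // An in-memory reader stands in for a file.
    let data = io::Cursor::new("one\ntwo\n");
    let lines: Vec<String> = panicking_lines(data).collect();
    println!("{}", lines.len());
}
```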
Sincerely, -Y- On Wed, Feb 19, 2014 at 4:38 AM, wrote: > Send Rust-dev mailing list submissions to > rust-dev at mozilla.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://mail.mozilla.org/listinfo/rust-dev > or, via email, send a message with subject or body 'help' to > rust-dev-request at mozilla.org > > You can reach the person managing the list at > rust-dev-owner at mozilla.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Rust-dev digest..." > > > Today's Topics: > > 1. lib: Datetime library (Alfredos (fredy) Damkalis) > 2. RFC: About the library stabilization process (Brian Anderson) > 3. Re: RFC: About the library stabilization process (Huon Wilson) > 4. Re: issue numbers in commit messages (Benjamin Striegel) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 18 Feb 2014 23:52:45 +0200 > From: "Alfredos (fredy) Damkalis" > To: rust-dev at mozilla.org > Subject: [rust-dev] lib: Datetime library > Message-ID: <5303D62D.9050306 at fredy.gr> > Content-Type: text/plain; charset=ISO-8859-1 > > Hi everyone, > > I am new to rust and interested in writing datetime library. > > I have already read most of the linked documents and code gathered by > Luis de Bethencourt and others in wiki page [1]. > > I have also read the thread [2] where Luis offered his help on writing > this library. I have talked to Luis and unfortunately he is busy these > days, so I have offered to continue his work. > > Searching about datetime libraries ended up to JSR 310 [3] which was > also mentioned in the previous thread [2]. This specification is in > final draft state and it seems to be the most complete one out there > about datetime libraries. You can take a quick look at its basic ideas > in a recent article [4] in java magazine. > > I am also aware of Ted Horst's work[5] where the last commits look like > maintenance work. 
I am not sure if he is going to expand his library, > unfortunately I didn't have the chance to talk to him. > > So I would like to know if anyone else working on this and to read your > comments on the JSR 310 choice. > > Thank you, > fredy > > [1] https://github.com/mozilla/rust/wiki/Lib-datetime > [2] https://mail.mozilla.org/pipermail/rust-dev/2013-September/005528.html > [3] https://jcp.org/en/jsr/detail?id=310 > [4] > http://www.oracle.com/technetwork/articles/java/jf14-date-time-2125367.html > [5] https://github.com/tedhorst/rust_datetime > > > ------------------------------ > > Message: 2 > Date: Tue, 18 Feb 2014 17:40:26 -0800 > From: Brian Anderson > To: "rust-dev at mozilla.org" > Subject: [rust-dev] RFC: About the library stabilization process > Message-ID: <53040B8A.5070807 at mozilla.com> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > Hey there. > > I'd like to start the long process of stabilizing the libraries, and > this is the opening salvo. This process and the tooling to support it > has been percolating on the issue tracker for a while, but this is a > summary of how I expect it to work. Assuming everybody feels good about > it, we'll start trying to make some simple API's stable starting later > this week or next. > > > # What is the stability index and stability attributes? > > The stability index is a way of tracking, at the item level, which > library features are safe to use backwards-compatibly. The intent is > that the checks for stability catch all backwards-incompatible uses of > library features. Between feature gates and stability > > The stability index of any particular item can be manually applied with > stability attributes, like `#[unstable]`. > > These definitions are taken directly from the node.js documentation. > node.js additionally defines the 'locked' and 'frozen' levels, but I > don't think we need them yet. 
> > * Stability: 0 - Deprecated > > This feature is known to be problematic, and changes are > planned. Do not rely on it. Use of the feature may cause > warnings. Backwards > compatibility should not be expected. > > * Stability: 1 - Experimental > > This feature was introduced recently, and may change > or be removed in future versions. Please try it out and provide > feedback. > If it addresses a use-case that is important to you, tell the node > core team. > > * Stability: 2 - Unstable > > The API is in the process of settling, but has not yet had > sufficient real-world testing to be considered stable. > Backwards-compatibility > will be maintained if reasonable. > > * Stability: 3 - Stable > > The API has proven satisfactory, but cleanup in the underlying > code may cause minor changes. Backwards-compatibility is guaranteed. > > Crucially, once something becomes 'stable' its interface can no longer > change outside of extenuating circumstances - reviewers will need to be > vigilant about this. > > All items may have a stability index: crates, modules, structs, enums, > typedefs, fns, traits, impls, extern blocks; > extern statics and fns, methods (of inherent impls only). > > Implementations of traits may have their own stability index, but their > methods have the same stability as the trait's. > > > # How is the stability index determined and checked? > > First, if the node has a stability attribute then it has that stability > index. > > Second, the AST is traversed and stability index is propagated downward > to any indexable node that isn't explicitly tagged. > > Reexported items maintain the stability they had in their original > location. > > By default all nodes are *stable* - library authors have to opt-in to > stability index tracking. This may end up being the wrong default and > we'll want to revisit. 
> > During compilation the stabilization lint does at least the following > checks: > > * All components of all paths, in all syntactic positions are checked, > including in > * use statements > * trait implementation and inheritance > * type parameter bounds > * Casts to traits - checks the trait impl > * Method calls - checks the method stability > > Note that not all of this is implemented, and we won't have complete > tool support to start with. > > > # What's the process for promoting libraries to stable? > > For 1.0 we're mostly concerned with promoting large portions of std to > stable; most of the other libraries can be experimental or unstable. > It's going to be a lengthy process, and it's going to require some > iteration to figure out how it works best. > > The process 'leader' for a particular module will post a stabilization > RFC to the mailing list. Within, she will state the API's under > discussion, offer an overview of their functionality, the patterns used, > related API's and the patterns they use, and finally offer specific > suggestions about how the API needs to be improved or not before it's > final. If she can confidently recommend that some API's can be tagged > stable as-is then that helps everybody. > > After a week of discussion she will summarize the consensus, tag > anything as stable that already has agreement, file and nominate issues > for the remaining, and ensure that *somebody makes the changes*. > > During this process we don't necessarily need to arrive at a plan to > stabilize everything that comes up; we just need to get the most crucial > features stable, and make continual progress. > > We'll start by establishing a stability baseline, tagging most > everything experimental or unstable, then proceed to the very simplest > modules, like 'mem', 'ptr', 'cast', 'raw'. 
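[Editorial note on how this played out: in released Rust the staged-API attributes described in this RFC (`#[stable]`, `#[unstable]`) ended up restricted to the standard library itself via the `staged_api` feature; the stability marker ordinary library authors can use today is `#[deprecated]`. A small sketch of that attribute:]

```rust
// Editorial sketch (modern Rust). `#[stable]`/`#[unstable]` are internal
// to std; ordinary crates mark items with `#[deprecated]`, and callers
// get a compiler warning rather than an error.
#[deprecated(since = "0.2.0", note = "use `area` instead")]
fn old_area(w: u32, h: u32) -> u32 {
    w * h
}

fn area(w: u32, h: u32) -> u32 {
    w * h
}

fn main() {
    // Calling a deprecated item still compiles; the lint can be silenced.
    #[allow(deprecated)]
    let a = old_area(3, 4);
    assert_eq!(a, area(3, 4));
    println!("{}", a);
}
```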
> > > > ------------------------------ > > Message: 3 > Date: Wed, 19 Feb 2014 13:54:21 +1100 > From: Huon Wilson > To: rust-dev at mozilla.org > Subject: Re: [rust-dev] RFC: About the library stabilization process > Message-ID: <53041CDD.9070603 at gmail.com> > Content-Type: text/plain; charset=UTF-8; format=flowed > > There are some docs for these attributes: > http://static.rust-lang.org/doc/master/rust.html#stability (which may > need to be updated as we formalise exactly what each one means, and so on.) > > And, FWIW, the default currently implemented is unmarked nodes are > unstable: that is, putting #[deny(unstable)] on an item will emit errors > at the uses of functions etc. that lack an explicit stability attribute. > > Huon > > > On 19/02/14 12:40, Brian Anderson wrote: > > Hey there. > > > > I'd like to start the long process of stabilizing the libraries, and > > this is the opening salvo. This process and the tooling to support it > > has been percolating on the issue tracker for a while, but this is a > > summary of how I expect it to work. Assuming everybody feels good > > about it, we'll start trying to make some simple API's stable starting > > later this week or next. > > > > > > # What is the stability index and stability attributes? > > > > The stability index is a way of tracking, at the item level, which > > library features are safe to use backwards-compatibly. The intent is > > that the checks for stability catch all backwards-incompatible uses of > > library features. Between feature gates and stability > > > > The stability index of any particular item can be manually applied > > with stability attributes, like `#[unstable]`. > > > > These definitions are taken directly from the node.js documentation. > > node.js additionally defines the 'locked' and 'frozen' levels, but I > > don't think we need them yet. > > > > * Stability: 0 - Deprecated > > > > This feature is known to be problematic, and changes are > > planned. Do not rely on it. 
Use of the feature may cause > > warnings. Backwards > > compatibility should not be expected. > > > > * Stability: 1 - Experimental > > > > This feature was introduced recently, and may change > > or be removed in future versions. Please try it out and provide > > feedback. > > If it addresses a use-case that is important to you, tell the node > > core team. > > > > * Stability: 2 - Unstable > > > > The API is in the process of settling, but has not yet had > > sufficient real-world testing to be considered stable. > > Backwards-compatibility > > will be maintained if reasonable. > > > > * Stability: 3 - Stable > > > > The API has proven satisfactory, but cleanup in the underlying > > code may cause minor changes. Backwards-compatibility is guaranteed. > > > > Crucially, once something becomes 'stable' its interface can no longer > > change outside of extenuating circumstances - reviewers will need to > > be vigilant about this. > > > > All items may have a stability index: crates, modules, structs, enums, > > typedefs, fns, traits, impls, extern blocks; > > extern statics and fns, methods (of inherent impls only). > > > > Implementations of traits may have their own stability index, but > > their methods have the same stability as the trait's. > > > > > > # How is the stability index determined and checked? > > > > First, if the node has a stability attribute then it has that > > stability index. > > > > Second, the AST is traversed and stability index is propagated > > downward to any indexable node that isn't explicitly tagged. > > > > Reexported items maintain the stability they had in their original > > location. > > > > By default all nodes are *stable* - library authors have to opt-in to > > stability index tracking. This may end up being the wrong default and > > we'll want to revisit. 
> > > > During compilation the stabilization lint does at least the following > > checks: > > > > * All components of all paths, in all syntactic positions are checked, > > including in > > * use statements > > * trait implementation and inheritance > > * type parameter bounds > > * Casts to traits - checks the trait impl > > * Method calls - checks the method stability > > > > Note that not all of this is implemented, and we won't have complete > > tool support to start with. > > > > > > # What's the process for promoting libraries to stable? > > > > For 1.0 we're mostly concerned with promoting large portions of std to > > stable; most of the other libraries can be experimental or unstable. > > It's going to be a lengthy process, and it's going to require some > > iteration to figure out how it works best. > > > > The process 'leader' for a particular module will post a stabilization > > RFC to the mailing list. Within, she will state the API's under > > discussion, offer an overview of their functionality, the patterns > > used, related API's and the patterns they use, and finally offer > > specific suggestions about how the API needs to be improved or not > > before it's final. If she can confidently recommend that some API's > > can be tagged stable as-is then that helps everybody. > > > > After a week of discussion she will summarize the consensus, tag > > anything as stable that already has agreement, file and nominate > > issues for the remaining, and ensure that *somebody makes the changes*. > > > > During this process we don't necessarily need to arrive at a plan to > > stabilize everything that comes up; we just need to get the most > > crucial features stable, and make continual progress. > > > > We'll start by establishing a stability baseline, tagging most > > everything experimental or unstable, then proceed to the very simplest > > modules, like 'mem', 'ptr', 'cast', 'raw'. 
> > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > ------------------------------ > > Message: 4 > Date: Tue, 18 Feb 2014 22:38:02 -0500 > From: Benjamin Striegel > Cc: "rust-dev at mozilla.org" > Subject: Re: [rust-dev] issue numbers in commit messages > Message-ID: > < > CAAvrL-kYJPrUdicYQLNhrtp2e5ZhmF1ywn96WN35BjRLBMvS+w at mail.gmail.com> > Content-Type: text/plain; charset="iso-8859-1" > > Having read this week's meeting notes on this topic: > > > we'll get bors to warn people about not putting the issue number in > commit messages > > Can anyone elaborate on what this will entail? By "commit message" do you > mean the honest-to-god git commit message, or the Github PR message, or > both? What form will the warning take, and how easy will it be to ignore it > in order to accomodate one-off contributors submitting typo fixes? > > > On Tue, Feb 18, 2014 at 8:06 AM, Huon Wilson wrote: > > > I wrote a quick & crappy script that automates going from commit -> PR: > > > > #!/bin/sh > > > > if [ $# -eq 0 ]; then > > echo 'Usage: which-pr COMMIT' > > exit 0 > > fi > > > > git log master ^$1 --ancestry-path --oneline --merges | \ > > tail -1 | \ > > sed 's at .*#\([0-9]*\) : .*@ > http://github.com/mozilla/rust/pull/\1@' > > > > Putting this in your path gives: > > > > $ which-pr 6555b04 > > http://github.com/mozilla/rust/pull/12345 > > > > $ which-pr a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2 > > http://github.com/mozilla/rust/pull/12162 > > > > Of course, I'm sure there are corner cases that don't work, and it's > > definitely not as usable as something directly encoded in the commit. > > > > > > Huon > > > > > > > > On 18/02/14 13:17, Nick Cameron wrote: > > > > Right, that is exactly what I want to see, just on every commit. For > > example, > > > https://github.com/mozilla/rust/commit/a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2 > . 
> > has none of that info and I can't see any way to get it (without the kind > > of Git-fu suggested earlier). (Well, I can actually see that > r=nikomatsakis > > from the comments at the bottom, but I can't see how that r+ came about, > > whether there was any discussion, whether there was an issue where this > was > > discussed or not, etc.). > > > > > > On Tue, Feb 18, 2014 at 3:02 PM, Corey Richardson >wrote: > > > >> > >> > https://github.com/mozilla/rust/commit/25147b2644ed569f16f22dc02d10a0a9b7b97c7e > >> seems to provide all of the information you are asking for? It > >> includes the text of the PR description, the PR number, the name of > >> the branch, and who reviewed it. I agree with your premise but I'm not > >> sure I agree that the current situation isn't adequate. But I wouldn't > >> be opposed to such a change. > >> > >> On Mon, Feb 17, 2014 at 8:54 PM, Nick Cameron > wrote: > >> > Whether we need issues for PRs is a separate discussion. There has to > be > >> > _something_ for every commit - either a PR or an issue, at the least > >> there > >> > needs to be an r+ somewhere. I would like to see who reviewed > something > >> so I > >> > can ping someone with questions other than the author (if they are > >> offline). > >> > Any discussion is likely to be useful. > >> > > >> > So the question is how to find that, when necessary. GitHub sometimes > >> fails > >> > to point to the info. And when it does, you do not know if you are > >> missing > >> > more info. For the price of 6 characters in the commit message (or "no > >> > issue"), we know with certainty where to find that info and that we > are > >> not > >> > missing other potentially useful info. This would not slow down > >> development > >> > in any way. 
> >> > > >> > Note that this is orthogonal to use of version control - you still > need > >> to > >> > know Git in order to get the commit message - it is about how one can > go > >> > easily from a commit message to meta-data about a commit. > >> > > >> > > >> > On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard wrote: > >> >> > >> >> This is not going to work in the slightest. > >> >> > >> >> Most PRs don't have an associated issue. The pull request is the > issue. > >> >> And that's perfectly fine. There's no need to file an issue separate > >> from > >> >> the PR itself. Requiring a referenced issue for every single commit > >> would be > >> >> extremely cumbersome, serve no real purpose aside from aiding an > >> >> unwillingness to learn how source control works, and would probably > >> slow > >> >> down the rate of development of Rust. > >> >> > >> >> -Kevin > >> >> > >> >> On Feb 17, 2014, at 3:50 PM, Nick Cameron > wrote: > >> >> > >> >> At worst you could just use the issue number for the PR. But I think > >> all > >> >> non-trivial commits _should_ have an issue associated. For really > tiny > >> >> commits we could allow "no issue" or '#0' in the message. Just so > long > >> as > >> >> the author is being explicit, I think that is OK. > >> >> > >> >> > >> >> On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence > >> wrote: > >> >>> > >> >>> Maybe I'm misunderstanding? This would require that all commits be > >> >>> specifically associated with an issue. I don't have actual stats, > but > >> >>> briefly skimming recent commits and looking at the issue tracker, a > >> lot of > >> >>> commits can't be reasonably associated with an issue. This > >> requirement would > >> >>> either force people to create fake issues for each commit, or to > >> reference > >> >>> tangentially-related or overly-broad issues in commit messages, > >> neither of > >> >>> which is very useful. 
> >> >>> > >> >>> Referencing any conversation that leads to or influences a commit > is a > >> >>> good idea, but something this inflexible doesn't seem right. > >> >>> > >> >>> My 1.5?. > >> >>> > >> >>> > >> >>> On Tue, 18 Feb 2014, Nick Cameron wrote: > >> >>> > >> >>>> How would people feel about a requirement for all commit messages > to > >> >>>> have > >> >>>> an issue number in them? And could we make bors enforce that? > >> >>>> > >> >>>> The reason is that GitHub is very bad at being able to trace back a > >> >>>> commit > >> >>>> to the issue it fixes (sometimes it manages, but not always). Not > >> being > >> >>>> able to find the discussion around a commit is extremely annoying. > >> >>>> > >> >>>> Cheers, Nick > >> >>>> > >> >>> > >> >>> -- > >> >>> Scott Lawrence > >> >> > >> >> > >> >> _______________________________________________ > >> >> Rust-dev mailing list > >> >> Rust-dev at mozilla.org > >> >> https://mail.mozilla.org/listinfo/rust-dev > >> >> > >> >> > >> > > >> > > >> > _______________________________________________ > >> > Rust-dev mailing list > >> > Rust-dev at mozilla.org > >> > https://mail.mozilla.org/listinfo/rust-dev > >> > > >> > > > > > > > > _______________________________________________ > > Rust-dev mailing listRust-dev at mozilla.orghttps:// > mail.mozilla.org/listinfo/rust-dev > > > > > > > > _______________________________________________ > > Rust-dev mailing list > > Rust-dev at mozilla.org > > https://mail.mozilla.org/listinfo/rust-dev > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: < > http://mail.mozilla.org/pipermail/rust-dev/attachments/20140218/769ba219/attachment.html > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > > ------------------------------ > > End of Rust-dev Digest, Vol 44, Issue 70 > **************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan at xeberon.net Wed Feb 19 05:46:41 2014 From: gaetan at xeberon.net (Gaetan) Date: Wed, 19 Feb 2014 14:46:41 +0100 Subject: [rust-dev] Rust-dev Digest, Vol 44, Issue 70 In-Reply-To: References: Message-ID: I also love to be part of it if you set up a github project I'll be glad to send some PL on this subject ----- Gaetan 2014-02-19 14:43 GMT+01:00 Daniel Fath : > > Hi everyone, > > > So I would like to know if anyone else working on this and to read your > > comments on the JSR 310 choice. > > > I was interested but day job, master thesis and my own XML parser got in > the way :( > > If you are starting I'd love to join and help you, but I have a LOT of > reading on my plate. From what I've gathered you best > start from ISO-8601 add the Olson time database and basically build from > there. > > Sincerely, > -Y- > > > On Wed, Feb 19, 2014 at 4:38 AM, wrote: > >> Send Rust-dev mailing list submissions to >> rust-dev at mozilla.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> https://mail.mozilla.org/listinfo/rust-dev >> or, via email, send a message with subject or body 'help' to >> rust-dev-request at mozilla.org >> >> You can reach the person managing the list at >> rust-dev-owner at mozilla.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of Rust-dev digest..." >> >> >> Today's Topics: >> >> 1. lib: Datetime library (Alfredos (fredy) Damkalis) >> 2. 
RFC: About the library stabilization process (Brian Anderson) >> 3. Re: RFC: About the library stabilization process (Huon Wilson) >> 4. Re: issue numbers in commit messages (Benjamin Striegel) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Tue, 18 Feb 2014 23:52:45 +0200 >> From: "Alfredos (fredy) Damkalis" >> To: rust-dev at mozilla.org >> Subject: [rust-dev] lib: Datetime library >> Message-ID: <5303D62D.9050306 at fredy.gr> >> Content-Type: text/plain; charset=ISO-8859-1 >> >> Hi everyone, >> >> I am new to rust and interested in writing datetime library. >> >> I have already read most of the linked documents and code gathered by >> Luis de Bethencourt and others in wiki page [1]. >> >> I have also read the thread [2] where Luis offered his help on writing >> this library. I have talked to Luis and unfortunately he is busy these >> days, so I have offered to continue his work. >> >> Searching about datetime libraries ended up to JSR 310 [3] which was >> also mentioned in the previous thread [2]. This specification is in >> final draft state and it seems to be the most complete one out there >> about datetime libraries. You can take a quick look at its basic ideas >> in a recent article [4] in java magazine. >> >> I am also aware of Ted Horst's work[5] where the last commits look like >> maintenance work. I am not sure if he is going to expand his library, >> unfortunately I didn't have the chance to talk to him. >> >> So I would like to know if anyone else working on this and to read your >> comments on the JSR 310 choice. 
>> >> Thank you, >> fredy >> >> [1] https://github.com/mozilla/rust/wiki/Lib-datetime >> [2] >> https://mail.mozilla.org/pipermail/rust-dev/2013-September/005528.html >> [3] https://jcp.org/en/jsr/detail?id=310 >> [4] >> >> http://www.oracle.com/technetwork/articles/java/jf14-date-time-2125367.html >> [5] https://github.com/tedhorst/rust_datetime >> >> >> ------------------------------ >> >> Message: 2 >> Date: Tue, 18 Feb 2014 17:40:26 -0800 >> From: Brian Anderson >> To: "rust-dev at mozilla.org" >> Subject: [rust-dev] RFC: About the library stabilization process >> Message-ID: <53040B8A.5070807 at mozilla.com> >> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >> >> Hey there. >> >> I'd like to start the long process of stabilizing the libraries, and >> this is the opening salvo. This process and the tooling to support it >> has been percolating on the issue tracker for a while, but this is a >> summary of how I expect it to work. Assuming everybody feels good about >> it, we'll start trying to make some simple API's stable starting later >> this week or next. >> >> >> # What is the stability index and stability attributes? >> >> The stability index is a way of tracking, at the item level, which >> library features are safe to use backwards-compatibly. The intent is >> that the checks for stability catch all backwards-incompatible uses of >> library features. Between feature gates and stability >> >> The stability index of any particular item can be manually applied with >> stability attributes, like `#[unstable]`. >> >> These definitions are taken directly from the node.js documentation. >> node.js additionally defines the 'locked' and 'frozen' levels, but I >> don't think we need them yet. >> >> * Stability: 0 - Deprecated >> >> This feature is known to be problematic, and changes are >> planned. Do not rely on it. Use of the feature may cause >> warnings. Backwards >> compatibility should not be expected. 
>> >> * Stability: 1 - Experimental >> >> This feature was introduced recently, and may change >> or be removed in future versions. Please try it out and provide >> feedback. >> If it addresses a use-case that is important to you, tell the node >> core team. >> >> * Stability: 2 - Unstable >> >> The API is in the process of settling, but has not yet had >> sufficient real-world testing to be considered stable. >> Backwards-compatibility >> will be maintained if reasonable. >> >> * Stability: 3 - Stable >> >> The API has proven satisfactory, but cleanup in the underlying >> code may cause minor changes. Backwards-compatibility is guaranteed. >> >> Crucially, once something becomes 'stable' its interface can no longer >> change outside of extenuating circumstances - reviewers will need to be >> vigilant about this. >> >> All items may have a stability index: crates, modules, structs, enums, >> typedefs, fns, traits, impls, extern blocks; >> extern statics and fns, methods (of inherent impls only). >> >> Implementations of traits may have their own stability index, but their >> methods have the same stability as the trait's. >> >> >> # How is the stability index determined and checked? >> >> First, if the node has a stability attribute then it has that stability >> index. >> >> Second, the AST is traversed and stability index is propagated downward >> to any indexable node that isn't explicitly tagged. >> >> Reexported items maintain the stability they had in their original >> location. >> >> By default all nodes are *stable* - library authors have to opt-in to >> stability index tracking. This may end up being the wrong default and >> we'll want to revisit. 
>> >> During compilation the stabilization lint does at least the following >> checks: >> >> * All components of all paths, in all syntactic positions are checked, >> including in >> * use statements >> * trait implementation and inheritance >> * type parameter bounds >> * Casts to traits - checks the trait impl >> * Method calls - checks the method stability >> >> Note that not all of this is implemented, and we won't have complete >> tool support to start with. >> >> >> # What's the process for promoting libraries to stable? >> >> For 1.0 we're mostly concerned with promoting large portions of std to >> stable; most of the other libraries can be experimental or unstable. >> It's going to be a lengthy process, and it's going to require some >> iteration to figure out how it works best. >> >> The process 'leader' for a particular module will post a stabilization >> RFC to the mailing list. Within, she will state the API's under >> discussion, offer an overview of their functionality, the patterns used, >> related API's and the patterns they use, and finally offer specific >> suggestions about how the API needs to be improved or not before it's >> final. If she can confidently recommend that some API's can be tagged >> stable as-is then that helps everybody. >> >> After a week of discussion she will summarize the consensus, tag >> anything as stable that already has agreement, file and nominate issues >> for the remaining, and ensure that *somebody makes the changes*. >> >> During this process we don't necessarily need to arrive at a plan to >> stabilize everything that comes up; we just need to get the most crucial >> features stable, and make continual progress. >> >> We'll start by establishing a stability baseline, tagging most >> everything experimental or unstable, then proceed to the very simplest >> modules, like 'mem', 'ptr', 'cast', 'raw'. 
>> >> >> >> ------------------------------ >> >> Message: 3 >> Date: Wed, 19 Feb 2014 13:54:21 +1100 >> From: Huon Wilson >> To: rust-dev at mozilla.org >> Subject: Re: [rust-dev] RFC: About the library stabilization process >> Message-ID: <53041CDD.9070603 at gmail.com> >> Content-Type: text/plain; charset=UTF-8; format=flowed >> >> There are some docs for these attributes: >> http://static.rust-lang.org/doc/master/rust.html#stability (which may >> need to be updated as we formalise exactly what each one means, and so >> on.) >> >> And, FWIW, the default currently implemented is unmarked nodes are >> unstable: that is, putting #[deny(unstable)] on an item will emit errors >> at the uses of functions etc. that lack an explicit stability attribute. >> >> Huon >> >> >> On 19/02/14 12:40, Brian Anderson wrote: >> > Hey there. >> > >> > I'd like to start the long process of stabilizing the libraries, and >> > this is the opening salvo. This process and the tooling to support it >> > has been percolating on the issue tracker for a while, but this is a >> > summary of how I expect it to work. Assuming everybody feels good >> > about it, we'll start trying to make some simple API's stable starting >> > later this week or next. >> > >> > >> > # What is the stability index and stability attributes? >> > >> > The stability index is a way of tracking, at the item level, which >> > library features are safe to use backwards-compatibly. The intent is >> > that the checks for stability catch all backwards-incompatible uses of >> > library features. Between feature gates and stability >> > >> > The stability index of any particular item can be manually applied >> > with stability attributes, like `#[unstable]`. >> > >> > These definitions are taken directly from the node.js documentation. >> > node.js additionally defines the 'locked' and 'frozen' levels, but I >> > don't think we need them yet. 
>> > >> > * Stability: 0 - Deprecated >> > >> > This feature is known to be problematic, and changes are >> > planned. Do not rely on it. Use of the feature may cause >> > warnings. Backwards >> > compatibility should not be expected. >> > >> > * Stability: 1 - Experimental >> > >> > This feature was introduced recently, and may change >> > or be removed in future versions. Please try it out and provide >> > feedback. >> > If it addresses a use-case that is important to you, tell the node >> > core team. >> > >> > * Stability: 2 - Unstable >> > >> > The API is in the process of settling, but has not yet had >> > sufficient real-world testing to be considered stable. >> > Backwards-compatibility >> > will be maintained if reasonable. >> > >> > * Stability: 3 - Stable >> > >> > The API has proven satisfactory, but cleanup in the underlying >> > code may cause minor changes. Backwards-compatibility is >> guaranteed. >> > >> > Crucially, once something becomes 'stable' its interface can no longer >> > change outside of extenuating circumstances - reviewers will need to >> > be vigilant about this. >> > >> > All items may have a stability index: crates, modules, structs, enums, >> > typedefs, fns, traits, impls, extern blocks; >> > extern statics and fns, methods (of inherent impls only). >> > >> > Implementations of traits may have their own stability index, but >> > their methods have the same stability as the trait's. >> > >> > >> > # How is the stability index determined and checked? >> > >> > First, if the node has a stability attribute then it has that >> > stability index. >> > >> > Second, the AST is traversed and stability index is propagated >> > downward to any indexable node that isn't explicitly tagged. >> > >> > Reexported items maintain the stability they had in their original >> > location. >> > >> > By default all nodes are *stable* - library authors have to opt-in to >> > stability index tracking. 
This may end up being the wrong default and >> > we'll want to revisit. >> > >> > During compilation the stabilization lint does at least the following >> > checks: >> > >> > * All components of all paths, in all syntactic positions are checked, >> > including in >> > * use statements >> > * trait implementation and inheritance >> > * type parameter bounds >> > * Casts to traits - checks the trait impl >> > * Method calls - checks the method stability >> > >> > Note that not all of this is implemented, and we won't have complete >> > tool support to start with. >> > >> > >> > # What's the process for promoting libraries to stable? >> > >> > For 1.0 we're mostly concerned with promoting large portions of std to >> > stable; most of the other libraries can be experimental or unstable. >> > It's going to be a lengthy process, and it's going to require some >> > iteration to figure out how it works best. >> > >> > The process 'leader' for a particular module will post a stabilization >> > RFC to the mailing list. Within, she will state the API's under >> > discussion, offer an overview of their functionality, the patterns >> > used, related API's and the patterns they use, and finally offer >> > specific suggestions about how the API needs to be improved or not >> > before it's final. If she can confidently recommend that some API's >> > can be tagged stable as-is then that helps everybody. >> > >> > After a week of discussion she will summarize the consensus, tag >> > anything as stable that already has agreement, file and nominate >> > issues for the remaining, and ensure that *somebody makes the changes*. >> > >> > During this process we don't necessarily need to arrive at a plan to >> > stabilize everything that comes up; we just need to get the most >> > crucial features stable, and make continual progress. 
>> > >> > We'll start by establishing a stability baseline, tagging most >> > everything experimental or unstable, then proceed to the very simplest >> > modules, like 'mem', 'ptr', 'cast', 'raw'. >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> ------------------------------ >> >> Message: 4 >> Date: Tue, 18 Feb 2014 22:38:02 -0500 >> From: Benjamin Striegel >> Cc: "rust-dev at mozilla.org" >> Subject: Re: [rust-dev] issue numbers in commit messages >> Message-ID: >> < >> CAAvrL-kYJPrUdicYQLNhrtp2e5ZhmF1ywn96WN35BjRLBMvS+w at mail.gmail.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> Having read this week's meeting notes on this topic: >> >> > we'll get bors to warn people about not putting the issue number in >> commit messages >> >> Can anyone elaborate on what this will entail? By "commit message" do you >> mean the honest-to-god git commit message, or the Github PR message, or >> both? What form will the warning take, and how easy will it be to ignore >> it >> in order to accommodate one-off contributors submitting typo fixes? >> >> >> On Tue, Feb 18, 2014 at 8:06 AM, Huon Wilson wrote: >> >> > I wrote a quick & crappy script that automates going from commit -> PR: >> > >> > #!/bin/sh >> > >> > if [ $# -eq 0 ]; then >> > echo 'Usage: which-pr COMMIT' >> > exit 0 >> > fi >> > >> > git log master ^$1 --ancestry-path --oneline --merges | \ >> > tail -1 | \ >> > sed 's@.*#\([0-9]*\) : .*@http://github.com/mozilla/rust/pull/\1@' >> > >> > Putting this in your path gives: >> > >> > $ which-pr 6555b04 >> > http://github.com/mozilla/rust/pull/12345 >> > >> > $ which-pr a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2 >> > http://github.com/mozilla/rust/pull/12162 >> > >> > Of course, I'm sure there are corner cases that don't work, and it's >> > definitely not as usable as something directly encoded in the commit.
>> > >> > >> > Huon >> > >> > >> > >> > On 18/02/14 13:17, Nick Cameron wrote: >> > >> > Right, that is exactly what I want to see, just on every commit. For >> > example, >> > >> https://github.com/mozilla/rust/commit/a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2 >> . >> > has none of that info and I can't see any way to get it (without the >> kind >> > of Git-fu suggested earlier). (Well, I can actually see that >> r=nikomatsakis >> > from the comments at the bottom, but I can't see how that r+ came about, >> > whether there was any discussion, whether there was an issue where this >> was >> > discussed or not, etc.). >> > >> > >> > On Tue, Feb 18, 2014 at 3:02 PM, Corey Richardson > >wrote: >> > >> >> >> >> >> https://github.com/mozilla/rust/commit/25147b2644ed569f16f22dc02d10a0a9b7b97c7e >> >> seems to provide all of the information you are asking for? It >> >> includes the text of the PR description, the PR number, the name of >> >> the branch, and who reviewed it. I agree with your premise but I'm not >> >> sure I agree that the current situation isn't adequate. But I wouldn't >> >> be opposed to such a change. >> >> >> >> On Mon, Feb 17, 2014 at 8:54 PM, Nick Cameron >> wrote: >> >> > Whether we need issues for PRs is a separate discussion. There has >> to be >> >> > _something_ for every commit - either a PR or an issue, at the least >> >> there >> >> > needs to be an r+ somewhere. I would like to see who reviewed >> something >> >> so I >> >> > can ping someone with questions other than the author (if they are >> >> offline). >> >> > Any discussion is likely to be useful. >> >> > >> >> > So the question is how to find that, when necessary. GitHub sometimes >> >> fails >> >> > to point to the info. And when it does, you do not know if you are >> >> missing >> >> > more info. 
For the price of 6 characters in the commit message (or >> "no >> >> > issue"), we know with certainty where to find that info and that we >> are >> >> not >> >> > missing other potentially useful info. This would not slow down >> >> development >> >> > in any way. >> >> > >> >> > Note that this is orthogonal to use of version control - you still >> need >> >> to >> >> > know Git in order to get the commit message - it is about how one >> can go >> >> > easily from a commit message to meta-data about a commit. >> >> > >> >> > >> >> > On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard >> wrote: >> >> >> >> >> >> This is not going to work in the slightest. >> >> >> >> >> >> Most PRs don't have an associated issue. The pull request is the >> issue. >> >> >> And that's perfectly fine. There's no need to file an issue separate >> >> from >> >> >> the PR itself. Requiring a referenced issue for every single commit >> >> would be >> >> >> extremely cumbersome, serve no real purpose aside from aiding an >> >> >> unwillingness to learn how source control works, and would probably >> >> slow >> >> >> down the rate of development of Rust. >> >> >> >> >> >> -Kevin >> >> >> >> >> >> On Feb 17, 2014, at 3:50 PM, Nick Cameron >> wrote: >> >> >> >> >> >> At worst you could just use the issue number for the PR. But I think >> >> all >> >> >> non-trivial commits _should_ have an issue associated. For really >> tiny >> >> >> commits we could allow "no issue" or '#0' in the message. Just so >> long >> >> as >> >> >> the author is being explicit, I think that is OK. >> >> >> >> >> >> >> >> >> On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence >> >> wrote: >> >> >>> >> >> >>> Maybe I'm misunderstanding? This would require that all commits be >> >> >>> specifically associated with an issue. I don't have actual stats, >> but >> >> >>> briefly skimming recent commits and looking at the issue tracker, a >> >> lot of >> >> >>> commits can't be reasonably associated with an issue. 
This >> >> requirement would >> >> >>> either force people to create fake issues for each commit, or to >> >> reference >> >> >>> tangentially-related or overly-broad issues in commit messages, >> >> neither of >> >> >>> which is very useful. >> >> >>> >> >> >>> Referencing any conversation that leads to or influences a commit >> is a >> >> >>> good idea, but something this inflexible doesn't seem right. >> >> >>> >> >> >>> My 1.5?. >> >> >>> >> >> >>> >> >> >>> On Tue, 18 Feb 2014, Nick Cameron wrote: >> >> >>> >> >> >>>> How would people feel about a requirement for all commit messages >> to >> >> >>>> have >> >> >>>> an issue number in them? And could we make bors enforce that? >> >> >>>> >> >> >>>> The reason is that GitHub is very bad at being able to trace back >> a >> >> >>>> commit >> >> >>>> to the issue it fixes (sometimes it manages, but not always). Not >> >> being >> >> >>>> able to find the discussion around a commit is extremely annoying. >> >> >>>> >> >> >>>> Cheers, Nick >> >> >>>> >> >> >>> >> >> >>> -- >> >> >>> Scott Lawrence >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> Rust-dev mailing list >> >> >> Rust-dev at mozilla.org >> >> >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> >> >> >> >> >> > >> >> > >> >> > _______________________________________________ >> >> > Rust-dev mailing list >> >> > Rust-dev at mozilla.org >> >> > https://mail.mozilla.org/listinfo/rust-dev >> >> > >> >> >> > >> > >> > >> > _______________________________________________ >> > Rust-dev mailing listRust-dev at mozilla.orghttps:// >> mail.mozilla.org/listinfo/rust-dev >> > >> > >> > >> > _______________________________________________ >> > Rust-dev mailing list >> > Rust-dev at mozilla.org >> > https://mail.mozilla.org/listinfo/rust-dev >> > >> > >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
>> URL: < >> http://mail.mozilla.org/pipermail/rust-dev/attachments/20140218/769ba219/attachment.html >> > >> >> ------------------------------ >> >> Subject: Digest Footer >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> >> ------------------------------ >> >> End of Rust-dev Digest, Vol 44, Issue 70 >> **************************************** >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ted.horst at earthlink.net Wed Feb 19 07:21:06 2014 From: ted.horst at earthlink.net (Ted Horst) Date: Wed, 19 Feb 2014 09:21:06 -0600 Subject: [rust-dev] lib: Datetime library In-Reply-To: <5303D62D.9050306@fredy.gr> References: <5303D62D.9050306@fredy.gr> Message-ID: <1B405EB1-87A9-44B4-822A-C56DE59E2CA0@earthlink.net> On 2014-02-18, at 15:52, Alfredos (fredy) Damkalis wrote: > Hi everyone, > > I am new to Rust and interested in writing a datetime library. > > I have already read most of the linked documents and code gathered by > Luis de Bethencourt and others in the wiki page [1]. > > I have also read the thread [2] where Luis offered his help on writing > this library. I have talked to Luis and unfortunately he is busy these > days, so I have offered to continue his work. > > Searching for datetime libraries led me to JSR 310 [3], which was > also mentioned in the previous thread [2]. This specification is in > final draft state and it seems to be the most complete one out there > about datetime libraries. You can take a quick look at its basic ideas > in a recent article [4] in Java Magazine. > > I am also aware of Ted Horst's work [5] where the last commits look like
I am not sure if he is going to expand his library, > unfortunately I didn't have the chance to talk to him. > My stuff is just for playing around with the language, it's not very interesting. I would be willing to contribute to a datetime library if you want some help. Ted > So I would like to know if anyone else is working on this and to read your > comments on the JSR 310 choice. > > Thank you, > fredy > > [1] https://github.com/mozilla/rust/wiki/Lib-datetime > [2] https://mail.mozilla.org/pipermail/rust-dev/2013-September/005528.html > [3] https://jcp.org/en/jsr/detail?id=310 > [4] > http://www.oracle.com/technetwork/articles/java/jf14-date-time-2125367.html > [5] https://github.com/tedhorst/rust_datetime > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From steve at steveklabnik.com Wed Feb 19 09:23:23 2014 From: steve at steveklabnik.com (Steve Klabnik) Date: Wed, 19 Feb 2014 09:23:23 -0800 Subject: [rust-dev] RFC: About the library stabilization process In-Reply-To: References: <53040B8A.5070807@mozilla.com> Message-ID: I would also agree that yes, this is the wrong default. Things should default to Stability 1 unless otherwise marked. If you don't care about stability tracking, this seems completely reasonable to properly communicate your intentions. From palmercox at gmail.com Wed Feb 19 09:26:26 2014 From: palmercox at gmail.com (Palmer Cox) Date: Wed, 19 Feb 2014 12:26:26 -0500 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: <530495F7.7080106@ntecs.de> References: <530495F7.7080106@ntecs.de> Message-ID: Why not just modify the Lines iterator to return values of IoResult<~str>? All the caller has to do to unwrap that is to use if_ok!() or try!() on the returned value, so, it's basically just as easy to use and it means that errors are handled consistently.
I don't see why this particular use case calls for a completely different error handling strategy than any other IO code. -Palmer Cox On Wed, Feb 19, 2014 at 6:31 AM, Michael Neumann wrote: > > Am 19.02.2014 08:52, schrieb Phil Dawes: > > Is that not a big problem for production code? I think I'd prefer the >> default case to be to crash the task than deal with a logic bug. >> >> The existence of library functions that swallow errors makes reviewing >> code and reasoning about failure cases a lot more difficult. >> > > This is why I proposed a FailureReader: https://github.com/mozilla/ > rust/issues/12368 > > Regards, > > Michael > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Wed Feb 19 09:35:25 2014 From: corey at octayn.net (Corey Richardson) Date: Wed, 19 Feb 2014 12:35:25 -0500 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: <530495F7.7080106@ntecs.de> Message-ID: Keep in mind that "end of file" and "would block" are considered "errors"... On Wed, Feb 19, 2014 at 12:26 PM, Palmer Cox wrote: > Why not just modify the Lines iterator to return values of IoResult<~str>? > All the caller has to do to unwrap that is to use if_ok!() or try!() on the > returned value, so, its basically just as easy to use and it means that > errors are handled consistently. I don't see why this particular use case > calls for a completely different error handling strategy than any other IO > code. > > -Palmer Cox > > > > On Wed, Feb 19, 2014 at 6:31 AM, Michael Neumann wrote: > >> >> Am 19.02.2014 08:52, schrieb Phil Dawes: >> >> Is that not a big problem for production code? I think I'd prefer the >>> default case to be to crash the task than deal with a logic bug. 
>>> >>> The existence of library functions that swallow errors makes reviewing >>> code and reasoning about failure cases a lot more difficult. >>> >> >> This is why I proposed a FailureReader: https://github.com/mozilla/ >> rust/issues/12368 >> >> Regards, >> >> Michael >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From palmercox at gmail.com Wed Feb 19 09:40:41 2014 From: palmercox at gmail.com (Palmer Cox) Date: Wed, 19 Feb 2014 12:40:41 -0500 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: <530495F7.7080106@ntecs.de> Message-ID: I think the Lines iterator could translate an EOF error into a None to abort iteration and pass all other errors through. I don't see a WouldBlock error code (is that the same as IoUnavailable), but that only applies to non-blocking IO and I don't think it makes sense to use use the Lines iterator in non-blocking mode. -Palmer Cox On Wed, Feb 19, 2014 at 12:35 PM, Corey Richardson wrote: > Keep in mind that "end of file" and "would block" are considered > "errors"... > > > On Wed, Feb 19, 2014 at 12:26 PM, Palmer Cox wrote: > >> Why not just modify the Lines iterator to return values of >> IoResult<~str>? All the caller has to do to unwrap that is to use if_ok!() >> or try!() on the returned value, so, its basically just as easy to use and >> it means that errors are handled consistently. I don't see why this >> particular use case calls for a completely different error handling >> strategy than any other IO code. 
>> >> -Palmer Cox >> >> >> >> On Wed, Feb 19, 2014 at 6:31 AM, Michael Neumann wrote: >> >>> >>> Am 19.02.2014 08:52, schrieb Phil Dawes: >>> >>> Is that not a big problem for production code? I think I'd prefer the >>>> default case to be to crash the task than deal with a logic bug. >>>> >>>> The existence of library functions that swallow errors makes reviewing >>>> code and reasoning about failure cases a lot more difficult. >>>> >>> >>> This is why I proposed a FailureReader: https://github.com/mozilla/ >>> rust/issues/12368 >>> >>> Regards, >>> >>> Michael >>> >>> _______________________________________________ >>> Rust-dev mailing list >>> Rust-dev at mozilla.org >>> https://mail.mozilla.org/listinfo/rust-dev >>> >> >> >> _______________________________________________ >> Rust-dev mailing list >> Rust-dev at mozilla.org >> https://mail.mozilla.org/listinfo/rust-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Wed Feb 19 10:42:35 2014 From: alex at crichton.co (Alex Crichton) Date: Wed, 19 Feb 2014 10:42:35 -0800 Subject: [rust-dev] RFC: About the library stabilization process In-Reply-To: References: <53040B8A.5070807@mozilla.com> Message-ID: > Does that include ABI compatibility? For now, this is going to be tough to provide because of compiler bugs sadly (see #10208 and #10207). ABI stability is a broad topic which encompasses symbol names, whether the function is generic or not, implementation of a generic function, etc. For now, I believe the stability attributes are targeted at the signature of a method. > Should it be an error to use lower stability internally? 
One of the goals will be to have the intrinsics module be #[experimental] or something other than #[stable], but this module is the basis of implementation for many other stable functions, so I believe that the module itself will have to opt-in to using the unstable intrinsic api, but the api provided by the module will still be stable. >> By default all nodes are *stable* - library authors have to opt-in to stability index tracking. This may end up being the wrong default and we'll want to revisit. > > Oh dear god no. `stable` should be *earned* over time, otherwise it's > meaningless. The compiler should treat untagged code as `unstable`, > `experimental` or a special `untagged` stability and accept that level > by default. One thing we should be sure to accomplish with this default is that you cannot use unstable apis by default. I imagine that an #[unstable] module can use #[unstable] functions, but perhaps #[unstable]-flagged items still have to opt-in to using other unstable items? > Requesting permission to spam the issue tracker with minor annoyances > we definitely don't want to live with forever. The C FFI <-> idiomatic > Rust bridge is especially painful. You may be interested in #11920 From flaper87 at gmail.com Wed Feb 19 12:12:05 2014 From: flaper87 at gmail.com (Flaper87) Date: Wed, 19 Feb 2014 21:12:05 +0100 Subject: [rust-dev] Improving our patch review and approval process (Hopefully) Message-ID: Hi all, I'd like to share some thoughts with regard to our current test and approval process. Let me break these thoughts into 2 separate sections: 1. Testing: Currently, all patches are being tested after they are approved. However, I think it would be of great benefit for contributors - and reviewers - to test patches before and after they're approved. Testing the patches before approval will allow folks proposing patches - although they're expected to test the patches before submitting them - and reviewers to know that the patch is indeed mergeable.
Furthermore, it will help spot corner cases and regressions that would benefit from a good discussion while the PR is hot. I think we don't need to run all jobs, perhaps just Windows, OS X and Linux should be enough for a first test phase. It would also be nice to run lint checks, stability checks etc. IIRC, GH's API should allow us to report these check failures. 2. Approval Process I'm very happy about how patches are reviewed. The time a patch waits before receiving the first comment is almost 0 seconds and we are spread across many patches. If we think someone else should take a look at some patch, we always make sure to mention that person. I think the language would benefit from a stricter approval process. For example, requiring 2 r+ from 2 different reviewers instead of 1. This might seem a bit drastic now, however as the number of contributors grows, this will help with making sure that patches are reviewed at least by 2 core reviewers and they get enough attention. I think both of these points are very important now that we're moving towards 1.0 and the community keeps growing. Thoughts? Feedback? -- Flavio (@flaper87) Percoco http://www.flaper87.com http://github.com/FlaPer87 -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Wed Feb 19 12:28:26 2014 From: corey at octayn.net (Corey Richardson) Date: Wed, 19 Feb 2014 15:28:26 -0500 Subject: [rust-dev] Improving our patch review and approval process (Hopefully) In-Reply-To: References: Message-ID: On Wed, Feb 19, 2014 at 3:12 PM, Flaper87 wrote: > > Hi all, > > I'd like to share some thoughts with regard to our current test and approval process. Let me break these thoughts into 2 separate sections: > > 1. Testing: > > Currently, all patches are being tested after they are approved. However, I think it would be of great benefit for contributors - and reviewers - to test patches before and after they're approved.
Testing the patches before approval will allow folks proposing patches - although they're expected to test the patches before submitting them - and reviewers to know that the patch is indeed mergeable. Furthermore, it will help spot corner cases and regressions that would benefit from a good discussion while the PR is hot. > > I think we don't need to run all jobs, perhaps just Windows, OS X and Linux should be enough for a first test phase. It would also be nice to run lint checks, stability checks etc. IIRC, GH's API should allow us to report these check failures. > This is a pretty bad idea, allowing *arbitrary unreviewed anything* to run on the buildbots. All it needs to do is remove the contents of its home directory to put the builder out of commission, afaik. It'd definitely be nice to have it run tidy etc first, but there needs to be a check that the patch doesn't touch tidy or any of its deps. From pnkfelix at mozilla.com Wed Feb 19 12:38:19 2014 From: pnkfelix at mozilla.com (Felix S. Klock II) Date: Wed, 19 Feb 2014 21:38:19 +0100 Subject: [rust-dev] Improving our patch review and approval process (Hopefully) In-Reply-To: References: Message-ID: <5305163B.9050909@mozilla.com> On 19/02/2014 21:12, Flaper87 wrote: > 2. Approval Process > > [...] For example, requiring 2 r+ from 2 different reviewers instead > of 1. This might seem a bit drastic now, however as the number of > contributors grows, this will help with making sure that patches are > reviewed at least by 2 core reviewers and they get enough attention.
I mentioned this on the #rust-internals irc channel but I figured I should broadcast it here as well: regarding fractional r+, someone I was talking to recently described their employer's process, where the first reviewer (who I think is perhaps part of a privileged subgroup) assigns the patch the number of reviewers it needs, so that it isn't a flat "every patch needs two reviewers" but instead someone says "this looks like something big/hairy enough that it needs K reviewers". Just something to consider, if we're going to look into strengthening our review process. Cheers, -Felix On 19/02/2014 21:12, Flaper87 wrote: > Hi all, > > I'd like to share some thoughts with regard to our current test and > approval process. Let me break these thoughts into 2 separate sections: > > 1. Testing: > > Currently, all patches are being tested after they are approved. > However, I think it would be of great benefit for contributors - and > reviewers - to test patches before and after they're approved. Testing > the patches before approval will allow folks proposing patches - > although they're expected to test the patches before submitting them - > and reviewers to know that the patch is indeed mergeable. Furthermore, > it will help spot corner cases and regressions that would benefit > from a good discussion while the PR is hot. > > I think we don't need to run all jobs, perhaps just Windows, OS X and > Linux should be enough for a first test phase. It would also be nice > to run lint checks, stability checks etc. IIRC, GH's API should allow > us to report these check failures. > > 2. Approval Process > > I'm very happy about how patches are reviewed. The time a patch waits > before receiving the first comment is almost 0 seconds and we are > spread across many patches. If we think someone else should take a look at > some patch, we always make sure to mention that person. > > I think the language would benefit from a stricter approval > process.
For example, requiring 2 r+ from 2 different reviewers > instead of 1. This might seem a bit drastic now, however as the number > of contributors grows, this will help with making sure that patches > are reviewed at least by 2 core reviewers and they get enough attention. > > > I think both of these points are very important now that we're moving > towards 1.0 and the community keeps growing. > > Thoughts? Feedback? > > -- > Flavio (@flaper87) Percoco > http://www.flaper87.com > http://github.com/FlaPer87 > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -- irc: pnkfelix on irc.mozilla.org email: {fklock, pnkfelix}@mozilla.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgaebel at uwaterloo.ca Wed Feb 19 13:21:15 2014 From: cgaebel at uwaterloo.ca (Clark Gaebel) Date: Wed, 19 Feb 2014 16:21:15 -0500 Subject: [rust-dev] Improving our patch review and approval process (Hopefully) In-Reply-To: <5305163B.9050909@mozilla.com> References: <5305163B.9050909@mozilla.com> Message-ID: As an alternative to "arbitrary code running on the buildbot", there could be a b+ which means "please try building this" which core contributors can comment with after a quick skim through the patch. On Wed, Feb 19, 2014 at 3:38 PM, Felix S. Klock II wrote: > > On 19/02/2014 21:12, Flaper87 wrote: > > 2. Approval Process > > [...] For example, requiring 2 r+ from 2 different reviewers instead of > 1. This might seem a bit drastic now, however as the number of contributors > grows, this will help with making sure that patches are reviewed at least > by 2 core reviewers and they get enough attention. 
> I mentioned this on the #rust-internals irc channel but I figured I should
> broadcast it here as well:
>
> regarding fractional r+, someone I was talking to recently described their
> employer's process, where the first reviewer (who I think is perhaps part
> of a privileged subgroup) assigns the patch the number of reviewers
> it needs, so that it isn't a flat "every patch needs two reviewers" but
> instead someone says "this looks like something big/hairy enough that it
> needs K reviewers".
>
> Just something to consider, if we're going to look into strengthening our
> review process.
>
> Cheers,
> -Felix
>
> On 19/02/2014 21:12, Flaper87 wrote:
> > [...]
>
> --
> irc: pnkfelix on irc.mozilla.org
> email: {fklock, pnkfelix}@mozilla.com
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev

--
Clark.

Key ID : 0x78099922
Fingerprint: B292 493C 51AE F3AB D016 DD04 E5E3 C36F 5534 F907

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From val at markovic.io Wed Feb 19 13:27:36 2014
From: val at markovic.io (Val Markovic)
Date: Wed, 19 Feb 2014 13:27:36 -0800
Subject: [rust-dev] Improving our patch review and approval process (Hopefully)
In-Reply-To: <5305163B.9050909@mozilla.com>
References: <5305163B.9050909@mozilla.com>
Message-ID: 

On Wed, Feb 19, 2014 at 12:38 PM, Felix S. Klock II wrote:
> On 19/02/2014 21:12, Flaper87 wrote:
> > 2. Approval Process
> >
> > [...] For example, requiring 2 r+ from 2 different reviewers instead of
> > 1. This might seem a bit drastic now; however, as the number of
> > contributors grows, this will help ensure that patches are reviewed
> > by at least 2 core reviewers and get enough attention.
> I mentioned this on the #rust-internals irc channel but I figured I should
> broadcast it here as well:
>
> regarding fractional r+, someone I was talking to recently described their
> employer's process, where the first reviewer (who I think is perhaps part
> of a privileged subgroup) assigns the patch the number of reviewers
> it needs, so that it isn't a flat "every patch needs two reviewers" but
> instead someone says "this looks like something big/hairy enough that it
> needs K reviewers".

From my personal experience with big companies, this is how it works. Having a policy where you require more than one reviewer for everything is too much process and wasted time; it's the big changes that require more than one person reviewing that get this level of attention, usually with the first reviewer going "hey X, could you take a look at this as well?". And having every pull request auto-tested as soon as it is sent out would be wonderful; Travis CI is great for this (it isolates and times out builds if needed), but obviously couldn't work for Rust. A "b+" meaning "please build this" seems like the second best thing.

> Just something to consider, if we're going to look into strengthening our
> review process.
>
> Cheers,
> -Felix
>
> On 19/02/2014 21:12, Flaper87 wrote:
> > [...]
>
> --
> irc: pnkfelix on irc.mozilla.org
> email: {fklock, pnkfelix}@mozilla.com
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kevin at sb.org Wed Feb 19 13:44:02 2014
From: kevin at sb.org (Kevin Ballard)
Date: Wed, 19 Feb 2014 13:44:02 -0800
Subject: [rust-dev] reader.lines() swallows io errors
In-Reply-To: 
References: <530495F7.7080106@ntecs.de>
Message-ID: 

My understanding is that .lines() exists primarily to make quick-and-dirty I/O as easy as it is in, say, a scripting language. How do scripting languages handle I/O errors when iterating lines? Do they raise an exception? Perhaps .lines() should fail!() if it gets a non-EOF error. Then we could introduce a new struct to wrap any Reader that translates non-EOF errors into EOF specifically to let you say "I really don't care about failure".

That said, I'm comfortable with things as they are now. Making .lines() provide an IoResult would destroy much of the convenience of the function. I know I personally have used .lines() with stdin(), which is an area where I truly don't care about any non-EOF error, because, heck, it's stdin. All I care about is when stdin is closed.

-Kevin

On Feb 19, 2014, at 9:40 AM, Palmer Cox wrote:
> I think the Lines iterator could translate an EOF error into a None to
> abort iteration and pass all other errors through. I don't see a
> WouldBlock error code (is that the same as IoUnavailable?), but that only
> applies to non-blocking IO and I don't think it makes sense to use the
> Lines iterator in non-blocking mode.
>
> -Palmer Cox
>
> On Wed, Feb 19, 2014 at 12:35 PM, Corey Richardson wrote:
> > Keep in mind that "end of file" and "would block" are considered "errors"...
>
> On Wed, Feb 19, 2014 at 12:26 PM, Palmer Cox wrote:
> > Why not just modify the Lines iterator to return values of IoResult<~str>?
> > All the caller has to do to unwrap that is to use if_ok!() or try!() on
> > the returned value, so it's basically just as easy to use, and it means
> > that errors are handled consistently. I don't see why this particular use
> > case calls for a completely different error handling strategy than any
> > other IO code.
> > -Palmer Cox
> >
> > On Wed, Feb 19, 2014 at 6:31 AM, Michael Neumann wrote:
> > > On 19.02.2014 08:52, Phil Dawes wrote:
> > > > Is that not a big problem for production code? I think I'd prefer the
> > > > default case to be to crash the task rather than deal with a logic bug.
> > > > The existence of library functions that swallow errors makes reviewing
> > > > code and reasoning about failure cases a lot more difficult.
> > >
> > > This is why I proposed a FailureReader:
> > > https://github.com/mozilla/rust/issues/12368
> > >
> > > Regards,
> > >
> > > Michael
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kevin at sb.org Wed Feb 19 13:48:23 2014
From: kevin at sb.org (Kevin Ballard)
Date: Wed, 19 Feb 2014 13:48:23 -0800
Subject: [rust-dev] Improving our patch review and approval process (Hopefully)
In-Reply-To: 
References: 
Message-ID: <1FF0C518-9137-459E-BA83-47AB6D27CE34@sb.org>

On Feb 19, 2014, at 12:28 PM, Corey Richardson wrote:
> This is a pretty bad idea, allowing *arbitrary unreviewed anything* to
> run on the buildbots. All it needs to do is remove the contents of its
> home directory to put the builder out of commission, afaik. It'd
> definitely be nice to have it run tidy etc first, but there needs to
> be a check that the patch doesn't touch tidy or any of its deps.

This is a very good point. And it could do more than that too. It could use a local privilege escalation exploit (if one exists) to take over the entire machine. Or it could start sending out spam emails. Or maybe it starts mining bitcoins.
Code should not be run unless it has at least been read first by a reviewer.

-Kevin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dbau.pp at gmail.com Wed Feb 19 14:03:31 2014
From: dbau.pp at gmail.com (Huon Wilson)
Date: Thu, 20 Feb 2014 09:03:31 +1100
Subject: [rust-dev] Improving our patch review and approval process (Hopefully)
In-Reply-To: 
References: <5305163B.9050909@mozilla.com>
Message-ID: <53052A33.3080505@gmail.com>

Another alternative is to have a few fast builders (e.g. whatever configuration the try bots run) just run through the queue as they're r+'d, to get fast feedback.

(A (significant) problem with all these proposals is the increase in infrastructure complexity: there are already semi-regular automation failures.)

Huon

On 20/02/14 08:21, Clark Gaebel wrote:
> As an alternative to "arbitrary code running on the buildbot", there
> could be a b+ which means "please try building this", which core
> contributors can comment with after a quick skim through the patch.
>
> On Wed, Feb 19, 2014 at 3:38 PM, Felix S. Klock II wrote:
> > On 19/02/2014 21:12, Flaper87 wrote:
> > > 2. Approval Process
> > >
> > > [...] For example, requiring 2 r+ from 2 different reviewers
> > > instead of 1. This might seem a bit drastic now; however, as the
> > > number of contributors grows, this will help ensure
> > > that patches are reviewed by at least 2 core reviewers and
> > > get enough attention.
> > I mentioned this on the #rust-internals irc channel but I figured
> > I should broadcast it here as well:
> >
> > regarding fractional r+, someone I was talking to recently
> > described their employer's process, where the first reviewer (who
> > I think is perhaps part of a privileged subgroup) assigns the
> > patch the number of reviewers it needs, so that it isn't a
> > flat "every patch needs two reviewers" but instead someone says
> > "this looks like something big/hairy enough that it needs K reviewers".
> >
> > Just something to consider, if we're going to look into
> > strengthening our review process.
> >
> > Cheers,
> > -Felix
> >
> > On 19/02/2014 21:12, Flaper87 wrote:
> > > [...]
> >
> > --
> > irc: pnkfelix on irc.mozilla.org
> > email: {fklock, pnkfelix}@mozilla.com
> >
> > _______________________________________________
> > Rust-dev mailing list
> > Rust-dev at mozilla.org
> > https://mail.mozilla.org/listinfo/rust-dev
>
> --
> Clark.
>
> Key ID : 0x78099922
> Fingerprint: B292 493C 51AE F3AB D016 DD04 E5E3 C36F 5534 F907
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From flaper87 at gmail.com Wed Feb 19 14:04:41 2014
From: flaper87 at gmail.com (Flaper87)
Date: Wed, 19 Feb 2014 23:04:41 +0100
Subject: [rust-dev] Improving our patch review and approval process (Hopefully)
In-Reply-To: <1FF0C518-9137-459E-BA83-47AB6D27CE34@sb.org>
References: <1FF0C518-9137-459E-BA83-47AB6D27CE34@sb.org>
Message-ID: 

2014-02-19 22:48 GMT+01:00 Kevin Ballard:
> On Feb 19, 2014, at 12:28 PM, Corey Richardson wrote:
> > This is a pretty bad idea, allowing *arbitrary unreviewed anything* to
> > run on the buildbots. All it needs to do is remove the contents of its
> > home directory to put the builder out of commission, afaik.
> > It'd definitely be nice to have it run tidy etc first, but there needs to
> > be a check that the patch doesn't touch tidy or any of its deps.
>
> This is a very good point. And it could do more than that too. It could
> use a local privilege escalation exploit (if one exists) to take over the
> entire machine. Or it could start sending out spam emails. Or maybe it
> starts mining bitcoins.
>
> Code should not be run unless it has at least been read first by a reviewer.

I should have expanded more on that thought. I'm not expecting this to be doable with the way our jobs work now. This would require things like:

* Running jobs in isolated boxes / VMs
* Setting limits on execution time
* Removing any internet connection from the box (?)
* [add here whatever would make this more secure]

I'm not proposing something new here. This is something that I've seen done in several communities (OpenStack is one of them) and, as mentioned in my previous emails, there's some benefit behind this.

--
Flavio (@flaper87) Percoco
http://www.flaper87.com
http://github.com/FlaPer87
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From leebraid at gmail.com Wed Feb 19 14:34:18 2014
From: leebraid at gmail.com (Lee Braiden)
Date: Wed, 19 Feb 2014 22:34:18 +0000
Subject: [rust-dev] reader.lines() swallows io errors
In-Reply-To: 
References: <530495F7.7080106@ntecs.de>
Message-ID: <5305316A.2050805@gmail.com>

On 19/02/14 21:44, Kevin Ballard wrote:
> My understanding is that .lines() exists primarily to make
> quick-and-dirty I/O as easy as it is in, say, a scripting language.
> How do scripting languages handle I/O errors when iterating lines? Do
> they raise an exception? Perhaps .lines() should fail!() if it gets a
> non-EOF error.

Yes, Python, at least, raises exceptions - IOError and OSError, specifically.

> Then we could introduce a new struct to wrap any Reader that
> translates non-EOF errors into EOF specifically to let you say "I
> really don't care about failure".
It sounds like a very specific way to handle a very general problem. People like (modern, complete) scripting languages because they handle this sort of intricacy in elegant ways, not because they gloss over it and make half-baked programs that don't handle errors. It's just that you can, say, handle IOErrors in one step, at the top of your script, except for one particular issue that you know how to recover from, six levels into the call stack. Exceptions (so long as there isn't a lot of boilerplate around them) let you do that easily. Rust needs a similarly generic approach to propagating errors and handling them five levels up, whether that's exceptions or fails (I don't think they are currently flexible enough), or monads, or something else.

> That said, I'm comfortable with things as they are now.

I'm not. I think this is the tip of the iceberg: a code smell that's tipping us off about deeper usability issues.

> Making .lines() provide an IoResult would destroy much of the
> convenience of the function.

Only if there's no convenient way to handle IoResults (or results in general).

> I know I personally have used .lines() with stdin(), which is an area
> where I truly don't care about any non-EOF error, because, heck, it's
> stdin. All I care about is when stdin is closed.

a) It's a file handle, which can be mapped to a pipe, a serial port, or some other much more error-prone input device than the terminal
b) It may be stdin, but it's also the input to your program. Garbage in, garbage out.
c) Ideally your code won't care if it's stdin or some other device, except for a few lines of code which check the configuration / environment and decide which input file to pass to other code.

--
Lee
-------------- next part --------------
An HTML attachment was scrubbed...
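The "propagate mechanically, handle several levels up" approach Lee asks for is essentially what Rust eventually standardized as try!() and, later, the `?` operator: intermediate layers forward the error without boilerplate, and one top-level caller decides what it means. A minimal sketch in post-1.0 Rust (the function names are invented for illustration, not from this thread):

```rust
use std::io::{self, BufRead};

// Deepest layer: the place where the io::Error actually originates.
fn read_first_line(mut input: impl BufRead) -> io::Result<String> {
    let mut line = String::new();
    input.read_line(&mut line)?; // on error, return it to the caller
    Ok(line)
}

// Intermediate layer: just forwards errors with `?`, no extra code.
fn parse_header(input: impl BufRead) -> io::Result<String> {
    let line = read_first_line(input)?;
    Ok(line.trim().to_uppercase())
}

fn main() {
    // Top level: the one place that decides what an I/O error means.
    // `&[u8]` implements BufRead, so a byte slice stands in for stdin here.
    match parse_header(&b"content-type: text/plain\n"[..]) {
        Ok(header) => println!("header: {}", header),
        Err(e) => eprintln!("I/O error: {}", e),
    }
}
```

The point of the design is that each signature advertises fallibility (`io::Result`), yet the propagation itself costs one character per call site.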
URL: 

From kevin at sb.org Wed Feb 19 14:50:45 2014
From: kevin at sb.org (Kevin Ballard)
Date: Wed, 19 Feb 2014 14:50:45 -0800
Subject: [rust-dev] reader.lines() swallows io errors
In-Reply-To: <5305316A.2050805@gmail.com>
References: <530495F7.7080106@ntecs.de> <5305316A.2050805@gmail.com>
Message-ID: 

On Feb 19, 2014, at 2:34 PM, Lee Braiden wrote:
> > Then we could introduce a new struct to wrap any Reader that translates
> > non-EOF errors into EOF specifically to let you say "I really don't care
> > about failure".
>
> It sounds like a very specific way to handle a very general problem.
> People like (modern, complete) scripting languages because they handle
> this sort of intricacy in elegant ways, not because they gloss over it and
> make half-baked programs that don't handle errors. It's just that you can,
> say, handle IOErrors in one step, at the top of your script, except for
> one particular issue that you know how to recover from, six levels into
> the call stack. Exceptions (so long as there isn't a lot of boilerplate
> around them) let you do that easily. Rust needs a similarly generic
> approach to propagating errors and handling them five levels up, whether
> that's exceptions or fails (I don't think they are currently flexible
> enough), or monads, or something else.

In my experience, exceptions are actually a very inelegant way to handle this problem. The code 5 levels higher that catches the exception doesn't have enough information about the problem in order to recover. Maybe it just discards the entire computation, or perhaps restarts it. But it can't recover and continue.

We already tried conditions for this, which do let you recover and continue, except that turned out to be a dismal failure. Code that didn't touch conditions was basically just hoping nothing went wrong, and would fail!() if it did. Code that did try to handle errors was very verbose because conditions were a PITA to work with.

As for what we're talking about here:
lines() is fairly unique right now in its discarding of errors. I can't think of another example offhand that will discard errors. As I said before, I believe that .lines() exists to facilitate I/O handling in a fashion similar to scripting languages, primarily because one of the basic things people try to do with new languages is read from stdin and handle the input, and it's great if we can say our solution to that is:

fn main() {
    for line in io::stdin().lines() {
        print!("received: {}", line);
    }
}

It's a lot more confusing and off-putting if our example looks like

fn main() {
    for line in io::stdin().lines() {
        match line {
            Ok(line) => print!("received: {}", line),
            Err(e) => {
                println!("error: {}", e);
                break;
            }
        }
    }
}

or alternatively

fn main() {
    for line in io::stdin().lines() {
        // new user says "what is .unwrap()?" and is still not handling errors here
        let line = line.unwrap();
        print!("received: {}", line);
    }
}

Note that we can't even use try!() (née if_ok!()) here because main() doesn't return an IoResult.

The other thing to consider is that StrSlice also exposes a .lines() method and it may be confusing to have two .lines() methods that yield different types.

Given that, the only reasonable solutions appear to be:

1. Keep the current behavior. .lines() already documents its behavior; anyone who cares about errors should use .read_line() in a loop.

2. Change .lines() to fail!() on a non-EOF error. Introduce a new wrapper type IgnoreErrReader (name suggestions welcome!) that translates all errors into EOF. Now the original sample code will fail!() on a non-EOF error, and there's a defined way of turning it back into the version that ignores errors for people who legitimately want that. This could be exposed as a default method on Reader called .ignoring_errors() that consumes self and returns the new wrapper.

3. Keep .lines() as-is and add the wrapper struct that fail!()s on errors.
This doesn't make a lot of sense to me because the struct would only ever be used with .lines(), and therefore this seems worse than:

4. Change .lines() to fail!() on errors and add a new method .lines_ignoring_errs() that behaves the way .lines() does today. That's kind of verbose though, and is a specialized form of suggestion #2 (and therefore less useful).

5. Remove .lines() entirely and live with the uglier way of reading stdin that will put off new users.

6. Add some way to retrieve the ignored error after the fact. This would require uglifying the Buffer trait to have .err() and .set_err() methods, as well as expanding all the implementors to provide a field to store that information.

I'm in favor of solutions #1 or #2.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alex at crichton.co Wed Feb 19 14:53:27 2014
From: alex at crichton.co (Alex Crichton)
Date: Wed, 19 Feb 2014 14:53:27 -0800
Subject: [rust-dev] Improving our patch review and approval process (Hopefully)
In-Reply-To: 
References: 
Message-ID: 

> Currently, all patches are being tested after they are approved. However, I
> think it would be of great benefit for contributors - and reviewers - to
> test patches before and after they're approved.

I would personally love to explore using Travis CI for this. I think this is almost exactly what Travis was built for. That being said, there's no way that Travis could handle a full `make check` for Rust. However, perhaps Travis could handle `make check-stage0-lite` (not that this rule exists yet). I think we would have to figure out how to avoid building LLVM, but beyond that we *should* be able to run a bunch of stage0 tests and optimistically print out the results of the PR. This obviously won't catch many classes of bugs, but perhaps it would be good enough for a preemptive check. The best part about this is that it's almost 0 overhead of automation for us, because Travis would handle all of it.
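The wrapper Kevin floats as option #2 (his placeholder name IgnoreErrReader) is small enough to sketch. In post-1.0 Rust terms, assuming today's std::io::Read trait rather than the 0.9-era Reader, and with a made-up BrokenReader purely to demonstrate the behavior:

```rust
use std::io::{self, Read};

/// Sketch of the hypothetical IgnoreErrReader: every read error is
/// reported as end-of-file, so consumers simply stop instead of failing.
struct IgnoreErrReader<R> {
    inner: R,
}

impl<R: Read> Read for IgnoreErrReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // Translate any error into Ok(0), which std::io treats as EOF.
        match self.inner.read(buf) {
            Ok(n) => Ok(n),
            Err(_) => Ok(0),
        }
    }
}

/// A reader that always fails, used only to demonstrate the adaptor.
struct BrokenReader;

impl Read for BrokenReader {
    fn read(&mut self, _buf: &mut [u8]) -> io::Result<usize> {
        Err(io::Error::new(io::ErrorKind::Other, "boom"))
    }
}

fn main() {
    let mut out = String::new();
    // Without the wrapper this read would return an error;
    // with it, we just observe an immediate EOF.
    let n = IgnoreErrReader { inner: BrokenReader }
        .read_to_string(&mut out)
        .unwrap();
    println!("read {} bytes", n); // prints "read 0 bytes"
}
```

The design point is that the wrapper, not the lines iterator, opts out of error handling, so ordinary callers of .lines() would still see every error.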
From dbau.pp at gmail.com Wed Feb 19 15:29:50 2014
From: dbau.pp at gmail.com (Huon Wilson)
Date: Thu, 20 Feb 2014 10:29:50 +1100
Subject: [rust-dev] reader.lines() swallows io errors
In-Reply-To: 
References: <530495F7.7080106@ntecs.de> <5305316A.2050805@gmail.com>
Message-ID: <53053E6E.60907@gmail.com>

#12368 has 3 concrete suggestions for possible solutions (which are included in your list of 6):

- A FailingReader wrapper that wraps another reader and fails on errors
- A ChompingReader wrapper that "chomps" errors, but stores them so that they are externally accessible
- Have the lines() iterator itself store the error, so that it can be accessed after the loop

https://github.com/mozilla/rust/issues/12368

Huon

On 20/02/14 09:50, Kevin Ballard wrote:
> On Feb 19, 2014, at 2:34 PM, Lee Braiden wrote:
> > > Then we could introduce a new struct to wrap any Reader that
> > > translates non-EOF errors into EOF specifically to let you say "I
> > > really don't care about failure".
> >
> > It sounds like a very specific way to handle a very general problem.
> > People like (modern, complete) scripting languages because they
> > handle this sort of intricacy in elegant ways, not because they
> > gloss over it and make half-baked programs that don't handle errors.
> > It's just that you can, say, handle IOErrors in one step, at the top
> > of your script, except for one particular issue that you know how to
> > recover from, six levels into the call stack. Exceptions (so long as
> > there isn't a lot of boilerplate around them) let you do that
> > easily. Rust needs a similarly generic approach to propagating
> > errors and handling them five levels up, whether that's exceptions
> > or fails (I don't think they are currently flexible enough), or
> > monads, or something else.
>
> In my experience, exceptions are actually a very /inelegant/ way to
> handle this problem.
> [...]
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From palmercox at gmail.com Wed Feb 19 15:32:12 2014
From: palmercox at gmail.com (Palmer Cox)
Date: Wed, 19 Feb 2014 18:32:12 -0500
Subject: [rust-dev] reader.lines() swallows io errors
In-Reply-To: 
References: <530495F7.7080106@ntecs.de> <5305316A.2050805@gmail.com>
Message-ID: 

The existing syntax looks nice to show a newcomer. However, it's not a generally useful API. Any program that cares about error handling can't use it. If someone shows that code snippet to a newcomer, what the newcomer is looking at is an example of what robust Rust code shouldn't do. Somewhat like demonstrating C using the gets function. We can put a warning in the documentation, but many people don't read documentation unless they have a specific question. If this is used as an example of how to write Rust code, many people won't make it to the documentation for this method, since when they look up how to iterate over the lines of a file, this example will come up and they will have no reason to consult the API docs.

If the Rust error handling model is too difficult to show newcomers, then maybe there is something that needs to be improved in the model. I don't think having a method that opts out of error handling serves newcomers, and it creates a trap that will be easy to fall into even for non-newcomers.

If there is no point in changing Lines to return IoResult<~str>, I'd be in favor of #5.

-Palmer Cox

On Wed, Feb 19, 2014 at 5:50 PM, Kevin Ballard wrote:
> On Feb 19, 2014, at 2:34 PM, Lee Braiden wrote:
> > Then we could introduce a new struct to wrap any Reader that translates
> > non-EOF errors into EOF specifically to let you say "I really don't care
> > about failure".
> >
> > It sounds like a very specific way to handle a very general problem.
> > People like (modern, complete) scripting languages because they handle
> > this sort of intricacy in elegant ways, not because they gloss over it
> > and make half-baked programs that don't handle errors.
It's just that you can, say, > handle IOErrors in one step, at the top of your script, except for one > particular issue that you know how to recover from, six levels into the > call stack. Exceptions (so long as there isn't a lot of boilerplate around > them) let you do that, easily. Rust needs a similarly generic approach to > propagating errors and handling them five levels up, whether that's > exceptions or fails (I don't think they currently are flexible enough), or > monads, or something else. > > > In my experience, exceptions are actually a very *inelegant* way to > handle this problem. The code 5 levels higher that catches the exception > doesn't have enough information about the problem in order to recover. > Maybe it just discards the entire computation, or perhaps restarts it. But > it can't recover and continue. > > We already tried conditions for this, which do let you recover and > continue, except that turned out to be a dismal failure. Code that didn't > touch conditions was basically just hoping nothing went wrong, and would > fail!() if it did. Code that did try to handle errors was very verbose > because conditions were a PITA to work with. > > As for what we're talking about here. lines() is fairly unique right now > in its discarding of errors. I can't think of another example offhand that > will discard errors. 
As I said before, I believe that .lines() exists to > facilitate I/O handling in a fashion similar to scripting languages, > primarily because one of the basic things people try to do with new > languages is read from stdin and handle the input, and it's great if we can > say our solution to that is: > > fn main() { > for line in io::stdin().lines() { > print!("received: {}", line); > } > } > > It's a lot more confusing and off-putting if our example looks like > > fn main() { > for line in io::stdin().lines() { > match line { > Ok(line) => print!("received: {}", line), > Err(e) => { > println!("error: {}", e); > break; > } > } > } > > or alternatively > > fn main() { > for line in io::stdin().lines() { > let line = line.unwrap(); // new user says "what is .unwrap()?" > and is still not handling errors here > print!("received: {}", line); > } > } > > Note that we can't even use try!() (née if_ok!()) here because main() > doesn't return an IoResult. > > The other thing to consider is that StrSlice also exposes a .lines() > method and it may be confusing to have two .lines() methods that yield > different types. > > Given that, the only reasonable solutions appear to be: > > 1. Keep the current behavior. .lines() already documents its behavior; > anyone who cares about errors should use .read_line() in a loop > > 2. Change .lines() to fail!() on a non-EOF error. Introduce a new wrapper > type IgnoreErrReader (name suggestions welcome!) that translates all errors > into EOF. Now the original sample code will fail!() on a non-EOF error, and > there's a defined way of turning it back into the version that ignores > errors for people who legitimately want that. This could be exposed as a > default method on Reader called .ignoring_errors() that consumes self and > returns the new wrapper. > > 3. Keep .lines() as-is and add the wrapper struct that fail!()s on errors. 
> This doesn't make a lot of sense to me because the struct would only ever > be used with .lines(), and therefore this seems worse than: > > 4. Change .lines() to fail!() on errors and add a new method > .lines_ignoring_errs() that behaves the way .lines() does today. That's > kind of verbose though, and is a specialized form of suggestion #2 (and > therefore less useful). > > 5. Remove .lines() entirely and live with the uglier way of reading stdin > that will put off new users. > > 6. Add some way to retrieve the ignored error after the fact. This would > require uglifying the Buffer trait to have .err() and .set_err() methods, > as well as expanding all the implementors to provide a field to store that > information. > > I'm in favor of solutions #1 or #2. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfager at gmail.com Wed Feb 19 15:40:08 2014 From: jfager at gmail.com (Jason Fager) Date: Wed, 19 Feb 2014 18:40:08 -0500 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: <530495F7.7080106@ntecs.de> <5305316A.2050805@gmail.com> Message-ID: Can you point to any scripting langs whose lines equivalent just silently ignores errors? I'm not aware of any; even perl will at least populate $!. I opened https://github.com/mozilla/rust/issues/12130 a little while ago about if_ok!/try! not being usable from main and the limitations for simple use cases that can cause. Forgive a possibly dumb question, but is there a reason main has to return ()? Could Rust provide an 'ExitCode' trait that types could implement that would provide the exit code that the process would spit out if it were returned from main? 
IoResult's impl would just be `match self { Ok(_) => 0, Err(_) => 1 }` and your example would look like fn main() -> IoResult<~str> { for line in io::stdin().lines() { print!("received: {}", try!(line)); } } On Wed, Feb 19, 2014 at 5:50 PM, Kevin Ballard wrote: > On Feb 19, 2014, at 2:34 PM, Lee Braiden wrote: > > Then we could introduce a new struct to wrap any Reader that translates > non-EOF errors into EOF specifically to let you say "I really don't care > about failure". > > > It sounds like a very specific way to handle a very general problem. > People like (modern, complete) scripting languages because they handle this > sort of intricacy in elegant ways, not because they gloss over it and make > half-baked programs that don't handle errors. It's just that you can, say, > handle IOErrors in one step, at the top of your script, except for one > particular issue that you know how to recover from, six levels into the > call stack. Exceptions (so long as there isn't a lot of boilerplate around > them) let you do that, easily. Rust needs a similarly generic approach to > propagating errors and handling them five levels up, whether that's > exceptions or fails (I don't think they currently are flexible enough), or > monads, or something else. > > > In my experience, exceptions are actually a very *inelegant* way to > handle this problem. The code 5 levels higher that catches the exception > doesn't have enough information about the problem in order to recover. > Maybe it just discards the entire computation, or perhaps restarts it. But > it can't recover and continue. > > We already tried conditions for this, which do let you recover and > continue, except that turned out to be a dismal failure. Code that didn't > touch conditions was basically just hoping nothing went wrong, and would > fail!() if it did. Code that did try to handle errors was very verbose > because conditions were a PITA to work with. > > As for what we're talking about here. 
lines() is fairly unique right now > in its discarding of errors. I can't think of another example offhand that > will discard errors. As I said before, I believe that .lines() exists to > facilitate I/O handling in a fashion similar to scripting languages, > primarily because one of the basic things people try to do with new > languages is read from stdin and handle the input, and it's great if we can > say our solution to that is: > > fn main() { > for line in io::stdin().lines() { > print!("received: {}", line); > } > } > > It's a lot more confusing and off-putting if our example looks like > > fn main() { > for line in io::stdin().lines() { > match line { > Ok(line) => print!("received: {}", line), > Err(e) => { > println!("error: {}", e); > break; > } > } > } > > or alternatively > > fn main() { > for line in io::stdin().lines() { > let line = line.unwrap(); // new user says "what is .unwrap()?" > and is still not handling errors here > print!("received: {}", line); > } > } > > Note that we can't even use try!() (née if_ok!()) here because main() > doesn't return an IoResult. > > The other thing to consider is that StrSlice also exposes a .lines() > method and it may be confusing to have two .lines() methods that yield > different types. > > Given that, the only reasonable solutions appear to be: > > 1. Keep the current behavior. .lines() already documents its behavior; > anyone who cares about errors should use .read_line() in a loop > > 2. Change .lines() to fail!() on a non-EOF error. Introduce a new wrapper > type IgnoreErrReader (name suggestions welcome!) that translates all errors > into EOF. Now the original sample code will fail!() on a non-EOF error, and > there's a defined way of turning it back into the version that ignores > errors for people who legitimately want that. This could be exposed as a > default method on Reader called .ignoring_errors() that consumes self and > returns the new wrapper. > > 3. 
Keep .lines() as-is and add the wrapper struct that fail!()s on errors. > This doesn't make a lot of sense to me because the struct would only ever > be used with .lines(), and therefore this seems worse than: > > 4. Change .lines() to fail!() on errors and add a new method > .lines_ignoring_errs() that behaves the way .lines() does today. That's > kind of verbose though, and is a specialized form of suggestion #2 (and > therefore less useful). > > 5. Remove .lines() entirely and live with the uglier way of reading stdin > that will put off new users. > > 6. Add some way to retrieve the ignored error after the fact. This would > require uglifying the Buffer trait to have .err() and .set_err() methods, > as well as expanding all the implementors to provide a field to store that > information. > > I'm in favor of solutions #1 or #2. > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at bnoordhuis.nl Wed Feb 19 15:46:11 2014 From: info at bnoordhuis.nl (Ben Noordhuis) Date: Thu, 20 Feb 2014 00:46:11 +0100 Subject: [rust-dev] Improving our patch review and approval process (Hopefully) In-Reply-To: References: Message-ID: On Wed, Feb 19, 2014 at 11:53 PM, Alex Crichton wrote: >> Currently, all patches are being tested after they are approved. However, I >> think it would be of great benefit for contributors - and reviewers - to >> test patches before and after they're approved. > > I would personally love to explore using Travis-CI for this. I think > this is almost exactly what travis was built for. That being said, > there's no way that travis could handle a full `make check` for rust. > > However, perhaps travis could handle `make check-stage0-lite` (not > that this rule exists yet). 
I think we would have to figure out how to > avoid building LLVM, but beyond that we *should* be able to run a > bunch of stage0 tests and optimistically print out the results of the > PR. This obviously won't catch many classes of bugs, but perhaps it > would be good enough for a preemptive check. The best part about this > is that it's almost 0 overhead of automation for us because travis > would handle all of it. $0.02 from the node.js and libuv camp: we have used Travis in the past but there were so many spurious test failures (with no way to debug them) that we moved to dedicated Jenkins instances. In my experience, anything involving I/O is hit and miss with Travis. From philippe.delrieu at free.fr Wed Feb 19 15:48:30 2014 From: philippe.delrieu at free.fr (Philippe Delrieu) Date: Thu, 20 Feb 2014 00:48:30 +0100 Subject: [rust-dev] Using a closure as a return value Message-ID: <530542CE.6040304@free.fr> Hello, I'm learning the functional programming paradigm with Rust, and to help me I decided to translate the patterns from the book "Functional Programming Patterns in Scala and Clojure" into Rust. In this work I have a problem returning a closure (or a function) as a return value, and I haven't found any solution. I understand the problem but I can't find a solution. The code is: struct Person { firstname: ~str, lastname: ~str, } let p1 = Person {firstname: ~"Michael", lastname: ~"Bevilacqua"}; let p2 = Person {firstname: ~"Pedro", lastname: ~"Vasquez"}; let p3 = Person {firstname: ~"Robert", lastname: ~"Aarons"}; //mutable version. let mut people = ~[~p3, ~p2, ~p1]; //convert comparison to ordering. 
fn compare(a: &T, b: &T) -> Ordering { if a < b {Less} else if a == b {Equal} else {Greater} } fn firstname_comparaison(p1: &~Person, p2: &~Person) -> Ordering{ compare(&p1.firstname, &p2.firstname) } fn lastname_comparaison(p1: &~Person, p2: &~Person) -> Ordering{ compare(&p1.lastname, &p1.lastname) } fn make_comparision(comp1: fn(&~Person, &~Person)->Ordering, comp2: fn(&~Person, &~Person)->Ordering) -> |&~Person, &~Person|->Ordering { |p1: &~Person, p2: &~Person| ->Ordering{ let c = comp1(p1, p2); if (c == Equal) { comp2(p1, p2) } else { c } } } people.sort_by(make_comparision(firstname_comparaison, lastname_comparaison)); error: cannot infer an appropriate lifetime due to conflicting requirements &'a |p1: &~Person, p2: &~Person| ->Ordering{ let c = comp1(p1, p2); if (c == Equal) { comp2(p1, p2) } else { Perhaps someone can help me in finding a way to construct a closure and use it later. Philippe From gaetan at xeberon.net Wed Feb 19 15:59:06 2014 From: gaetan at xeberon.net (Gaetan) Date: Thu, 20 Feb 2014 00:59:06 +0100 Subject: [rust-dev] Improving our patch review and approval process (Hopefully) In-Reply-To: References: Message-ID: Travis works well for unit testing with an already existing compiler; however, here I don't see how you will deploy the stage0 easily for automation. I think you should stick with your buildbot and improve it in order to enhance pre-merge tests on PRs (why not trigger the test when the creator of the PR sets it to ready mode, like the dedicated comment you use for merge). This will add a lot of load on your build server, however. my 0.01 euros. ----- Gaetan 2014-02-20 0:46 GMT+01:00 Ben Noordhuis : > On Wed, Feb 19, 2014 at 11:53 PM, Alex Crichton wrote: > >> Currently, all patches are being tested after they are approved. > However, I > >> think it would be of great benefit for contributors - and reviewers - to > >> test patches before and after they're approved. > > > > I would personally love to explore using Travis-CI for this. 
I think > this is almost exactly what travis was built for. That being said, > there's no way that travis could handle a full `make check` for rust. > > However, perhaps travis could handle `make check-stage0-lite` (not > that this rule exists yet). I think we would have to figure out how to > avoid building LLVM, but beyond that we *should* be able to run a > bunch of stage0 tests and optimistically print out the results of the > PR. This obviously won't catch many classes of bugs, but perhaps it > would be good enough for a preemptive check. The best part about this > is that it's almost 0 overhead of automation for us because travis > would handle all of it. > > $0.02 from the node.js and libuv camp: we have used Travis in the past > but there were so many spurious test failures (with no way to debug > them) that we moved to dedicated Jenkins instances. In my experience, > anything involving I/O is hit and miss with Travis. > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at sb.org Wed Feb 19 16:37:27 2014 From: kevin at sb.org (Kevin Ballard) Date: Wed, 19 Feb 2014 16:37:27 -0800 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: References: <530495F7.7080106@ntecs.de> <5305316A.2050805@gmail.com> Message-ID: <92449753-CBA0-436F-998B-30E1BE63FCE8@sb.org> On Feb 19, 2014, at 3:40 PM, Jason Fager wrote: > Can you point to any scripting langs whose lines equivalent just silently ignores errors? I'm not aware of any; even perl will at least populate $!. No, because I typically don't think about errors when writing quick scripts. If the script blows up because I had a stdin error, that's fine, it was never meant to be robust. 
I just commented on #12368 saying that now I'm leaning towards suggestion #2 (make .lines() fail on errors and provide an escape hatch to squelch them). This will more closely match how scripting languages behave by default (where an exception will kill the script). > I opened https://github.com/mozilla/rust/issues/12130 a little while ago about if_ok!/try! not being usable from main and the limitations for simple use cases that can cause. Forgive a possibly dumb question, but is there a reason main has to return ()? Could Rust provide an 'ExitCode' trait that types could implement that would provide the exit code that the process would spit out if it were returned from main? IoResult's impl would just be `match self { Ok(_) => 0, Err(_) => 1 }` and your example would look like > > fn main() -> IoResult<~str> { > for line in io::stdin().lines() { > print!("received: {}", try!(line)); > } > } There is no precedent today for having a function whose return type must conform to a trait, without making the function generic. Furthermore, a function that is generic on return value picks its concrete return type by the type constraints of its call site, rather than by the implementation of that function. I also question whether this will work from an implementation standpoint. Today the symbol for the main() function is predictable and is the same for all main functions. With your suggested change, the symbol would depend on the return type. I don't know if this matters to rustc; the "start" lang item function is passed a pointer to the main function, but I don't know how this pointer is created. But beyond that, there's still issues here. Unlike in C, a Rust program does not terminate when control falls out of the main() function. It only terminates when all tasks have ended. Terminating the program sooner than that requires `unsafe { libc::abort() };`. Furthermore, the main() function has no return value, and does not influence the exit code. 
That's set by `os::set_exit_status()`. If the return value of main() sets the error code that will overwrite any error code that's already been set. Perhaps a better approach is to define a macro that calls a function that returns an IoResult and sets the error code to 1 (and calls libc::abort()) in the Err case, and does nothing in the Ok case. That would allow me to write fn main() { abort_on_err!(main_()); fn main_() -> IoResult<()> { something_that_returns_io_result() } } --- While writing the above code sample, I first tried actually writing the read_line() loop, and it occurs to me that it's more complicated than necessary. This is due to the need for detecting EOF, which prevents using try!(). We may actually need some other macro that converts EOF to None, returns other errors, and Ok to Some. That makes things a bit simpler for reading, as I can do something like fn handle_stdin() -> IoResult<()> { let mut r = BufferedReader::new(io::stdin()); loop { let line = match check_eof!(r.read_line()) { None => break, Some(line) => line }; handle_line(line); } } Still not great, but at least this is better than let line = match r.read_line() { Ok(line) => line, Err(IoError{ kind: EndOfFile, .. }) => break, Err(e) => return Err(e) }; -Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From palmercox at gmail.com Wed Feb 19 17:22:41 2014 From: palmercox at gmail.com (Palmer Cox) Date: Wed, 19 Feb 2014 20:22:41 -0500 Subject: [rust-dev] reader.lines() swallows io errors In-Reply-To: <92449753-CBA0-436F-998B-30E1BE63FCE8@sb.org> References: <530495F7.7080106@ntecs.de> <5305316A.2050805@gmail.com> <92449753-CBA0-436F-998B-30E1BE63FCE8@sb.org> Message-ID: Thinking about this a bit more, I think that another solution would be: 7 - Update lines() to return an Iterator and then create an iterator wrapper that fails on any error except for EOF. 
I think by using an extension method, the syntax could be: fn main() { for line in io::stdin().lines().fail_on_error() { print!("received: {}", line); } } Which I don't think is too bad. Importantly, it makes it explicit at the call site what the desired behavior is while also making lines() behave like all other IO-related methods do. So, I now favor either doing something like this or removing it. I posted more on https://github.com/mozilla/rust/issues/12368. -Palmer Cox On Wed, Feb 19, 2014 at 7:37 PM, Kevin Ballard wrote: > On Feb 19, 2014, at 3:40 PM, Jason Fager wrote: > > Can you point to any scripting langs whose lines equivalent just silently > ignores errors? I'm not aware of any; even perl will at least populate $!. > > > > No, because I typically don't think about errors when writing quick > scripts. If the script blows up because I had a stdin error, that's fine, > it was never meant to be robust. > > I just commented on #12368 saying that now I'm leaning towards suggestion > #2 (make .lines() fail on errors and provide an escape hatch to squelch > them). This will more closely match how scripting languages behave by > default (where an exception will kill the script). > > I opened https://github.com/mozilla/rust/issues/12130 a little while ago > about if_ok!/try! not being usable from main and the limitations for simple > use cases that can cause. Forgive a possibly dumb question, but is there a > reason main has to return ()? Could Rust provide an 'ExitCode' trait that > types could implement that would provide the exit code that the process > would spit out if it were returned from main? IoResult's impl would just > be `match self { Ok(_) => 0, Err(_) => 1 }` and your example would look like > > fn main() -> IoResult<~str> { > for line in io::stdin().lines() { > print!("received: {}", try!(line)); > } > } > > > There is no precedent today for having a function whose return type must > conform to a trait, without making the function generic. 
Furthermore, a > function that is generic on return value picks its concrete return type by > the type constraints of its call site, rather than by the implementation of > that function. I also question whether this will work from an > implementation standpoint. Today the symbol for the main() function is > predictable and is the same for all main functions. With your suggested > change, the symbol would depend on the return type. I don't know if this > matters to rustc; the "start" lang item function is passed a pointer to the > main function, but I don't know how this pointer is created. > > But beyond that, there's still issues here. Unlike in C, a Rust program > does not terminate when control falls out of the main() function. It only > terminates when all tasks have ended. Terminating the program sooner than > that requires `unsafe { libc::abort() };`. Furthermore, the main() function > has no return value, and does not influence the exit code. That's set by > `os::set_exit_status()`. If the return value of main() sets the error code > that will overwrite any error code that's already been set. > > Perhaps a better approach is to define a macro that calls a function that > returns an IoResult and sets the error code to 1 (and calls libc::abort()) > in the Err case, and does nothing in the Ok case. That would allow me to > write > > fn main() { > abort_on_err!(main_()); > > fn main_() -> IoResult<()> { > something_that_returns_io_result() > } > } > > --- > > While writing the above code sample, I first tried actually writing the > read_line() loop, and it occurs to me that it's more complicated than > necessary. This is due to the need for detecting EOF, which prevents using > try!(). We may actually need some other macro that converts EOF to None, > returns other errors, and Ok to Some. 
That makes things a bit simpler for > reading, as I can do something like > > fn handle_stdin() -> IoResult<()> { > let mut r = BufferedReader::new(io::stdin()); > loop { > let line = match check_eof!(r.read_line()) { > None => break, > Some(line) => line > }; > handle_line(line); > } > } > > Still not great, but at least this is better than > > let line = match r.read_line() { > Ok(line) => line, > Err(IoError{ kind: EndOfFile, .. }) => break, > Err(e) => return Err(e) > }; > > -Kevin > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at metajack.im Wed Feb 19 19:14:59 2014 From: jack at metajack.im (Jack Moffitt) Date: Wed, 19 Feb 2014 20:14:59 -0700 Subject: [rust-dev] Using a closure as a return value In-Reply-To: <530542CE.6040304@free.fr> References: <530542CE.6040304@free.fr> Message-ID: > I'm learning the functional programming paradigm with Rust, and to help me I > decided to translate the patterns from the book "Functional Programming Patterns > in Scala and Clojure" into Rust. In this work I have a problem returning a > closure (or a function) as a return value, and I haven't found any solution. I > understand the problem but I can't find a solution. The code is: Closures in Rust are stack allocated, so you can't return them from a function since the function's stack will be gone. You can use either a proc() or a ~Trait object. A proc can only be called once, but a trait object can be called many times. If you don't need to close over any state (which it appears you don't from your example), then you can return bare functions. Here's a trait object example (untested and incomplete): trait Comparison { fn compare(&self, p1: &Person, p2: &Person) -> Ordering; } fn make_comparison() -> ~Comparison { struct ClosedOverState { ... 
} impl Comparison for ClosedOverState { fn compare(...) -> Ordering { .... // access state through self.foo } } ~ClosedOverState { foo: 0, } } It can be simplified with macros. jack. From adamson.benjamin at gmail.com Wed Feb 19 20:37:19 2014 From: adamson.benjamin at gmail.com (benjamin adamson) Date: Wed, 19 Feb 2014 20:37:19 -0800 Subject: [rust-dev] Question regarding modules Message-ID: Hello, I am trying to get over a hurdle I've been stuck on for a while. I can't seem to figure out how to get the 'use' statements to work. I've created the smallest possible examples I could to demonstrate the problem I am having. The first commit shows my first attempt to 'use' a rust module. With this commit I am not using a mod.rs file: https://github.com/ShortStomp/rust-submodule-confusion-example/tree/d0e5445fbfbc01850bf8a4a523a13aff4e50c229 Here is the output I get from the compiler for this basic program: https://gist.github.com/ShortStomp/9107140 My second attempt at getting this basic example to work involves adding a mod.rs file inside the submodule directory. https://github.com/ShortStomp/rust-submodule-confusion-example/tree/5f3d3058ded8b1e29800685cba7a0438f288abe6 I get the same error that I linked in the gist above. So what gives, what fundamental assumption am I getting wrong? The compiler is telling me to try adding an extern mod statement, but I don't think that is what I want. The files I want to use are in no way external to this package, unless that is a cause of my confusion? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey at octayn.net Wed Feb 19 20:45:17 2014 From: corey at octayn.net (Corey Richardson) Date: Wed, 19 Feb 2014 23:45:17 -0500 Subject: [rust-dev] Question regarding modules In-Reply-To: References: Message-ID: You're assuming `use` loads code, but it only brings names into scope. `mod submodule;` is what loads `submodule` into the crate. 
See http://static.rust-lang.org/doc/master/tutorial.html#crates-and-the-module-system On Wed, Feb 19, 2014 at 11:37 PM, benjamin adamson wrote: > Hello, I am trying to get over a hurdle I've been stuck on for a while. I > can't seem to figure out how to get the 'use' statements to work. > > I've created the smallest possible examples I could to demonstrate the > problem I am having. > > The first commit shows my first attempt to 'use' a rust module. With this > commit I am not using a mod.rs file: > https://github.com/ShortStomp/rust-submodule-confusion-example/tree/d0e5445fbfbc01850bf8a4a523a13aff4e50c229 > > Here is the output I get from the compiler for this basic program: > https://gist.github.com/ShortStomp/9107140 > > My second attempt at getting this basic example to work involves adding a > mod.rs file inside the submodule directory. > https://github.com/ShortStomp/rust-submodule-confusion-example/tree/5f3d3058ded8b1e29800685cba7a0438f288abe6 > > I get the same error that I linked in the gist above. So what gives, what > fundamental assumption am I getting wrong? The compiler is telling me to try > adding an extern mod statement, but I don't think that is what I want. The > files I want to use are in no way external to this package, unless that is a > cause of my confusion? > > Thanks! > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From flaper87 at gmail.com Thu Feb 20 03:39:39 2014 From: flaper87 at gmail.com (Flaper87) Date: Thu, 20 Feb 2014 12:39:39 +0100 Subject: [rust-dev] Improving our patch review and approval process (Hopefully) In-Reply-To: References: Message-ID: 2014-02-20 0:46 GMT+01:00 Ben Noordhuis : > On Wed, Feb 19, 2014 at 11:53 PM, Alex Crichton wrote: > >> Currently, all patches are being tested after they are approved. 
> However, I > >> think it would be of great benefit for contributors - and reviewers - to > >> test patches before and after they're approved. > > > > I would personally love to explore using Travis-CI for this. I think > > this is almost exactly what travis was built for. That being said, > > there's no way that travis could handle a full `make check` for rust. > > > > However, perhaps travis could handle `make check-stage0-lite` (not > > that this rule exists yet). I think we would have to figure out how to > > avoid building LLVM, but beyond that we *should* be able to run a > > bunch of stage0 tests and optimistically print out the results of the > > PR. This obviously won't catch many classes of bugs, but perhaps it > > would be good enough for a preemptive check. The best part about this > > is that it's almost 0 overhead of automation for us because travis > > would handle all of it. > I enabled travis in my Rust fork and added a `make tidy` job[0]. If we can come up with what `check-stage0-lite` should actually do, I think we could try to use travis until the testing infrastructure grows and we'll be able to run this in our buildbot. > $0.02 from the node.js and libuv camp: we have used Travis in the past > but there were so many spurious test failures (with no way to debug > them) that we moved to dedicated Jenkins instances. In my experience, > anything involving I/O is hit and miss with Travis. > Agreed, this sounds like a very likely scenario, that's why I didn't recommend it to begin with. However, I guess for things like `tidy` checks it should work fine. [0] https://travis-ci.org/FlaPer87/rust/builds/19245016 -- Flavio (@flaper87) Percoco http://www.flaper87.com http://github.com/FlaPer87 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michael at dagits.es Thu Feb 20 16:53:42 2014 From: michael at dagits.es (Michael Dagitses) Date: Fri, 21 Feb 2014 00:53:42 +0000 Subject: [rust-dev] unique vector patterns are no longer supported at head Message-ID: The following no longer works: let file = match std::os::args() { [_prog, f] => f, _ => fail!("usage"), }; This works, but is not pretty. Is there a better solution? let file = match std::os::args().as_slice() { [ref _prog, ref f] => f.to_owned(), _ => fail!("usage"), }; Thanks, and sorry if this is a repeat, I don't see any relevant threads and I see this is a very recent change. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at crichton.co Thu Feb 20 17:02:05 2014 From: alex at crichton.co (Alex Crichton) Date: Thu, 20 Feb 2014 17:02:05 -0800 Subject: [rust-dev] unique vector patterns are no longer supported at head In-Reply-To: References: Message-ID: This feature was removed from the language in https://github.com/mozilla/rust/pull/12244. The as_slice() method will continue to work for now. On Thu, Feb 20, 2014 at 4:53 PM, Michael Dagitses wrote: > The following no longer works: > let file = match std::os::args() { > [_prog, f] => f, > _ => fail!("usage"), > }; > > This works, but is not pretty. Is there a better solution? > let file = match std::os::args().as_slice() { > [ref _prog, ref f] => f.to_owned(), > _ => fail!("usage"), > }; > > Thanks, and sorry if this is a repeat, I don't see any relevant threads and > I see this is a very recent change. 
> Michael > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev > From banderson at mozilla.com Fri Feb 21 12:20:27 2014 From: banderson at mozilla.com (Brian Anderson) Date: Fri, 21 Feb 2014 12:20:27 -0800 Subject: [rust-dev] RFC: About the library stabilization process In-Reply-To: References: <53040B8A.5070807@mozilla.com> Message-ID: <5307B50B.5050608@mozilla.com> On 02/19/2014 02:37 AM, György Andrasek wrote: > On Wed, Feb 19, 2014 at 1:40 AM, Brian Anderson wrote: >> Backwards-compatibility is guaranteed. > Does that include ABI compatibility? > >> Second, the AST is traversed and stability index is propagated downward to any indexable node that isn't explicitly tagged. > Should it be an error to use lower stability internally? > >> By default all nodes are *stable* - library authors have to opt-in to stability index tracking. This may end up being the wrong default and we'll want to revisit. > Oh dear god no. `stable` should be *earned* over time, otherwise it's > meaningless. The compiler should treat untagged code as `unstable`, > `experimental` or a special `untagged` stability and accept that level > by default. > OK, I agree let's start all code at `#[experimental]`. It's not too much burden for authors that don't want part of it to put an attribute on their crates. From kevin at sb.org Fri Feb 21 12:53:16 2014 From: kevin at sb.org (Kevin Ballard) Date: Fri, 21 Feb 2014 12:53:16 -0800 Subject: [rust-dev] RFC: About the library stabilization process In-Reply-To: <5307B50B.5050608@mozilla.com> References: <53040B8A.5070807@mozilla.com> <5307B50B.5050608@mozilla.com> Message-ID: <4FDBEADD-4721-40E8-AE91-77AB918AC0F3@sb.org> On Feb 21, 2014, at 12:20 PM, Brian Anderson wrote: > On 02/19/2014 02:37 AM, György Andrasek wrote: >> On Wed, Feb 19, 2014 at 1:40 AM, Brian Anderson wrote: >>> Backwards-compatibility is guaranteed. >> Does that include ABI compatibility?
>> >>> Second, the AST is traversed and stability index is propagated downward to any indexable node that isn't explicitly tagged. >> Should it be an error to use lower stability internally? >> >>> By default all nodes are *stable* - library authors have to opt-in to stability index tracking. This may end up being the wrong default and we'll want to revisit. >> Oh dear god no. `stable` should be *earned* over time, otherwise it's >> meaningless. The compiler should treat untagged code as `unstable`, >> `experimental` or a special `untagged` stability and accept that level >> by default. >> > > OK, I agree let's start all code at `#[experimental]`. It's not too much burden for authors that don't want part of it to put an attribute on their crates. What's the default behavior with regards to calling #[experimental] APIs? If the default behavior is warn or deny, then I don't think we should default any crate to #[experimental]. I'm also worried that even if we default the behavior to allow(), that using #[experimental] is still problematic because anyone who turns on #[warn(unstable)] in order to avoid the unstable bits of libstd will be bitten by warnings in third-party crates that don't bother to specify stability. Could we perhaps make the default to be no stability index whatsoever, so third-party library authors aren't required to deal with stability in their own APIs if they don't want to? This would have the same effect as defaulting to #[stable], which was the original suggestion, except that it won't erroneously indicate that APIs are stable when the author hasn't made any guarantees at all. If we do this, I would then also suggest that we default to either #[warn(unstable)] or #[warn(experimental)], which would then only complain about first-party APIs unless the third-party library author has opted in to stability. It's worth noting that I have zero experience with node.js's use of stability, so I don't know how they handle defaults. 
-Kevin -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4118 bytes Desc: not available URL: From bjzaba at yahoo.com.au Fri Feb 21 15:18:01 2014 From: bjzaba at yahoo.com.au (Brendan Zabarauskas) Date: Sat, 22 Feb 2014 10:18:01 +1100 Subject: [rust-dev] RFC: About the library stabilization process In-Reply-To: <53040B8A.5070807@mozilla.com> References: <53040B8A.5070807@mozilla.com> Message-ID: <1F362536-65FE-46F0-98B3-BB6AFF4506E7@yahoo.com.au> We should probably start using the #[deprecated] attribute more. Using it to phase things out in the std is currently annoying because we have the #[deny(deprecated)] attribute on. ~Brendan On 19 Feb 2014, at 12:40 pm, Brian Anderson wrote: > Hey there. > > I'd like to start the long process of stabilizing the libraries, and this is the opening salvo. This process and the tooling to support it has been percolating on the issue tracker for a while, but this is a summary of how I expect it to work. Assuming everybody feels good about it, we'll start trying to make some simple API's stable starting later this week or next. > > > # What is the stability index and stability attributes? > > The stability index is a way of tracking, at the item level, which library features are safe to use backwards-compatibly. The intent is that the checks for stability catch all backwards-incompatible uses of library features. Between feature gates and stability > > The stability index of any particular item can be manually applied with stability attributes, like `#[unstable]`. > > These definitions are taken directly from the node.js documentation. node.js additionally defines the 'locked' and 'frozen' levels, but I don't think we need them yet. > > * Stability: 0 - Deprecated > > This feature is known to be problematic, and changes are > planned. Do not rely on it. Use of the feature may cause warnings. Backwards > compatibility should not be expected. 
> > * Stability: 1 - Experimental > > This feature was introduced recently, and may change > or be removed in future versions. Please try it out and provide feedback. > If it addresses a use-case that is important to you, tell the node core team. > > * Stability: 2 - Unstable > > The API is in the process of settling, but has not yet had > sufficient real-world testing to be considered stable. Backwards-compatibility > will be maintained if reasonable. > > * Stability: 3 - Stable > > The API has proven satisfactory, but cleanup in the underlying > code may cause minor changes. Backwards-compatibility is guaranteed. > > Crucially, once something becomes 'stable' its interface can no longer change outside of extenuating circumstances - reviewers will need to be vigilant about this. > > All items may have a stability index: crates, modules, structs, enums, typedefs, fns, traits, impls, extern blocks; > extern statics and fns, methods (of inherent impls only). > > Implementations of traits may have their own stability index, but their methods have the same stability as the trait's. > > > # How is the stability index determined and checked? > > First, if the node has a stability attribute then it has that stability index. > > Second, the AST is traversed and stability index is propagated downward to any indexable node that isn't explicitly tagged. > > Reexported items maintain the stability they had in their original location. > > By default all nodes are *stable* - library authors have to opt-in to stability index tracking. This may end up being the wrong default and we'll want to revisit. 
> > During compilation the stabilization lint does at least the following checks: > > * All components of all paths, in all syntactic positions are checked, including in > * use statements > * trait implementation and inheritance > * type parameter bounds > * Casts to traits - checks the trait impl > * Method calls - checks the method stability > > Note that not all of this is implemented, and we won't have complete tool support to start with. > > > # What's the process for promoting libraries to stable? > > For 1.0 we're mostly concerned with promoting large portions of std to stable; most of the other libraries can be experimental or unstable. It's going to be a lengthy process, and it's going to require some iteration to figure out how it works best. > > The process 'leader' for a particular module will post a stabilization RFC to the mailing list. Within, she will state the API's under discussion, offer an overview of their functionality, the patterns used, related API's and the patterns they use, and finally offer specific suggestions about how the API needs to be improved or not before it's final. If she can confidently recommend that some API's can be tagged stable as-is then that helps everybody. > > After a week of discussion she will summarize the consensus, tag anything as stable that already has agreement, file and nominate issues for the remaining, and ensure that *somebody makes the changes*. > > During this process we don't necessarily need to arrive at a plan to stabilize everything that comes up; we just need to get the most crucial features stable, and make continual progress. > > We'll start by establishing a stability baseline, tagging most everything experimental or unstable, then proceed to the very simplest modules, like 'mem', 'ptr', 'cast', 'raw'. 
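Of the item-level stability markers discussed in this RFC, `#[deprecated]` is the one that later became available to user crates, and it shows the attribute-plus-lint interaction the RFC describes. A minimal sketch (the function names are invented for illustration):

```rust
// Hedged sketch: an item-level stability marker plus a lint controlling how
// uses of the marked item are reported. `#[deprecated]` triggers the
// `deprecated` lint at every call site unless the caller opts out.
#[deprecated(note = "use `new_api` instead")]
fn old_api() -> u32 {
    1
}

fn new_api() -> u32 {
    2
}

fn main() {
    // Opt out of the lint for this one statement; without the attribute the
    // compiler emits a deprecation warning pointing at the call.
    #[allow(deprecated)]
    let a = old_api();
    assert_eq!(a, 1);
    assert_eq!(new_api(), 2);
    println!("ok");
}
```

The `#[unstable]`/`#[experimental]` markers debated in this thread work the same way in principle, but ended up restricted to the compiler and standard library.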
> > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev From philippe.delrieu at free.fr Sat Feb 22 13:48:28 2014 From: philippe.delrieu at free.fr (Philippe Delrieu) Date: Sat, 22 Feb 2014 22:48:28 +0100 Subject: [rust-dev] Using a closure as a return value In-Reply-To: References: <530542CE.6040304@free.fr> Message-ID: <53091B2C.6030507@free.fr> Thanks for your reply, but I don't see any solution to my problem. I'll explain it a little more. I want to develop a sort of GUI. The GUI has its own logic and uses a rendering engine to do the work. I want my GUI separate from the rendering engine. In the first solution I use a trait that hides the renderer. So I have a struct that holds the renderer and implements the trait. It is created at the start of the application and passed to every GUI call. At the end the trait is cast to the effective renderer to do the work: trait Renderer{} struct MyRender{ API_render: ~effectiveRenderer, } impl Renderer for MyRender{} struct MyGUIWidget; impl MyGUIWidget { fn draw(&self, renderer: &renderer) { //here I know what type of renderer to use. let myrender = render as &MyRender; //error: non-scalar cast: `&Renderer` as `&MyRender` myrender.API_render.render(); } } #[main] fn main() { let render = MyRender{API_render: ~...}; //init with user choice renderer let widget = MyGUIWidget; widget.draw(render); //draw } I didn't find a way to send a specific Renderer to an API with a generic trait and to polymorph it when I know which struct it is. I can use a singleton or a static variable, but static allocation doesn't seem to be allowed (as I understand the documentation).
So I tried with closures, and I have a sort of example working using a renderer: trait Renderer{} // struct MyRender{ API_render: ~effectiveRenderer, } impl Renderer for MyRender{} trait Container{ fn draw(&self, draw_render: ||); } struct MyContainer { value :~str, } impl Container for MyContainer { fn draw(&self, draw_render: ||) { draw_render(); } } #[main] fn main() { let render = MyRender{API_render: ~StringRender}; //init with user choice renderer let container = MyContainer{value: ~"value"}; container.draw(|| { render.API_render.render(container.value); }); //draw } To extend my API I need to use more closures, and if I don't want to construct everything in the main I have to return closures constructed by each widget, for example. My last idea is to use a spawned task that holds the renderer and send it the widget to draw, but it seems to me a little complicated. So I don't see any simple way to do it. If anybody can help, it would be very helpful. Philippe Le 20/02/2014 04:14, Jack Moffitt a écrit : >> I'm learning the functional programming paradigm with rust and to help me I >> decided to translate the patterns of the book "Functional Programming Patterns >> in Scala and Clojure" in Rust. In this work I have a problem returning a >> closure (or a function) as a return value and I didn't find any solution. I >> understand the problem but I can't find a solution. The code is : > Closures in Rust are stack allocated, so you can't return them from a > function since the function's stack will be gone. You can use either a > proc() or a ~Trait object. A proc can only be called once, but a trait > object can be called many times. If you don't need to close over any > state (which it appears you don't from your example), then you can > return bare functions.
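Jack's three options map, in later Rust, onto plain `fn` items, `FnOnce` (the successor of `proc()`), and boxed `Fn` trait objects (the successor of `~Trait`). A hedged sketch of the boxed-closure route; the `make_adder` example is invented for illustration:

```rust
// Returning a closure by boxing it: the closure and its captured state are
// moved to the heap, so they outlive the creating function's stack frame,
// and a `Box<dyn Fn(...)>` can be called many times.
fn make_adder(n: i32) -> Box<dyn Fn(i32) -> i32> {
    // `move` transfers ownership of `n` into the boxed closure.
    Box::new(move |x| x + n)
}

fn main() {
    let add5 = make_adder(5);
    assert_eq!(add5(2), 7);
    assert_eq!(add5(40), 45);
    println!("ok");
}
```

The manual trait-plus-struct encoding described in this thread is exactly what the boxed closure desugars to.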
> > Here's a trait object example (untested and incomplete): > > trait Comparison { > fn compare(&self, p1: &Person, p2: &Person) -> Ordering; > } > > fn make_comparison() -> ~Comparison { > struct ClosedOverState { > ... > } > impl Comparison for ClosedOverState { > fn compare(...) -> Ordering { > .... // access state through self.foo > } > } > > ~ClosedOverState { > foo: 0, > } > } > > It can be simplified with macros. > > jack. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From acrichton at mozilla.com Sat Feb 22 23:58:05 2014 From: acrichton at mozilla.com (Alex Crichton) Date: Sat, 22 Feb 2014 23:58:05 -0800 Subject: [rust-dev] Travis CI is building Pull Requests Message-ID: Greetings Rustafarians! As of a few minutes ago, along with the merging of #12437, I would like to inform everyone that we're now going to be building all Pull Requests on Travis CI [1]. Travis is a continuous integration system which integrates very well with Github and makes it fairly easy to run lots of builds. Running a full "make check" would be quite burdensome for Travis, so we're limiting it to building a stage1 rustc and then running some of the broader, yet quick test suites. These builds should take about 20-30 minutes per build, and builds will be triggered for each new PR along with each force-push to the PR. We will continue to gate all PRs on bors, and bors will continue to run the exhaustive test suites on our "tier 1" platforms. Using Travis will hopefully provide quicker feedback about misformatted files, failed tests, typos, etc. It is still highly recommended to run "make check" before submitting a PR (as always), but Travis will hopefully start serving as a first line of defense for bors. The ultimate goal is for this to take some load off bors with fewer failed PRs. You can check the status of a PR by looking at the bottom of the page on Github, or visiting Travis's status page [2]. 
That's all for now, keep on being awesome everyone! [1] - https://travis-ci.org/ [2] - https://travis-ci.org/mozilla/rust From uzytkownik2 at gmail.com Sun Feb 23 02:57:31 2014 From: uzytkownik2 at gmail.com (Maciej Piechotka) Date: Sun, 23 Feb 2014 11:57:31 +0100 Subject: [rust-dev] Using a closure as a return value References: <530542CE.6040304@free.fr> <53091B2C.6030507@free.fr> Message-ID: <1393153051.12520.0.camel@localhost> On Sat, 2014-02-22 at 22:48 +0100, Philippe Delrieu wrote: > Thank for you reply. But I don't see any solution to my problem.. I'll > explain it a little more. > > I want to develop a sort of GUI. The GUI has its own logic and use a > rendering engine to do the work. I want my GUI separate of the > rendering engine. > In the first solution I use a trait that hide the renderer. So I have > a struct that old the renderer that implement the trait. It is created > at the start of ther application and passed to every GUI call. At the > end the trait is casted to the effective renderer to do the work: > > trait Renderer{} > struct MyRender{ > API_render: ~effectiveRenderer, > } > > impl Renderer for MyRender{} > > struct MyGUIWidget; > > impl MyGUIWidget { > fn draw(&self, renderer: &renderer) { //here I know what type > of renderer to use. > let myrender = render as &MyRender; //error: non-scalar cast: > `&Renderer` as `&MyRender` > myrender.API_render.render(); > } > } > > #[main] > fn main() { > let render = MyRender{API_render: ~...}; //init with user choice > renderer > let widget = MyGUIWidget; > widget.draw(render); //draw > } > > I didn't find a way to send specific Renderer to an API with a generic > trait and to polymorph it when I know which struct it is. > > I can use a singleton or a static variable but static allocation > doesn't seem to be allowed (as I undersdant the documentation). 
> > So I try with closure and I have a sort of example working using a > renderer : > trait Renderer{} // > > struct MyRender{ > API_render: ~effectiveRenderer, > } > > impl Renderer for MyRender{} > > > trait Container{ > fn draw(&self, draw_render: ||); > } > > struct MyContainer { > value :~str, > } > > impl Container for MyContainer { > fn draw(&self, draw_render: ||) { > draw_render(); > } > } > > #[main] > fn main() { > let render = MyRender{API_render: ~StringRender}; //init with > user choice renderer > let container = MyContainer{value: ~"value"}; > container.draw(|| { > render.API_render.render(container.value); > }); //draw > } > > To extend my API I need to use more closure and if I don't what to > construct every thing in the main I have to return closure constructed > by each widget for example. > > My last idea is to use a spawned task that hold the renderer and send > it the widget to draw but It seems to me a little complicated. > > So I don't see any simple to do it. If anybody can help, it would be > very helpful. > > Philippe > I believe you want something like: pub mod rendering { trait RendererTrait { fn draw_line(&mut self); } pub struct Renderer { renderer: ~RendererTrait } impl Renderer { pub fn draw_line(&mut self) { self.renderer.draw_line() } } pub fn get_renderer() -> Renderer { // TODO: Choose correct renderer Renderer {renderer: ~gtk::GtkRenderer} } mod gtk { pub struct GtkRenderer; impl ::rendering::RendererTrait for GtkRenderer { fn draw_line(&mut self) {} } } } struct Container; trait Widget { fn draw(&self, renderer: &mut rendering::Renderer); } impl Widget for Container { fn draw(&self, renderer: &mut rendering::Renderer) { renderer.draw_line() } } pub fn main() { let mut renderer = rendering::get_renderer(); let container = Container; container.draw(&mut renderer); } I'm not expert in rust so I hope I got visibility right - the rendering::Renderer and rendering::gtk are suppose to be hidden from user. 
Although if user have choice of renderer IMHO something like that would be better: pub mod rendering { pub trait Renderer { fn draw_line(&mut self); } pub mod gtk { pub struct GtkRenderer; impl GtkRenderer { pub fn new() -> GtkRenderer { GtkRenderer } } impl ::rendering::Renderer for GtkRenderer { fn draw_line(&mut self) {} } } } struct Container; trait Widget { fn draw(&self, renderer: &mut Renderer); } impl Widget for Container { fn draw(&self, renderer: &mut Renderer) { renderer.draw_line() } } pub fn main() { let mut renderer = rendering::gtk::GtkRenderer::new(); let container = Container; container.draw(&mut renderer); } The change is similar to change from lambdas to typeclasses in say Haskell. YMMV if it is good style in Haskell but it has nice properties in rust (like static dispatch). > Le 20/02/2014 04:14, Jack Moffitt a ?crit : > > > > I'am learning the functional programming paradigm with rust and to help me I > > > decide to translate the pattern of the book "Functional Programming Patterns > > > in Scala and Clojure" in Rust. In this work I have a problem to return a > > > closure (or a function) as a return value and I didn't find any solution. I > > > understand the problem but I can't find a solution. The code is : > > Closures in Rust are stack allocated, so you can't return them from a > > function since the function's stack will be gone. You can use either a > > proc() or a ~Trait object. A proc can only be called once, but a trait > > object can be called many times. If you don't need to close over any > > state (which it appears you don't from your example), then you can > > return bare functions. > > > > Here's a trait object example (untested and incomplete): > > > > trait Comparison { > > fn compare(&self, p1: &Person, p2: &Person) -> Ordering; > > } > > > > fn make_comparison() -> ~Comparison { > > struct ClosedOverState { > > ... > > } > > impl Comparison for ClosedOverState { > > fn compare(...) -> Ordering { > > .... 
// access state through self.foo > > } > > } > > > > ~ClosedOverState { > > foo: 0, > > } > > } > > > > It can be simplified with macros. > > > > jack. > > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From philippe.delrieu at free.fr Sun Feb 23 11:41:01 2014 From: philippe.delrieu at free.fr (Philippe Delrieu) Date: Sun, 23 Feb 2014 20:41:01 +0100 Subject: [rust-dev] Using a closure as a return value In-Reply-To: <1393153051.12520.0.camel@localhost> References: <530542CE.6040304@free.fr> <53091B2C.6030507@free.fr> <1393153051.12520.0.camel@localhost> Message-ID: <530A4ECD.9000202@free.fr> Thank you for your help and for spending so much time on my problem. I've already thought of a similar solution, but what I would like to avoid is having a RenderTrait that exposes all the rendering methods, because they depend on the rendering engine. You don't render the same way if you use a vector lib or OpenGL. I have a generic API for the UI behavior, like you did with the trait, and for the rendering I call specific methods of the rendering engine. If I update your example the widget draw is : impl Widget for GTKContainer { fn draw(&self, renderer: &mut rendering::Renderer) { let myrender = renderer as gtkgraphicAPI //doesn't work myrenderer.gtkdraw_line(); myrenderer.gtkdraw_rectangle(); } } or impl Widget for SFMLContainer { fn draw(&self, renderer: &mut rendering::Renderer) { let myrender = renderer as SFMLrenderer //doesn't work myrenderer.sfmldraw_linedRectangle(); } } I have to write a draw method for every renderer, but I can draw as I want. Depending on which renderer you create at the beginning, a different Widget implementation is created. I didn't manage to make the cast work, and I don't see how to use a static renderer. As I think about it I'm wondering if I didn't do it the wrong way.
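The cast that fails above (`renderer as gtkgraphicAPI`) does have a safe spelling in later Rust via `std::any::Any`: the trait object exposes itself as `&dyn Any`, and the caller downcasts to the concrete renderer. A hedged sketch with made-up renderer names, not Philippe's actual API:

```rust
use std::any::Any;

// The renderer trait offers an escape hatch to `Any` for downcasting.
trait Renderer {
    fn as_any(&self) -> &dyn Any;
}

struct GtkRenderer {
    lines_drawn: u32,
}

impl Renderer for GtkRenderer {
    fn as_any(&self) -> &dyn Any {
        self
    }
}

// Downcast instead of an `as` cast, which the compiler rejects for traits.
fn draw(r: &dyn Renderer) -> u32 {
    match r.as_any().downcast_ref::<GtkRenderer>() {
        Some(gtk) => gtk.lines_drawn + 1, // concrete API is available here
        None => 0,                        // some other renderer: fall back
    }
}

fn main() {
    let r = GtkRenderer { lines_drawn: 41 };
    assert_eq!(draw(&r), 42);
    println!("ok");
}
```

Downcasting keeps one widget implementation while still letting each backend branch reach its engine-specific calls, at the cost of a runtime type check.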
The simpler perhaps is to have an GUI API like you did without any drawing function and a renderer API that draw everything behind the scene. Like trait Widget { fn moved(&self, newpos: (int, int)); } Struct MyWidget; impl Widget for MyWidget { fn moved(&self, newpos: (int, int)) { update_widget_position(); } } mod SFMLRenderer { fn draw(&my_widget: MyWidget, renderer: SFMLRenderer) { myrenderer.sfmldraw_linedRectangle(MyWidget.get_widget_position()); } } I have to put the glue that call the right draw but I think I don't have to keep a reference to a renderer visible everywhere. I'll try to change my code in this way to see if it works. Philippe Le 23/02/2014 11:57, Maciej Piechotka a ?crit : > On Sat, 2014-02-22 at 22:48 +0100, Philippe Delrieu wrote: >> Thank for you reply. But I don't see any solution to my problem.. I'll >> explain it a little more. >> >> I want to develop a sort of GUI. The GUI has its own logic and use a >> rendering engine to do the work. I want my GUI separate of the >> rendering engine. >> In the first solution I use a trait that hide the renderer. So I have >> a struct that old the renderer that implement the trait. It is created >> at the start of ther application and passed to every GUI call. At the >> end the trait is casted to the effective renderer to do the work: >> >> trait Renderer{} >> struct MyRender{ >> API_render: ~effectiveRenderer, >> } >> >> impl Renderer for MyRender{} >> >> struct MyGUIWidget; >> >> impl MyGUIWidget { >> fn draw(&self, renderer: &renderer) { //here I know what type >> of renderer to use. 
>> let myrender = render as &MyRender; //error: non-scalar cast: >> `&Renderer` as `&MyRender` >> myrender.API_render.render(); >> } >> } >> >> #[main] >> fn main() { >> let render = MyRender{API_render: ~...}; //init with user choice >> renderer >> let widget = MyGUIWidget; >> widget.draw(render); //draw >> } >> >> I didn't find a way to send specific Renderer to an API with a generic >> trait and to polymorph it when I know which struct it is. >> >> I can use a singleton or a static variable but static allocation >> doesn't seem to be allowed (as I undersdant the documentation). >> >> So I try with closure and I have a sort of example working using a >> renderer : >> trait Renderer{} // >> >> struct MyRender{ >> API_render: ~effectiveRenderer, >> } >> >> impl Renderer for MyRender{} >> >> >> trait Container{ >> fn draw(&self, draw_render: ||); >> } >> >> struct MyContainer { >> value :~str, >> } >> >> impl Container for MyContainer { >> fn draw(&self, draw_render: ||) { >> draw_render(); >> } >> } >> >> #[main] >> fn main() { >> let render = MyRender{API_render: ~StringRender}; //init with >> user choice renderer >> let container = MyContainer{value: ~"value"}; >> container.draw(|| { >> render.API_render.render(container.value); >> }); //draw >> } >> >> To extend my API I need to use more closure and if I don't what to >> construct every thing in the main I have to return closure constructed >> by each widget for example. >> >> My last idea is to use a spawned task that hold the renderer and send >> it the widget to draw but It seems to me a little complicated. >> >> So I don't see any simple to do it. If anybody can help, it would be >> very helpful. 
>> >> Philippe >> > I believe you want something like: > > pub mod rendering { > trait RendererTrait { > fn draw_line(&mut self); > } > > pub struct Renderer { > renderer: ~RendererTrait > } > > impl Renderer { > pub fn draw_line(&mut self) { > self.renderer.draw_line() > } > } > > pub fn get_renderer() -> Renderer { > // TODO: Choose correct renderer > Renderer {renderer: ~gtk::GtkRenderer} > } > > mod gtk { > pub struct GtkRenderer; > > impl ::rendering::RendererTrait for GtkRenderer { > fn draw_line(&mut self) {} > } > } > } > > struct Container; > > trait Widget { > fn draw(&self, renderer: &mut rendering::Renderer); > } > > impl Widget for Container { > fn draw(&self, renderer: &mut rendering::Renderer) { > renderer.draw_line() > } > } > > pub fn main() { > let mut renderer = rendering::get_renderer(); > let container = Container; > container.draw(&mut renderer); > } > > > I'm not expert in rust so I hope I got visibility right - the > rendering::Renderer and rendering::gtk are suppose to be hidden from > user. > > Although if user have choice of renderer IMHO something like that would > be better: > > pub mod rendering { > pub trait Renderer { > fn draw_line(&mut self); > } > > pub mod gtk { > pub struct GtkRenderer; > > impl GtkRenderer { > pub fn new() -> GtkRenderer { > GtkRenderer > } > } > > impl ::rendering::Renderer for GtkRenderer { > fn draw_line(&mut self) {} > } > } > } > > struct Container; > > trait Widget { > fn draw(&self, renderer: &mut > Renderer); > } > > impl Widget for Container { > fn draw(&self, renderer: &mut > Renderer) { > renderer.draw_line() > } > } > > pub fn main() { > let mut renderer = rendering::gtk::GtkRenderer::new(); > let container = Container; > container.draw(&mut renderer); > } > > The change is similar to change from lambdas to typeclasses in say > Haskell. YMMV if it is good style in Haskell but it has nice properties > in rust (like static dispatch). 
> >> Le 20/02/2014 04:14, Jack Moffitt a ?crit : >> >>>> I'am learning the functional programming paradigm with rust and to help me I >>>> decide to translate the pattern of the book "Functional Programming Patterns >>>> in Scala and Clojure" in Rust. In this work I have a problem to return a >>>> closure (or a function) as a return value and I didn't find any solution. I >>>> understand the problem but I can't find a solution. The code is : >>> Closures in Rust are stack allocated, so you can't return them from a >>> function since the function's stack will be gone. You can use either a >>> proc() or a ~Trait object. A proc can only be called once, but a trait >>> object can be called many times. If you don't need to close over any >>> state (which it appears you don't from your example), then you can >>> return bare functions. >>> >>> Here's a trait object example (untested and incomplete): >>> >>> trait Comparison { >>> fn compare(&self, p1: &Person, p2: &Person) -> Ordering; >>> } >>> >>> fn make_comparison() -> ~Comparison { >>> struct ClosedOverState { >>> ... >>> } >>> impl Comparison for ClosedOverState { >>> fn compare(...) -> Ordering { >>> .... // access state through self.foo >>> } >>> } >>> >>> ~ClosedOverState { >>> foo: 0, >>> } >>> } >>> >>> It can be simplified with macros. >>> >>> jack. >>> >>> > > > _______________________________________________ > Rust-dev mailing list > Rust-dev at mozilla.org > https://mail.mozilla.org/listinfo/rust-dev -------------- next part -------------- An HTML attachment was scrubbed... 
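The static dispatch Maciej mentions comes from making the widget generic over the renderer type instead of taking a trait object: each concrete renderer gets its own monomorphized copy of `draw`, resolved at compile time. A hedged sketch in later syntax, with invented names:

```rust
trait Renderer {
    fn draw_line(&mut self) -> u32;
}

// A toy renderer that just counts the lines it has drawn.
struct CountingRenderer {
    lines: u32,
}

impl Renderer for CountingRenderer {
    fn draw_line(&mut self) -> u32 {
        self.lines += 1;
        self.lines
    }
}

struct Container;

impl Container {
    // Generic over `R`: no vtable, the call is dispatched statically.
    fn draw<R: Renderer>(&self, renderer: &mut R) -> u32 {
        renderer.draw_line()
    }
}

fn main() {
    let mut r = CountingRenderer { lines: 0 };
    let c = Container;
    assert_eq!(c.draw(&mut r), 1);
    assert_eq!(c.draw(&mut r), 2);
    println!("ok");
}
```

The trade-off against the trait-object design discussed in this thread: static dispatch fixes the renderer type per call site, so heterogeneous renderers chosen at runtime still need a trait object somewhere.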
URL: From erick.tryzelaar at gmail.com Mon Feb 24 08:31:19 2014 From: erick.tryzelaar at gmail.com (Erick Tryzelaar) Date: Mon, 24 Feb 2014 11:31:19 -0500 Subject: [rust-dev] Breaking change: new Hash framework has landed Message-ID: I'm happy to announce that Rust's new hashing framework has landed in: https://github.com/mozilla/rust/pull/11863 https://github.com/mozilla/rust/pull/12492 This PR has has changed how to declare a type is hashable. Here's a full example on how to hash a value using either the new `#[deriving(Hash)]` or manual implementation. ``` use std::hash::{Hash, hash}; use std::hash::sip::SipState; #[deriving(Hash)] struct Foo { a: ~str, b: uint, c: bool, } struct Bar { a: ~str, b: uint, c: bool, } impl Hash for Bar { fn hash(&self, state: &mut SipState) { self.a.hash(state); self.b.hash(state); self.c.hash(state); } } fn main() { let foo = Foo { a: ~"hello world", b: 5, c: true }; println!("{}", hash(&foo)); let bar = Bar { a: ~"hello world", b: 5, c: true }; println!("{}", hash(&bar)); } ``` We also have experimental support for hashers that compute a value off a stream of bytes: ``` use std::hash::{Hash, Hasher}; use std::io::IoResult; #[deriving(Hash)] // automatically provides hashing from a stream of bytes struct Foo { a: ~str, b: uint, c: bool, } struct Bar { a: ~str, b: uint, c: bool, } #[allow(default_type_param_usage)] impl Hash for Bar { fn hash(&self, state: &mut S) { self.a.hash(state); self.b.hash(state); self.c.hash(state); } } struct SumState { sum: u64, } impl Writer for SumState { fn write(&mut self, bytes: &[u8]) -> IoResult<()> { for byte in bytes.iter() { self.sum += *byte as u64; } Ok(()) } } struct SumHasher; #[allow(default_type_param_usage)] impl Hasher for SumHasher { fn hash>(&self, value: &T) -> u64 { let mut state = SumState { sum: 0 }; value.hash(&mut state); state.sum } } fn main() { let hasher = SumHasher; let foo = Foo { a: ~"hello world", b: 5, c: true }; println!("{}", hasher.hash(&foo)); let bar = Bar { a: 
~"hello world", b: 5, c: true }; println!("{}", hasher.hash(&bar)); } ``` Finally, we also support completely custom hash computation: ``` use std::hash::{Hash, Hasher}; struct Foo { hash: u64 } #[allow(default_type_param_usage)] impl Hash for Foo { fn hash(&self, state: &mut u64) { *state = self.hash } } struct CustomHasher; #[allow(default_type_param_usage)] impl Hasher for CustomHasher { fn hash>(&self, value: &T) -> u64 { let mut state = 0; value.hash(&mut state); state } } fn main() { let hasher = CustomHasher; let foo = Foo { hash: 5 }; println!("{}", hasher.hash(&foo)); } ``` This may break over the next couple days/weeks as we figure out the right way to do this. Furthermore, HashMaps have not yet been updated to take advantage of the custom hashers, but that should be coming later on this week. I hope they work well for all of you. If you run into any trouble, please file bugs and cc @erickt on the ticket. Thanks, Erick -------------- next part -------------- An HTML attachment was scrubbed... URL: From banderson at mozilla.com Mon Feb 24 11:55:47 2014 From: banderson at mozilla.com (Brian Anderson) Date: Mon, 24 Feb 2014 11:55:47 -0800 Subject: [rust-dev] Breaking change: new Hash framework has landed In-Reply-To: References: Message-ID: <530BA3C3.2030106@mozilla.com> Thanks, Erick! This was an awesome effort that greatly improves the ergonomics of hashing. On 02/24/2014 08:31 AM, Erick Tryzelaar wrote: > I'm happy to announce that Rust's new hashing framework has landed in: > > https://github.com/mozilla/rust/pull/11863 > https://github.com/mozilla/rust/pull/12492 > > This PR has has changed how to declare a type is hashable. Here's a full > example on how to hash a value using either the new `#[deriving(Hash)]` > or manual implementation. 
> > ``` > use std::hash::{Hash, hash}; > use std::hash::sip::SipState; > > #[deriving(Hash)] > struct Foo { > a: ~str, > b: uint, > c: bool, > } > > struct Bar { > a: ~str, > b: uint, > c: bool, > } > > impl Hash for Bar { > fn hash(&self, state: &mut SipState) { > self.a.