//! A stream that efficiently multiplexes multiple streams.
//!
//! This "combinator" provides the ability to maintain and drive a set of streams to completion,
//! while also providing access to each stream as it yields new elements.
//!
//! Streams are pushed into this set and their realized values are yielded as they are produced.
//! This structure is optimized to manage a large number of streams. Streams managed by
//! `StreamUnordered` will only be polled when they generate notifications. This reduces the
//! required amount of work needed to coordinate large numbers of streams.
//!
//! When a `StreamUnordered` is first created, it does not contain any streams. Calling `poll` in
//! this state will return `Ok(Async::Ready(None))`. Streams are submitted to the set using
//! `push`; however, the stream will **not** be polled at this point. `StreamUnordered` will only
//! poll managed streams when `StreamUnordered::poll` is called. As such, it is important to call
//! `poll` after pushing new streams.
//!
//! If `StreamUnordered::poll` returns `Ok(Async::Ready(None))`, this means that the set is
//! currently not managing any streams. A stream may be submitted to the set at a later time. At
//! that point, a call to `StreamUnordered::poll` will either yield the stream's next value **or**
//! return `Ok(Async::NotReady)` if the stream has not yet produced one.
//!
//! Whenever a value is yielded, the yielding stream's index is also included. A reference to the
//! stream that originated the value is obtained by using [`StreamUnordered::get`] or
//! [`StreamUnordered::get_mut`].
//!
//! In normal operation, a successful `poll` will yield a `StreamYield::Item`. This value
//! indicates that an underlying stream (the one indicated by the included index) produced an
//! item. If an underlying stream yields `Async::Ready(None)` to indicate termination, a
//! `StreamYield::Finished` is returned instead. Note that as soon as a stream is returned in
//! `StreamYield::Finished`, its token may be reused for new streams that are added.
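//!
//! A minimal usage sketch of that flow (marked `ignore` since it elides crate setup; `iter_ok`
//! is used purely for illustration, and in a real program `poll` must be called from within a
//! task):
//!
//! ```rust,ignore
//! use futures::stream::iter_ok;
//! use futures::{Async, Stream};
//!
//! let mut set = StreamUnordered::new();
//!
//! // Each pushed stream is identified by the returned token.
//! let a = set.push(iter_ok::<_, ()>(vec![1, 2]));
//! let b = set.push(iter_ok::<_, ()>(vec![3]));
//!
//! // Tokens can be used to access the streams directly.
//! assert!(set.get(a).is_some());
//! assert!(set.get(b).is_some());
//!
//! // Polling the set yields `(StreamYield, token)` pairs as the underlying streams produce
//! // items or finish; `Ok(Async::Ready(None))` means the set currently manages no streams.
//! while let Ok(Async::Ready(Some((yielded, token)))) = set.poll() {
//!     match yielded {
//!         StreamYield::Item(item) => println!("stream {} produced {}", token, item),
//!         StreamYield::Finished(_stream) => println!("stream {} finished", token),
//!     }
//! }
//! ```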
#![deny(missing_docs)]
#![deny(missing_debug_implementations)]

extern crate futures;
extern crate slab;

use std::cell::UnsafeCell;
use std::fmt::{self, Debug};
use std::iter::FromIterator;
use std::marker::PhantomData;
use std::mem;
use std::ops::{Index, IndexMut};
use std::ptr;
use std::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
use std::sync::atomic::{AtomicBool, AtomicPtr};
use std::sync::{Arc, Weak};
use std::usize;

use futures::executor::{self, Notify, NotifyHandle, UnsafeNotify};
use futures::task::AtomicTask;
use futures::{task, Async, Poll, Stream};

/// A stream multiplexer.
///
/// See the crate-level documentation for details.
#[must_use = "streams do nothing unless polled"]
pub struct StreamUnordered<S> {
    inner: Arc<Inner>,
    streams: slab::Slab<S>,
    head_all: *const Node,
}

unsafe impl<S: Send> Send for StreamUnordered<S> {}
unsafe impl<S: Sync> Sync for StreamUnordered<S> {}

// StreamUnordered is an almost direct clone of futures::stream::FuturesUnordered, but adapted to
// manage streams instead of futures. Since users may wish to further operate on streams after
// they yield a value (e.g., by replying on a `TcpStream`), StreamUnordered also includes
// information about what stream each yielded item originated from. It internally maintains a
// Slab of all managed streams, which can then be accessed by the user through the token they
// receive along with the yielded values.
//
// StreamUnordered is implemented using two linked lists. One which links all
// streams managed by a `StreamUnordered` and one that tracks streams that have
// been scheduled for polling. The first linked list is not thread safe and is
// only accessed by the thread that owns the `StreamUnordered` value. The
// second linked list is an implementation of the intrusive MPSC queue algorithm
// described by 1024cores.net.
//
// When a stream is submitted to the set a node is allocated and inserted in
// both linked lists. The next call to `poll` will (eventually) see this node
// and call `poll` on the stream.
//
// Before a managed stream is polled, the current task's `Notify` is replaced
// with one that is aware of the specific stream being run. This ensures that
// task notifications generated by that specific stream are visible to
// `StreamUnordered`. When a notification is received, the node is scheduled
// for polling by being inserted into the concurrent linked list.
//
// Each node uses an `AtomicBool` (`queued`) to track whether it is currently
// inserted in the atomic queue; its reference count (the number of outstanding
// handles to the node) is managed through its `Arc`. When the stream is
// notified, it will only insert itself into the linked list if it isn't
// currently inserted.
//
// This implementation could likely be optimized further now that the linked lists no longer need
// to contain the underlying Futures (as in FuturesUnordered). However, that's a task for later.
#[allow(missing_debug_implementations)]
struct Inner {
    // The task using `StreamUnordered`.
    parent: AtomicTask,

    // Head/tail of the readiness queue
    head_readiness: AtomicPtr<Node>,
    tail_readiness: UnsafeCell<*const Node>,
    stub: Arc<Node>,
}

struct Node {
    // The stream's index
    stream: UnsafeCell<Option<usize>>,

    // Next pointer for linked list tracking all active nodes
    next_all: UnsafeCell<*const Node>,

    // Previous node in linked list tracking all active nodes
    prev_all: UnsafeCell<*const Node>,

    // Next pointer in readiness queue
    next_readiness: AtomicPtr<Node>,

    // Queue that we'll be enqueued to when notified
    queue: Weak<Inner>,

    // Whether or not this node is currently in the mpsc queue.
    queued: AtomicBool,
}

enum Dequeue {
    Data(*const Node),
    Empty,
    Inconsistent,
}

impl<S> Default for StreamUnordered<S> {
    fn default() -> Self {
        StreamUnordered::new()
    }
}

/// A handle to a vacant stream slot in a `StreamUnordered`.
///
/// `StreamSlot` allows constructing streams that hold the token they will be assigned.
#[derive(Debug)]
pub struct StreamSlot<'a, S: 'a> {
    entry: slab::VacantEntry<'a, S>,
    backref: *mut StreamUnordered<S>,
}

impl<'a, S: 'a> StreamSlot<'a, S> {
    /// Insert a stream in the slot, and return a mutable reference to the value.
    ///
    /// To get the token associated with the stream, use [`StreamSlot::token`] prior to calling
    /// `insert`.
    pub fn insert(self, stream: S) -> &'a mut S {
        let token = self.entry.key();
        {
            // in a scope so we drop the &mut S
            self.entry.insert(stream);
        }
        // safe because the StreamSlot captures the &'a mut StreamUnordered (so it can't be
        // moved), and the only other ref to anything in StreamUnordered was in the
        // slab::VacantEntry, which we've now consumed.
        let this = unsafe { &mut *self.backref };

        let node = Arc::new(Node {
            stream: UnsafeCell::new(Some(token)),
            next_all: UnsafeCell::new(ptr::null_mut()),
            prev_all: UnsafeCell::new(ptr::null_mut()),
            next_readiness: AtomicPtr::new(ptr::null_mut()),
            queued: AtomicBool::new(true),
            queue: Arc::downgrade(&this.inner),
        });

        // Right now our node has a strong reference count of 1. We transfer
        // ownership of this reference count to our internal linked list
        // and we'll reclaim ownership through the `unlink` function below.
        let ptr = this.link(node);

        // We'll need to get the stream "into the system" to start tracking it,
        // e.g. getting its unpark notifications routed to us so we can track
        // which streams are ready. To do that we unconditionally enqueue it
        // for polling here.
        this.inner.enqueue(ptr);

        &mut this[token]
    }

    /// Return the token associated with this slot.
    ///
    /// A stream stored in this slot will be associated with this token.
    pub fn token(&self) -> usize {
        self.entry.key()
    }
}

impl<S> StreamUnordered<S> {
    /// Constructs a new, empty `StreamUnordered`.
    ///
    /// The returned `StreamUnordered` does not contain any streams and, in this
    /// state, `StreamUnordered::poll` will return `Ok(Async::Ready(None))`.
    pub fn new() -> StreamUnordered<S> {
        let stub = Arc::new(Node {
            stream: UnsafeCell::new(None),
            next_all: UnsafeCell::new(ptr::null()),
            prev_all: UnsafeCell::new(ptr::null()),
            next_readiness: AtomicPtr::new(ptr::null_mut()),
            queued: AtomicBool::new(true),
            queue: Weak::new(),
        });
        let stub_ptr = &*stub as *const Node;
        let inner = Arc::new(Inner {
            parent: AtomicTask::new(),
            head_readiness: AtomicPtr::new(stub_ptr as *mut _),
            tail_readiness: UnsafeCell::new(stub_ptr),
            stub: stub,
        });

        StreamUnordered {
            streams: slab::Slab::new(),
            head_all: ptr::null_mut(),
            inner: inner,
        }
    }

    /// Returns the number of streams contained in the set.
    ///
    /// This represents the total number of in-flight streams.
    pub fn len(&self) -> usize {
        self.streams.len()
    }

    /// Returns `true` if the set contains no streams.
    pub fn is_empty(&self) -> bool {
        self.streams.is_empty()
    }

    /// Returns a handle to a vacant stream slot allowing for further manipulation.
    ///
    /// This function is useful when creating values that must contain their stream token. The
    /// returned `StreamSlot` reserves a slot for the stream and is able to query the associated
    /// token.
    pub fn stream_slot(&mut self) -> StreamSlot<S> {
        let this = self as *mut _;
        StreamSlot {
            entry: self.streams.vacant_entry(),
            backref: this,
        }
    }

    /// Push a stream into the set.
    ///
    /// This function submits the given stream to the set for managing. This
    /// function will not call `poll` on the submitted stream. The caller must
    /// ensure that `StreamUnordered::poll` is called in order to receive task
    /// notifications.
    ///
    /// The returned token is an identifier that uniquely identifies the given stream. To get a
    /// handle to the pushed stream, pass the token to [`StreamUnordered::get`] or
    /// [`StreamUnordered::get_mut`] (or just index `StreamUnordered` directly). The same token
    /// will be yielded whenever an element is pulled from this stream.
    pub fn push(&mut self, stream: S) -> usize {
        let s = self.stream_slot();
        let token = s.token();
        s.insert(stream);
        token
    }

    /// Returns a reference to the stream at the given index.
    ///
    /// If the given index is not associated with a stream, then `None` is returned.
    ///
    /// This method is useful for getting a reference to a specific stream after it yielded a
    /// value.
    pub fn get(&self, stream: usize) -> Option<&S> {
        self.streams.get(stream)
    }

    /// Returns a mutable reference to the stream at the given index.
    ///
    /// If the given index is not associated with a stream, then `None` is returned.
    ///
    /// This method is useful for getting a mutable reference to a specific stream after it
    /// yielded a value.
    pub fn get_mut(&mut self, stream: usize) -> Option<&mut S> {
        self.streams.get_mut(stream)
    }

    /// Returns an iterator that allows modifying each stream in the set.
    pub fn iter_mut<'a>(&'a mut self) -> impl Iterator<Item = &'a mut S> {
        self.streams.iter_mut().map(|(_, s)| s)
    }

    fn release_node(&mut self, node: Arc<Node>) {
        // The stream is done, try to reset the queued flag. This will prevent
        // `notify` from doing any work for the stream.
        let prev = node.queued.swap(true, SeqCst);

        // Drop the stream, even if it hasn't finished yet. This is safe
        // because we're dropping the stream on the thread that owns
        // `StreamUnordered`, which correctly tracks T's lifetimes and such.
        if let Some(idx) = unsafe { (*node.stream.get()).take() } {
            drop(self.streams.remove(idx));
        }

        // If the queued flag was previously set then it means that this node
        // is still in our internal mpsc queue. We then transfer ownership
        // of our reference count to the mpsc queue, and it'll come along and
        // free it later, noticing that the stream is `None`.
        //
        // If, however, the queued flag was *not* set then we're safe to
        // release our reference count on the internal node. The queued flag
        // was set above so all stream `enqueue` operations will not actually
        // enqueue the node, so our node will never see the mpsc queue again.
        // The node itself will be deallocated once all reference counts have
        // been dropped by the various owning tasks elsewhere.
        if prev {
            mem::forget(node);
        }
    }

    /// Insert a new node into the internal linked list.
    fn link(&mut self, node: Arc<Node>) -> *const Node {
        let ptr = arc2ptr(node);
        unsafe {
            *(*ptr).next_all.get() = self.head_all;
            if !self.head_all.is_null() {
                *(*self.head_all).prev_all.get() = ptr;
            }
        }

        self.head_all = ptr;
        return ptr;
    }

    /// Remove the node from the linked list tracking all nodes currently
    /// managed by `StreamUnordered`.
    unsafe fn unlink(&mut self, node: *const Node) -> Arc<Node> {
        let node = ptr2arc(node);
        let next = *node.next_all.get();
        let prev = *node.prev_all.get();
        *node.next_all.get() = ptr::null_mut();
        *node.prev_all.get() = ptr::null_mut();

        if !next.is_null() {
            *(*next).prev_all.get() = prev;
        }

        if !prev.is_null() {
            *(*prev).next_all.get() = next;
        } else {
            self.head_all = next;
        }

        return node;
    }
}

impl<S> Index<usize> for StreamUnordered<S> {
    type Output = S;
    fn index(&self, stream: usize) -> &Self::Output {
        &self.streams[stream]
    }
}

impl<S> IndexMut<usize> for StreamUnordered<S> {
    fn index_mut(&mut self, stream: usize) -> &mut Self::Output {
        &mut self.streams[stream]
    }
}

/// An event that occurred for a managed stream.
#[derive(Debug)]
pub enum StreamYield<S>
where
    S: Stream,
{
    /// The underlying stream produced an item.
    Item(S::Item),

    /// The underlying stream has completed, and is being returned.
    ///
    /// Note that once this value is yielded, the stream's token may be reused.
    Finished(S),
}

impl<S> Stream for StreamUnordered<S>
where
    S: Stream,
{
    type Item = (StreamYield<S>, usize);
    type Error = S::Error;

    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
        // Ensure `parent` is correctly set.
        self.inner.parent.register();

        loop {
            let node = match unsafe { self.inner.dequeue() } {
                Dequeue::Empty => {
                    if self.is_empty() {
                        return Ok(Async::Ready(None));
                    } else {
                        return Ok(Async::NotReady);
                    }
                }
                Dequeue::Inconsistent => {
                    // At this point, it may be worth yielding the thread &
                    // spinning a few times... but for now, just yield using the
                    // task system.
                    task::current().notify();
                    return Ok(Async::NotReady);
                }
                Dequeue::Data(node) => node,
            };

            debug_assert!(node != self.inner.stub());

            unsafe {
                let stream = match (*(*node).stream.get()).take() {
                    Some(stream) => stream,

                    // If the stream has already gone away then we're just
                    // cleaning out this node. See the comment in
                    // `release_node` for more information, but we're basically
                    // just taking ownership of our reference count here.
                    None => {
                        let node = ptr2arc(node);
                        assert!((*node.next_all.get()).is_null());
                        assert!((*node.prev_all.get()).is_null());
                        continue;
                    }
                };

                // Unset queued flag... this must be done before
                // polling. This ensures that the stream gets
                // rescheduled if it is notified **during** a call
                // to `poll`.
                let prev = (*node).queued.swap(false, SeqCst);
                assert!(prev);

                // We're going to need to be very careful if the `poll`
                // function below panics. We need to (a) not leak memory and
                // (b) ensure that we still don't have any use-after-frees. To
                // manage this we do a few things:
                //
                // * This "bomb" here will call `release_node` if dropped
                //   abnormally. That way we'll be sure the memory management
                //   of the `node` is managed correctly.
                // * The stream was extracted above (taken ownership). That way
                //   if it panics we're guaranteed that the stream is
                //   dropped on this thread and doesn't accidentally get
                //   dropped on a different thread (bad).
                // * We unlink the node from our internal queue to preemptively
                //   assume it'll panic, in which case we'll want to discard it
                //   regardless.
                struct Bomb<'a, T: 'a> {
                    queue: &'a mut StreamUnordered<T>,
                    node: Option<Arc<Node>>,
                }

                impl<'a, T> Drop for Bomb<'a, T> {
                    fn drop(&mut self) {
                        if let Some(node) = self.node.take() {
                            self.queue.release_node(node);
                        }
                    }
                }

                let mut bomb = Bomb {
                    node: Some(self.unlink(node)),
                    queue: self,
                };

                // Poll the underlying stream with the appropriate `notify`
                // implementation. This is where a large bit of the unsafety
                // starts to stem from internally. The `notify` instance itself
                // is basically just our `Arc<Node>` and tracks the mpsc
                // queue of ready streams.
                //
                // Critically though `Node` won't actually access `T`, the
                // stream, while it's floating around inside of `Task`
                // instances. These structs will basically just use `T` to size
                // the internal allocation, appropriately accessing fields and
                // deallocating the node if need be.
                let res = {
                    let notify = NodeToHandle(bomb.node.as_ref().unwrap());
                    let mut stream = bomb.queue.streams.get_mut(stream).unwrap();
                    executor::with_notify(&notify, 0, || stream.poll())
                };

                break match res {
                    Ok(Async::NotReady) => {
                        let node = bomb.node.take().unwrap();
                        *node.stream.get() = Some(stream);
                        bomb.queue.link(node);
                        continue;
                    }
                    Ok(Async::Ready(Some(e))) => {
                        // Since we got Ready, we have to call poll() again
                        NotifyHandle::from(NodeToHandle(bomb.node.as_ref().unwrap())).notify(0);

                        // We're also not done with the stream just because it yielded something
                        let node = bomb.node.take().unwrap();
                        *node.stream.get() = Some(stream);
                        bomb.queue.link(node);

                        Ok(Async::Ready(Some((StreamYield::Item(e), stream))))
                    }
                    Ok(Async::Ready(None)) => {
                        // The stream has completed and should be removed.
                        let s = bomb.queue.streams.remove(stream);
                        Ok(Async::Ready(Some((StreamYield::Finished(s), stream))))
                    }
                    Err(e) => Err(e),
                };
            }
        }
    }
}

impl<S: Debug> Debug for StreamUnordered<S> {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        write!(fmt, "StreamUnordered {{ ... }}")
    }
}

impl<S> Drop for StreamUnordered<S> {
    fn drop(&mut self) {
        // When a `StreamUnordered` is dropped we want to drop all streams associated
        // with it. At the same time though there may be tons of `Task` handles
        // flying around which contain `Node` references inside them. We'll
        // let those naturally get deallocated when the `Task` itself goes out
        // of scope or gets notified.
        unsafe {
            while !self.head_all.is_null() {
                let head = self.head_all;
                let node = self.unlink(head);
                self.release_node(node);
            }
        }

        // Note that at this point we could still have a bunch of nodes in the
        // mpsc queue. None of those nodes, however, have streams associated
        // with them so they're safe to destroy on any thread. At this point
        // the `StreamUnordered` struct, the owner of the one strong reference
        // to `Inner`, will drop that strong reference. At that point
        // whichever thread releases the strong refcount last (be it this
        // thread or some other thread as part of an `upgrade`) will clear out
        // the mpsc queue and free all remaining nodes.
        //
        // While that freeing operation isn't guaranteed to happen here, it's
        // guaranteed to happen "promptly" as no more "blocking work" will
        // happen while there's a strong refcount held.
    }
}

impl<S> FromIterator<S> for StreamUnordered<S> {
    fn from_iter<I>(iter: I) -> Self
    where
        I: IntoIterator<Item = S>,
    {
        let mut new = StreamUnordered::new();
        for stream in iter.into_iter() {
            new.push(stream);
        }
        new
    }
}

impl Inner {
    /// The enqueue function from the 1024cores intrusive MPSC queue algorithm.
    fn enqueue(&self, node: *const Node) {
        unsafe {
            debug_assert!((*node).queued.load(Relaxed));

            // This action does not require any coordination
            (*node).next_readiness.store(ptr::null_mut(), Relaxed);

            // Note that these atomic orderings come from 1024cores
            let node = node as *mut _;
            let prev = self.head_readiness.swap(node, AcqRel);
            (*prev).next_readiness.store(node, Release);
        }
    }

    /// The dequeue function from the 1024cores intrusive MPSC queue algorithm.
    ///
    /// Note that this is unsafe as it requires mutual exclusion (only one
    /// thread may call this at a time), which must be guaranteed elsewhere.
    unsafe fn dequeue(&self) -> Dequeue {
        let mut tail = *self.tail_readiness.get();
        let mut next = (*tail).next_readiness.load(Acquire);

        if tail == self.stub() {
            if next.is_null() {
                return Dequeue::Empty;
            }

            *self.tail_readiness.get() = next;
            tail = next;
            next = (*next).next_readiness.load(Acquire);
        }

        if !next.is_null() {
            *self.tail_readiness.get() = next;
            debug_assert!(tail != self.stub());
            return Dequeue::Data(tail);
        }

        if self.head_readiness.load(Acquire) as *const _ != tail {
            return Dequeue::Inconsistent;
        }

        self.enqueue(self.stub());

        next = (*tail).next_readiness.load(Acquire);

        if !next.is_null() {
            *self.tail_readiness.get() = next;
            return Dequeue::Data(tail);
        }

        Dequeue::Inconsistent
    }

    fn stub(&self) -> *const Node {
        &*self.stub
    }
}

impl Drop for Inner {
    fn drop(&mut self) {
        // Once we're in the destructor for `Inner` we need to clear out the
        // mpsc queue of nodes if there's anything left in there.
        //
        // Note that each node has a strong reference count associated with it
        // which is owned by the mpsc queue.
        // All nodes should have had their streams dropped already by the
        // `StreamUnordered` destructor above, so we're just pulling out nodes
        // and dropping their refcounts.
        unsafe {
            loop {
                match self.dequeue() {
                    Dequeue::Empty => break,
                    Dequeue::Inconsistent => abort("inconsistent in drop"),
                    Dequeue::Data(ptr) => drop(ptr2arc(ptr)),
                }
            }
        }
    }
}

#[allow(missing_debug_implementations)]
struct NodeToHandle<'a>(&'a Arc<Node>);

impl<'a> Clone for NodeToHandle<'a> {
    fn clone(&self) -> Self {
        NodeToHandle(self.0)
    }
}

impl<'a> From<NodeToHandle<'a>> for NotifyHandle {
    fn from(handle: NodeToHandle<'a>) -> NotifyHandle {
        unsafe {
            let ptr = handle.0.clone();
            let ptr = mem::transmute::<Arc<Node>, *mut ArcNode>(ptr);
            NotifyHandle::new(hide_lt(ptr))
        }
    }
}

struct ArcNode(PhantomData<()>);

// We should never touch `T` on any thread other than the one owning
// `StreamUnordered`, so this should be a safe operation.
unsafe impl Send for ArcNode {}
unsafe impl Sync for ArcNode {}

impl Notify for ArcNode {
    fn notify(&self, _id: usize) {
        unsafe {
            let me: *const ArcNode = self;
            let me: *const *const ArcNode = &me;
            let me = me as *const Arc<Node>;
            Node::notify(&*me)
        }
    }
}

unsafe impl UnsafeNotify for ArcNode {
    unsafe fn clone_raw(&self) -> NotifyHandle {
        let me: *const ArcNode = self;
        let me: *const *const ArcNode = &me;
        let me = &*(me as *const Arc<Node>);
        NodeToHandle(me).into()
    }

    unsafe fn drop_raw(&self) {
        let mut me: *const ArcNode = self;
        let me = &mut me as *mut *const ArcNode as *mut Arc<Node>;
        ptr::drop_in_place(me);
    }
}

unsafe fn hide_lt(p: *mut ArcNode) -> *mut UnsafeNotify {
    mem::transmute(p as *mut UnsafeNotify)
}

impl Node {
    fn notify(me: &Arc<Node>) {
        let inner = match me.queue.upgrade() {
            Some(inner) => inner,
            None => return,
        };

        // It's our job to notify the node that it's ready to get polled,
        // meaning that we need to enqueue it into the readiness queue. To
        // do this we flag that we're ready to be queued, and if successful
        // we then do the literal queueing operation, ensuring that we're
        // only queued once.
        //
        // Once the node is inserted we make sure to notify the parent task,
        // as it'll want to come along and pick up our node now.
        //
        // Note that we don't change the reference count of the node here,
        // we're just enqueueing the raw pointer. The `StreamUnordered`
        // implementation guarantees that if we set the `queued` flag to true,
        // there's still a reference count held by the main `StreamUnordered`
        // queue.
        let prev = me.queued.swap(true, SeqCst);
        if !prev {
            inner.enqueue(&**me);
            inner.parent.notify();
        }
    }
}

impl Drop for Node {
    fn drop(&mut self) {
        // Currently a `Node` is sent across all threads for any lifetime,
        // regardless of `T`. This means that for memory safety we can't
        // actually touch `T` at any time except when we have a reference to the
        // `StreamUnordered` itself.
        //
        // Consequently it *should* be the case that we always drop streams from
        // the `StreamUnordered` instance, but this is a bomb in place to catch
        // any bugs in that logic.
        unsafe {
            if (*self.stream.get()).is_some() {
                abort("stream still here when dropping");
            }
        }
    }
}

fn arc2ptr<T>(ptr: Arc<T>) -> *const T {
    let addr = &*ptr as *const T;
    mem::forget(ptr);
    return addr;
}

unsafe fn ptr2arc<T>(ptr: *const T) -> Arc<T> {
    let anchor = mem::transmute::<usize, Arc<T>>(0x10);
    let addr = &*anchor as *const T;
    mem::forget(anchor);
    let offset = addr as isize - 0x10;
    mem::transmute::<isize, Arc<T>>(ptr as isize - offset)
}

fn abort(s: &str) -> !
{
    struct DoublePanic;

    impl Drop for DoublePanic {
        fn drop(&mut self) {
            panic!("panicking twice to abort the program");
        }
    }

    let _bomb = DoublePanic;
    panic!("{}", s);
}
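
// A brief illustrative test module (a sketch added for clarity, not part of the original source).
// It exercises the push/poll/token flow described in the crate-level docs: `iter_ok` supplies a
// short, finite stream, and `future::lazy(..).wait()` ensures `poll` runs inside a task context.
#[cfg(test)]
mod usage_sketch {
    use super::{StreamUnordered, StreamYield};
    use futures::future::lazy;
    use futures::stream::iter_ok;
    use futures::{Async, Future, Stream};

    #[test]
    fn push_poll_and_finish() {
        lazy(|| -> Result<(), ()> {
            let mut set = StreamUnordered::new();
            let a = set.push(iter_ok::<_, ()>(vec![1, 2]));

            // The pushed stream is reachable through its token.
            assert!(set.get(a).is_some());

            let mut items = Vec::new();
            loop {
                match set.poll().unwrap() {
                    Async::Ready(Some((StreamYield::Item(i), tok))) => {
                        assert_eq!(tok, a);
                        items.push(i);
                    }
                    Async::Ready(Some((StreamYield::Finished(_s), tok))) => {
                        // Once a stream finishes it is handed back, and its token may be reused.
                        assert_eq!(tok, a);
                    }
                    // The set reports `Ready(None)` once it manages no streams.
                    Async::Ready(None) => break,
                    Async::NotReady => break,
                }
            }
            assert_eq!(items, vec![1, 2]);
            Ok(())
        })
        .wait()
        .unwrap();
    }
}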