Rust smart pointers
- Box<>
- Box::new
- Rc<>
- Rc::new
- Weak<>
- Cell<>
- Cell::new
- Cell get/set
- RefCell<>
- RefCell borrow/borrow_mut
- Arc<>
- Arc::new
- Mutex<>
- Mutex::new
- Mutex lock/try_lock/drop
Box<>
Like unique_ptr in C++.
Internally, it contains a Unique<>, which holds the raw pointer.
```rust
pub struct Box<T: ?Sized>(Unique<T>);

pub struct Unique<T: ?Sized> {
    pointer: *const T,
    // NOTE: this marker has no consequences for variance, but is necessary
    // for dropck to understand that we logically own a `T`.
    //
    // For details, see:
    // https://github.com/rust-lang/rfcs/blob/master/text/0769-sound-generic-drop.md#phantom-data
    _marker: PhantomData<T>,
}
```
Box::new
It uses the (unstable) ‘box’ keyword to allocate T on the heap; the compiler lowers ‘box’ into a call to exchange_malloc.
```rust
pub fn new(x: T) -> Box<T> {
    box x
}
```
Rc<>
Like shared_ptr in C++. It implements a reference-counting pointer for single-threaded environments.
```rust
pub struct Rc<T: ?Sized> {
    ptr: NonNull<RcBox<T>>,
    phantom: PhantomData<RcBox<T>>,
}

pub struct NonNull<T: ?Sized> {
    pointer: *const T,
}

struct RcBox<T: ?Sized> {
    strong: Cell<usize>,
    weak: Cell<usize>,
    value: T,
}
```
RcBox holds the counters for strong and weak references. Note that both counters are Cell<>s, i.e. they are interior-mutable and can be updated through a shared reference.
Rc::new
The RcBox holding the value and both counters is allocated with ‘box’.
```rust
pub fn new(value: T) -> Rc<T> {
    // There is an implicit weak pointer owned by all the strong
    // pointers, which ensures that the weak destructor never frees
    // the allocation while the strong destructor is running, even
    // if the weak pointer is stored inside the strong one.
    Self::from_inner(Box::into_raw_non_null(box RcBox {
        strong: Cell::new(1),
        weak: Cell::new(1),
        value,
    }))
}
```
Weak<>
```rust
pub struct Weak<T: ?Sized> {
    // This is a `NonNull` to allow optimizing the size of this type in enums,
    // but it is not necessarily a valid pointer.
    // `Weak::new` sets this to `usize::MAX` so that it doesn't need
    // to allocate space on the heap. That's not a value a real pointer
    // will ever have because RcBox has alignment at least 2.
    ptr: NonNull<RcBox<T>>,
}

pub fn upgrade(&self) -> Option<Rc<T>> {
    let inner = self.inner()?;
    if inner.strong() == 0 {
        None
    } else {
        inner.inc_strong();
        Some(Rc::from_inner(self.ptr))
    }
}

pub fn downgrade(this: &Self) -> Weak<T> {
    this.inc_weak();
    // Make sure we do not create a dangling Weak
    debug_assert!(!is_dangling(this.ptr));
    Weak { ptr: this.ptr }
}
```
Cell<>
Cell is a mutable memory location: it allows mutation inside an immutable struct, also known as interior mutability. Internally it contains an UnsafeCell<>, which is a wrapper around the T.
Strictly speaking, it is not a pointer.
```rust
pub struct Cell<T: ?Sized> {
    value: UnsafeCell<T>,
}
```
Cell::new
It creates a new Cell that wraps the value in an UnsafeCell<>.
```rust
pub const fn new(value: T) -> Cell<T> {
    Cell { value: UnsafeCell::new(value) }
}
```
Cell get/set
set replaces the value with the new one and drops the old one.
get simply returns a copy of the value. Note that get requires T: Copy.
```rust
pub fn set(&self, val: T) {
    let old = self.replace(val);
    drop(old);
}

pub fn replace(&self, val: T) -> T {
    // SAFETY: This can cause data races if called from a separate thread,
    // but `Cell` is `!Sync` so this won't happen.
    mem::replace(unsafe { &mut *self.value.get() }, val)
}

pub fn get(&self) -> T {
    // SAFETY: This can cause data races if called from a separate thread,
    // but `Cell` is `!Sync` so this won't happen.
    unsafe { *self.value.get() }
}
```
RefCell<>
RefCell is a mutable memory location with dynamically checked borrow rules. Unlike Cell<>, the borrow rules are enforced at runtime: it panics if a violation is found.
Roughly, RefCell<> is like Cell<>, but hands out references to its value instead of copying it. To access the value inside a RefCell, borrow/borrow_mut must be used.
A BorrowFlag tracks the outstanding borrows, and the value itself is held in an UnsafeCell<>.
```rust
pub struct RefCell<T: ?Sized> {
    borrow: Cell<BorrowFlag>,
    value: UnsafeCell<T>,
}
```
RefCell borrow/borrow_mut
```rust
pub fn borrow_mut(&self) -> RefMut<'_, T> {
    self.try_borrow_mut().expect("already borrowed")
}

pub fn borrow(&self) -> Ref<'_, T> {
    self.try_borrow().expect("already mutably borrowed")
}
```
Arc<>
A thread-safe reference-counting pointer. ‘Arc’ stands for ‘Atomically Reference Counted’.
Internally it uses ArcInner to hold the value and counters.
```rust
pub struct Arc<T: ?Sized> {
    ptr: NonNull<ArcInner<T>>,
    phantom: PhantomData<ArcInner<T>>,
}

struct ArcInner<T: ?Sized> {
    strong: atomic::AtomicUsize,
    // the value usize::MAX acts as a sentinel for temporarily "locking" the
    // ability to upgrade weak pointers or downgrade strong ones; this is used
    // to avoid races in `make_mut` and `get_mut`.
    weak: atomic::AtomicUsize,
    data: T,
}
```
Arc::new
Like Rc::new, it uses ‘box’ to allocate an ArcInner holding the value and the two atomic counters.
```rust
pub fn new(data: T) -> Arc<T> {
    // Start the weak pointer count as 1 which is the weak pointer that's
    // held by all the strong pointers (kinda), see std/rc.rs for more info
    let x: Box<_> = box ArcInner {
        strong: atomic::AtomicUsize::new(1),
        weak: atomic::AtomicUsize::new(1),
        data,
    };
    Self::from_inner(Box::into_raw_non_null(x))
}
```
Mutex<>
Mutex is a mutual exclusion primitive useful for protecting shared data.
```rust
pub struct Mutex<T: ?Sized> {
    // Note that this mutex is in a *box*, not inlined into the struct itself.
    // Once a native mutex has been used once, its address can never change (it
    // can't be moved). This mutex type can be safely moved at any time, so to
    // ensure that the native mutex is used correctly we box the inner mutex to
    // give it a constant address.
    inner: Box<sys::Mutex>,
    poison: poison::Flag,
    data: UnsafeCell<T>,
}
```
The sys::Mutex is boxed as shown above.
Mutex::new
inner is a boxed sys::Mutex.
```rust
pub fn new(t: T) -> Mutex<T> {
    let mut m = Mutex {
        inner: box sys::Mutex::new(),
        poison: poison::Flag::new(),
        data: UnsafeCell::new(t),
    };
    unsafe {
        m.inner.init();
    }
    m
}
```
Mutex lock/try_lock/drop
drop will unlock the mutex.
```rust
pub fn lock(&self) -> LockResult<MutexGuard<'_, T>> {
    unsafe {
        self.inner.raw_lock();
        MutexGuard::new(self)
    }
}

pub fn try_lock(&self) -> TryLockResult<MutexGuard<'_, T>> {
    unsafe {
        if self.inner.try_lock() {
            Ok(MutexGuard::new(self)?)
        } else {
            Err(TryLockError::WouldBlock)
        }
    }
}

fn drop(&mut self) {
    // This is actually safe b/c we know that there is no further usage of
    // this mutex (it's up to the user to arrange for a mutex to get
    // dropped, that's not our job)
    //
    // IMPORTANT: This code must be kept in sync with `Mutex::into_inner`.
    unsafe { self.inner.destroy() }
}
```