Rewrite boilerplate impls into macro #41
Conversation
Two problems: 1) I accidentally rustfmt'd it; 2) the Usize type now has TYPENAME "usize" instead of "u32" or "u64". It could be brought back by implementing this trait for `usize` as a standalone impl with some `cfg!()`s.
Oops... Actually, I wanted to make a draft PR, as it's not really complete: some comments should be deleted, and some things in this code look really strange to me. For example, why do you need all this `as_slice`/`from_slice` machinery, if you can just write:

use core::intrinsics::copy_nonoverlapping;

let mut buf = ToType::some_default_zero();
let vsize = mem::size_of::<ValueType>();
let tsize = mem::size_of::<ToType>();
unsafe {
    if tsize > vsize {
        // Some endian check; dunno if it should be cfg'd or checked at runtime.
        if cfg!(target_endian = "little") {
            copy_nonoverlapping(&value as *const ValueType as *const u8, &mut buf as *mut ToType as *mut u8, vsize);
        } else {
            copy_nonoverlapping(&value as *const ValueType as *const u8, (&mut buf as *mut ToType as *mut u8).add(tsize - vsize), vsize);
        }
    } else {
        // tsize <= vsize
        if cfg!(target_endian = "little") {
            copy_nonoverlapping(&value as *const ValueType as *const u8, &mut buf as *mut ToType as *mut u8, tsize);
        } else {
            copy_nonoverlapping((&value as *const ValueType as *const u8).add(vsize - tsize), &mut buf as *mut ToType as *mut u8, tsize);
        }
    }
}
buf
Rustfmt'ed again, whatever. Tested; all tests pass, except those in the "../traits.rs" `fmt!` macro, which simply didn't compile (I dunno if those are tests at all). Now that `unsafe` block is pretty straight and clean, and should const-eval into one line, actually.
src/fields.rs
Outdated

Removed:

fn resize<T, U>(value: T) -> U
where T: BitStore, U: BitStore {
    let zero = 0usize;
    let mut slab = zero.to_ne_bytes();
    let start = 0;

    /* Copy the source value into the correct region of the intermediate slab.

    The `BitStore::as_bytes` method returns the value as native-endian-order
    bytes. These bytes are then written into the correct location of the slab
    (low addresses on little-endian, high addresses on big-endian) to be
    interpreted as `usize`.
    */
    match mem::size_of::<T>() {
        n @ 1 | n @ 2 | n @ 4 | n @ 8 => {
            #[cfg(target_endian = "big")]
            let start = mem::size_of::<usize>() - n;

            slab[start ..][.. n].copy_from_slice(value.as_bytes());
        },
        _ => unreachable!("BitStore is not implemented on types of this size"),
    }
    let mid = usize::from_ne_bytes(slab);
    // Truncate to the correct size, then wrap in `U` through the trait method.
    match mem::size_of::<U>() {
        1 => U::from_bytes(&(mid as u8).to_ne_bytes()[..]),
        2 => U::from_bytes(&(mid as u16).to_ne_bytes()[..]),
        4 => U::from_bytes(&(mid as u32).to_ne_bytes()[..]),
        #[cfg(target_pointer_width = "64")]
        8 => U::from_bytes(&mid.to_ne_bytes()[..]),
        _ => unreachable!("BitStore is not implemented on types of this size"),
    }
}

Added:

fn resize<T, U>(value: T) -> U
where
    T: BitStore,
    U: BitStore,
{
    use core::intrinsics::copy_nonoverlapping;

    let mut buf = U::all_bits_zeroes();
    let size_of_from = mem::size_of::<T>();
    let size_of_into = mem::size_of::<U>();
    unsafe {
        if cfg!(target_endian = "big") {
            // Big endian fills from the end.
            if size_of_from > size_of_into {
                // T has more bytes; move its pointer to match the length of U.
                copy_nonoverlapping(
                    (&value as *const T as *const u8).add(size_of_from - size_of_into),
                    &mut buf as *mut U as *mut u8,
                    size_of_into,
                );
            } else {
                // U has more bytes; move its pointer to match the length of T.
                // Won't move the pointer if the sizes are equal.
                copy_nonoverlapping(
                    &value as *const T as *const u8,
                    (&mut buf as *mut U as *mut u8).add(size_of_into - size_of_from),
                    size_of_from,
                );
            }
        } else {
            // Little endian fills from the start.
            // All unfilled bytes stay zeroed, because of
            // `let mut buf = U::all_bits_zeroes();`.
            if size_of_from > size_of_into {
                copy_nonoverlapping(
                    &value as *const T as *const u8,
                    &mut buf as *mut U as *mut u8,
                    size_of_into,
                );
            } else {
                copy_nonoverlapping(
                    &value as *const T as *const u8,
                    &mut buf as *mut U as *mut u8,
                    size_of_from,
                );
            }
        }
    }

    buf
}
As the commit says: "Now that unsafe is pretty straight, clean and should const-eval into one line actually."
That's the only notable change, actually. Big-endian should be tested, but I don't think there would be any problems.
Deleted things that were commented out; `as_bytes` and `from_bytes` were replaced with `all_bits_zeroes`. The same thing could be done with `MaybeUninit::zeroed`, but this way is safer and should have the same performance.
Travis failed with "could not find …"
Btw, why not simply turn the three-word-wide `BitMut`:

pub struct BitMut<'a, O, T>
where
    O: BitOrder,
    T: 'a + BitStore,
{
    /// A reference to a single bit in memory.
    pub(super) slot: &'a mut BitSlice<O, T>,
    /// A local cache for `Deref` usage.
    pub(super) data: bool,
}

into something like this (two words wide):

pub struct BitMut<'a, O, T>
where
    O: BitOrder,
    T: 'a + BitStore,
{
    // Tell the compiler this struct holds mutable access to the slice.
    _slice: core::marker::PhantomData<&'a mut BitSlice<O, T>>,
    // Writer: atomic when the "atomic" feature is on, raw otherwise.
    #[cfg(feature = "atomic")]
    word: AtomicPtr<T>,
    #[cfg(not(feature = "atomic"))]
    word: *mut T,
    // Derefed data; writes on drop as it does now.
    data: bool,
    // A mask-creator.
    place: u8, // or `u16`, but until `u256` becomes a thing, `u16` is overkill
}

Or make `split_mut` create a tuple of `(&mut self, BitSliceWithOneClonedStorage)`, so this thing doesn't even need atomics, won't have atomic overhead, and could be `Send` at all times.
Thank you for the PR. I appreciate your willingness to find improvements as well as defects. Other than the copy-size note, I have nothing to really put in the review workflow for the contents of the changes, but here's what I have for the PR as a whole:
Welp, I'd describe writing "usize" as "u64" or "u32" as a mistake, even though it is a renamed unsigned int. Also, you can always … And I don't wanna do all this "apply other's changes from a diff file" work instead of real coding, so feel free to write whatever changes you did. I'd only suggest adding `unsafe` to the `Sealed` trait and writing `unsafe` in the macro implementing it.
You're right about that. The structure

struct BitMut<'a, O, T>
where
    O: BitOrder,
    T: BitStore,
{
    _parent: PhantomData<&'a mut BitSlice<O, T>>,
    data: NonNull<T::Access>,
    head: BitIdx<T>,
    bit: bool,
}

contains all the information needed to operate, and fits in two words. Thanks for the input! As the pointer itself is never modified, its slot does not need to be atomic.
IIRC, `AtomicPtr` has atomic access to the data.
It does not; it is equivalent to an `AtomicUsize` holding the pointer's address: loads, stores, and swaps of the pointer value are atomic, but accesses through the pointer are ordinary. Which is really irritating, because it does look like the kind of name you'd give to a pointer with permanent atomic access to its target.
Suggested by GitHub user @ImmemorConsultrixContrarie in PR #41, this change structures the `BitMut` proxy reference as a memory pointer, bit index, and bit cache. This structure fits within two words instead of three (a full `&mut BitSlice` is not necessary, as the structure only ever refers to one bit in one element). This is purely a stack optimization; it has no effect on the language-level inability to use it as the reference value in ordinary access operations.
I'm going to close this, as I just applied the rollup commit to master. |