macro_rules! define_utypes_with {
(u128) => { ... };
(u64) => { ... };
(u32) => { ... };
(u16) => { ... };
(u8) => { ... };
}
The macro that defines the types U256, U384, U512, U768, U1024, U2048,
U3072, U4096, U5120, U6144, U7168, U8192, and U16384, which are 256-bit,
384-bit, 512-bit, 768-bit, 1024-bit, 2048-bit, 3072-bit, 4096-bit, 5120-bit,
6144-bit, 7168-bit, 8192-bit, and 16384-bit big unsigned integer types,
respectively.
They are defined in terms of u8, u16, u32, u64, or u128, according
to the given parameter.
For example, if you give u128 as the parameter, U256, U384, U512, U768,
U1024, U2048, U3072, U4096, U5120, U6144, U7168, U8192, and U16384 will
be defined based on u128. That is, they will be defined to be
BigUInt&lt;u128, 2&gt;, BigUInt&lt;u128, 3&gt;, BigUInt&lt;u128, 4&gt;, BigUInt&lt;u128, 6&gt;,
BigUInt&lt;u128, 8&gt;, BigUInt&lt;u128, 16&gt;, BigUInt&lt;u128, 24&gt;,
BigUInt&lt;u128, 32&gt;, BigUInt&lt;u128, 40&gt;, BigUInt&lt;u128, 48&gt;,
BigUInt&lt;u128, 56&gt;, BigUInt&lt;u128, 64&gt;, and BigUInt&lt;u128, 128&gt;,
respectively.
Furthermore, UU32, UU48, UU64, UU96, UU128, UU256, UU384,
UU512, UU640, UU768, UU896, UU1024, and UU2048 will also be
defined to be U256, U384, U512, U768, U1024, U2048, U3072,
U4096, U5120, U6144, U7168, U8192, and U16384, respectively.
The number in a UU name is its size in bytes: UU32 is a 32-byte big
unsigned integer type, UU64 is a 64-byte big unsigned integer type,
UU128 is a 128-byte big unsigned integer type, and so on. That is,
UU32 is a synonym of U256, UU64 is a synonym of U512, and so on.
If you define the big unsigned integer types with define_utypes_with!(u128),
U1024 and UU128 will be BigUInt&lt;u128, 8&gt;.
If you define them with define_utypes_with!(u64),
U1024 and UU128 will be BigUInt&lt;u64, 16&gt;.
If you define them with define_utypes_with!(u32),
U1024 and UU128 will be BigUInt&lt;u32, 32&gt;.
They all have the same size, but their internal structures differ
from one another.
According to a performance test carried out on a Samsung laptop with an Intel Core i5 CPU and 32 GB of RAM under Linux Mint 21.1 (Vera) on the 5th of July, 2023, the big unsigned integer types based on u128 showed the best performance. So, you are highly recommended to use the big unsigned integer types based on u128.
However, the performance results may differ under other conditions, such as on other machines or under other operating systems. That is, the big unsigned integer types based on u128 may or may not show the best performance in your environment. You can run the performance test yourself and find the best parameter for your system. The test code is as follows:
§Performance Test Code
use std::time::SystemTime;
use std::fmt::{ Display, Debug };
use std::ops::*;
use cryptocol::number::*;
fn main()
{
    let num_txt = "1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_1111111111_";
    let a_128 = U1024_with_u128::from_string(num_txt).unwrap();
    let a_64 = U1024_with_u64::from_string(num_txt).unwrap();
    let a_32 = U1024_with_u32::from_string(num_txt).unwrap();
    let a_16 = U1024_with_u16::from_string(num_txt).unwrap();
    let a_8 = U1024_with_u8::from_string(num_txt).unwrap();
    calc_add(&a_128);
    calc_add(&a_64);
    calc_add(&a_32);
    calc_add(&a_16);
    calc_add(&a_8);
    calc_mul(&a_128);
    calc_mul(&a_64);
    calc_mul(&a_32);
    calc_mul(&a_16);
    calc_mul(&a_8);
}

fn calc_add<T, const N: usize>(a: &BigUInt<T, N>)
where T: SmallUInt + Display + Debug + ToString
        + Add<Output=T> + AddAssign + Sub<Output=T> + SubAssign
        + Mul<Output=T> + MulAssign + Div<Output=T> + DivAssign
        + Shl<Output=T> + ShlAssign + Shr<Output=T> + ShrAssign
        + BitAnd<Output=T> + BitAndAssign + BitOr<Output=T> + BitOrAssign
        + BitXor<Output=T> + BitXorAssign + Not<Output=T>
        + PartialEq + PartialOrd
{
    // Time 1000 addition-assignment operations.
    let mut sum = BigUInt::<T, N>::zero();
    let now = SystemTime::now();
    for _ in 0..1000
    {
        sum += *a;
    }
    let elapsed = now.elapsed().unwrap();
    println!("{}-bit addition operation takes\t{}", T::size_in_bits(), elapsed.as_nanos());
}

fn calc_mul<T, const N: usize>(a: &BigUInt<T, N>)
where T: SmallUInt + Display + Debug + ToString
        + Add<Output=T> + AddAssign + Sub<Output=T> + SubAssign
        + Mul<Output=T> + MulAssign + Div<Output=T> + DivAssign
        + Shl<Output=T> + ShlAssign + Shr<Output=T> + ShrAssign
        + BitAnd<Output=T> + BitAndAssign + BitOr<Output=T> + BitOrAssign
        + BitXor<Output=T> + BitXorAssign + Not<Output=T>
        + PartialEq + PartialOrd
{
    // Time 1000 multiplication-assignment operations.
    let mut product = BigUInt::<T, N>::one();
    let now = SystemTime::now();
    for _ in 0..1000
    {
        product *= *a;
    }
    let elapsed = now.elapsed().unwrap();
    println!("{}-bit multiplication operation takes\t{}", T::size_in_bits(), elapsed.as_nanos());
}
The following examples show how to use the macro define_utypes_with!(...).
§Examples
use cryptocol::define_utypes_with;
define_utypes_with!(u128);
let a = U256::from_string("1234567_1234567890_1234567890_1234567890_1234567890_1234567890_1234567890_1234567890").unwrap();
let b = a << 1;
println!("b = {}", b);
assert_eq!(b, UU32::from_string("24691342469135780246913578024691357802469135780246913578024691357802469135780").unwrap());
Here, u128 is used as the base type in the macro define_utypes_with!(u128).
So, U256 and UU32 are both BigUInt&lt;u128, 2&gt;. You can choose a
different parameter such as u64; then, U256 and UU32 are both
BigUInt&lt;u64, 4&gt;.
The following example shows the case in which u64
is used as the base type in the macro define_utypes_with!(u64).
use cryptocol::define_utypes_with;
define_utypes_with!(u64);
let a = U256::from_string("1234567_1234567890_1234567890_1234567890_1234567890_1234567890_1234567890_1234567890").unwrap();
let b = a << 1;
println!("b = {}", b);
assert_eq!(b, UU32::from_string("24691342469135780246913578024691357802469135780246913578024691357802469135780").unwrap());