c# - Understanding the logic behind swapping ranges of bits -


Hello, I need a little help understanding the logic behind an algorithm that swaps ranges of bits. The "program" swaps a given number of consecutive bits at two given positions, and it works, but I need to understand the logic behind it in order to move on to other topics. Here is the full source code of the "program": http://pastebin.com/ihvpsee1. I need you to tell me if I am on the right track so far, and to clarify the one part of the code I find difficult to understand.

temp = ((number >> firstposition) ^ (number >> secondposition)) & ((1u << numberofbits) - 1);
result = number ^ ((temp << firstposition) | (temp << secondposition));
  1. (number >> firstposition) moves the binary representation of the given uint number (5351) to the right (>>) 3 times (firstposition), so 00000000 00000000 00010100 11100111 (5351) becomes 00000000 00000000 00000010 10011100 (668). My understanding is that when we shift, we lose the bits that fall out of range. Is that correct? Or do the bits pushed off the right side appear again on the left side?

  2. (number >> secondposition) applies the same logic as 1. In this case secondposition is 27, so moving the bits of 5351 to the right 27 times shifts everything out and the result is all zeroes: 00000000 00000000 00000000 00000000 (the number 0).

  3. ((number >> firstposition) ^ (number >> secondposition)) applies the ^ operator to 00000000 00000000 00000010 10011100 and 00000000 00000000 00000000 00000000, which results in the number 00000000 00000000 00000010 10011100, a.k.a. ((number >> firstposition) ^ (number >> secondposition)).

  4. ((1u << numberofbits) - 1) is the part I find difficult (assuming my understanding of 1., 2. and 3. is correct). Does ((1u << numberofbits) - 1) mean:

    • 1) put a 1 at position 3 (numberofbits), fill the rest with zeroes (0), and subtract 1 from the decimal representation of the number, or
    • 2) move the binary representation of the number 1 to the left 3 times (numberofbits) and subtract 1 from the decimal representation of the number?

If my logic so far is correct, I then apply the & operator to the results of ((number >> firstposition) ^ (number >> secondposition)) and ((1u << numberofbits) - 1), and follow the same logic for result = number ^ ((temp << firstposition) | (temp << secondposition)); in order to get the final result.

Sorry for the long (and maybe stupid) question, but I can't ask anyone except you guys. Thank you in advance.

The two alternatives you put in 4. are the same :) The trick is that it produces a string of binary 1s of length numberofbits - i.e. (1u << 3) - 1 produces 7, or 111 in binary - in other words, "give me the numberofbits least significant bits".

Basically, you've described it well, if a bit verbosely.

The result of the first line is a sequence of numberofbits bits: its value is the XOR between the two bit sequences that start at the two different indices and are numberofbits long. The & mask discards any bits higher than numberofbits.

The second line exploits the fact that a ^ b ^ a == b and a ^ b ^ b == a, and that the order of operations doesn't matter - the XOR operation is commutative and associative.

As long as the two sequences don't overlap and don't run past the top of the number, this should work fine :)

