Endianness
From Wikipedia, the free encyclopedia.
Endianness generally refers to which of two arbitrary sequencing conventions is used in a one-dimensional system (such as writing or computer memory). The two main types of endianness are known as big-endian and little-endian. Systems that exhibit aspects of both conventions are often described as middle-endian. When speaking specifically about bytes, endianness is also referred to as byte order or byte sex.
Whenever a sequence of small units is used to form a larger ordinal value, there must be a rule as to the order in which those smaller units are placed. This is comparable to the situation in written languages, where some are written left-to-right while others (such as Arabic and Hebrew) are written right-to-left.
Western decimal notation is big-endian when written in digits: it starts at the left with the highest order of magnitude and progresses to smaller orders of magnitude toward the right. For example, the number 1234 starts with the thousands (in this case, one thousand) and continues through the hundreds and tens to the single digit (4).
Just as right-to-left languages such as Hebrew still write numerical values left-to-right, and other scripts wrap lines in different directions, a big-endian microprocessor may read octets in little-endian order while using big-endian byte addresses (see below). Similarly, many spoken and written systems of representation demonstrate complex endianness: date formats (where the relative significance of month, day, and year is both ambiguous and, in some conventions, middle-endian) and spoken numbers in many languages (cf. German einundzwanzig, English seventeen, or French quatre-vingt-dix-huit).
Contents
1 Endianness in computers
2 Logical and arithmetical description
  2.1 Portability issues
3 Endianness in communications
4 Endianness in date formats
5 Discussion, background, etymology
  5.1 Origin of the term
6 Example programming caveat
7 External links
Endianness in computers
The endianness of a particular computer system is generally described by the convention, or set of conventions, followed by a particular processor or processor/architecture combination, and possibly by the operating system or transmission medium, for addressing constants and representing memory contents. It is often referred to as byte order, for instance when integers are represented with multiple bytes.
There seems to be no significant advantage of one convention over the other, and both remain common. The byte (octet) is generally treated as an atomic unit by storage and by all but the lowest levels of network protocols and storage formats. Sequences based on single bytes (e.g., text in ASCII or one of the ISO-8859-n encodings) are therefore not generally affected by endianness. Variable-width text encodings that use the byte as their base unit could be considered to have a built-in endianness, but this is fixed by the encoding's design (at least in all commonly used encodings). However, strings encoded in Unicode UTF-16 or UTF-32 are affected by endianness, because each code unit must itself be represented as two or four bytes.
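For instance, a single UTF-16 code unit occupies two bytes whose order in memory depends on the host. A minimal C sketch (the choice of uint16_t and memcpy here is just for illustration, not part of any Unicode library):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint16_t code_unit = 0x00C9;    /* the UTF-16 code unit for U+00C9 */
    unsigned char bytes[2];

    memcpy(bytes, &code_unit, sizeof bytes);    /* copy out the raw bytes */

    /* A big-endian host prints "00 C9"; a little-endian host prints "C9 00". */
    printf("%02X %02X\n", bytes[0], bytes[1]);
    return 0;
}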
Logical and arithmetical description
Note: all multi-byte numerical values in this section are written in hexadecimal notation.
When some computers store a 32-bit integer value in memory, for example 4A3B2C1D at address 100, they store the bytes within the address range 100 through 103 in the following order:
Big-endian
100 101 102 103
... 4A 3B 2C 1D ...
That is, the most significant byte (also known as the MSB, which is 4A in our example) is stored at the memory location with the lowest address, the next byte in significance, 3B, is stored at the next memory location and so on.
Architectures that follow this rule are called big-endian (mnemonic: "big end first") and include Motorola 68000, SPARC and System/370.
Other computers store the value 4A3B2C1D in the following order:
Little-endian
100 101 102 103
... 1D 2C 3B 4A ...
That is, the least significant ("littlest") byte (also known as the LSB) comes first. Architectures that follow this rule are called little-endian (mnemonic: "little end first") and include the MOS Technology 6502, Intel x86 and DEC VAX.
In other words, contrary to what you might first think, endianness does not denote what the value ends with when stored in memory, but rather which end it begins with.
Note that the stated mnemonics are not the origin of the terms, see below.
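The two layouts above can be observed directly in C by copying an integer's object representation into a byte array. A minimal sketch using the same example value (it assumes a 32-bit uint32_t and 8-bit bytes):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x4A3B2C1D;
    unsigned char bytes[4];

    memcpy(bytes, &value, sizeof bytes);    /* bytes[0] has the lowest address */

    /* Prints "4A 3B 2C 1D" on a big-endian machine
       and "1D 2C 3B 4A" on a little-endian machine. */
    printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}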
Some architectures can be configured either way; these include ARM, PowerPC (but not the PPC970/G5), DEC Alpha, MIPS, PA-RISC and IA-64. The words bytesexual or bi-endian, said of hardware, denote a willingness to compute or pass data in either big-endian or little-endian format (depending, presumably, on a mode bit somewhere). Many of these architectures can be switched via software to default to a specific endian format (usually done when the computer starts up); on some architectures, however, the default endianness is selected by hardware on the motherboard and sometimes cannot be changed by software at all (e.g., the DEC Alpha, which runs only in big-endian mode on the Cray T3E).
Middle-endian
Still other (generally older) architectures, called middle-endian, may have a more complicated ordering such that the bytes within a 16-bit unit are ordered differently from the 16-bit units within a 32-bit word, for instance, 4A3B2C1D is stored as:
100 101 102 103
... 3B 4A 1D 2C ...
Middle-endian architectures include the PDP-11 family of processors. The format for double-precision floating-point numbers on the VAX is also middle-endian. In general, these complex orderings are more confusing to work with than consistent big- or little-endianness.
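Decoding such a value by hand makes the ordering explicit. The sketch below reconstructs a 32-bit integer from the PDP-11-style byte order shown above (each 16-bit half little-endian, with the more significant half first); the function name is ours, chosen for illustration:

#include <stdint.h>

/* b[0..3] hold the bytes exactly as stored in memory, lowest address first. */
uint32_t from_pdp_endian(const unsigned char b[4])
{
    uint16_t high = (uint16_t)(b[0] | (b[1] << 8));   /* 3B 4A -> 4A3B */
    uint16_t low  = (uint16_t)(b[2] | (b[3] << 8));   /* 1D 2C -> 2C1D */
    return ((uint32_t)high << 16) | low;              /* -> 4A3B2C1D */
}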
Endianness also applies to the numbering of the bits within a byte or word. In a consistently big-endian architecture the bits are numbered from the left, with bit zero being the most significant bit and bit 7 the least significant bit of a byte. The favored bit numbering depends somewhat on where the binary point is assumed to lie. If the byte is taken to represent an integer, little-endian numbering seems most intuitive, since each bit's number then corresponds to the exponent of its numeric weight. If, however, the byte is taken to represent a binary fraction with the binary point to the left of the most significant bit, the big-endian numbering convention is more convenient.
To summarise, here are the default endian-formats of some common computer architectures:
* Pure big-endian: Sun SPARC, Motorola 68000, PowerPC 970, IBM System/360
* Bi-endian, running in big-endian mode by default: MIPS running IRIX, PA-RISC, most POWER and PowerPC systems
* Bi-endian, running in little-endian mode by default: MIPS running Ultrix, most DEC Alpha, IA-64 running Linux
* Pure little-endian: Intel x86, AMD64, DEC VAX (excluding D-Float numbers)
C function to check whether a system is big- or little-endian (it assumes that int is wider than char, and it will not detect a middle-endian system):
#define LITTLE_ENDIAN 0
#define BIG_ENDIAN    1

int machineEndianness()
{
    int i = 1;
    char *p = (char *) &i;

    if (p[0] == 1)      /* lowest address contains the least significant byte */
        return LITTLE_ENDIAN;
    else
        return BIG_ENDIAN;
}
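A small usage sketch for the function above (the printed messages are ours):

#include <stdio.h>

/* ... the two macros and machineEndianness() as defined above ... */

int main(void)
{
    if (machineEndianness() == LITTLE_ENDIAN)
        printf("This machine is little-endian.\n");
    else
        printf("This machine is big-endian.\n");
    return 0;
}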
Portability issues
Endianness has important implications for software portability. For example, when interpreting binary data with a bitmask, endianness matters because applying the same mask to data of a different endianness produces different results.
Writing binary data from software in a common format also raises the question of the proper endianness. For example, the BMP bitmap format requires little-endian integers; if the data are written using big-endian integers, the file will be corrupt because it does not match the format.
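A common remedy is to serialize each multi-byte field one byte at a time in the byte order the file format demands, rather than writing the host's in-memory representation directly. A minimal sketch for a 32-bit little-endian write such as BMP expects (the helper name is ours):

#include <stdio.h>
#include <stdint.h>

/* Write a 32-bit value to fp in little-endian order, whatever the host's endianness. */
static int write_u32_le(uint32_t value, FILE *fp)
{
    unsigned char bytes[4];
    bytes[0] = (unsigned char)( value        & 0xFF);   /* least significant byte first */
    bytes[1] = (unsigned char)((value >> 8)  & 0xFF);
    bytes[2] = (unsigned char)((value >> 16) & 0xFF);
    bytes[3] = (unsigned char)((value >> 24) & 0xFF);
    return fwrite(bytes, 1, 4, fp) == 4 ? 0 : -1;
}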
Software that needs to share information between hosts of different endianness typically uses one of two strategies: either it chooses a single endianness for the shared data, or it lets hosts share data in whichever endianness they choose, so long as they mark which one they are using. Both approaches have advantages. Choosing a single endianness makes decoding easier, since software only needs to decode one format. Allowing multiple endiannesses makes encoding easier, since software need not convert data out of its native order, and it enables more efficient communication when the encoder and decoder already share an endianness, since neither needs to change the byte order. Most Internet standards take the first approach and specify big-endian byte order. Some other applications, notably X11, take the second approach.
The OPENSTEP operating system has software that swaps the bytes of integers and other C datatypes in order to preserve the correct endianness, since software running on OPENSTEP for PA-RISC is intended to be portable to OPENSTEP running on Mach/i386.
UTF-16 can be written in big-endian or little-endian order. A two-byte Byte Order Mark (BOM) may be placed at the beginning of a string to denote its endianness. A similar four-byte byte-order mark can be used with the rarer encoding UTF-32.
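Checking the first two bytes of a UTF-16 stream for the BOM is straightforward; a sketch (the function name and return convention are ours):

#include <stddef.h>

/* Returns 1 for a big-endian BOM (FE FF), 0 for a little-endian BOM (FF FE),
   and -1 if no BOM is present. */
static int utf16_bom_endianness(const unsigned char *buf, size_t len)
{
    if (len >= 2 && buf[0] == 0xFE && buf[1] == 0xFF)
        return 1;    /* big-endian */
    if (len >= 2 && buf[0] == 0xFF && buf[1] == 0xFE)
        return 0;    /* little-endian */
    return -1;       /* no BOM; endianness must be known from elsewhere */
}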
Endianness in communications
In general, the NUXI problem is the problem of transferring data between computers with differing byte order. For example, the string "UNIX", packed with two bytes per 16-bit integer, might come out as "NUXI" on a machine with a different "byte sex". The problem was first encountered when porting an early version of Unix from the PDP-11 (a middle-endian architecture) to an IBM Series/1 minicomputer (a big-endian architecture); upon startup, the system printed "NUXI" instead of "UNIX".
The Internet Protocol defines a standard "big-endian" network byte order. This byte order is used for all numeric values in the packet headers and by many higher level protocols and file formats that are designed for use over IP.
The Berkeley sockets API defines a set of functions to convert 16- and 32-bit integers to and from network byte order: the htonl and htons functions convert 32-bit ("long") and 16-bit ("short") values respectively from host to network order; whereas the ntohl and ntohs functions convert from network to host order.
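A short usage sketch on a POSIX system (the example values are arbitrary):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl, htons, ntohl, ntohs */

int main(void)
{
    uint32_t host32 = 0x4A3B2C1D;
    uint16_t host16 = 0x1234;

    uint32_t net32 = htonl(host32);   /* host order -> network (big-endian) order */
    uint16_t net16 = htons(host16);

    /* Round-tripping through ntohl/ntohs recovers the original host-order values. */
    printf("%08X %04X\n", (unsigned) ntohl(net32), (unsigned) ntohs(net16));
    return 0;
}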
Serial devices also have bit-endianness: the bits within a byte can be sent little-endian (least significant bit first) or big-endian (most significant bit first). This decision is made at the very bottom of the data link layer of the OSI model.
Endianness in date formats
Endianness is neatly illustrated by the different ways countries format calendar dates. In the United States and a few other countries, dates are commonly written as month, day, year (e.g. "May 24th, 2006" or "5/24/2006"). This is a middle-endian order.
In most of the world's countries, including all of Europe except Sweden, Latvia and Hungary, dates are written as day, month, year (e.g. "24th May 2006", "24/5/2006" or "24/5-2006"). This is little-endian.
China, Japan and the ISO 8601 international standard order dates as year, month, day (e.g. "2006 May 24th" or, more properly, "2006-05-24"). This is big-endian.
The ISO 8601 ordering scheme lends itself to straightforward computerised sorting of dates in lexicographical order, or dictionary sort order. This means that the sorting algorithm does not need to treat the numeric parts of the date string any differently from a string of non-numeric characters, and the dates will be sorted into chronological order. Note, however, that for this to work, there must always be four digits for the year, two for the month, and two for the day, so for example single-digit days must be padded with a zero yielding '01', '02', ... , '09'.
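Because of this property, a plain lexicographical sort puts zero-padded ISO 8601 dates in chronological order; a minimal C sketch:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int compare_strings(const void *a, const void *b)
{
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

int main(void)
{
    const char *dates[] = { "2006-05-24", "1999-12-31", "2006-05-03" };
    size_t n = sizeof dates / sizeof dates[0];
    size_t i;

    qsort(dates, n, sizeof dates[0], compare_strings);

    for (i = 0; i < n; i++)
        printf("%s\n", dates[i]);   /* 1999-12-31, 2006-05-03, 2006-05-24 */
    return 0;
}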
Discussion, background, etymology
Big-endian numbers are often said to be easier to read when debugging a program. Some find big-endian ordering less intuitive because the most significant byte sits at the smallest address; others find it less confusing because the order of significance matches the order of textual character strings in the computer, just as in non-computer text (see below). A person's preference usually depends on which convention was learned first and on which one their mental models were built.
Origin of the term
The choice of big-endian versus little-endian is as arbitrary as the concept itself and has been the subject of many flame wars. Emphasizing the futility of the argument, the very terms big-endian and little-endian were taken from the Big-Endians and Little-Endians of Jonathan Swift's satirical novel Gulliver's Travels, in which Gulliver finds the people of Lilliput and Blefuscu in conflict over which end of an egg to crack.
See the Endian FAQ, including the significant essay "On Holy Wars and a Plea for Peace" by Danny Cohen (1980).
The written system of Arabic numerals is used worldwide, and in it the most significant digits are always written to the left of the less significant ones. In languages written left to right, this system is therefore big-endian. In languages written right to left, the numeral system is also big-endian, because the number itself is a separate domain from the right-to-left text and is read in its own order. To illustrate this point: whether the surrounding text runs left to right or right to left, a number too long to fit on one line is broken so that the most significant digits appear on the first line.
The spoken numeral system in English is big-endian (with minor exceptions: we say "seventeen" instead of "ten-seven"). German and Dutch are also mainly big-endian, with an exception for the multiples-of-ten, e.g. 376 is pronounced as "Dreihundertsechsundsiebzig" and "driehonderd zes en zeventig" respectively, i.e. "three hundred six-and-seventy".
Little-endian ordering has been used in compiling reverse dictionaries, where the entries begin, for example, with "a, aa, baa, ..." and end, for example, with "... buzz, abuzz, fuzz." An actual example is the pronouncing dictionary for Cantonese jyt j?m dzi? duk dzi w?i (ISBN 9629485095) which begins with "a, ba, da, dza,…" and ends with "…, tyt, tsyt, m?, ??".
There seems to be some confusion about how the word endianness should be spelled. The two major variants are endianness and endianess. There are even some documents containing both variants. While neither of the two forms appears in current (non-computing) dictionaries, it appears that the former follows the pattern of similar words such as "barren" and "barrenness". Thus, endianness is generally more accepted and is used in this article.
Example programming caveat
Below is an example application written in C which demonstrates the dangers of ignoring endianness:
#include <stdio.h>
#include <string.h>

int main (int argc, char* argv[])
{
    FILE* fp;

    /* Our example data structure */
    struct {
        char one[4];
        int  two;
        char three[4];
    } data;

    /* Fill our structure with data */
    strcpy (data.one, "foo");
    data.two = 0x01234567;
    strcpy (data.three, "bar");

    /* Write it to a file */
    fp = fopen ("output", "wb");
    if (fp)
    {
        fwrite (&data, sizeof (data), 1, fp);
        fclose (fp);
    }

    return 0;
}
This code compiles cleanly on both an i386 machine running FreeBSD and a SPARC64 machine running Solaris, but the output files are not the same, as examining them with the hexdump utility shows.
i386 $ hexdump -C output
00000000 66 6f 6f 00 67 45 23 01 62 61 72 00 |foo.gE#.bar.|
0000000c
sparc64 $ hexdump -C output
00000000 66 6f 6f 00 01 23 45 67 62 61 72 00 |foo..#Egbar.|
0000000c
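A common way to make the two outputs identical, sketched below under the same structure, is to write the integer field byte by byte in a fixed order (big-endian here) instead of dumping its in-memory representation; the helper function is ours, not part of the original example:

#include <stdio.h>
#include <stdint.h>

/* Write a 32-bit value in big-endian order, independent of host endianness. */
static void fwrite_u32_be(uint32_t value, FILE *fp)
{
    unsigned char bytes[4];
    bytes[0] = (unsigned char)((value >> 24) & 0xFF);   /* most significant byte first */
    bytes[1] = (unsigned char)((value >> 16) & 0xFF);
    bytes[2] = (unsigned char)((value >> 8)  & 0xFF);
    bytes[3] = (unsigned char)( value        & 0xFF);
    fwrite(bytes, 1, 4, fp);
}

int main(void)
{
    FILE *fp = fopen("output", "wb");
    if (fp)
    {
        fwrite("foo", 1, 4, fp);          /* 4 bytes, including the terminating '\0' */
        fwrite_u32_be(0x01234567, fp);
        fwrite("bar", 1, 4, fp);
        fclose(fp);
    }
    return 0;
}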