Let's finally get this answered...
The reality is that there is no right way of doing it. It all depends on the system you'll be using.
If you're the one writing the software that will run on your PC (or on another microcontroller) and you don't need to keep the format open and accessible to other platforms, then go ahead with a binary transfer of the data. You will still need to create a header and a termination field, and for those you can use ASCII characters, but the data itself can be serialized and sent down the line like this:
unsigned int variable = 1234;
Serial.write((unsigned char) ((variable >> 8) & 0xFF)); //shift the high byte down and mask off everything else
Serial.write((unsigned char) (variable & 0xFF));        //mask off the high byte... and send the low one
And this would be the reception code...
unsigned int variable = 0;
while (Serial.available() < 2) { }            //wait until both bytes have arrived
variable = (unsigned int) Serial.read() << 8; //shift the first byte into the high position
variable = variable + Serial.read();          //write the second byte in its place
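To make the header and termination idea concrete, here is a minimal sketch of a framed packet around those same two bytes; the '<' and '>' markers are arbitrary choices for illustration, not part of any standard:

unsigned int variable = 1234;
Serial.write('<');                                      //header marker (arbitrary choice)
Serial.write((unsigned char) ((variable >> 8) & 0xFF)); //high byte of the payload
Serial.write((unsigned char) (variable & 0xFF));        //low byte of the payload
Serial.write('>');                                      //termination marker (arbitrary choice)

The receiver would then wait for the header character before reading the two data bytes, and check for the terminator afterwards.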
Obviously, bigger data types such as doubles, floats or longs would require more bytes (and additional shifts of 16 and 24 bits).
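For example, a four-byte unsigned long could go out the same way, one byte at a time; this is just a sketch of the idea using the extra shifts of 24 and 16 bits mentioned above:

unsigned long big_variable = 123456789;
Serial.write((unsigned char) ((big_variable >> 24) & 0xFF)); //most significant byte first
Serial.write((unsigned char) ((big_variable >> 16) & 0xFF));
Serial.write((unsigned char) ((big_variable >> 8) & 0xFF));
Serial.write((unsigned char) (big_variable & 0xFF));         //least significant byte last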
The biggest benefit of sending data this way is really avoiding conversion routines, which are normally quite costly in terms of processing time.
Now... would there be a way of cutting down the data transfer even further? Well... in some cases, yes. If you've seen any of my code, you'll notice that I pay extra attention to keeping variable sizes as small as possible. But some types, floating point variables for example, will take up four bytes even if the value you keep in them is very small.
So say, for example, that your application needs to transmit the temperature of something and you need a floating point value. Most sensors will normally only give you a precision of one decimal place, so that's what we'll be looking to transmit.
If you use a floating point, you'll send 4 bytes. But as most temperatures (that you'd use in a hobbyist project at least) will be below 1000 ºC, we can change the floating point value to fit in an integer.
An integer will normally have two bytes, meaning that it will go from -32,768 to 32,767 (or 0 to 65,535 if unsigned). So this means that if you want one decimal place on your temperatures, you can represent temperatures ranging from -3276.8 to 3276.7 ºC... that's a lot of heat...
So, to do this, one would multiply the floating point value by 10 (or 100, depending on the number of decimal places you need) and then cast it into an int.
float variable = 123.4567;
int variable_int = 0;
variable = variable * 10;       //one decimal place
variable_int = (int) variable;  //this is a cast

One could then send variable_int and, upon receiving it on the other end, all you'd have to do is divide it by 10 and keep the result in a floating point data type.
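On the receiving end, the reverse is just a division; a quick sketch, assuming the two bytes have already been reassembled into an int as shown earlier:

int variable_int = 1234;               //the value reassembled from the two received bytes
float variable = variable_int / 10.0;  //divide by 10 to restore the single decimal place (123.4)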
This can also be done when sending data in ASCII, as it saves you from converting a floating point number in ASCII back to float format (with atof()). But more on that later on.
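As a tiny preview of that idea (just a sketch, not a complete protocol), the scaled integer can travel as plain ASCII digits and be parsed with atoi() on the other end, instead of sending something like "123.4" and calling atof():

//sender: the scaled integer goes out as ASCII digits
int temperature_int = 234;                //23.4 ºC scaled by 10
Serial.print(temperature_int);            //sends the characters '2', '3', '4'

//receiver: parse the digits and scale back down
char buffer[] = "234";                    //assume the digits were already collected into a buffer
float temperature = atoi(buffer) / 10.0;  //back to 23.4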
Hope this cleared up the question. Because of the more limited use of binary data transfer, I'll focus a bit more on ASCII data transfer... although, remember that most of what is said about protocols applies both ways.