I spent the first part of today (and some of yesterday) implementing point dragging and addition in our flight planning software. As it stands it’s working quite well… the ability to change the altitude of points, and to delete them, remains to be added, but things are well underway. After that, I just need to get the points displaying in the map view as well, then provide a method of exporting paths to file (for use by the UAV), and we’ll have the core functionality. After that of course is the desire to get maps texturing onto the Earth in the 3D views… I’ve come up with a couple of possible approaches, but haven’t tested any yet. The one I’m most leaning towards is maintaining a “focal” texture of some large size (e.g. 512×512) which will be rendered onto the sphere. Into that texture I can render individual maps and so forth. It won’t look as pretty as Google Earth, but it’s probably the easiest halfway-efficient way… trying to draw each tile individually as a texture would be a pain, I think, and a bit of a management nightmare.
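For what it’s worth, the focal-texture idea would look something like this in plain OpenGL – a completely untested sketch, with draw_maps_into() standing in for a hypothetical routine that composites whatever maps cover the current focus into a pixel buffer:

```c
/* Sketch only: maintain one 512x512 "focal" texture and composite map
 * imagery into it, then let the sphere's texture coordinates pick it up.
 * draw_maps_into() is a hypothetical placeholder, not real code. */
#include <GL/gl.h>

#define FOCAL_SIZE 512

extern void draw_maps_into(unsigned char *pixels, int size,
                           double centre_lat, double centre_lon); /* hypothetical */

static GLuint focal_tex;
static unsigned char focal_pixels[FOCAL_SIZE * FOCAL_SIZE * 3];

void focal_texture_init(void) {
    glGenTextures(1, &focal_tex);
    glBindTexture(GL_TEXTURE_2D, focal_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, FOCAL_SIZE, FOCAL_SIZE,
                 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);   /* allocate, no data yet */
}

/* Whenever the view moves far enough, redraw the maps around the new
 * focus into the pixel buffer and upload the lot in one go. */
void focal_texture_update(double centre_lat, double centre_lon) {
    draw_maps_into(focal_pixels, FOCAL_SIZE, centre_lat, centre_lon);
    glBindTexture(GL_TEXTURE_2D, focal_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, FOCAL_SIZE, FOCAL_SIZE,
                    GL_RGB, GL_UNSIGNED_BYTE, focal_pixels);
}
```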
Anyway, today I headed around to Rob’s place at about lunch time to work together with him on the board itself. When we last left it Bluetooth was sort of working, as I noted previously, so we were in a position to actually test other things. But I really wanted to get Bluetooth working reliably – and at a reasonable speed.
So I reviewed the code again. While I was thinking about it Rob tried to tie in his GPS code to my Bluetooth transmission code. Oddly it was outputting only the end of the line… at first, anyway. Eventually, after some tweaking, it was outputting garbage. Rob tried a transmit function of his own, which worked fine (all except one single case where it output the exact same corrupt data as mine, but we couldn’t reproduce that). So I couldn’t immediately write this off as a compiler bug; first I had to investigate it, then I could determine what the compiler bug was.
The key difference between my code & Rob’s is that mine expects string literals to be in Flash memory, while his only supports strings in SRAM. I’m insistent that we put them only in Flash, to save space – especially if we’re going to have long, unwieldy GPS config strings all over the place, which are only used once and can then be thrown out. It also allows me to put in log messages throughout my code, which will be invaluable for debugging.
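For anyone playing along at home, this is the usual avr-libc way of doing it – a rough sketch rather than our actual code, with the string contents and uart_putc() as placeholders for whatever we end up using:

```c
/* Sketch: keeping string literals in Flash with avr-libc. The string
 * contents and uart_putc() are placeholders, not our real code. */
#include <avr/pgmspace.h>

extern void uart_putc(char c);   /* hypothetical transmit routine */

/* A one-off config string kept in Flash instead of being copied into
 * SRAM at startup. */
static const char example_config[] PROGMEM = "EXAMPLE,CONFIG,STRING\r\n";

/* Send a Flash-resident, NUL-terminated string one byte at a time. */
void send_flash_string(const char *flash_str) {
    char c;
    while ((c = pgm_read_byte(flash_str++)) != '\0')
        uart_putc(c);
}

/* Inline literals can go via PSTR(), which also places them in Flash:
 *   send_flash_string(PSTR("log: boot ok\r\n"));
 */
```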
So, I began simulating simple test cases inside AVR Studio to try and figure out what was going on. I knew we’d had success with my transmission code previously, and that I’d tested it in the simulator and it had worked fine there. So I was most curious to see what the simulator now said. As it turns out, it agreed exactly with what we were seeing in reality. Given how crap the simulator is – it rarely ever works even for perfectly valid and correct code – this was certainly not a coincidence. It also made the problem a lot easier to debug – it was clearly a compiler issue, and one that was deterministic, which is a blessing.
Long story short, I found pretty quickly that my function was being passed junk. That is, any string pointers it was given were invalid. Hmm. The critical presumption I’d made, when dealing with strings, was that since strings were always in Flash, if we also allowed them in SRAM they’d be copied to the same address. This is not the case. It’s actually documented as such – I suppose I should have known, given I probably did read that part of the documentation a while back. It’s reasonable in hindsight, I suppose, although the lack of explicit storage qualifiers in the code makes this extremely difficult to manage. It worked in my code because I’d told the compiler to only store strings in Flash, so all char pointers were Flash-based. In Rob’s code he didn’t have that option turned on, so by default all [non-const] char pointers were SRAM-based. Damn. While you can get around that by declaring your string literals as static const char[]’s, that’s a bit of a pain when it’s so much more convenient just to use string literals inline. So, I decided it was time to update Rob’s code so that I could remove strings from SRAM.
This turned out to be relatively easy – I only had to modify his UART code a tiny bit, and rejig his main() to account for the changes. Then, when run, it worked! Well, more or less… I’d added a formatter (%S, as opposed to %s) for strings that were in SRAM; i.e. buffers, such as for data from the GPS. Unfortunately, in my copy-paste haste I’d neglected to change the string incrementer, so I ended up in an infinite loop. Whoops.
With that fixed, however, things were finally starting to work. Bluetooth communication was still a bit flakey at times, but provided we didn’t pour too much data into it at a time, it seemed okay.
Rob’s UART transmission code used circular buffers and interrupts to do its work, which is possibly a good idea… I shied away from interrupts initially because I intended to use the transmission for error logging, which needs to be as simple as possible – and blocking, ideally. However, since we’re going to be using it far more generally as it turns out, it’s probably a good idea to merge in Rob’s code. While my code for printing is relatively slow (as a result of its printf-like formatting capabilities), it can probably output a character in less than 640 cycles, which is how long it takes to transmit a character at 250k baud – which is probably already faster than we can actually go.
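For reference, the blocking approach I had in mind is about as simple as it gets – roughly the following, with the register names (UDR0 and friends) varying a little between AVR parts:

```c
/* Sketch: simplest possible blocking transmit, the sort of thing error
 * logging wants. Register names vary between AVR parts; assumes the
 * USART has already been configured for the desired baud rate. */
#include <avr/io.h>

void uart_putc_blocking(char c) {
    while (!(UCSR0A & (1 << UDRE0)))   /* spin until the data register is empty */
        ;
    UDR0 = c;                          /* hardware clocks it out at the set baud rate */
}
```

Rob’s interrupt-driven version instead queues each character into a circular buffer and lets an interrupt drain it in the background – nicer for bulk data, but a little more to go wrong when you’re trying to log a crash.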
I could probably optimise my code for strings, too, which would be a good idea… anyway, I digress. The point is, we were still having issues even after all this, with the Bluetooth module locking up. I was pretty sure the hardware flow control wasn’t working – after all, it exists to prevent these sorts of problems – so I reviewed it again. I began to wonder if the CTS/RTS lines were active low… I’d tried that at some previous point, but because of the confusion over which is which (they’re reversed in the datasheet, a common problem as it turns out with RS-232 interfacing devices) I may have still had things wrong.
So, with it now sorted out that the RTS/CTS lines were back to front, the only issue left was determining the signal levels. After some time combing through the AT command reference, I finally chanced upon the one little paragraph, hidden well away, that actually informs you that they are active-low. Pfft. Their circuit diagrams show everything as being active-high. And we hadn’t inverted the TX/RX lines – although that’s probably because the AVR’s UART does that automatically… anyway, with the help of Rob’s CRO to verify, we soon had hardware handshaking working properly. Finally!
Sadly, the Bluetooth module would still lock up, but now it seems only in the case where you send a lot of data to it while it’s not connected… the manual does warn that sending a lot of data in command mode (which is the default while not connected) will potentially lock up the module, so at least we were warned. How we’re going to control that I’m not sure… we can request the connection status periodically and only actually send data once we’ve seen it connected… alternatively, we can just set it into “fast” mode, where it no longer recognises the “+++” sequence to take it into command mode. That makes it tricky to play with, since you need to hard-reset the whole board to undo that, but in final use that would of course be ideal; we don’t want to accidentally get into command mode because of a coincidental occurrence of “+++” in some data.
Anyway, while you can configure the module to start up in fast mode, there’s no way for us to reset that, short of pulling the module off its daughterboard and fiddling with some of the I/O pins (the daughterboard hides these from us). That’s not a very good solution at this point – maybe once we’re dead certain we’ve got everything else working.
The better alternative is to put it into fast mode automatically when the device turns on, once we’ve configured it however we like. Seems simple enough… one way to do this is to connect from my laptop, for example, and issue the commands via Zterm. That’s not so good, however, in that it introduces a race condition – I need to connect to the module and enter command mode before any data the AVR is sending overflows its buffer. At full rate it takes only a fraction of a second to overflow the buffer… not good.
Luckily, the AT command manual says that you can issue commands locally as well (i.e. from the AVR). In fact, as I already noted, when the device isn’t connected it reverts to command mode by default. Superb.
Or not. My testing indicates this does not in fact work – issuing a “+++” followed by “ATMF” (the commands to get into command mode (if we’re not already) and then enter fast mode) does nothing. Great. I’m still not sure what the problem there is… we’re supposed to give the module up to half a second to boot itself up properly… perhaps because we’re not giving it that long (more like 70ms) it’s not processing the commands properly. I didn’t get around to testing that hypothesis tonight… but next time, perhaps.
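For next time, the thing to try looks roughly like this – a sketch of the hypothesis, with the delays guessed and uart_puts() standing in for whatever transmit routine we settle on:

```c
/* Sketch of the hypothesis to test: give the module its full half second
 * to boot before talking to it, then escape to command mode and switch
 * to fast mode. The delays and trailing carriage return are guesses;
 * uart_puts() is a placeholder. Requires F_CPU to be defined. */
#include <util/delay.h>

extern void uart_puts(const char *s);   /* hypothetical transmit routine */

void bt_enter_fast_mode(void) {
    _delay_ms(500);        /* let the module finish booting (the manual allows up to half a second) */
    uart_puts("+++");      /* escape into command mode, in case we aren't already there */
    _delay_ms(100);        /* guard time before the next command -- an assumption */
    uart_puts("ATMF\r");   /* enter fast mode: "+++" is no longer recognised after this */
}
```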
Anyway, the workaround that Rob & I are both relatively happy with is to leave the Bluetooth output disabled until the user arms the board. They need to arm it anyway before it’ll try to perform any flight control, so that could work… although it then makes it more difficult for us to use the Bluetooth interface to configure the UAV on the ground… we can’t require it to be armed before we configure it…
Hopefully a solution will present itself in due time. I’m getting a bit annoyed at the Bluetooth module for being so sensitive, but at least now it’s at a level we can manage. Seemingly, anyway.
But with the Bluetooth connection now working, Rob & I took some time to test the Bluetooth range and GPS reception. I stood at the end of his driveway while he slowly wandered down his street. The range wasn’t too impressive – by Rob’s measure we lost the connection at 35 metres (unobstructed), although when we’re out on an oval with the UAV above us, we should hopefully see an improvement. The GPS reception was also a bit flakey… I suspect a loose connection to the antenna, as once Rob tweaked it a bit it came good, but we’ll need to investigate further. It’s been proven to work fine, at least most of the time, so we’re not too worried about that right now.
Once back inside, I set about testing my printf-style formatting, while Rob went to write code for the ADC. During the week Paul sent through the inductor and resistor we needed for the ADC, so we could now configure it and start using the numerous devices that hang off it – the accelerometer, pressure & temperature sensors, and battery level sensor. Rob chose the latter as the first item to measure, figuring it would be the easiest. Ha!
My printf-style formatting was quite buggy to start with… it worked somewhat for integers, just fine for characters and strings, but not at all for more exotic types – hex ints, unsigned chars, etc. I fixed up the code pretty quickly, and got everything working quite nicely. While we don’t have support for types larger than 16-bits, I could add 32-bit support if necessary without much trouble at all, and for the moment the functionality we have is just fine for our purposes. Once we get the FAT32 stuff up and running, however, 32-bit support will probably be quite handy.
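For the record, the guts of it look something like the following – a simplified sketch rather than the real code, with the format string itself living in Flash, %s for Flash strings, %S for SRAM buffers (e.g. GPS data), and everything numeric capped at 16 bits:

```c
/* Simplified sketch of the formatter (names are illustrative, not the
 * actual code). Format strings live in Flash; %s = string in Flash,
 * %S = string in SRAM, %c/%d/%u/%x = 16-bit-or-smaller values. */
#include <stdarg.h>
#include <stdint.h>
#include <avr/pgmspace.h>

extern void uart_putc(char c);   /* hypothetical transmit routine */

static void put_u16(uint16_t v, uint8_t base) {
    char buf[6];                 /* 65535 plus a spare slot */
    uint8_t i = 0;
    do {
        uint8_t d = v % base;
        buf[i++] = d < 10 ? '0' + d : 'a' + d - 10;
        v /= base;
    } while (v);
    while (i)
        uart_putc(buf[--i]);     /* digits were built in reverse */
}

void log_printf(const char *fmt_flash, ...) {
    va_list ap;
    va_start(ap, fmt_flash);
    char c;
    while ((c = pgm_read_byte(fmt_flash++)) != '\0') {
        if (c != '%') { uart_putc(c); continue; }
        c = pgm_read_byte(fmt_flash++);
        switch (c) {
        case 'c': uart_putc((char)va_arg(ap, int)); break;
        case 'u': put_u16(va_arg(ap, unsigned), 10); break;
        case 'x': put_u16(va_arg(ap, unsigned), 16); break;
        case 'd': {
            int v = va_arg(ap, int);
            if (v < 0) { uart_putc('-'); v = -v; }
            put_u16((uint16_t)v, 10);
            break;
        }
        case 's': {                          /* string in Flash */
            const char *p = va_arg(ap, const char *);
            char s;
            while ((s = pgm_read_byte(p++)) != '\0')
                uart_putc(s);
            break;
        }
        case 'S': {                          /* string in SRAM (this is where the copy-paste bug lived) */
            const char *p = va_arg(ap, const char *);
            while (*p)
                uart_putc(*p++);
            break;
        }
        default: uart_putc(c); break;        /* unknown specifier: just echo it */
        }
    }
    va_end(ap);
}
```

Adding 32-bit support later would just mean a wider sibling of put_u16(), which is why the FAT32 work shouldn’t make this much uglier.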
Once my formatting was working, we put it to work outputting the ADC measurement for the battery level (which at this point is DC-adapter-level). Unfortunately, that came out as 1023 for all inputs… 1023 being the maximum value that can be read from the ADC. D’oh.
Because our nominal supply voltage is 7.2V, while our ADC reference voltage is 5.0V, we have to divide the voltage down. Rob chose to divide it in half with a trivial voltage divider, using two 510kΩ SMT resistors. This yields a leakage current of only half a dozen or so µA, which is small enough not to worry us at all. Unfortunately, as it turns out, it doesn’t work. At first we thought it was because the ADC pin we were using was also connected to the JTAG header via a 10kΩ resistor. Since that header should be floating when not in use, that shouldn’t be a problem… unfortunately we were both rather weary at this point, so we decided to cut the connection. Silly idea. While we can always solder a wire to restore it if we end up using the JTAG connection – which at this point seems unlikely – it was a silly mistake. Of course, we found that didn’t help one bit. Well, I lie… it may have made some trivial difference, but our problem remained. I started wondering about the input current to the ADC pin, which while an irrelevant detail (±50nA, btw) did perhaps trigger Rob to think about the input impedance. Or maybe not… I can’t remember, but one way or another he came in reading out the portion of the datasheet on the ideal impedance to connect to the ADC… ≤10kΩ, in fact. Whoops… 255kΩ (the two 510k’s in parallel) might be a bit much, then. So Rob removed the 510k’s and replaced them with 22k’s. Voilà, it worked! Thanks to the variable-voltage DC adapter we tested voltages ranging from 9V or so right down to 3.3V (which turned the board off, yes :) ). Brilliant. We’re now wasting a hundred µA or so on the battery sensor, but given that the servos and motor use amps, I’m pretty sure we don’t care. We can easily make that up again by tweaking the Bluetooth module, for example.
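For the curious, the reading itself is straightforward once the divider cooperates – roughly the following, with the channel number, prescaler and divider ratio being placeholders for whatever actually ends up on the board:

```c
/* Sketch: single-ended ADC read of the battery divider. BATTERY_CHANNEL,
 * the prescaler and the divider ratio are placeholders; register layouts
 * vary slightly between AVR parts. */
#include <avr/io.h>
#include <stdint.h>

#define BATTERY_CHANNEL 0   /* hypothetical ADC input */

uint16_t adc_read(uint8_t channel) {
    ADMUX  = (1 << REFS0) | (channel & 0x07);              /* AVcc (5.0V) reference, select channel */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1);    /* enable ADC, prescale the clock into range */
    ADCSRA |= (1 << ADSC);                                 /* start a conversion */
    while (ADCSRA & (1 << ADSC))                           /* wait for it to finish */
        ;
    return ADC;                                            /* 10-bit result, 0..1023 */
}

/* With the divider halving the battery voltage (two equal 22k resistors),
 * millivolts is roughly raw * 5000 / 1023 * 2 -- done in 32 bits to avoid
 * overflow. A full-scale 1023 reads as 10V, and a 7.2V pack as about 737. */
uint16_t battery_millivolts(void) {
    uint32_t raw = adc_read(BATTERY_CHANNEL);
    return (uint16_t)(raw * 5000UL * 2 / 1023);
}
```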
Next up we tried the temperature sensor. Utilising Rob’s handy little portable blow torch and a can of compressed air, we were able to test heating and cooling of the sensor and observe rational output. We’ll need to review the datasheet to convert the raw measurement to temperature, but that’s relatively trivial.
After that, the accelerometers. These we didn’t have much luck with… the ‘X’ accelerometer (which may be the Y; Rob wasn’t sure) displayed a virtually fixed value (~670, from memory), while the ‘Y’ accelerometer was fixed at 1023. I think it was eventually discovered that the ‘Y’ accelerometer was being pulled to Vcc by something… whoops. As for the other… well, we don’t know. Rob was fiddling with them just before I left, and after some sort of modification was able to get a proper voltage variance on his DMM. Even better, it seems like the voltage varies with tilt, not acceleration… which, if ultimately correct, is a great bonus for us, as it makes our lives a whole lot easier. I left Rob to play with that, anyway, now that he has a working serial interface to spit data out onto (with printf formatting). All in all, we got Bluetooth working, and half our sensors, so it wasn’t that bad a day.
Now I just need to write some appropriate test code for the FAT16/32 & MMC/SD systems, for use on the AVR, and to keep banging away at the flight planning software. Rob’s going to try to get the remaining sensors to work, and hopefully sometime in the next week all this will come together into our maiden (manually-controlled) flight, to gather real-world sensor data and whatnot. Hopefully… our luck, while not entirely abysmal, has been far from spectacular thus far.