Monday, August 29, 2016

Reading Serial IMU Data

Last time we left off, we were able to send data serially to our Python client, which in turn published the data to our ROS topic. Since we made a proof of concept (POC) with static values, we should be able to use this functionality to send the data of the Inertial Measurement Unit (IMU) serially. IMUs are notoriously noisy, so the readings from the accelerometer and gyroscope need to be filtered in order to observe meaningful data. Below we will explore one way to read IMU data serially, run it through a complementary filter, and write it to the output buffer.

Reading the IMU

If you remember from an earlier blog post where I listed the sensors I bought, the IMU is a LSM6DS33 3D Accelerometer and Gyro. It includes a 3.3 V voltage regulator that accepts a supply range of 2.5 V to 5.5 V, which is nice since the Arduino Pro Mini supplies 5 V. Pololu, the website where I purchased the sensors, has a resources section which directed me to a library written specifically for the sensor. Upon reviewing the source on GitHub, I decided this would be a good way to expedite development since the library is very simple and I have no experience reading raw sensor values. After installing the library, an example sketch can be loaded through the Arduino IDE menu.


The code on the right is the LSM6 example sketch, and it contains everything you need in order to read the data from the IMU. All you need to do now is upload this sketch to the Arduino, make sure the IMU is wired to the Arduino correctly, and provide power. While the program is running, if you have the Serial Monitor open, you will see a stream of readings filling the screen.
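For reference, here is a minimal sketch in the spirit of the library's example. It assumes Pololu's LSM6 Arduino library (the LSM6 class with init(), enableDefault(), read(), and the a/g vectors); treat it as a sketch, not a verbatim copy of the bundled example.

#include <Wire.h>
#include <LSM6.h>

LSM6 imu;

void setup() {
  Serial.begin(9600);
  Wire.begin();            // the LSM6DS33 talks over I2C
  if (!imu.init()) {
    Serial.println("Failed to detect the LSM6!");
    while (1);             // halt if the sensor is not wired correctly
  }
  imu.enableDefault();     // +/-2 g accel, 245 dps gyro full scale
}

void loop() {
  imu.read();              // fills imu.a (accel) and imu.g (gyro) with raw counts
  Serial.print("A: ");
  Serial.print(imu.a.x); Serial.print(' ');
  Serial.print(imu.a.y); Serial.print(' ');
  Serial.print(imu.a.z);
  Serial.print("  G: ");
  Serial.print(imu.g.x); Serial.print(' ');
  Serial.print(imu.g.y); Serial.print(' ');
  Serial.println(imu.g.z);
  delay(100);
}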

(Note: The stutter in the readings is caused by the Serial Monitor and not the sensor, as we will see later when our ROS topic publishes data in real time without lag.)


Interpreting the Data

The above image shows readings that are not very intuitive at first glance and can intimidate new hobbyists; hopefully the explanation below will help demystify the numbers and the process.

Accelerometer

The expected values for the accelerometer are X = 0, Y = 0, Z = -1, which just means that there is a downward force of 1 g. For those that might not find these expected values intuitive, there are two reasons we expect them. First, gravity is always acting, pulling us towards the center of the earth (1 g, or -9.8 m/s^2). Second, since we are modeling the real world, the sign (+/-) of the force only determines the direction in which the force is applied; for example, the Z axis represents up (+) and down (-). The accelerometer can also measure different scales of forces, but in our case we have set the full scale setting to 2, which means we can effectively measure forces between -2 g and 2 g.

Now that we have expected values, we can begin to convert the raw readings into actual forces. Looking at the image above, we can take a value from the 3rd column (Z acceleration); I will pick -16,642. From page 15 of the LSM6 Data Sheet, we can find the linear acceleration sensitivity for the full scale setting of 2: 0.061 mg/LSB (milli g's per Least Significant Bit). Multiplying the raw data by the sensitivity we get the following:

1. -16,642 * 0.061 = -1,015.162 mg (These units are milli g's)

We need to account for the milli g's by dividing the number by 1000 to get g units.
2. -1,015.162 mg / 1000 = -1.015162 g

This value is close enough to -1 that we can dismiss the extra force measured as background noise. The above math can be made more generic so it can be applied to each dimension measured by the accelerometer.

3. Raw Data * Linear_Acceleration_Sensitivity * (1 g / 1000 mg)

If we convert the Linear_Acceleration_Sensitivity value from milli g's to g's during setup, we can reduce the total calculations per iteration.

4. Raw Data * Linear_Acceleration_Sensitivity_G = result
(Note: Steps 1 & 2 combined are Step 4)


Gyroscope

The expected values for the gyroscope at rest are 0 dps (degrees per second) for the X, Y, and Z dimensions because the gyroscope measures angular velocity, the rate of change in orientation. The raw data is measured in mdps/LSB and must be converted in order to be a meaningful value. Again, looking at page 15 of the LSM6 Data Sheet, we can find the angular rate sensitivity for the full scale setting of 245, which is 8.75 mdps/LSB (milli degrees per second per Least Significant Bit).

First, our angular rate sensitivity is in mdps and we need to turn it into dps.
1. 8.75 / 1000 = 0.00875 dps/LSB (This result is the Angular_Rate_Sensitivity in terms of dps)

Multiply the raw value by the angular rate sensitivity to get degrees per second.
2. Raw Data * Angular_Rate_Sensitivity = result

If we calculate some of the values in the above image, we will notice that the results are not 0 and in fact oscillate within a specific range. This oscillation is due to imperfections in the sensor and must be accounted for by setting a threshold that floors the readings and helps remove the noise. In our case, I checked the data sheet and noticed there is a zero-rate level of (+/-)10 dps, so any readings between -10 and 10 dps are floored to 0.
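A short sketch of the gyro conversion plus the flooring threshold (again, names are mine):

// Sensitivity for the 245 dps full scale setting, converted from mdps/LSB.
const float GYRO_SENSITIVITY_DPS = 8.75f / 1000.0f; // dps per LSB

// Zero-rate level from the data sheet; readings inside this band are noise.
const float GYRO_DEADBAND_DPS = 10.0f;

// Convert a raw 16-bit gyro reading into dps, flooring small values to 0.
float gyroRawToDps(int16_t raw) {
  float dps = raw * GYRO_SENSITIVITY_DPS;
  if (dps > -GYRO_DEADBAND_DPS && dps < GYRO_DEADBAND_DPS) {
    return 0.0f;
  }
  return dps;
}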

Note: The result is in degrees per second, but we are reading data from the IMU much more often than once per second. To remedy this we need to integrate (sum the data over time) by multiplying the result by the time it took to run the last iteration (delta time, normally in milliseconds). As soon as we integrate the gyro data (gyroData * dt), we introduce error because our samples are not continuous; this error shows up as drift, and it is the main reason we will smooth the data using a complementary filter.

Complementary Filter

The idea of smoothing out the data is not new, and there is even a standard approach called the Kalman Filter. While the Kalman Filter is very complex, there is a simpler approach that requires little overhead. (We will be implementing a Kalman Filter in the future, just not right now.) For now, we will use what is called a Complementary Filter:

angle = (angle + gyroData * dt) * 0.98 + (angle of acceleration * 0.02) 


To get the angle of acceleration we can use the arc tangent, but this causes problems because quite frequently our divisor will be 0, and we cannot divide by 0. Luckily we have access to the atan2 function, which takes 2 inputs and returns the angle they represent while handling the zero cases for us. There are plenty of websites that explain the arctangent and atan2 functions, so if you are interested I suggest reading further.
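Putting the pieces together, a minimal version of the filter step might look like the following. It assumes the conversion helpers sketched earlier; the axis choice and names are illustrative.

#include <math.h>

float angle = 0.0f;  // filtered angle in degrees

// One filter step: gyroRateDps comes from gyroRawToDps(), accelY/accelZ
// from accelRawToG(), and dt is the seconds since the last iteration.
void updateAngle(float gyroRateDps, float accelY, float accelZ, float dt) {
  // atan2 handles accelZ == 0, where a plain atan(accelY / accelZ) would fail.
  float accelAngle = atan2(accelY, accelZ) * 180.0f / M_PI;

  // Trust the integrated gyro for fast changes, the accelerometer long term.
  angle = 0.98f * (angle + gyroRateDps * dt) + 0.02f * accelAngle;
}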

(Note: The above code will probably not work if copy-pasted unless the rest of your setup is configured exactly the same.)


(Disregard the linear values as they are debug terms I am using)

We see that at rest the IMU reads 0, and when disturbed, we get precise readings. Moving forward, these values will be fed into a PID controller, which will be used to stabilize and move the quadcopter.

Findings

There is very little documentation for hobbyists online at the moment, and I am continually scouring the internet for resources, but they are few and far between. I think this is a good opportunity to add my findings to the collective knowledge base of the internet, and hopefully my struggles will make someone else's experience easier. As always, feel free to leave a comment or question. Happy flying!





Tuesday, August 23, 2016

Roadblocks, an Opportunity for Learning

After getting the virtual machine set up, I installed ROS and went through all of the tutorials in order to better understand the framework. In the end, I had a working topic, message, publisher, and subscriber. The examples provided on the ROS website were enough to set up a simple application, understand how to interact with the framework, and use ROSSerial to communicate with the visualizer, Rviz.




The Hill

Coming from a software developer background, I am not used to hardware constraints. If I need another variable, I just make one. With the Arduino, we only have 32 KB of flash memory (2 KB of which is reserved for the bootloader), 2 KB of SRAM, and 1 KB of EEPROM, so memory is at a premium. After finishing the tutorials and making a working example, I decided it was time to use ROSSerial to publish the IMU data so Rviz could visualize it. I thought combining the IMU example with the ROSSerial example would produce a desirable result, but in the end it proved nothing but a headache. We will still use ROS, just not on the Arduino.




The Battle

As soon as I added a node handler, the 2 KB of SRAM dwindled, as seen in the picture above. Even with the most basic usage, I was running out of SRAM before I could do any calculations, AND the motors weren't even considered yet. I looked for other ways to solve this memory problem because it caused syncing problems during sketch uploads. The Arduino provides the capability to store common parameters in EEPROM and query them at run time, so I tried to utilize this space to hold the publisher and node handler, since these objects are required globally; unfortunately, they are too large for this reserved space, so the search continued. By this time a few days had passed, and I decided that the ROSSerial library is simply too large for the Arduino, so I needed to find a different way to send the IMU data to Rviz.


Success

Eventually, writing the data serially became my last option and, incidentally, it turned out to be the easiest solution. I read up on serial communication and a few other hardware topics that were unfamiliar to me, then came up with the simple solution seen below. Arduino exposes functionality that allows us to write data serially with the Serial.write() and Serial.print() commands, which support strings, arrays, and bytes. The test data is written while, on a port connected to the Arduino, a Python client listens and decodes the data to be used by Rviz.
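The Arduino side of that proof of concept can be as small as the sketch below; the field layout and baud rate are illustrative, and the Python client just has to agree with them.

void setup() {
  Serial.begin(57600); // the Python client must open the port at the same rate
}

void loop() {
  // Static x, y, z test values, newline-terminated so the client can
  // read the port line by line and split on commas.
  Serial.print("0.00,0.00,-1.00\n");
  delay(100);
}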

The Python script handles everything: initializing the node, creating the publisher, connecting to the Arduino, reading the serial data, publishing the data, and printing it to the logs. This script will be heavily used for simulating data in Rviz, which will be the subject of the next blog post. Aside from the script, I also created a launch package for the Python client, roscore, and Rviz so I no longer need to have a separate terminal open for each. Going forward, new executables will be added to this launch package to make deploying everything easier. Below is an example of the data being sent by the Arduino and received by the Python client on the VM.




Lessons Learned

Over the course of the week I spent many hours reading blogs, documentation, and other sources. After countless failed attempts, continuing can be frustrating and somewhat disheartening, but by keeping a level head and repeatedly attacking the problem from different angles, I was able to solve the issue AND learn several things in the process. Another important thing I learned is that IMUs are prone to drift, which requires a remedy. The next item to work on is a Kalman Filter that will allow us to smooth the drift by using a feedback loop.

As always, feel free to leave a comment or question.

Sunday, August 14, 2016

Setting Up Our Development Environment!!!

Parts are arriving daily! Soon we can begin connecting the basic sensors and start getting actual readings. Our goal for this week was to get our development environment set up so we can start reading sensor data on the Arduino Pro Mini.


(Sensors)

Environment Setup


Virtual Machine (VM)


We need a development environment that can be isolated from our workstation without being burdensome. A good tool that satisfies this requirement is a virtual machine: software capable of emulating hardware, allowing for sandboxed operating systems within your current workstation. There are several vendors of virtual machines, but I normally stick with Oracle VM VirtualBox since I am already familiar with the software. Once it is installed, we can download an operating system image and use it to install a fresh OS on the VM so we can begin to set up our dev environment.

Operating System (Ubuntu)


In order to utilize the VM, we will need an operating system to run. In this series, I will use Ubuntu version 16.04 as it is recommended for some of our third party libraries. Install Ubuntu onto an instance of a VM and follow the on-screen instructions. After some time, the software will be installed and we can move on to the next step.

IDE (Arduino)


Our programs need to run on the Arduino Pro Mini and Raspberry Pi Zero. In order to write programs that CAN run on those electronics, we need to install the latest Arduino IDE. Take note, I ran into issues installing the latest version using "sudo apt-get install arduino", so I did some searching and found a workaround. Get the latest IDE installed and we can move on to installing some third party libraries that will make development much easier.

Third Party Libraries (ROS)

ROS is a framework for creating robot software. We will use the concepts it provides to design our own framework for communication between all of the sensors. In order to utilize the ROS communication framework on our Arduino, we need to use the ROSSerial library. This will expose ROS functionality on the Arduino and make development and integration of all the sensors painless. The first sensor we will be testing is the accelerometer/gyro, and luckily, someone has already written a lean library for reading the raw data.

Goals


Our goals for this week are to get a working visual simulation of the accelerometer/gyro in ROS, which means we need to complete the tutorials, and to research quaternion math and how it applies to orientation.


Extras!


I got so excited while setting up my environment that I made a little surprise for you! Here is a sneak peek at what all of the above will look like.


Tuesday, August 9, 2016

Road Map Going Forward

Now that we have simulated the basics of flight in MATLAB, it is time to think about translating this functionality into real life. With that in mind, there is an entirely new set of problems to deal with now that hardware is in the picture. Normally you would buy a prefabricated quad copter for up to $1k, or buy each piece separately, get a flight controller, and build the quad like a Lego set. What we will do is obtain raw hardware such as an Arduino, a Raspberry Pi, and other sensors, and combine them into a working flight controller and eventually a fully functional quad copter. After the hardware is built, we will program autonomous behavior so multiple quad copters can function together as a group and complete designated goals.

With such an ambitious goal, it is imperative we have a road map to keep us focused and provide some structure as we move forward. This project will follow multiple phases that separate various concerns and provide milestones that will guide us to the end goal; a fully functional autonomous robot.

Phase 1: Acquire Hardware

Before we can even begin to program the PID controller or buy the motors, we need to know how much our quad copter will weigh. In order to know what sensors to buy, we must decide the quad's capability. The quad we are building in this blog will be general purpose, with the ability to navigate indoors and outdoors. Through research, the below items were selected to complement each other in order to maximize power (MHz and Watts) and quality relative to cost ($$ and time).

Ultrasonic Range Finder
WiFi
Frame

Phase 2: Assemble and Test

After the hardware is purchased, it is time to assemble it and ensure the readings are accurate. This process will be iterative and consist of trial and error. Arduino's IDE makes uploading programs and monitoring the output very easy. We will also begin to lay out the framework for how we want the sensors to communicate with each other, which will also dictate the physical layout of the electronics. The more sensors added to the quad, the more data needs to be interpreted. In order to manage the massive amount of data, we will create programs that parse the data to extract meaningful information and run relevant tests.

Phase 3: Software

PID Controller
Flight Controller
Linear Movement
Non-Linear Movement
Quaternions (Stretch goal for the flight controller)

Once the quad copter is actually built, it is time to start programming the software used in the electronics. We will do this by using the knowledge we gained during the Coursera class on Aerial Robotics, extra research, and a little intuition. The first part is the PID controller, which will control the stability of the quad copter during flight and its orientation so the quad can move. The flight controller will send commands to the quad, which will be interpreted by the PID controller, causing the quad to move to desired locations. At the start, we assume all our movements are linear. When linear motion is completely implemented and tested, we will move to non-linear movement, sans cubic splines. If we want to test our skills, we can add quaternions to the flight controller as an extra feature.

Phase 4: Automate

After we build one quad copter, we need to build a few more and make them work together. We will implement swarm logic and goal-oriented behavior to control each individual quad. There are a few simple rules we can implement that, when paired with sensors and libraries, provide the quad copters with a framework for cooperating with neighbors. We will use something similar to a Kalman Filter to help each quad autonomously decide which sensors to pull data from. In order for a quad copter to move within boundaries and reach goals, it needs to be able to localize itself.

Thoughts

This is a rough sketch of what I would like to accomplish and is neither complete nor comprehensive. Throughout these blog posts this road map will be refined, and details will emerge at the appropriate times. There are a few programs we will need to install in order to simulate and test our sensors, such as ROS, Rviz, and Ubuntu; we will go over these later.


Monday, August 8, 2016

3 Dimensional Movement

Now that we have a basic understanding of motion in 2 dimensions, we can begin to expand the range of motion and introduce another dimension of movement. If you recall from the previous post, the quad copter either moved along the z or y axis. While interesting, it does not model real world movement so we need a better approach. In this post, we will go through the necessary steps to add the x direction and enable full 3 dimensional movement.

(Helix Pattern)

Thrust

Traditionally, the quad copter's frame and propellers are rigid, which means the Force generated by the propellers is always perpendicular to the quad's frame. Locally, we define this perpendicular direction to be the z axis.[1] Knowing this, the minimum Force needed to either hover or rise can be calculated using the following equation:

Force = Mass * Acceleration

Our mass is the mass of the quad copter, and our acceleration is gravity plus any additional acceleration we want to add. Since gravity is always acting on our quad copter in the downward direction (-9.8 m/s^2), we need to counter it by providing at least that much acceleration upwards in Thrust. If we wanted to rise in altitude, we would need to provide Thrust corresponding to more than 9.8 m/s^2. The equation to calculate the amount of acceleration needed can be found below.

Acceleration = Gravity + Desired Acceleration in z direction (z_command)

z_command = desired acceleration z
                        + kd_z * (desired velocity z - current velocity z)
                        + kp_z * (desired position z - current position z)

If we look at the previous post, this is the same equation except that, instead of using the y axis, we are now using z, for the reasons explained above. The proportional term (kp_z) and derivative term (kd_z) help reduce the amount of error in the system: the proportional gain scales the response to position error, while the derivative gain scales the response to velocity error, together controlling how quickly and smoothly the quad copter reacts to change.

In the end, our Force equation looks something like this:

Force = Mass * (Gravity + z_command)

The other dimensions, x and y, are calculated the same way but their values are not used in the Thrust calculations, rather they are used to calculate Orientation. Note that each calculation has its own derivative and proportional terms with respect to their axis.

x_command = desired acceleration x
                        + kd_x * (desired velocity x - current velocity x)
                        + kp_x * (desired position x - current position x)

y_command = desired acceleration y
                        + kd_y * (desired velocity y - current velocity y)
                        + kp_y * (desired position y - current position y)
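As a sketch of how these command equations translate into code (the struct and function names are mine, and the gains would need tuning on real hardware):

// Per-axis state for the PD position controller described above.
struct AxisState {
  float desiredAccel;
  float desiredVel, currentVel;
  float desiredPos, currentPos;
};

// Generic PD command: desired accel plus derivative and proportional corrections.
float pdCommand(const AxisState& s, float kp, float kd) {
  return s.desiredAccel
       + kd * (s.desiredVel - s.currentVel)
       + kp * (s.desiredPos - s.currentPos);
}

// Thrust along the body z axis: Force = Mass * (Gravity + z_command).
float thrust(float mass, float gravity, float zCommand) {
  return mass * (gravity + zCommand);
}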

Orientation

The second component of 3D movement is orientation. There are 3 rotations we can apply to the quad copter: pitch rotating about the x axis, yaw rotating about the z axis, and roll rotating about the y axis. Our output for this term is going to be a 1x3 vector representing the x, y, and z rotations: phi, theta, and psi respectively. The x and y rotations have their equations described below, while the z calculation is a simple difference used to determine the yaw.

Pitch
phi_command = 1 / Gravity
                            * (x_command * sin(desired yaw rotation)
                            - y_command * cos(desired yaw rotation))

Roll
theta_command = 1 / Gravity
                            * (x_command * cos(desired yaw rotation)
                            + y_command * sin(desired yaw rotation))

We use the desired yaw rotation (z angle) because both the x and y axes need to be rotated about the z axis. This allows for some measure of control when dealing with rapid movements. Now, the above equations only calculate the raw angles needed, so we will need to apply some error smoothing in order to reduce choppy movement.[2]

Orientation = [kp_phi * (phi_command - current phi) - kd_phi * current angular velocity x;
               kp_theta * (theta_command - current theta) - kd_theta * current angular velocity y;
               kp_psi * (desired yaw - current yaw) + kd_psi * (desired angular velocity z - current angular velocity z)];
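Here is a compact sketch of the attitude math above; function and parameter names are illustrative, and the angles are in radians for sin/cos.

#include <math.h>

// Convert the x/y acceleration commands into desired pitch/roll angles,
// rotated through the desired yaw as described above.
void orientationCommands(float xCmd, float yCmd, float desiredYaw, float gravity,
                         float* phiCmd, float* thetaCmd) {
  *phiCmd   = (xCmd * sinf(desiredYaw) - yCmd * cosf(desiredYaw)) / gravity;
  *thetaCmd = (xCmd * cosf(desiredYaw) + yCmd * sinf(desiredYaw)) / gravity;
}

// PD smoothing for one angle: proportional on angle error, derivative
// damping on the current angular velocity.
float angleControl(float kp, float commanded, float current,
                   float kd, float currentRate) {
  return kp * (commanded - current) - kd * currentRate;
}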

When everything is put together it will look something like this.


Conclusion

Adding a layer of trajectory planning on top of the PID controller gives the quad copter the ability to perform maneuvers like those seen in the images below. This Coursera class taught me so much about how a quad rotor maneuvers and the calculations needed in order to program a flight controller. In the coming weeks, I will begin the journey of building my own quad copter from the ground up. This includes building the hardware as well as programming the flight controller and autonomous behavior.


(Straight Line)

(Waypoints)


More Information

1. I come from a background of graphics programming in OpenGL, and the coordinate system traditionally used in that context has the z axis projecting out of the screen, y pointing upwards, and x being horizontal. In the Coursera class I am taking, the coordinate system is different, so it is worth noting the difference. In the context of quad copters, z points upwards, y projects out of the screen, and x is still horizontal.

2. Note that each angle has its own derivative (kd) and proportional (kp) term.