Saturday, October 15, 2016

Hybrid Application Development

I recently installed Ionic 2 and MongoDB so I can begin to familiarize myself with the new world of hybrid application design. I chose Ionic because it is currently the leader in front-end standardization for mobile (iOS and Android) as well as web applications. It uses Cordova to build the application for the desired mobile platform, which makes development easy since I only have to code everything once and then deploy to all environments. MongoDB is a NoSQL database: instead of storing data in rows and tables, it stores data as JSON-like documents, which allows for easy schema modification and scaling as an application grows. I chose it because, since I am building this app on the fly, I do not necessarily know every data point that needs to be stored up front. While working through some of the finer points, I encountered a few issues that were not necessarily intuitive, so the sections below document them.
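To illustrate the schema flexibility, here is a minimal sketch using the pymongo driver; the database name, collection name, and fields are hypothetical, and the connection string assumes a local MongoDB instance.

    from pymongo import MongoClient

    # Assumes MongoDB is running locally on the default port.
    client = MongoClient("mongodb://localhost:27017")
    db = client["hybrid_app_dev"]   # hypothetical database name

    # Two documents in the same collection do not need identical fields,
    # so new data points can be added later without a schema migration.
    db.profiles.insert_one({"name": "Alice", "email": "alice@example.com"})
    db.profiles.insert_one({"name": "Bob", "favorite_color": "green"})

    for profile in db.profiles.find():
        print(profile)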

Ionic 2


In order to install Ionic, we first need to install Node.js. One thing to note here: we usually install the latest version of software and it usually works, but Node can be fickle and buggy in its newest releases. I suggest picking a version that is a few releases behind the latest to ensure the major bugs have been worked out and patched; at the time of writing this blog, I am using v6.1.0.

There are two Command Line Interface (CLI) flags that I find very useful when developing with Ionic: "--dev" and "--save". "--dev" is currently deprecated as of v6.1.0 and is replaced with "--only=dev", but the functionality remains the same. This flag fetches node packages recursively, which means it pulls all dependencies along with the parent package. This is incredibly useful for new developers who need to set up a new environment and might not have the granular package details of existing environments. An example of this, on Windows, is as follows: "npm update --dev" or "npm update --only=dev".

The second flag is "--save". This flag pulls the plugin AND adds a reference to it in your config.xml file, which lists all of the plugins your project is using. Many times while developing we might want to test a plugin without actually saving it, so we can omit "--save" to ensure it is not coupled with the project. An example of this, on Windows, is as follows: "cordova plugin add cordova-plugin-facebook4 --save".

Command Prompt


When I first started developing with Ionic I used a non-administrator command prompt, which eventually led to much frustration because it would serve me errors that had little support on the internet. It took me a little while to discover that these errors were not caused by any code or setup problems but rather by the fact that I installed and ran Ionic 2 from a non-administrator command prompt. Make sure that when you start a command prompt you open it with administrator privileges; otherwise you will encounter errors and become frustrated when things do not work.

Conclusion


I learned a ton about the beginning stages of building a hybrid app this week, and I hope to continue by extending it to native plugin integration such as Facebook / Google+ / Twitter authentication and database integration. One theme that keeps popping up while working with all of these technologies is the difficulty of learning new software. Every time there is something new, it is inevitable that we will encounter hardships that feel impossible, but if we can break the problem down and understand its parts, we will eventually solve it and continually grow our knowledge.

As always, feel free to leave a comment or question. Happy programming!

Saturday, October 1, 2016

Click Events and Components

Looking Back and the Road Ahead


It is the start of my third month since starting this blog, and I am pleased with my progress so far. Looking back and remembering all the things I have done up until now is very satisfying. Here is to another three months and more!


Click Events


In any modern game, the user needs to be able to interact with objects they see on the screen, whether with a mouse, a controller, or touch. In Unity, MonoBehaviours expose specific events such as OnMouseDown() and OnMouseOver(), and when a collider component is attached to the GameObject, these predefined methods are triggered if specific requirements are met. It is important to note that if your GameObject does NOT have a "Mesh Collider" component, the events will not be fired. Another caveat about the above two methods is that OnMouseDown() is only useful for left mouse button clicks, whereas OnMouseOver() is able to capture all three mouse button inputs. An example of this behavior can be seen below.

(In this example I click each mouse button four times but only the left click is caught.)


Distinguishing the differences between these two methods was painstaking since there is no documentation of this behavior, and many Stack Overflow answers referenced older versions of Unity, which was not helpful. Hopefully this will help someone in the future avoid the same mistakes and save them some time. Now that I am able to capture left and right mouse clicks, I can create and destroy cell tiles with just a few clicks!



Components


Another crucial part of Unity is Components, as they are the building blocks of all things; if you want to do anything useful you will need to understand Components and how they affect GameObjects. Currently I am only using a few Components, such as colliders and scripts, in order to keep things simple.

Game Map


The first script I created is the MapEditorScript, which holds the information relevant to the whole game map such as cell stacks, units, and other map-related logic. In the image above you can see the 5x5 grid of hexagons, and in order to display them I need a few things defined. First, I need a prefab; creating one is easy if you follow the steps found in the documentation. In order for my script to run, it needs to be attached to a GameObject, but before the map is created there are no GameObjects in the scene. This obstacle can be overcome by creating an empty GameObject and attaching the MapEditorScript to it. Once the script is added, I can run the preview and see the map generated at run time (a rough sketch of the generation loop follows below).
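The actual script is a Unity C# MonoBehaviour, but the spacing math is language agnostic. Below is a minimal Python sketch of how a 5x5 grid of flat-topped hexagons could be laid out, assuming a hexagon prefab of circumradius r centered at the origin; the numbers and names are illustrative, not the real MapEditorScript.

    import math

    GRID_WIDTH = 5
    GRID_HEIGHT = 5
    R = 1.0  # hypothetical circumradius of the hexagon prefab

    def hex_grid_positions(rows, cols, r):
        """Yield (x, y) centers for a grid of flat-topped hexagons."""
        width = 2 * r                 # corner-to-corner width
        height = math.sqrt(3) * r     # flat-side-to-flat-side height
        for col in range(cols):
            for row in range(rows):
                x = col * 0.75 * width       # neighboring columns overlap by a quarter width
                y = row * height
                if col % 2 == 1:             # odd columns are shifted by half a cell
                    y += height / 2
                yield (x, y)

    # In Unity this is where each position would be passed to Instantiate(prefab, ...).
    for position in hex_grid_positions(GRID_HEIGHT, GRID_WIDTH, R):
        print(position)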



Cell Stack


I want to be able to click on each tile and create a new tile on top of the existing one. To do this, I need some data structure to hold the stack of cells to be rendered and to handle events on each click. I made a CellStackScript which handles everything related to the cell stack, such as creating and destroying tiles, which can be seen below.



The bulk of the work is done by the CreateNewCellTile() method, which instantiates the prefab; sets its position, orientation, and parent; adds the CellTileScript; and finally pushes the newly created tile onto the cell stack. This sequence of events allows me to click on each cell stack individually and grow or trim each stack separately.
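The real script is Unity C#, but the stack bookkeeping itself is simple. Here is a hedged Python sketch of the create/destroy sequence described above; Tile, and the way a tile would actually be instantiated and positioned, are stand-ins for the Unity calls.

    class Tile:
        """Stand-in for an instantiated tile prefab."""
        def __init__(self, height):
            self.height = height  # how far up the stack this tile sits

    class CellStack:
        def __init__(self):
            self.tiles = [Tile(0)]  # the bottom tile always exists

        def create_new_cell_tile(self):
            # In Unity this is where the prefab would be instantiated, positioned
            # above the top tile, parented to the stack, and given a CellTileScript.
            new_tile = Tile(len(self.tiles))
            self.tiles.append(new_tile)
            return new_tile

        def destroy_top_cell_tile(self):
            # Never remove the bottom tile; it anchors the stack.
            if len(self.tiles) > 1:
                return self.tiles.pop()
            return None

    stack = CellStack()
    stack.create_new_cell_tile()   # e.g. triggered by a left click
    stack.destroy_top_cell_tile()  # e.g. triggered by a right click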




Cell Tile


I mentioned above adding the CellTileScript to the child cells. Before this script is added, only the very first tile (the bottom tile) has click events, which is not realistic in this scenario, especially once stacks begin to grow in height and hide tiles behind them. We must attach a click event to each child that allows it to create a new tile in its cell stack. By giving each cell tile a reference to the parent cell (the bottom-most tile), we enable it to call back to the parent and create or destroy cells with click events on any tile in the stack.




Conclusion


The topics discussed above took some time to understand, and I ran into many roadblocks along the way. Fortunately I resolved them through some creative problem solving and deductive reasoning. I now understand how Components fit into the Unity framework and, moving forward, will begin to utilize them more often. Click events will be used quite frequently since they are the main way the user will interact with the game world.

The next thing I want to accomplish is to add a few textures dynamically to the tiles in order to give them a bit more character as well as make them more visually appealing. Lastly, I want to modify the camera so I can move around and rotate, allowing me to view all sides of the map.

As always, feel free to leave a comment or question. Happy gaming!

Tuesday, September 20, 2016

Back to the Basics

To begin the process of building this game, I needed to decide a few things:
  1. In what framework am I going to build this game?
  2. What program will I use to create models (2D or 3D)?
  3. In what language will the scripting be written?
Another important thing I need to consider is cross-platform compatibility. I personally would like this game to run on many different platforms with as little effort as possible. Taking all of the above into account, I decided to work with the free game engine Unity and the free modeling software Blender. Both of these are advanced tools used to create amazing AAA games, but for my purposes I will be using a small fraction of their features in the beginning as I learn.

Blender

The sheer number of features in Blender can cause sensory overload, and even the most veteran developer can feel intimidated. When I first opened the application, I had no idea what to do and I almost closed it and quit, but I really wanted to build this game so I persevered. The community built around this tool is incredibly helpful and answered so many of my beginner questions that I feel indebted to them.

My first task is to create a simple hexagon that will be used to tile game cells. These hexagons will eventually be textured to look like different terrain types as well as gain additional functionality not yet determined. There are a lot of ways to create a hexagon, but in my short time with Blender I feel I have found the quickest and easiest solution that is not already a top search result on Google.

Step 1. Open Blender



Step 2. Delete everything in the scene



Step 3. Create a Circle Mesh

Step 4. Modify the Circle properties to have 6 Vertices, set the fill type to Ngon, and set the Location to 0,0,0.


Step 5. Select the model and change to "Edit" mode. Use the hotkey Alt + E to enter "Extrude" mode.
-I gave mine a depth of 0.25.



Step 6. Save the model. (I saved mine with a .blend extension)

Now we have a 3D model of a hexagon that will serve as the base model for our terrain tiles. The next task on our list is to boot up Unity and get this tile to display in the engine.

Unity

Similar to Blender, Unity offers numerous advanced features that can intimidate a developer. When things get too chaotic and you feel discouraged by too many unknowns, take a step back and spend some time going over what you already know. This will help you stay on track while maintaining the vision of your game.

To start off I created a new Unity project and dragged the hexagon I created above into the "Assets" folder. Once in the folder, I clicked and dragged the model into the scene.


This click-and-drag approach won't work for more complex features such as a map editor, so I need to add scripts to the scene in order to achieve some sort of dynamic object creation. The simplest way I was able to do this was by right clicking in the Assets folder and creating a new script, named MapEditorScript.

Double clicking on this script opens it in Visual Studio so I can edit the file. Scripts start out with two functions, Start and Update, and their names make it pretty obvious what they do: code inside the Start method runs once on initialization, and code inside the Update method runs once per game loop (see the sketch below).
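That lifecycle is not unique to Unity; here is a tiny Python sketch of the same idea, purely for illustration and not Unity code.

    import time

    def start():
        # Runs exactly once, before the first frame.
        print("initializing the scene")

    def update(frame):
        # Runs once per game loop iteration (per frame in Unity).
        print(f"frame {frame}")

    start()
    for frame in range(3):      # a real engine loops until the game quits
        update(frame)
        time.sleep(1 / 60)      # pretend we render at 60 frames per second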



Now that the script is created, it MUST be attached to a GameObject in order to run. To do this, click the desired script and drag it onto an object in the scene. If this is not done, the script will not run and you will wonder why nothing is working. (It happened to me.) When everything is finished it will look something like this.





Conclusion

This week was spent learning the bare basics of Blender and Unity so I can begin to implement the core features of this game. I now know how to create basic primitive shapes in Blender as well as write a script that allows dynamic object creation at run time. Both of these concepts will be drawn upon heavily moving forward, so it is important that I understand them fully.

Next week I plan on adding click detection so I can click on areas in the scene to create new objects. Once I can create new tiles with click events, I can implement logic to allow a user to create a dynamic map on the fly.

Wednesday, September 14, 2016

The New Game!

After working on my quadcopter for a while, I decided to branch out and start a new project while I wait on hardware. I am fascinated by video games and the process of developing them, so I figure making one of my own will be fun and a new learning experience. I have always had a few game ideas floating around in my head, so now is the time to put pen to paper and start developing my very own game!

The Gist

Of all the games I have played, the ones that survive are the games that have modding capabilities and a strong community that creates new content. Developers struggle to push out new content for their users in a timely fashion, and it always seems that they are never fast enough. This problem can be solved by opening up modding and content creation tools that allow the community to create and share new things. The main focus of this game will be its modding potential and community content creation, so I will need to create and expose tools that give players the ability to add new things to the game.

What is it?!?

If you have ever played Final Fantasy Tactics Advance, Fire Emblem, or Advance Wars, this is basically a merging of the features from each game that I find critical to fun game play. The main game mechanics that I like are as follows:

Final Fantasy Tactics Advance:
- Dynamic Class System
- Laws that change each map's game play

Fire Emblem:
- Permanent Character Death
- Large maps
- Shops in the map instead of ambient merchants available outside matches

Advance Wars:
- Buildings that generate resources and units
- Player specials (e.g. each commander has a separate but unique super move)
- Map Editor

Over the coming weeks and months I will be implementing these awesome features into a single game and eventually releasing it to the world through some sort of marketplace. I haven't decided if I want to make it a mobile game or PC only.

Starting Point

Starting off, I will be building the map editor first since it is an important part of the game. My vision is to allow any player the ability to create a map and expose it to the rest of the community with little effort. The tiles will be hexagons with their sides horizontal in orientation, and they can be stacked on top of each other to achieve height effects. I will be using Visual Studio as my IDE and, for starters, only Components for the graphics. I have thoughts of moving it to Android and iOS, but I will need to research the difficulty and effort needed to perform such a migration.

As always, feel free to post any questions or comments; happy gaming!

Monday, August 29, 2016

Reading Serial IMU Data

Last time we left off, we were able to send data serially to our Python client, which in turn published the data to our ROS topic. Since we made a proof of concept (POC) with static values, we should be able to use this functionality to send the Inertial Measurement Unit (IMU) data serially. One known thing about IMUs is that they are notoriously noisy, so the readings from the accelerometer and gyroscope need to be filtered in order to observe meaningful data. Below we will explore one way to read IMU data serially, run it through a complementary filter, and write it to the output buffer.

Reading the IMU

If you remember from an earlier blog post where I listed the sensors I bought, the IMU is a LSM6DS33 3D accelerometer and gyro. It includes a 3.3 V voltage regulator that allows an input range of 2.5 V to 5.5 V, which is nice since the Arduino Pro Mini runs at 5 V. Pololu, the website where I purchased the sensors, has a resources section which directed me to a library written specifically for the sensor. Upon reviewing the source on GitHub, I decided this would be a good way to expedite development since the library is very simple and I have no experience reading analog values. After installing the library, an example sketch can be loaded through the Arduino IDE menu.

                       

The code on the right is the LSM6 sketch, and it contains everything you need in order to read the data from the IMU. All you need to do now is upload this sketch to the Arduino, make sure the IMU is connected correctly, and provide power. While the program is running, if you have the Serial Monitor open, you will see a stream of readings filling the screen.

(Note: The stutter in the readings is caused by the Serial Monitor and not the sensor as we will see later when our ROS topic is publishing data in real time without lag.)


Interpreting the Data

The above image shows readings that are not very intuitive at first glance and can scare new hobbyists; hopefully the explanation below will help demystify the numbers and the process.

Accelerometer

The expected values for the accelerometer are X = 0, Y = 0, Z = -1, which just means there is a downward force of 1 g. For those who might not find these expected values intuitive, there are two reasons we expect them. First, gravity is always acting, pulling us toward the center of the earth (1 g, or -9.8 m/s^2). Second, since we are modeling the real world, the sign (+/-) of the force really only determines the direction in which the force is applied; for example, the Z axis represents up (+) and down (-). The accelerometer can also measure different scales of force, but in our case we have set the full-scale setting to 2, which means we can effectively measure forces between -2 g and 2 g.

Now that we have expected values, we can begin to convert the raw readings into actual forces. Looking at the image above, we can take a value from the 3rd column (Z acceleration); I will pick -16,642. From page 15 of the LSM6 data sheet, we can find the linear acceleration sensitivity for the full-scale setting of 2, which is 0.061 mg/LSB (milli-g's per least significant bit). Multiplying the raw data by the sensitivity ratio we get the following:

1. -16,642 * 0.061 = -1,015.162 mg (these units are milli-g's)

We need to account for the milli-g's by dividing the number by 1,000 to get g units.
2. -1,015.162 mg / 1,000 = -1.015162 g

This value is close enough to -1 g that we can dismiss the extra force measured as background noise. The above math can be made more generic so it can be applied to each dimension measured by the accelerometer.

3. result = Raw Data * Linear_Acceleration_Sensitivity * (1 g / 1000 mg)

If we convert the Linear_Acceleration_Sensitivity value from milli-g's to g's during setup, we can reduce the total calculations per iteration.

4. result = Raw Data * Linear_Acceleration_Sensitivity_G
(Note: Steps 1 & 2 combined are Step 4.)
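As a sanity check, here is a small Python sketch of that conversion; the raw value and sensitivity come from the example above, and the variable names are mine rather than anything from the LSM6 library.

    # Sensitivity for the +/-2 g full-scale setting, from the LSM6 data sheet.
    LINEAR_ACCELERATION_SENSITIVITY_MG = 0.061   # milli-g per least significant bit
    LINEAR_ACCELERATION_SENSITIVITY_G = LINEAR_ACCELERATION_SENSITIVITY_MG / 1000.0

    def raw_accel_to_g(raw):
        """Convert a raw 16-bit accelerometer reading to g's (step 4 above)."""
        return raw * LINEAR_ACCELERATION_SENSITIVITY_G

    print(raw_accel_to_g(-16642))   # roughly -1.015 g, i.e. gravity plus a little noise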


Gyroscope

The expected values for the gyroscope at rest are 0 dps (degrees per second) for the X, Y, and Z axes, because the gyroscope measures the rate of change of angular position. The raw data is measured in mdps/LSB and must be converted in order to be meaningful. Again, looking at page 15 of the LSM6 data sheet, we can find the angular rate sensitivity for the full-scale setting of 245, which is 8.75 mdps/LSB (milli-degrees per second per least significant bit).

First, our angular rate sensitivity is in mdps and we need to turn it into dps.
1. 8.75 / 1000 = 0.00875 dps/LSB (this result is the Angular_Rate_Sensitivity in dps)

Multiply the raw value by the angular rate sensitivity to get degrees per second.
2. result = Raw Data * Angular_Rate_Sensitivity

If we calculate some of the values in the above image, we will notice that the results are not 0 and in fact oscillate within a certain range. This oscillation is due to imperfections in the sensor and must be accounted for by setting a threshold that floors the readings and helps remove the noise. In our case, I checked the data sheet and noticed a zero-rate offset of (+/-)10 dps, so any readings between -10 and 10 are floored to 0.

Note: The result is in degrees per second, but we are reading data from the IMU much more often than once per second. To remedy this we need to integrate the result (sum the data over time) by multiplying it by the time it took to run the last iteration (delta time, normally in milliseconds). As soon as we integrate the gyro data (gyroData * dt) we introduce error, because our sampling is not continuous; this error shows up as drift and is the main reason we will smooth the data using a complementary filter.

Complementary Filter

The idea of smoothing out the data is not new; there is even a standard approach called the Kalman filter. While the Kalman filter is very complex, there is a simpler approach that requires little overhead. (We will implement a Kalman filter in the future, just not right now.) For now, we will use what is called a complementary filter:

angle = (angle + gyroData * dt) * 0.98 + (angle of acceleration * 0.02) 


To get the angle of acceleration we could use the arc tangent, but this causes problems because our divisor will quite frequently be 0, and we cannot divide by 0. Luckily we have access to the atan2 function, which lets us supply two inputs and get back an angle representative of them. There are plenty of websites that explain the arctangent and atan2 functions, so if you are interested I suggest reading further.

(Note: The above code will probably not work as a straight copy-paste unless the rest of your setup is configured exactly the same.)
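My actual implementation lives in the Arduino sketch, but the filter itself is only a few lines. Below is a hedged Python sketch of the complementary filter for one axis, assuming raw gyro and accelerometer readings are already available each iteration; the sensitivity, deadband, and 0.98/0.02 weights mirror the discussion above.

    import math

    ANGULAR_RATE_SENSITIVITY_DPS = 8.75 / 1000.0   # mdps/LSB -> dps/LSB (245 dps full scale)
    GYRO_DEADBAND_DPS = 10.0                       # floor small readings caused by sensor noise

    def complementary_filter(angle, raw_gyro, accel_y_g, accel_z_g, dt):
        """Estimate one tilt angle (in degrees) from a gyro axis and two accelerometer axes."""
        # Convert the raw gyro reading to degrees per second and apply the deadband.
        rate_dps = raw_gyro * ANGULAR_RATE_SENSITIVITY_DPS
        if abs(rate_dps) < GYRO_DEADBAND_DPS:
            rate_dps = 0.0

        # Angle implied by gravity alone; atan2 avoids dividing by zero.
        # Z is negated so the at-rest reading of -1 g maps to an angle near 0
        # (this sign depends on how the IMU is mounted).
        accel_angle = math.degrees(math.atan2(accel_y_g, -accel_z_g))

        # Trust the integrated gyro short-term (0.98) and the accelerometer long-term (0.02).
        return (angle + rate_dps * dt) * 0.98 + accel_angle * 0.02

    # Example iteration: 10 ms since the last reading, IMU nearly at rest.
    angle = 0.0
    angle = complementary_filter(angle, raw_gyro=120, accel_y_g=0.02, accel_z_g=-1.0, dt=0.01)
    print(angle)   # a small angle close to 0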


(Disregard the linear values as they are debug terms I am using)

We see that at rest the IMU is reading 0 and when disturbed, we get precise readings. Moving forward these values will be fed into a PID controller which will be used to stabilize and move the quadcopter.

Findings

There is very little documentation for hobbyists online at the moment, and I am continually scouring the internet for resources, but they are few and far between. I think this is a good opportunity to add my findings to the collective knowledge base of the internet, and hopefully my struggles will make someone else's experience easier. As always, feel free to leave a comment or question. Happy flying!





Tuesday, August 23, 2016

Roadblocks, an Opportunity for Learning

After getting the virtual machine set up, I installed ROS and went through all of the tutorials in order to better understand the framework. In the end, I had a working topic, message, publisher, and subscriber. The examples provided on the ROS website were enough to get a simple application set up and to understand how to interact with the framework, as well as how to use rosserial to communicate with the simulator, Rviz.




The Hill

Coming from a software development background, I am not used to hardware constraints; if I need another variable, I just make one. With the Arduino, we only have 32 KB of flash memory (2 KB of which is reserved for the bootloader), 2 KB of SRAM, and 1 KB of EEPROM, so space is at a premium. After finishing the tutorials and making a working example, I decided it was time to use rosserial to publish the IMU data so Rviz could simulate it. I thought combining the IMU example with the rosserial example would produce a desirable result, but in the end it proved to be nothing but a headache. We will still use ROS, just not on the Arduino.




The Battle

As soon as I added a node handler, the 2 KB of SRAM dwindled, as seen in the picture above. Even with the most basic usage, I was running out of SRAM before I could do any calculations, AND the motors weren't even considered yet. I looked for other ways to solve this memory problem because it caused syncing problems during sketch uploads. The Arduino provides the capability to store common parameters in EEPROM and query them at run time, so I tried to use this space to hold the publisher and node handler since these objects are required globally; unfortunately, they are too large for this reserved space, so the search continued. By this time a few days had passed and I decided that the rosserial library is too large for the Arduino, so I needed to find a different way to send the IMU data to Rviz.


Success

Eventually, writing the data serially became my last option and, incidentally, it turned out to be the easiest solution. I read up on serial communication and a few other hardware topics that were unfamiliar to me, then came up with the simple solution seen below. Arduino exposes functionality that allows us to write data serially with the Serial.write() command, which supports strings, arrays, and bytes. The test data is written while, on a port connected to the Arduino, a Python client listens and decodes the data to be used by Rviz.


The Python script handles everything, including initializing the node, creating the publisher, connecting to the Arduino, reading the serial data, publishing the data, and printing the data to the logs. This script will be heavily used for simulating data in Rviz, which will be the subject of the next blog post. Aside from the script, I also created a launch package for the Python client, roscore, and Rviz so I no longer need a separate terminal open for each. Going forward, new executables will be added to this launch package to make deploying everything easier. Below is an example of the data being sent by the Arduino and received by the Python client on the VM.
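For anyone who wants a starting point, here is a hedged sketch of what such a client can look like using pyserial and rospy; the serial port, baud rate, and topic name are assumptions about my setup, not values pulled from the actual script.

    #!/usr/bin/env python
    import rospy
    import serial
    from std_msgs.msg import String

    def main():
        # These values are assumptions; match them to your own Arduino and launch files.
        port = serial.Serial("/dev/ttyUSB0", 57600, timeout=1)
        rospy.init_node("imu_serial_client")
        publisher = rospy.Publisher("imu_raw", String, queue_size=10)
        rate = rospy.Rate(50)  # publish at 50 Hz

        while not rospy.is_shutdown():
            line = port.readline().decode(errors="ignore").strip()
            if line:
                publisher.publish(line)     # republish the raw reading on the ROS topic
                rospy.loginfo(line)         # also print it to the logs
            rate.sleep()

    if __name__ == "__main__":
        main()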




Lessons Learned

Over the course of the week I spent many hours reading blogs, documentation, and other sources. After countless failed attempts, continuing can be frustrating and somewhat disheartening, but by keeping a level head and repeatedly attacking the problem from different angles, I was able to solve the issue AND learn several things in the process. Another important thing I learned is that IMUs are prone to drift, which requires a remedy. The next item to work on is a Kalman filter that will allow us to smooth the drift by using a feedback loop.

As always, feel free to leave a comment or question.

Sunday, August 14, 2016

Setting Up Our Development Environment!!!

Parts are arriving daily! Soon we can begin connecting the basic sensors and start getting actual readings. Our goal this week was to get our development environment set up so we can start reading sensor data on the Arduino Pro Mini.


(Sensors)

Environment Setup


Virtual Machine (VM)


We need a development environment that can be isolated from our workstation without being burdensome. A good tool that satisfies this requirement is a virtual machine: software capable of emulating hardware, allowing for sandboxed operating systems within your current workstation. There are several virtual machine vendors, but I normally stick with Oracle VM VirtualBox because I am already familiar with the software. Once it is installed, we can download an operating system image and use it to install a fresh copy on the VM so we can begin to set up our dev environment.

Operating System (Ubuntu)


In order to utilize the VM, we need an operating system to run. In this series I will use Ubuntu 16.04, as it is recommended for some of our third-party libraries. Install Ubuntu onto a VM instance and follow the on-screen instructions. After some time, the software will be installed and we can move on to the next step.

IDE (Arduino)


Our programs need to run on the Arduino Pro Mini and Raspberry Pi Zero. In order to write programs that CAN run on those electronics, we need to install the latest Arduino IDE. Take note: I ran into issues installing the latest version using "sudo apt-get install arduino", so I did some searching and found a workaround. Get the latest IDE installed and we can move on to installing some third-party libraries that will make development much easier.

Third Party Libraries (ROS)

ROS is a framework for creating robot software. We will use the concepts it provides to design our own framework for communication between all of the sensors. In order to utilize the ROS communication framework on our Arduino, we need to use the rosserial library. This exposes ROS functionality on the Arduino and makes development and integration of all the sensors painless. The first sensor we will be testing is the accelerometer/gyro, and luckily, someone has already written a lean library for reading the raw data.

Goals


Our goal for this week is to get a working visual simulation of the accelerometer/gyro in ROS, which means we need to complete the tutorials. We also need to research quaternion math and how it applies to orientation.


Extras!


I got so excited while setting up my environment that I made a little surprise for you! Here is a sneak peek at what all of the above will look like.


Tuesday, August 9, 2016

Road Map Going Forward

Now that we have simulated the basics of flight in MATLAB, it is time to think about translating this functionality into real life. With that in mind, there is an entirely new set of problems to deal with now that hardware is in the picture. Normally you would buy a prefabricated quad copter for upwards of $1k, or buy each piece separately, get a flight controller, and build the quad like a Lego set. What we will do instead is obtain raw hardware such as an Arduino, a Raspberry Pi, and other sensors, combine them into a working flight controller, and eventually build a fully functional quad copter. After the hardware is built, we will program autonomous behavior so multiple quad copters can function together as a group and complete designated goals.

With such an ambitious goal, it is imperative we have a road map to keep us focused and provide some structure as we move forward. This project will follow multiple phases that separate various concerns and provide milestones that will guide us to the end goal; a fully functional autonomous robot.

Phase 1: Acquire Hardware

Before we can even begin to program the PID controller or buy the motors, we need to know how much our quad copter will weigh, and in order to know what sensors to buy, we must decide on the quad's capabilities. The quad we are building in this blog will be a general-purpose machine with the ability to navigate indoors and outdoors. Through research, the items below were selected to complement each other in order to maximize power (MHz and Watts) and quality relative to cost ($$ and time).

Ultrasonic Range Finder
WiFi
Frame

Phase 2: Assemble and Test

After the hardware is purchased it is time to assemble it and ensure the readings are accurate. This process will be iterative and consist of trial and error. Arduino's IDE makes uploading programs and monitoring the output very easy. We will also begin to lay out the framework for how we want the sensors to communicate with each other, which will also dictate the physical layout of the electronics. The more sensors added to the quad, the more data there is to interpret. In order to manage the massive amount of data, we will create programs that parse it into meaningful information and run relevant tests.

Phase 3: Software

PID Controller
Flight Controller
Linear Movement
Non-Linear Movement
Quaternions (Stretch goal for the flight controller)

Once the quad copter is actually built, it is time to start programming the software used in the electronics. We will do this by using the knowledge we gained during the Coursera class on Aerial Robotics, extra research, and a little intuition. The first part is the PID controller; this will control the stability of the quad copter during flight as well as its orientation so the quad can move. The flight controller will send commands to the quad, which will be interpreted by the PID controller, causing the quad to move to desired locations. At the start, we assume all our movements are linear. When linear motion is completely implemented and tested, we will move to non-linear movement. If we want to test our skills, we can add quaternions to the flight controller as an extra feature.

Phase 4: Automate

After we build one quad copter, we need to build a few more and make them work together. We will implement swarm logic and goal-oriented behavior to control each individual quad. There are a few simple rules we can implement that, when paired with sensors and libraries, provide the quad copters with a framework for cooperating with their neighbors. We will use something similar to a Kalman filter to help each quad autonomously decide which sensors to pull data from. In order for a quad copter to move within boundaries and reach goals, it needs to be able to localize itself.

Thoughts

This is a rough sketch of what I would like to accomplish and is neither complete nor comprehensive. Throughout these blog posts this road map will be refined and details will emerge at the appropriate times. There are a few programs we will need to install in order to simulate and test our sensors, such as ROS, Rviz, and Ubuntu; we will go over these later.


Monday, August 8, 2016

3 Dimensional Movement

Now that we have a basic understanding of motion in 2 dimensions, we can begin to expand the range of motion and introduce another dimension of movement. If you recall from the previous post, the quad copter either moved along the z or y axis. While interesting, it does not model real world movement so we need a better approach. In this post, we will go through the necessary steps to add the x direction and enable full 3 dimensional movement.

(Helix Pattern)

Thrust

Traditionally, the quad copter's frame and propellers are rigid, which means the force generated by the propellers is always perpendicular to the quad's frame. Locally, we define this perpendicular direction to be the z axis.[1] Knowing this, the minimum force needed to either hover or rise can be calculated using the following equation:

Force = Mass * Acceleration

Our mass is the mass of the quad copter, and our acceleration is gravity plus any additional acceleration we want to add. Since gravity is always acting on our quad copter in the downward direction (-9.8 m/s^2), we need to counter it by providing at least that much acceleration upwards in thrust. If we wanted to rise in altitude, we would need to provide more than that. The equation to calculate the amount of acceleration needed can be found below.

Acceleration= Gravity + Desired Acceleration in z direction (z_command)

z_command = desired acceleration z
                        + kd_z * (desired velocity z - current velocity z)
                        + kp_z * (desired position z - current position z)

If we look at the previous post, this is the same equation except that instead of the y axis we are now using z, for the reasons explained above. The derivative (kd_z) and proportional (kp_z) terms help reduce the amount of error in the system by modifying the rate (derivative gain) and speed (proportional gain) at which the quad copter reacts to change.

In the end, our Force equation looks something like this:

Force = Mass * (Gravity + z_command)

The other dimensions, x and y, are calculated the same way but their values are not used in the Thrust calculations, rather they are used to calculate Orientation. Note that each calculation has its own derivative and proportional terms with respect to their axis.

x_command = desired acceleration x
                        + kd_x * (desired velocity x - current velocity x)
                        + kp_x * (desired position x - current position x)

y_command = desired acceleration y
                        + kd_y * (desired velocity y - current velocity y)
                        + kp_y * (desired position y - current position y)

Orientation

The second component of 3D movement is orientation. There are 3 rotations we can apply to the quad copter: pitch, rotating about the x axis; yaw, rotating about the z axis; and roll, rotating about the y axis. Our output for this term is going to be a 1x3 vector of desired rotations about x, y, and z: phi, theta, and psi respectively. The x and y rotations have their equations described below, but the z calculation is a simple difference used to determine the yaw.

Pitch
phi_command = (1 / Gravity)
                            * (x_command * sin(desired yaw rotation)
                            - y_command * cos(desired yaw rotation))

Roll
theta_command = (1 / Gravity)
                            * (x_command * cos(desired yaw rotation)
                            + y_command * sin(desired yaw rotation))

We use the desired yaw rotation (z angle) because both the x and y axes need to be rotated about the z axis. This allows for some measure of control when dealing with rapid movements. Now, the above equations only calculate the raw angles needed, so we will need to apply some error smoothing in order to reduce choppy movement.[2]

Orientation = [kp_phi * (phi_command - current phi) - kd_phi * current angular accel x,
               kp_theta * (theta_command - current theta) - kd_theta * current angular accel y,
               kp_psi * (desired yaw - current yaw) + kd_psi * (desired angular accel z - current angular accel z)]
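Putting the thrust and orientation pieces together, here is a hedged Python sketch of the position controller described above; the state dictionaries and gain values are placeholders I made up, not code from the Coursera assignment.

    import math

    GRAVITY = 9.81   # used here as a positive magnitude
    MASS = 0.18      # kg, a made-up quad mass

    # Placeholder PD gains; real values come from tuning.
    KP = {"x": 10.0, "y": 10.0, "z": 20.0}
    KD = {"x": 5.0, "y": 5.0, "z": 8.0}

    def command(axis, desired, current):
        """PD command for one axis: desired acceleration plus velocity and position corrections."""
        return (desired["acc"][axis]
                + KD[axis] * (desired["vel"][axis] - current["vel"][axis])
                + KP[axis] * (desired["pos"][axis] - current["pos"][axis]))

    def controller(desired, current):
        x_cmd = command("x", desired, current)
        y_cmd = command("y", desired, current)
        z_cmd = command("z", desired, current)

        # Thrust counters gravity plus the commanded vertical acceleration.
        thrust = MASS * (GRAVITY + z_cmd)

        # Desired roll/pitch angles, rotated through the desired yaw (psi).
        psi = desired["yaw"]
        phi_cmd = (x_cmd * math.sin(psi) - y_cmd * math.cos(psi)) / GRAVITY
        theta_cmd = (x_cmd * math.cos(psi) + y_cmd * math.sin(psi)) / GRAVITY

        return thrust, phi_cmd, theta_cmd

    # Example: hover at 1 m altitude, starting slightly below it.
    desired = {"acc": {"x": 0, "y": 0, "z": 0}, "vel": {"x": 0, "y": 0, "z": 0},
               "pos": {"x": 0, "y": 0, "z": 1.0}, "yaw": 0.0}
    current = {"vel": {"x": 0, "y": 0, "z": 0}, "pos": {"x": 0, "y": 0, "z": 0.9}}
    print(controller(desired, current))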

When everything is put together it will look something like this.


Conclusion

Adding a layer of trajectory planning on top of the PID controller gives the quad copter the ability to perform maneuvers as seen in the images below. This Coursera class taught me so much about how a quad rotor maneuvers and the calculations needed in order to program a flight controller. In the coming weeks, I will begin the journey and build my own quad copter from the ground up. This includes building the hardware as well as programming the flight controller and autonomous behavior.


(Straight Line)

(Waypoints)


More Information

1. I come from a background of graphics programming in OpenGL, and the coordinate system traditionally used in that context has the z axis projecting out of the screen, y pointing upwards, and x horizontal. In the Coursera class I am taking, the coordinate system is different, so it is worth noting the difference: in the context of quad copters, z points upwards, y projects out of the screen, and x is still horizontal.

2. Note that each angle has its own derivative (kd) and proportional (kp) term.

Saturday, July 30, 2016

2 Dimensional Movement

The coolest part of a quad copter is the movement. When we imagine a quad copter, we can assume the 4 rotors are always providing thrust while the machine is powered. The ability to hover in place and orient itself provides the basic functions of movement. In order to keep things simple, we will only move in 2 dimensions for now (y [forward] and z [up]). I used the following equation to implement the PD controller. (I did not implement the integral part, so it is not a traditional PID controller.) The math below might look intimidating, but once demystified it is rather simple.


Assumptions

Since we are only operating in 2 dimensions, we make some assumptions that allow for simple calculations of motion.
1. We assume the angles used are near equilibrium. (Hover state)
2. The quad copter is small. (Agile)

U1.

The value of u1 represents the amount of thrust needed to overcome the mass of the quad copter, determined from the amount of error between the current position and the desired position. This can be calculated with this formula: u1 = mass * (gravity + z_ddot + kv_z * (desired velocity z - current velocity z) + kp_z * (desired position z - current position z))


Variables

params.mass: The total mass of the quad copter.
params.gravity: -9.8 m/s^2.
des_state.acc(2): The second derivative of the desired z position, i.e. the desired z acceleration; also known as z_ddot.
(Remember acceleration is the rate of change of velocity [m/s] over time, hence m/s^2.)

The two variables below are used to "tune" the PD controller to account for error in the system along the z axis. (over/under compensation)
kv_z: The derivative gain dampens rapidly changing inputs. The higher the derivative gain, the more strongly the system reacts to error.
kp_z: The proportional gain determines the speed at which the system reacts to change. Inputs that are too large can cause the system to react too quickly and become unstable.

U2.

The value of u2 represents the moment needed to rotate the quad copter to the angle required to move in the correct direction, determined from the amount of error between the current orientation and the desired orientation. This can be calculated with this formula: u2 = Ixx * (phiC_ddot + kv_phi * (desired angular velocity phi - current angular velocity phi) + kp_phi * (phiC - current phi))

Variables

The two variables below are used to "tune" the PD controller to account for error in the system along the y axis. (over/under compensation)
kv_y: The derivative gain dampens rapidly changing inputs. The higher the derivative gain, the more strongly the system reacts to error.
kp_y: The proportional gain determines the speed at which the system reacts to change. Inputs that are too large can cause the system to react too quickly and become unstable.

The two variables below are used to "tune" the PD controller to account for error in the angle of orientation. (Remember our assumptions above: this angle will be close to 0.)
kv_phi: The derivative gain for the angle of orientation. The higher the value, the more strongly the system reacts to error.
kp_phi: The proportional gain for the angle of orientation. The higher the value, the more quickly the system responds to error.
phiC: The commanded angle, i.e. the amount of error between the current orientation and the desired orientation, accounting for gravity and acceleration. This can be calculated with the following formula: phiC = (-1/gravity) * (y_ddot + kv_y * (desired velocity y - current velocity y) + kp_y * (desired position y - current position y)). *Note: y_ddot is the 2nd derivative of y (the desired acceleration in the y direction).
phiC_ddot: Our assumption of small angles near equilibrium allows us to take this value to be 0.
params.Ixx: The moment of inertia of the frame about the x axis.

A Python sketch that puts u1 and u2 together follows below.
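Here is a hedged Python sketch of the 2D controller described above; the gain values and state layout are placeholders of my own, not the MATLAB assignment code.

    MASS = 0.18      # kg, made-up quad mass
    GRAVITY = 9.81   # used as a positive magnitude here
    IXX = 0.00025    # kg*m^2, made-up moment of inertia about x

    # Placeholder gains; in practice these come from tuning.
    KV_Z, KP_Z = 8.0, 20.0
    KV_Y, KP_Y = 5.0, 10.0
    KV_PHI, KP_PHI = 2.0, 150.0

    def pd_2d(des, cur):
        """Return (u1, u2): thrust and moment for the planar (y-z) controller."""
        # u1: thrust from the z-axis PD terms.
        u1 = MASS * (GRAVITY + des["z_ddot"]
                     + KV_Z * (des["z_dot"] - cur["z_dot"])
                     + KP_Z * (des["z"] - cur["z"]))

        # phiC: commanded roll angle from the y-axis PD terms.
        phi_c = (-1.0 / GRAVITY) * (des["y_ddot"]
                                    + KV_Y * (des["y_dot"] - cur["y_dot"])
                                    + KP_Y * (des["y"] - cur["y"]))

        # u2: moment from the angle PD terms
        # (phiC_ddot assumed 0, desired angular velocity taken as 0 near hover).
        u2 = IXX * (KV_PHI * (0.0 - cur["phi_dot"]) + KP_PHI * (phi_c - cur["phi"]))
        return u1, u2

    # Example: hover at z = 1 m while holding y = 0.
    desired = {"z": 1.0, "z_dot": 0.0, "z_ddot": 0.0, "y": 0.0, "y_dot": 0.0, "y_ddot": 0.0}
    current = {"z": 0.9, "z_dot": 0.0, "y": 0.0, "y_dot": 0.0, "phi": 0.0, "phi_dot": 0.0}
    print(pd_2d(desired, current))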

Tuning

Effects of increasing a parameter independently:

Parameter | Rise time    | Overshoot | Settling time | Steady-state error  | Stability
Kp        | Decrease     | Increase  | Small change  | Decrease            | Degrade
Ki        | Decrease     | Increase  | Increase      | Eliminate           | Degrade
Kd        | Minor change | Decrease  | Decrease      | No effect in theory | Improve if Kd is small

There are many different methods of tuning a PID controller, the most obvious being trial and error. Using the above table, you can manually tweak the values of Kp, Ki (if implemented), and Kd in order to adjust how the quad copter reacts to over/under shoots. In my Coursera assignments I tuned the controller manually through trial and error, but when I build a quad copter in the coming series, I will use a tuning algorithm.

Results

(Straight Line Trajectory)


(Sine Wave Trajectory)

Areas of Improvement

There are two improvements I would make to the controller.
1. Add the integral term to the controller. This will help with error from unknowns such as wind resistance or other unmodeled variables.

2. We are only operating in 2 dimensions, so this does not really model the real world. Adding the 3rd dimension will allow us to model real world movement and facilitate the actual building of a real quad copter.


Friday, July 29, 2016

My first post

I recently took a quad copter robotics class on Coursera and became fascinated with the whole process. Over the course, we learned how to program a PID controller and interact with a flight controller using MATLAB. Now that I know how to program a quad copter, I want to build one! This blog will document my journey of building and programming a real quad copter and, eventually, a swarm of quads that will work together to accomplish goals.




The goal of this blog is to write at least once a week about the various projects I work on during the week. I sometimes jump around between projects, so be sure to check out the other posts to keep up with my work! :)