Sunday, July 17, 2016

Sense And Avoid Selection

The ability of small unmanned aerial systems (sUAS) to sense and avoid has been a prevalent, and somewhat controversial, topic in unmanned aviation.  Products have come to market claiming to help meet this requirement; however, the Federal Aviation Administration (FAA) does not yet recognize these sensors as satisfying it.  This paper will look at ultrasonic sensors as well as recent advances in visual sensors.
Ultrasonic sensors emit high-frequency sound pulses and, by measuring how long it takes the echo to return, compute the range to the object that reflected the sound (Ultrasonic Distance Sensor, 2016).  As depicted in Figure 1, one part of the sensor transmits the sound wave while the other side, known as the receiver, "listens" (Ultrasonic Distance Sensor, 2016).  By approximating the speed of sound at 1,100 feet per second, this factor becomes a known quantity in the distance equation: distance (D) equals the time (t) it takes the sound to return, multiplied by the speed of sound (1,100 ft/s), then divided by 2, or D = (t x 1,100)/2 (Ultrasonic Distance Sensor, 2016). 

Figure 1. Ultrasonic sensor working example.  Courtesy of Cornell University Electrical and Computer Engineering.
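To make the arithmetic concrete, here is a minimal sketch of that calculation; the function name and the example echo time are illustrative, not taken from any particular sensor's documentation.

```python
def ultrasonic_range_ft(echo_time_s, speed_of_sound_fps=1100.0):
    """Estimate range from an ultrasonic echo: D = (t x 1,100) / 2.

    The pulse travels out and back, so the round-trip time is halved.
    """
    return (echo_time_s * speed_of_sound_fps) / 2.0

# Example: an echo received 0.01 s after the pulse was transmitted
# puts the reflecting object roughly 5.5 ft away.
print(ultrasonic_range_ft(0.01))  # 5.5
```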


Visual sensors, or what we commonly think of as cameras, can also be used to help a system "see" and avoid.  Recent software advances have allowed cameras to be used on commercial products; for example, Subaru's "EyeSight" uses stereoscopic cameras to sense range and is available on select Subaru models (SUBARU DEBUTS NEXT GENERATION EyeSight SYSTEM, 2014).  Additionally, companies like Chinese manufacturer DJI have marketed commercial off-the-shelf (COTS) solutions such as their latest Phantom 4 quadcopter.  The Phantom 4 uses stereoscopic cameras mounted on the front in order to sense and avoid objects ahead of it (Sense and Avoid, 2016). 
DJI has also introduced a product known as Guidance as part of its developer series.  This system incorporates both the ultrasonic and visual technologies discussed earlier (Guidance User Manual V1.6, 2015).  It includes five sets of ultrasonic and image sensors, all connected to a single Guidance Core, which in turn can be connected to any DJI control system, or to other systems via USB or UART (Guidance User Manual V1.6, 2015).  The system requires 11.1-25 volts for power and draws 12 watts with all five Guidance sensors attached (GUIDANCE SPECS, 2016).  The system weighs 282.4 grams including the Guidance Core, five sensors, and associated cables (GUIDANCE SPECS, 2016).  Since the system comes with five sensors, one could face forward, one aft, one to port, one to starboard, and one downward, giving five sides of protection.  Lastly, the sensors' maximum effective range is 20 meters, or just over 65 feet (GUIDANCE SPECS, 2016).  Depending on the processing power of the controller, if an avoidance decision could be made in less than one second, that range would allow, in the best case, a flight speed of roughly 44 miles per hour.
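That closing-speed estimate can be sketched out in a few lines; the one-second decision budget is this author's assumption rather than a DJI specification.

```python
METERS_PER_MILE = 1609.34

sensor_range_m = 20.0    # Guidance maximum effective range (GUIDANCE SPECS, 2016)
reaction_time_s = 1.0    # assumed time to sense, decide, and react

max_closing_speed_mps = sensor_range_m / reaction_time_s        # 20 m/s
max_closing_speed_mph = max_closing_speed_mps * 3600 / METERS_PER_MILE
print(round(max_closing_speed_mph, 1))  # ~44.7 mph, best case
```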
Currently these systems do not meet FAA requirements, but they are being used to provide obstacle avoidance against non-cooperative objects such as birds and debris, or in cases where the operator's line-of-sight picture is in question due to viewing angles.  Products such as DJI's Guidance combine multiple sensors to complete one task.  As time goes on, visual recognition software will continue to evolve and progress, and in this author's opinion it will be a combination of sensors such as these that ultimately satisfies the FAA's "see" and avoid requirement, or, in the case of unmanned systems, the "sense" and avoid requirement. 



References
Guidance User Manual V1.6. (2015 Oct). DJI. Retrieved from http://download.dji-innovations.com/downloads/dev/Guidance/en/Guidance_User_Manual_en_V1.6.pdf
GUIDANCE SPECS. (2016 Jul 12). DJI.com. Retrieved from http://www.dji.com/product/guidance/info#specs
Phantom 4 User Manual V1.2. (2016 Mar). DJI. Retrieved from https://dl.djicdn.com/downloads/phantom_4/en/Phantom_4_User_Manual_en_v1.2_160328.pdf
Sense and Avoid. (2016 Jun 29). Dji.com. Retrieved from https://www.dji.com/product/phantom-4
SUBARU DEBUTS NEXT GENERATION EyeSight SYSTEM. (2014 Jan 23). Subaru.com. Retrieved from http://media.subaru.com/newsrelease.do?id=562&mid=123&allImage=1&teaser=subaru-debuts-next-generation-eyesight-system

Ultrasonic Distance Sensor. (2016 Jul 11). Arduino-info.wikispaces.com. Retrieved from http://arduino-info.wikispaces.com/Ultrasonic+Distance+Sensor

Sunday, July 10, 2016

Control Station Analysis

“OpenROV is an open-source, low-cost underwater robot for exploration and education. It's also a passionate community of professional and amateur ocean explorers and technologists” (Welcome to OpenROV!, 2016).  The OpenROV is capable of descending to depths of 328 feet of seawater and has up to a two-hour battery life (Welcome to OpenROV!, 2016).  David Lang, the co-creator of OpenROV, wanted to make the system simple, low-cost, and accessible so that more people could purchase it and discover underwater exploration.
The OpenROV 2.8 weighs 2.6 kg; is 30 cm long, 20 cm wide, and 15 cm tall; and has a maximum speed of 2 knots (OpenROV 2.8 Mini Observation Class ROV, 2016).  Additionally, it has a 120-degree field of view (FOV) camera that transmits video back via a 100-meter tether (it can support up to a 300-meter tether) (OpenROV 2.8 Mini Observation Class ROV, 2016).  Onboard processing is handled by BeagleBone Black and Arduino Mega boards, and the system connects to a PC running OS X, Windows, or Linux through the Google Chrome browser, using the open-source OpenROV software installed on the vehicle itself (OpenROV 2.8 Mini Observation Class ROV, 2016). 
The OpenROV uses what is called a top-side adapter to connect the tether to the control computer; this adapter can also be connected to a wireless router to allow a wireless link to it (Jakobi, 2016).  Once connected to the top-side adapter, Google Chrome can be opened and pointed to the address 192.168.254.1:8080, which brings up the onboard OpenROV Control software (OpenROV, 2016).  The OpenROV software is stored on the BeagleBone Black and can be updated via SD card (OpenROV, 2016).  It provides a wealth of information and can be configured to use keyboard or gamepad inputs for command functions to the ROV (OpenROV, 2016). 
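As a convenience, that connection step can be scripted; the sketch below is not part of the OpenROV software, it simply checks whether the cockpit interface is listening at the documented address before opening it in a browser.

```python
import socket
import webbrowser

COCKPIT_HOST = "192.168.254.1"   # documented OpenROV cockpit address
COCKPIT_PORT = 8080

def cockpit_reachable(host=COCKPIT_HOST, port=COCKPIT_PORT, timeout=3.0):
    """Return True if something is listening on the cockpit address."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if cockpit_reachable():
    # The cockpit is served over plain HTTP on port 8080.
    webbrowser.open(f"http://{COCKPIT_HOST}:{COCKPIT_PORT}")
else:
    print("Cockpit not reachable; check the tether and top-side adapter.")
```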


Figure 1. OpenROV open source control software screenshot.  Red shows connectivity status, blue shows compass heading, orange shows latency, yellow shows current draw, and green shows battery voltage.  Courtesy of OpenROV.com.


Figure 2. OpenROV open source control software screenshot.  Red shows compass heading, orange shows motor thrust, yellow shows depth, and green shows roll and the artificial horizon.  Courtesy of OpenROV.com.
The use of visuals to communicate information to the surface controller is a very common approach.  However, given the amount of information these systems can send, it can become a visually intense control method.  Other means of communicating with the operator, such as aural warnings, are being used elsewhere, though OpenROV does not currently employ them.  Additionally, when aboard a surface vessel, an operator at a surface control station can fall victim to spatial disorientation (SD).
Spatial disorientation (SD) is defined as "a failure to sense correctly the attitude, motion, and/or position of the aircraft with respect to the surface of the earth" (Cooke, 2006).  Because the operator is not physically inside the unmanned vehicle, false perceptions can arise, and this is the primary cause of SD (Cooke, 2006).  SD taxonomy in unmanned aerial systems (UAS) can be divided into three groups: Visual Reference (VR), Operator Platform (OP), and Control Method (CM).  These groups can be further subdivided: VR into exocentric (EX), egocentric (EG), and External View (EV); OP into Mobile (M) and Stationary (S); and CM into Manual Control (MC), Supervisory Control (SC), and Fully Autonomous (FA) (Cooke, 2006).
Haptic feedback is a recent topic being explored to help combat reduced situational awareness (SA) and SD.  "Haptic feedback, often referred to as simply "haptics", is the use of the sense of touch in a user interface design to provide information to an end user" (What is "haptic feedback"?, 2016).  Haptic feedback can be as simple as a vibrating wristband or as complex as the proposed Teslasuit, which allows for full-body haptic feedback (Rigg, 2016).  The Army Aviation Association of America also experimented with a motion simulator that lets the operator feel as if they were in the cockpit of the aircraft (Bobryk, 2012).  Regardless of how simple or complex the haptic system is, it serves to give the operator more SA and eases processing, since so much information is already being handled visually and aurally.  A large drawback, however, is the increased amount of information that must be sent back to the surface control station.
The OpenROV software could integrate aural warnings tied to depth to increase operator SA.  For example, if a warning depth were configured, an aural tone would let the operator know that depth had been exceeded.  This could be especially important when approaching the maximum operating depth.  While not beyond the scope of OpenROV's open-source software, haptic feedback might be harder to integrate: the premise of the OpenROV is affordability, and adding haptic feedback devices would add to the overall cost.  Aural warnings, on the other hand, could be integrated more easily by taking advantage of the speakers already built into the control PC. 
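A depth alert of that kind could be very lightweight. The following is a minimal, hypothetical sketch that sounds the control PC's terminal bell when a configured depth is exceeded; it is not part of the OpenROV codebase, and the depth readings are simulated.

```python
import sys
import time

WARNING_DEPTH_M = 90.0   # configured alert depth (the hull is rated to roughly 100 m)

def check_depth(depth_m, limit_m=WARNING_DEPTH_M):
    """Ring the terminal bell and print a warning if the depth limit is exceeded."""
    if depth_m > limit_m:
        sys.stdout.write("\a")   # audible bell on most terminals
        sys.stdout.flush()
        print(f"WARNING: depth {depth_m:.1f} m exceeds limit {limit_m:.1f} m")

# Simulated telemetry standing in for live depth readings from the ROV.
for depth in (75.0, 88.2, 91.5, 94.0):
    check_depth(depth)
    time.sleep(0.5)
```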




References
Bobryk, B. (2012 Jun 13). UAV Motion Ground Station. Retrieved from https://www.youtube.com/watch?v=z7dBJsLlq8E
Cooke, N. J. (2006). Human factors of remotely operated vehicles (1st ed.).Boston, Mass: JAI. Retrieved from http://site.ebrary.com.ezproxy.libproxy.db.erau.edu/lib/erau/detail.action?docID=10139446
Jakobi, N. (2016 Jul 5). How to build a WiFi enabled Tether Management System. Openrov.dozuki.com. Retrieved from http://openrov.dozuki.com/Guide/How+to+build+a+WiFi+enabled+Tether+Management+System/59
OpenROV. (2016 Jul 5). OpenROV Operators Manual. Openrov.dozuki.com. Retrieved from http://openrov.dozuki.com/Guide/OpenROV+Operators+Manual/80
OpenROV 2.8 Mini Observation Class ROV. (2016 Jul 5). Openrov.com. Retrieved from http://www.openrov.com/products/2-8.html
Rigg, J. (2016 Jan 06). Teslasuit does full-body haptic feedback for VR. Engadget.com. Retrieved from https://www.engadget.com/2016/01/06/teslasuit-haptic-vr/
What is "haptic feedback"?. (2016 Jul 4). Mobileburn.com. Retrieved from http://www.mobileburn.com/definition.jsp?term=haptic+feedback
Welcome to OpenROV!. (2016 Jul 5). Openrov.com. Retrieved from http://www.openrov.com/index.html


Saturday, June 25, 2016

Unmanned System Data Protocol and Format


In September 2007 the United States Air Force (USAF) transferred two pre-production Global Hawk aircraft to the National Aeronautics and Space Administration (NASA) (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2009b).  This came about after earlier plans for NASA to use USAF aircraft fell through because Department of Defense (DoD) priorities dictated otherwise (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2008b).  This paper will examine the onboard communications payloads, data formats, protocols, and storage methods used to make this platform effective and functional.  Additionally, onboard sensors will be examined, along with the overall data strategy and possible improvements.
The NASA Global Hawk communications payload consists of two UHF LOS links, two Iridium links, and one Inmarsat link for command and control (C2) communications; two Iridium links for communications with Air Traffic Control (ATC); and six Iridium links for payload C2 and health status (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2009b).  These communications payloads are shown in Figure 1.
Figure 1.  Concept of operations for NASA’s Global Hawk communications payloads. Courtesy of NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2009b.
The NASA Global Hawk can fly missions of up to 30 hours; for this reason, status packets can be monitored by the Mission Scientist and Payload Operator to maintain situational awareness (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2008a).  These packets are in a comma-separated values American Standard Code for Information Interchange (CSV ASCII) format similar to Interagency Working Group standard format number 1 (IWG1) (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2008a).  The CSV ASCII format consists of "a leading identifier, with comma separated values, and with the first value being a timestamp" (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2008a).  The instrument status code is the second parameter, and there shall be no more than 16 total parameters (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2008a). 
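To illustrate what such a record might look like, the sketch below builds and parses a short CSV ASCII packet with a leading identifier, a timestamp as the first value, and the status code as the second; the identifier and data fields are hypothetical, not actual instrument definitions.

```python
from datetime import datetime, timezone

MAX_PARAMETERS = 16  # the format allows no more than 16 total parameters

def build_status_packet(identifier, status_code, values):
    """Build a CSV ASCII record: identifier, timestamp, status code, data values."""
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    fields = [identifier, timestamp, str(status_code)] + [str(v) for v in values]
    assert len(fields) - 1 <= MAX_PARAMETERS, "too many parameters"
    return ",".join(fields)

def parse_status_packet(packet):
    """Split a record back into identifier, timestamp, status code, and data."""
    identifier, timestamp, status, *data = packet.split(",")
    return identifier, timestamp, int(status), data

# Hypothetical instrument reporting two measurements.
pkt = build_status_packet("XYZ01", 0, [215.3, 41.7])
print(pkt)
print(parse_status_packet(pkt))
```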
The Link Module system is the onboard file server and database that instruments can use for backup storage of data and caching of flight data requests (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2008a).  Additionally, wideband satellite communications (SATCOM) are available when within the geographical footprint and can provide from 56 Kbps up to 50 Mbps of service (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2008a). 
The NASA Global Hawk sensor suite can be reconfigured to accommodate different sensors for different missions (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2014).  For example, in 2011 a National Oceanic and Atmospheric Administration (NOAA) sponsored flight called for the deployment of dropsondes (NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration, 2014).  A dropsonde is a tube about the size of a paper towel roll that transmits temperature, humidity, and its GPS location as it drifts down (Newman, 2015).  In 2014 a lidar instrument was fitted to the NASA Global Hawk, with both a real-time and a stored product available to ground users (McGill et al., 2014).  As a default practice, images were transferred and saved every five minutes during flight (McGill et al., 2014).
NASA’s Global Hawk sensor payload can vary depending on the mission and the customer’s requirements.  Onboard storage will usually give better fidelity in cases where large amounts of data cannot be transmitted or would have to be reduced to lower fidelity before transmission.  Sensors are continually evolving; for example, the ARGUS sensor can collect one million terabytes of high-definition video a day, which equates to 5,000 hours of video (Rise of the Drones, 2013).  This creates a need for more onboard storage or more data links to stream information.  Data links tend to come at a premium because only so much of the frequency spectrum can be used, and many everyday devices, such as cell phones and Wi-Fi, use those frequencies as well.  So it would seem most practical to keep data on board and only transmit when requested by the ground station.
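A rough calculation makes the store-versus-transmit tradeoff clear; this sketch assumes the 50 Mbps upper end of the wideband SATCOM service mentioned above and a hypothetical one-terabyte collection.

```python
collection_tb = 1.0        # hypothetical amount of collected sensor data
link_rate_mbps = 50.0      # upper end of the wideband SATCOM service

bits_to_send = collection_tb * 8e12                 # 1 TB = 8e12 bits (decimal)
downlink_seconds = bits_to_send / (link_rate_mbps * 1e6)
print(round(downlink_seconds / 3600, 1), "hours")   # ~44.4 hours for one terabyte
```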
NASA’s Global Hawk is a highly capable asset that can be reconfigured to meet the mission requirements set forth by the customer.  However, the Global Hawk faces an issue many unmanned platforms will begin to see: sensor technologies are advancing faster than the ability to process, store, and transmit the data they collect.

References
Graves, B. (2015). Special gear for global hawk? San Diego Business Journal, 36(28), 12. Retrieved from http://bi.galegroup.com.ezproxy.libproxy.db.erau.edu/essentials/article/GALE%7CA423235424?u=embry&sid=summon&userGroup=embry
McGill, M., Hlavka, D., Kupchock, A., Palm, S., Selmer, P., Hart, B. (2014, Apr 29). Cloud Physics Lidar on the Global Hawk. Greenbelt, MD: NASA Goddard Space Flight Center. Retrieved from http://ntrs.nasa.gov/search.jsp?R=20140017377&hterms=GLOBAL+HAWK&qs=N%3D0%26Ntk%3DAll%26Ntt%3DGLOBAL%2520HAWK%26Ntx%3Dmode%2520matchallpartial%26Nm%3D123%7CCollection%7CNASA%2520STI%7C%7C17%7CCollection%7CNACA
NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration. (2008 Nov). Global Hawk: Payload Network Communications Guide. Edwards, CA: NASA Center for AeroSpace Information. Retrieved from https://www.eol.ucar.edu/raf/Software/iwgadts/DFRC-GH-0029-Baseline.pdf
NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration. (2008). NASA global hawk project overview. Hanover, MD: NASA Center for AeroSpace Information. Retrieved from http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20080017500.pdf
NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration. (2009). NASA global hawk: A new tool for earth science research. Hanover, MD: NASA Center for AeroSpace Information. Retrieved from http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20090019745.pdf
NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration. (2009). NASA global hawk: Project update and future missions. Hanover, MD: NASA Center for AeroSpace Information. Retrieved from http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20090001264.pdf
NASA Center for AeroSpace Information, & United States. National Aeronautics and Space Administration. (2014 Nov). NASA Global Hawk Overview November 2014. Edwards, CA: NASA Center for AeroSpace Information. Retrieved from http://ntrs.nasa.gov/search.jsp?R=20140017744&hterms=GLOBAL+HAWK&qs=N%3D0%26Ntk%3DAll%26Ntt%3DGLOBAL%2520HAWK%26Ntx%3Dmode%2520matchallpartial%26Nm%3D123%7CCollection%7CNASA%2520STI%7C%7C17%7CCollection%7CNACA
Newman, P. (2015 Jul 31). What the Heck is a Dropsonde? Nasa.gov. Retrieved from http://www.nasa.gov/content/goddard/what-the-heck-is-a-dropsonde
Optical Bar Camera. (2016). UTC Aerospace Systems. Retrieved from http://utcaerospacesystems.com/cap/products/Pages/optical-bar-camera.aspx

Rise of the Drones. (2013 Feb 12). Public Broadcasting Station. Retrieved from https://www.youtube.com/watch?v=HopKAYthJV4

Saturday, June 18, 2016

UAS Sensor Placement

In the world of small unmanned aerial systems (sUAS), sensor placement is a critical design decision driven by the desired mission or use of the platform.  In this paper the author, an avid quadcopter hobbyist, will look at two different designs of sUAS platforms that carry a similar set of sensors but use them in different ways, resulting in different sensor placement.  This paper will examine the DJI Phantom 2 Vision Plus and the Walkera Runner 250 Advance.  Both systems are GPS-enabled quadcopters, but the Phantom is designed for aerial photography while the Runner is designed for what is known as first person view (FPV) racing.
The Phantom 2 Vision Plus is a 350 mm quadcopter, meaning it measures 350 mm between its two farthest propellers (i.e., port forward to starboard rear).  It is equipped with a camera on a gimbal that can tilt and roll; this not only points the camera in a direction other than where the craft is facing but also stabilizes the image independent of the aircraft's attitude.  The image is sent back to the user over 2.4 GHz so that the operator at the ground control station (GCS) can see in real time what the aircraft sees (Phantom2 Vision+ User Manual V1.8, 2015).  The camera is typically mounted underneath the aircraft for two reasons: the size of the gimbal and its arms, and keeping the aircraft itself out of the image (Phantom2 Vision+ User Manual V1.8, 2015).  Additionally, the Phantom is equipped with GPS for precise positioning while flying and for knowing where it launched from; this antenna is built into the housing of the quadcopter (Phantom2 Vision+ User Manual V1.8, 2015).  Lastly, newer versions such as the Phantom 4 are equipped with ultrasonic and vision sensors that allow them to hold a much more precise position and avoid obstacles when the camera is not facing forward (Phantom 4 User Manual V1.2, 2016).
The Walkera Runner 250 Advance is a 250 mm quadcopter.  It is one of the few FPV racing quadcopters equipped with GPS from the factory (Walkera Runner 250(R) Quick Start Guide, 2015).  This is a fairly unusual option, since weight is the enemy in almost any form of racing; in a class where most racers are under 500 grams, the extra grams of a GPS antenna can cost a race.  This platform was chosen, however, to make a closer comparison between the two types of platforms.  The camera on the Runner is not attached to a gimbal; it is mounted directly to the forward point on the body, giving the controller the feel of being inside the cockpit of an aircraft (Walkera Runner 250(R) Quick Start Guide, 2015).  The image is stabilized with two rubber bulbs that act as shock mounts so that vibration from the propellers does not create what is known as the "Jello" effect, where the image wobbles as if the user were looking through moving gelatin or water.  Just like the Phantom, this image is sent back to the operator at the GCS and displayed on a screen or on what are known as FPV goggles.  The image is not stabilized independent of the aircraft, because this lets the user infer the attitude of the aircraft as if they were actually in it.  Lastly, the GPS is not integrated into a body shell, since the Runner does not have one in the same way the Phantom does.  The Runner is a racer, so again, weight is the enemy; the carbon fiber frame is the main support, and the GPS antenna is mounted on top of the carbon fiber on a plastic post that raises it slightly for a less obstructed view of the sky.  While the GPS gathers the same information, it is mainly used to show the user their position relative to the aircraft and to enable a return-home function, both features similar to the Phantom's.  Unlike the Phantom, though, the positioning is not meant to be as precise: it is susceptible to drift from the wind, and altitude is only accurate to about +/- 3 meters (Walkera Runner 250(R) Quick Start Guide, 2015).
While these platforms carry similar sensors, the sensors are arranged and operated differently due to the intended use or mission of each system.


References
Phantom2 Vision+ User Manual V1.8. (2015 Jan) DJI. Retrieved from http://dl.djicdn.com/downloads/phantom_2_vision_plus/en/Phantom_2_Vision_Plus_User_Manual_v1.8_en.pdf
Phantom 4 User Manual V1.2. (2016 Mar). DJI.com. Retrieved from https://dl.djicdn.com/downloads/phantom_4/en/Phantom_4_User_Manual_en_v1.2_160328.pdf
Walkera Runner 250(R) Quick Start Guide. (2015 Oct 20). Walkera.

Wednesday, June 8, 2016

Unmanned Systems Maritime Search and Rescue

On October 1, 2015, the U.S.-flagged El Faro went missing as it traveled from Jacksonville, Florida, to Puerto Rico.  Onboard was its crew of 33, all of whom were presumed dead.  At the time of the trip, Hurricane Joaquin, a category 4 storm, threatened its route.  During the voyage the ship's main propulsion failed, stranding the crew in the path of the storm, and the vessel was lost.
The U.S. Navy used an unmanned underwater vehicle (UUV) called CURV 21, which was able to locate and identify the sunken El Faro (Almasy, 2015).  The CURV 21 is a 6,400-pound UUV capable of reaching depths of 20,000 feet.  It uses a ".680 fiber-optic umbilical cable and a shared handling system that can switch at sea between side-scan sonar and ROV operations" (CURV 21 - REMOTELY OPERATED VEHICLE, 2015).  Among its exteroceptive sensors are a side-scan sonar, a CTFM sonar, a high-resolution still camera, and black-and-white and color video cameras (CURV 21 - REMOTELY OPERATED VEHICLE, 2015).  Its proprioceptive sensors include an electronic gyrocompass, an attitude and heading reference unit, a 1200 kHz Doppler velocity log, and a 200 kHz altimeter.
The CURV 21 could benefit from eliminating the umbilical; however, the umbilical is required because of the amount of data that must pass to and from the remotely located operators.  Current underwater wireless systems would not allow the CURV 21 to operate at depths of 20,000 feet.
With the declining prices of small unmanned aerial systems (sUAS), these systems are finding their way into search and rescue as well.  "In the vast wilderness of the Everglades, the SAR operations are often conducted in remote areas accessible by boat or aircraft" (Safety, 2016).  In a search and rescue environment, time is of the essence, and sUAS give support teams the ability to quickly launch air assets to begin the SAR process.  The relatively flat terrain of the Everglades would help keep operators within visual line of sight (VLOS); remaining within VLOS is a current restriction imposed by the Federal Aviation Administration (FAA) on sUAS usage.  Unmanned surface vehicles (USVs) could also be used to access areas covered by trees or inaccessible by boat.  Additionally, the use of USVs can keep searchers in one central area without the possibility of losing a rescue member or exposing rescuers to dangerous wildlife such as snakes, alligators, and mosquitoes.  While this is not the same environment where the El Faro sank, the benefits of a multi-sensor search and of keeping all of the searchers in one area could still apply.
The sensor suites of sUAS and UUVs can be similar, but they are often used differently.  For example, a camera on an sUAS or on a larger UAS often serves for both long-range and short-range visual cues.  On a UUV, however, light does not penetrate the water as well and built-in lights have only a limited range, so cameras are used for close-up viewing.  RADAR and SONAR work in similar ways as exteroceptive sensors, but sonar is the primary way a UUV sees underwater, whereas it would most likely be a tertiary means for a UAS.
Platforms in the air, on the surface (ground and water), and underwater can work together so that search and rescue efforts are executed in a timely fashion with all information centralized.  While collaboration will be key in unmanned operations, this also holds true among people and sUAS products.  For example, an all-volunteer group called Search With Aerial RC Multirotors (SWARM) has "over 1,100 SAR Drone Pilots dedicated to searching for missing persons. Our primary mission is to offer and provide multi-rotor (drone) and fixed wing aerial search platforms for ongoing Search and Rescue operations at no cost to the SAR organization or to the family" (Search With Aerial RC Multirotors (SWARM), 2016).  Through continued advances in sensor technology, all missions of unmanned systems will continue to benefit.


References
CURV 21. (2016 Jun 7). Office of the Director of Ocean Engineering Supervisor of Salvage and Diving. Retrieved from http://www.supsalv.org/00c2_curv21Rov.asp?destPage=00c2
CURV 21 - REMOTELY OPERATED VEHICLE. (2015 Nov 13). US Navy. Retrieved from http://www.navy.mil/navydata/fact_display.asp?cid=4300&tid=50&ct=4
Safety. (2016 Feb 16). Everglades National Park, Florida. National Park Service. Retrieved from http://www.nps.gov/ever/getinvolved/supportyourpark/safety.htm
Search With Aerial RC Multirotors (SWARM). (2016 Feb 16). SAR Drones. Retrieved from http://sardrones.org/

Almasy, S. (2015 Nov 2). Sub with camera to dive on sea wreck believed to be missing ship El Faro. CNN. Retrieved from http://www.cnn.com/2015/11/01/us/el-faro-search/

Thursday, June 2, 2016

Are more sensors on sUAS better?

On May 3, 2016, a man from Ohio flying his small unmanned aerial system (sUAS) near Cape Marco, FL, crashed it into a condominium (Video: After drone crash, Marco council nixes ordinance, 2016).  "The owners of the condo where the drone landed, fearing they were being spied upon, were very upset by the incident, according to a police report obtained by WBBH. Officials, however, did not find any evidence to support that fear" (Man will not face charges after drone crashes into Fla. high-rise condo, 2016).  Furthermore, the pilot was registered with the FAA, according to the article.  Additionally, based on my research he was not within 5 NM of an airport, and based on the video the conditions were Visual Meteorological Conditions (VMC).

The crash happened after the control signal was lost and the fail-safe was triggered for the sUAS to return home (Video: After drone crash, Marco council nixes ordinance, 2016).  Victor Rios, a council member at the Belize, the condominium the sUAS crashed into, wrote of his concerns: "Based upon my experience and due to the position of the drone on the master bedroom lanai, I believe that it was hovering just above the railing and maneuvering for better position in an attempt to get closer - and this is when the drone hit the edge of the railing damaging the propellers" (Video: After drone crash, Marco council nixes ordinance, 2016).  The Ohio man was cooperative and allowed the police chief to play the video for the council, clearly showing that there was no intention of spying.

The DJI Phantom 4, a newer model than the one in the Cape Marco incident, was released in early 2016.  "The Phantom 4 is equipped with an Obstacle Sensing System that constantly scans for obstacles in front of it, allowing it to avoid collisions by going around, over or hovering. The DJI Vision Positioning System uses ultrasound and image data to help the aircraft maintain its current position" (Phantom 4 User Manual V1.2, 2016).  With this new feature, I can assume that the sUAS would not have flown straight into the condo, but it makes me wonder what might have happened instead.  Could it have stopped and hovered, then crashed after losing battery life?  Or tried to go around and still crashed due to the inability to sense obstacles on either side of it?  And how might the issue of spying have changed if it had stopped and hovered because of this new sensor?


References

Man will not face charges after drone crashes into Fla. high-rise condo. (2016 May 6). News Channel 8 (WFLA). Retrieved from http://wfla.com/2016/05/06/man-will-not-face-charges-after-drone-crashes-into-fla-high-rise-condo/
Phantom 4 User Manual V1.2. (2016 Mar). DJI.com. Retrieved from https://dl.djicdn.com/downloads/phantom_4/en/Phantom_4_User_Manual_en_v1.2_160328.pdf
Video: After drone crash, Marco council nixes ordinance. (2016 May 6). Sun Times. Retrieved from http://www.marcoislandflorida.com/story/news/2016/05/04/drone-crash-marco-council-nixes-ordinance/83921064/

Thursday, May 19, 2016

Finishing ASCI638

As I round out the final week of my third nine-week term at Embry-Riddle Aeronautical University, I will be completing two courses.  One of them is Human Factors in Unmanned Systems, ASCI 638, taken toward my concentration in Human Factors.  As an active duty member of the armed forces, I have found this course challenging but very rewarding.  In this course we completed nine weeks' worth of discussions and research assignments, which were insightful and eye opening.  Additionally, we conducted a term-long Case Analysis.  The Case Analysis, too, has served as an excellent tool, not only in preparing for the capstone but in preparing for work outside of academia. 

The Case Analysis was a term-long research project, for which I chose to examine the standardization of a single measurement unit in unmanned aviation.  Without going into all of the details of the project, this is a holdover from manned aviation, and I believe it will become a growing concern as sUAS become more and more popular, especially with integration into the NAS and parts continuing to come from overseas.  The project also required us to interact with our peers via peer reviews.  I like the idea of peer reviews in this format, as it helps prepare us for a career where we might be a project manager overseeing a project of a similar scope.


In this course we also started a blog to share our research and thoughts.  I plan to maintain this blog and continue to share my ideas, thoughts, and research as I work toward my Master's Degree in Unmanned Systems with a concentration in Human Factors.  I believe this degree will not only broaden my horizons but also allow me to further contribute ideas and research to my employer.  I am looking forward to continuing this process of learning and completing the remaining classes in my degree.

UAS Crew Member Selection

In this paper we will examine a scenario in which we work at a company that has just purchased the Insitu ScanEagle and a variant of the General Atomics Ikhana to conduct oceanic environmental studies.  We are responsible for identifying the crew positions to be filled and determining the qualifications, certifications, and training requirements.  Additionally, a minimum and an ideal set of criteria for potential operators will be established.
To identify the crew positions required, we will look at each platform's published figures, starting with the Insitu ScanEagle.  The ScanEagle uses a pneumatically actuated launcher and a recovery system called the SkyHook, so it does not require a runway like a conventional aircraft.  Additionally, its wingspan is only 10.2 ft and its maximum takeoff weight is 48.5 lbs, making it fairly light and relatively portable.  It has a maximum speed of 80 knots and an endurance of more than 24 hours.  Its ground control station (GCS) is portable and allows for point-and-click control (ScanEagle, 2015). 
With the above information, we can use the ScanEagle in somewhat mobile operations, such as launching from a ship or from the ground.  Two positions are required to launch, fly, and recover a ScanEagle: a launch crewman and an operator.  Depending on the length of operations, more than one operator may be needed; additionally, the launch crewman can be trained in maintenance so that a traveling crew can be kept to a minimum of two when operations allow.
Next, we will look at the variant of the General Atomics Ikhana.  The Ikhana is an MQ-9 Predator B that has been adapted and instrumented for use by NASA (NASA Armstrong Fact Sheet: Ikhana Predator B Unmanned Science and Research Aircraft System, 2015).  The Predator B has a wingspan of 66 ft and a maximum gross takeoff weight of 10,500 lbs, so the system is not as portable as the ScanEagle (Guardian Multi-Mission Maritime Patrol, 2015).  The Ikhana GCS is portable in that it fits inside a large trailer, which houses "the pilot control station, engineering monitoring workstations, science monitoring stations, and range safety oversight" (NASA - Large UAS Aircraft, 2008).  "Predators and Reapers currently require two crew to function: a pilot and a sensor operator" (Whittle, 2015).  In this scenario, one pilot and a ground handling crew will be required for Ikhana operations; however, depending on the length of a particular mission, more than one pilot may be required.
Based on a review of current Certificates of Waiver or Authorization (COA) issued by the Federal Aviation Administration, we will require pilots of either platform to hold at least a private pilot certificate and to pass a third-class medical examination (Freedom of Information Act Responses, 2016).  While it is assumed that most flights will be conducted under due regard, where the pilot is responsible for safe separation, it can also be inferred that takeoffs may occur in the National Airspace System (NAS) before operations switch over to due regard.  Additionally, we will not let a pilot act as Pilot in Command (PIC) unless three proficiency flights have been conducted in the past 90 days.  We will work with General Atomics and Insitu to establish the required proficiency before we take delivery of the vehicles, and we will maintain that proficiency thereafter.
Finally, minimum and ideal sets of criteria for potential operators will be established.  The minimum criteria for pilots on either platform without UAS experience will be a private pilot certificate and a third-class medical examination with at least 18 months remaining.  Ideally, candidates would be military operators with experience in these platforms, but such operators may not hold a private pilot certificate or a third-class medical.  In that case, as long as the potential operator was medically cleared for UAS flights upon military discharge, we can sponsor them in obtaining a private pilot's certificate.  In either case, training will have to be completed before the operator is able to act as PIC on either platform.
While both platforms will presumably operate in the maritime environment under due regard, it is conceivable that they could also operate in the NAS.  To ensure that our company operates safely in the NAS while protecting our investment in these systems, we have set forth the above minimums for operator selection.

References
Freedom of Information Act Responses. (2016 May 9). Federal Aviation Administration. Retrieved from https://www.faa.gov/uas/public_operations/foia_responses/
Guardian Multi-Mission Maritime Patrol. (2015). General Atomics Aeronautical Systems. Retrieved from http://www.ga-asi.com/Websites/gaasi/images/products/aircraft_systems/pdf/Guardian_032515.pdf
NASA - Large UAS Aircraft. (2008 Oct 3). NASA.gov. Retrieved from https://www.nasa.gov/centers/dryden/research/ESCD/ikhana.html#.VzEAO_krK00


Friday, May 13, 2016

sUAS Operational Risk Management

Enclosed is an Operational Risk Management (ORM) worksheet created for the DJI Phantom 2 Vision.  I chose this platform because it is a small unmanned aerial system (sUAS) that I fly.  Additionally, I am awaiting a Section 333 exemption, which will allow me to charge for services I can provide with the DJI Phantom 2 Vision.
I started this ORM assessment tool by making a Preliminary Hazard List (PHL).  The PHL consists of a likelihood category and a severity category.  Each category has four possible values: not likely, possible, probable, and certain for likelihood; and negligible, moderate, high, and catastrophic for severity.  All range in value from 1 to 4, where 1 is low and 4 is high.  A matrix could then be built, giving a risk assessment value by multiplying the severity and likelihood values; see Figure 1.
Figure 1. PHL for the DJI Phantom 2 Vision.
Once the PHL was completed, I moved on to the Preliminary Hazard Assessment (PHA).  A list of hazards was compiled, and the likelihood and severity of each were assessed, giving a risk level (RL) value by multiplying the two.  Mitigating actions are also listed for each hazard to decrease the risk level, resulting in the residual risk level (RRL).  An attempt was made to mitigate every RL to some degree, but not all resulted in a lower RRL.
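As a sketch of how the RL and RRL values are derived, the snippet below multiplies likelihood by severity for a couple of example hazards; the hazards and scores shown are placeholders rather than the actual worksheet entries.

```python
def risk_level(likelihood, severity):
    """Risk level is likelihood (1-4) multiplied by severity (1-4)."""
    return likelihood * severity

# (hazard, likelihood, severity, mitigated likelihood, mitigated severity)
hazards = [
    ("Loss of control link", 2, 3, 1, 3),
    ("Battery depletion in flight", 2, 4, 1, 4),
]

for name, like, sev, m_like, m_sev in hazards:
    print(f"{name}: RL={risk_level(like, sev)}, RRL={risk_level(m_like, m_sev)}")
```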
An Operational Hazard Review and Analysis (OHR&A) was created but has not been filled out because no flights have taken place.

Figure 2. ORM worksheet for the DJI Phantom 2 Vision.

Lastly, an ORM worksheet was created.  The worksheet can be completed digitally or manually; the digital version automatically sums the entered line values.  The total value then gives the operator an ORM level.  One caveat: the line listed as "AIRFIELD TOWER NOTIFIED" was created with the assumption that operations will take place within 5 nm of an airfield, where the airfield would need to be notified.  If the flight is outside of this radius, the line can be skipped.
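The digital worksheet's summing behavior could be mimicked in a few lines; in this sketch the line values and the breakpoints between ORM levels are arbitrary placeholders, not the values used on the actual worksheet.

```python
def orm_level(line_values, breakpoints=((10, "LOW"), (20, "MODERATE"))):
    """Sum the entered line values and bin the total into an ORM level."""
    total = sum(line_values)
    for limit, label in breakpoints:
        if total <= limit:
            return total, label
    return total, "HIGH"

# Skipped lines (e.g., "AIRFIELD TOWER NOTIFIED" when outside 5 nm)
# simply contribute nothing to the total.
print(orm_level([2, 3, 1, 4, 2]))  # (12, 'MODERATE')
```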

I will incorporate this ORM tool into flights of my DJI Phantom 2 Vision, in addition to the checklist I already use.  This tool will also let me assess the operational risk of flying when operating under my Section 333 exemption.

Wednesday, May 11, 2016

Automatic Takeoff and Landing

There’s a saying in aviation: "Flying is the second greatest thrill known to man. Landing is the first."  This is because landing, along with takeoff, is a very dynamic part of a flight, and often the most dynamic part.  Pilots are talking to approach or tower controllers, running checklists, scanning gauges, and physically controlling the aircraft during this phase.  Even in the early years of aviation, inventors and engineers sought to ease the demands on the aviator; this was seen as early as about a decade after the Wright Brothers' first flight, with Elmer Sperry's gyrostabilizer (Elmer Sperry, Sr., 2016).  The world of aviation has seen many changes since the early days of flight, but engineers still face many challenges, including automation.
For pilots, talk of automation in takeoff and landing can be a touchy subject, especially for those who pride themselves on being able to land an airplane with hand-eye skill.  The Airbus A320, for example, is capable of fully automatic landings.  Once the pilot has aligned the aircraft for the approach, speed is adjusted through an auto-throttle system, whereby the pilot turns a dial to select an indicated speed and the aircraft adjusts the thrust levers accordingly (Airbus A320: Auto Landing Tutorial, 2012).  Once the appropriate speed is set and the navigation aid (NAVAID) is selected, the approach button can be pressed (Airbus A320: Auto Landing Tutorial, 2012).  Throughout this process the pilot is still receiving information from the aircraft and the NAVAID and should be backing up the automation by checking the speed and ensuring NAVAID guidance is being received.  Once the glideslope is intercepted, the pilot will input the final approach speed, lower the landing gear, set the flaps, and arm the spoilers and autobrakes (Airbus A320: Auto Landing Tutorial, 2012).  The pilot will keep a hand on the thrust levers in the event of a go-around (Airbus A320: Auto Landing Tutorial, 2012).
This system helps alleviate the additional demands placed on a pilot in the terminal environment by automating parts of the evolution, allowing the pilots to focus on tasks such as navigation (visual) and communication.  The system in the A320 is capable of landing in zero vertical visibility and 50 meters of horizontal visibility (Airbus A320: Auto Landing Tutorial, 2012).  With the automated system taking care of the aviating portion of the evolution, the pilot must never forget the basics of "aviate, navigate, communicate," the priority of skills taught to aviators early in flight training.  That said, the pilot is always there in the cockpit, ready to take control if required or to make the judgement call for a go-around.
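To give a sense of the kind of control loop this automation relies on, here is a deliberately simplified proportional sketch that turns glideslope and speed deviations into pitch and thrust commands; it is an illustration only, not Airbus's actual control law, and the gains are invented.

```python
def autoland_commands(glideslope_dev_deg, speed_error_kts,
                      pitch_gain=2.0, thrust_gain=0.02):
    """Very simplified proportional autoland loop.

    glideslope_dev_deg : degrees above (+) or below (-) the glideslope
    speed_error_kts    : indicated airspeed minus the selected approach speed
    Returns (pitch_command_deg, thrust_change).
    """
    pitch_cmd = -pitch_gain * glideslope_dev_deg    # above the path -> pitch down
    thrust_cmd = -thrust_gain * speed_error_kts     # fast -> reduce thrust
    return pitch_cmd, thrust_cmd

# Slightly above the glideslope and 5 knots fast:
print(autoland_commands(0.3, 5.0))  # (-0.6, -0.1)
```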
These systems have also found their way into unmanned aerial systems (UAS).  In September 2012, the MQ-9 Reaper remotely piloted aircraft (RPA) had "successfully completed 106 full-stop Automatic Takeoff and Landing Capability (ATLC) landings, a first for the multi-mission aircraft. The milestone was first achieved with four ATLC landings on June 27 at the company's Gray Butte Flight Operations Facility in Palmdale, Calif." (Predator B Demonstrates Automatic Takeoff and Landing Capability, 2012).  Other systems have been created to assist in this process, since in its simplest form the computer is simply following a predefined flight path.  "The Visually Assisted Landing System (VALS), lets the drones use their cameras to identify landmarks, adjust speed and direction accordingly, and navigate to a smooth landing. And since runways are clearly defined, flat, obvious pieces of topography, identifying them should be easier… By utilizing the drones existing cameras, the system can be used for both the larger UAVs like the Predator, and smaller drones like the Scan Eagle" (Fox, 2009).
                                     
Figure 1. Vision-based landing technology called the Visually Assisted Landing System (VALS).  Courtesy of Popular Science.

Both manned aircraft and RPA can be equipped with systems that follow a preset published approach to get back on the ground.  But an RPA is missing one key thing a manned aircraft has: a pilot in the aircraft making judgment calls based on what they are seeing and feeling.  While an RPA operator can see and make decisions, they typically do so with a limited field of view and are dependent on communications links.  If a communication link is broken, the RPA operator is, for all intents and purposes, blind.  "Vision-based landing has been found attractive since it is passive and does not require any special equipment other than a camera and a vision processing unit onboard" (Huh, 2010).  A vision-based system would let the RPA make a split-second decision, such as a go-around due to reduced visibility or a fouled runway.
While aviation has changed greatly since its beginnings in 1903, we still face some of the same basic issues in trying to find ways to assist the pilot and make aviation safer and easier.  We have even gone so far as to pull the pilot out of the aircraft, yet we still face many of the same problems.

References
Airbus A320: Auto Landing Tutorial. (2012 Apr 6). BAA Training. Retrieved from https://www.youtube.com/watch?v=LIaMALJjOEc

Elmer Sperry, Sr.. (2016 Apr 27). The National Aviation Hall of Fame. Retrieved from http://www.nationalaviation.org/sperry-sr-elmer/

Fox, S. (2009 Aug 4). Popular Science. Retrieved from http://www.popsci.com/military-aviation-amp-space/article/2009-08/new-system-allow-automated-predator-drone-landings

Huh, S., & Shim, D. H. (2010). A vision-based automatic landing method for fixed-wing UAVs. Journal of Intelligent and Robotic Systems, 57(1), 217-231. doi:10.1007/s10846-009-9382-2. Retrieved from http://search.proquest.com.ezproxy.libproxy.db.erau.edu/docview/873356418?pq-origsite=summon


Predator B Demonstrates Automatic Takeoff and Landing Capability. (2012 Sep 17). General Atomics Aeronautical. Retrieved from http://www.ga-asi.com/predator-b-demonstrates-automatic-takeoff-and-landing-capability

Tuesday, May 10, 2016

UAS Shift Work Schedule

As a human factors consultant to a United States Air Force (USAF) MQ-1B squadron, this author will attempt to address concerns about fatigue brought to the attention of the squadron's Commanding Officer (CO).  Figure 1 has been submitted on behalf of the MQ-1B squadron outlining current operations.
“Fatigue is a condition characterized by increased discomfort with lessened capacity for work, reduced efficiency of accomplishment, loss of power or capacity to respond to stimulation, and is usually accompanied by a feeling of weariness and tiredness” (Salazar).  In military life, "Late nights, deadlines, night-shift work, early briefs, time-zone travel, deployments, combat stress, and anxiety all compete for limited sleep time" (Davenport, 2009).  These factors, along with the stresses of home life and collateral duties, combine into what is known as operational tempo (OPTEMPO).  "[C]ombat-aviation missions are presumably significantly more stressful than commercial air-transportation operations. For instance, although airline-transport pilots no doubt experience stress from their responsibility for the safety of up to 400 passengers, they are rarely targets of enemy aggression. Combat pilots, however, routinely perform their duties under imminent and palpable threats to their own safety and, in fact, their very lives" (Caldwell, 2008).
The current OPTEMPO requires three shifts: day, swing, and night.  Each team works one particular shift for a six-day week and then changes shifts the following week.  While this helps establish a circadian rhythm during the week, that rhythm is desynchronized the following week; just as the body finishes adapting at the end of a week, the schedule changes again.

Figure 1
Figure 2
The proposed schedule in Figure 2 still has a six-day work week, starting with two night shifts, then two swing shifts, then two day shifts.  This schedule allows for a clockwise-flowing shift change, letting the body adapt more quickly and easily.  Additionally, since the week starts on a night shift and ends on a day shift, it allows for 78.5 hours off, or about 3.2 days of continuous time off.
Additionally, since this appears to be a high-OPTEMPO environment, it is suggested that the CO seek advice from the flight surgeon on the USAF "Go, No Go" program, or on the prescription use of stimulants and sedatives.  In this author's opinion, having flown ISR missions for the United States Navy (USN), the prescription use of sedatives can help in a high-OPTEMPO environment.

References
Caldwell, J. A. (2008). Go Pills in Combat: Prejudice, Propriety, and Practicality. Air & Space Power Journal (Fall 2008). Retrieved from http://www.airpower.maxwell.af.mil/airchronicles/apj/apj08/fal08/caldwell.html

Davenport, N. (2009). MORE FATIGUE! (yawn). Approach, 54(3), 3. Retrieved from http://search.proquest.com.ezproxy.libproxy.db.erau.edu/docview/274584729?pq-origsite=summon


Salazar, G. Fatigue in Aviation. Federal Aviation Administration (Publication # OK-07-193) Retrieved from https://www.faa.gov/pilots/safety/pilotsafetybrochures/media/Fatigue_Aviation.pdf