Innovation in the agribusiness industry is not a new phenomenon. It has been going on for many years, often without many people outside the industry being aware of it. That continuous innovation has made today's agribusinesses far more productive than previous generations.
Some studies put the average agribusiness's productivity at 262% of what it was as recently as 1950, achieved with 2% fewer inputs, where inputs include things like labor, seeds, feed, and fertilizer.
It is clear the agribusiness industry has come a long way from where it started in the 18th century. At that time it relied on oxen and horses for power, crude wooden plows, and a great deal of hand labor to sow seeds and cultivate crops.
If we fast-forward through the many innovations triggered by the industrial revolution (iron plows, the cotton gin, tractors, etc.), we eventually arrive at the point where information technology started to be used extensively in the late 1980s and early 1990s, quickly followed by the use of satellites to plan work in the late 1990s.
From the outside, agribusiness looks like a slow-moving industry, when in fact it is clear to those closer to it that it is moving at an astonishing pace!
This post is the eleventh, and final, post documenting the steps I went through on my journey to build an autonomous, voice-controlled, face-recognizing drone. There are 10 other posts building up to this one, which you can find at the end of this post.
Focus of this post
In this post I will share a video of the complete end-to-end demo and the details of the architecture that sits behind it. I will also share what I bought/used to bring this all together and relist all the different software, services, and node packages in a single place.
Pulling It All Together
A lot of what we have been doing with this project is humanizing the way we communicate with machines/computers/things. That means talking and observing to drive intelligent interaction, rather than using a mouse, keyboard, or touch screen.
Our Autonomous, Voice-Controlled, Face-Recognizing Drone is a smart drone which showcases, albeit crudely, how interaction with intelligent services is going to evolve. It highlights how important cognitive services will be to the success of organizations in the future.
So with that said, take a look at the entire end-to-end demo in the video below.
This post is the tenth post documenting the steps I went through on my journey to build an autonomous, voice-controlled, face-recognizing drone. There are 9 other posts building up to this one, which you can find at the end of this post.
Focus of this post
Up until now we have mostly been working on controlling the drone, using the Microsoft Cognitive Services Face API to identify people, and, lastly, using the Microsoft Cognitive Services APIs to convert text to speech and speech to text.
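To give a feel for how the face identification piece works, here is a rough sketch of the kind of REST request the Face API's `detect` endpoint expects. The region in the URL, the key, and the attribute list are placeholders for illustration, not the exact values used in this project, and the request is only built here, not sent:

```python
import json

# Placeholder endpoint; the region prefix depends on where your
# Cognitive Services resource was provisioned.
FACE_API_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def build_detect_request(image_url, subscription_key):
    """Assemble the URL, headers, query params, and JSON body for a
    Face API detect call. Nothing is sent over the network here."""
    headers = {
        # The Face API authenticates with this subscription-key header.
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }
    params = {
        "returnFaceId": "true",                  # needed later to identify the person
        "returnFaceAttributes": "age,gender",    # example attributes only
    }
    body = json.dumps({"url": image_url})
    return FACE_API_URL, headers, params, body

url, headers, params, body = build_detect_request(
    "https://example.com/drone-frame.jpg", "<your-subscription-key>")
print(url)
```

In the real project the drone's camera frame would be posted as binary data rather than a URL, but the header and endpoint shape are the same; a library such as `requests` can then send the assembled pieces with a single `post` call.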
Ultimately we will have built an intelligent end-to-end IoT solution featuring analytics and visualization. The instructions here can also be used to understand how to bring in data from other devices!