# Delft AI Toolkit - Version 2
**Visual Authoring Toolkit for Smart Things**
This is the new 2.0 version of this project, with a significantly changed architecture (original version). The NodeCanvas node system has been replaced with xNode, which Siccity is enhancing as part of this project. In addition, the system now communicates directly with the Raspberry Pi (instead of going through node.js and Bluetooth).
**As of November 2018, this version is going through significant changes. We hope to have a more stable release by the end of 2018. At that time, we'll post a ready-to-use RasPi image for the toolkit.**
The Delft Toolkit is a system for designing smart things. It provides a visual authoring environment that combines machine learning, cognitive APIs, and other AI approaches with behavior trees and data flow to create smart behavior in autonomous devices.
The toolkit is currently in rough prototype form as a part of my research. It is likely to change significantly as I iteratively develop a technical and design strategy.
The goal of this project is to develop an authoring system approach that enables designers to easily and iteratively prototype smart things. This approach includes the ability to Wizard-of-Oz AI behaviors and simulate physical hardware in 3D, and then migrate these simulations to working prototypes that use machine learning and real hardware.
The system currently has two parts:

- Authoring & Control System running on a computer
  - Visual authoring with nodes in the Unity3D authoring environment
- Robot/device hardware
  - Raspberry Pi
  - Arduino (we may transition to the Adafruit Crickit for RPi once it comes out and we have a chance to evaluate it)
  - Motors, servos, sensors, LEDs, microphone, speaker, camera, etc.
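The two halves talk to each other over the local network. As a rough sketch of what such a link involves, the snippet below builds a minimal OSC-style UDP packet in pure Python. This is an illustration only: the address pattern `/robot/move`, the argument values, and port 5005 are hypothetical, not the toolkit's actual protocol.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad to a multiple of 4 bytes with NULs, as OSC requires (at least one NUL)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message supporting int, float, and str arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        else:
            tags += "s"
            payload += osc_pad(str(a).encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# Hypothetical example: ask a robot to drive forward half a meter.
packet = osc_message("/robot/move", "forward", 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("10.0.1.13", 5005))  # uncomment on a live network
```

In practice you would use an OSC library on both ends rather than hand-encoding packets; the point is only that each command is a small, self-describing datagram.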
The instructions below are currently being revised and are not complete. Stay tuned.
## Starting the system
- Power the robot: power on the Arduino and Raspberry Pi (RPi)
  - Arduino: powered by the USB cable from the RPi
  - Motors: turn on the 6V AA battery pack
  - RPi: connect the fast-charging USB battery to the micro USB connector
- Log in to the RPi: open a terminal app on your computer and log in by typing:
  - `ssh pi@<RPi IP address>` (substitute the RPi's IP address on your network)
- Get IP addresses:
  - Mac: hold the Option key down and click on your WiFi toolbar icon.
  - PC: see https://www.windowscentral.com/4-easy-ways-find-your-pc-ip-address-windows-10-s
  - RPi: on the command line, run `ifconfig`; the IP address appears in the output section for `wlan0`.
- Start the software, in the following order:
  - Motors: power on the AA battery pack
  - RPi: power and boot the RPi
  - In the terminal, connect to the RPi and start the toolkit software. In the commands below, change the `--server_ip` address to that of your computer. After launching `delftToolkit.py`, the software will take a minute or two to finish setting up the object recognition models.

    ```
    ssh pi@<RPi IP address>
    cd /home/pi/tutorials/image/imagenet
    python3 delftToolkit.py --server_ip 10.0.1.15
    ```
- Open the “delft-toolkit” project in Unity3D
- In the Hierarchy, open the main scene
- In the Graphs directory, double click on the toolkit visual graph you are currently using (or one of the example graphs)
- If you are using the robot hardware, click on the simulated robot in the Hierarchy, and enable the “Physical Ding” script. If you are not using the physical robot, keep this script unchecked and inactive.
- Click on the Play button
- Click on the 3D window (this ensures Unity is receiving all commands – if you find it is not responding to the keyboard or OSC, try this)
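If things aren't responding, a quick sanity check is whether the RPi is even reachable from your computer before you press Play. A small sketch (the hostname `delftbt0.local` comes from the default hostname in the install section, and port 22 assumes SSH is enabled on the RPi):

```python
import socket

def host_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# host_reachable("delftbt0.local")  # True once the robot has booted
```

If this returns False, check the battery, the WiFi configuration, and that both machines are on the same network before debugging the Unity side.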
## Installing the software
- Install dependencies: Unity3D
- Download the toolkit software and place it on your computer's drive
- Install `delftToolkit.ino` on your Arduino
- RPi: burn the RPi image to your SD card
  - Set up your WiFi
  - Change the hostname from the default of delftbt0 (e.g. delftbt1, delftbt2, etc.) if you are using more than one robot on your network
- Install xNode in the toolkit project if it is not already there
- Click on the Project tab, and double click the “Main” scene
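For the WiFi step, the usual place on Raspbian is `/etc/wpa_supplicant/wpa_supplicant.conf`; a minimal network block looks like the fragment below (the country code, SSID, and passphrase are placeholders to replace with your own):

```
country=NL
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}
```

For the hostname step, you can edit `/etc/hostname` (and the matching entry in `/etc/hosts`) on the RPi, or use the Hostname option in `sudo raspi-config`, then reboot.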