---
|
||||||
|
title: 'DuckieTown - Lab 1'
|
||||||
|
description: "Let's get rolling!"
|
||||||
|
pubDate: 'Jan 22 2023'
|
||||||
|
heroImage: '../../../src/assets/duckietown/sleeping_duckies.avif'
|
||||||
|
---
|
||||||
|
|
||||||
|
<p style="font-size: max(2vh, 10px); margin-top: 0; text-align: right">
|
||||||
|
"Sleeping Duckies" by Justin Valentine. All rights reserved
|
||||||
|
</p>
|
||||||
|
|
||||||
|
# Week 2 & 3 - First lab
|
||||||
|
|
||||||
|
After spending [a lot of time with the network](./pre_lab1), Akemi waved the
white flag and overwrote an old MacbookAir6,1 with Ubuntu. So the ssh setup is
|
||||||
|
now like:
|
||||||
|
|
||||||
|
```
|
||||||
|
MacMini8,1 <--- MacbookAir8,2 ---> MacbookAir6,1 ---> duckiebot
|
||||||
|
```
|
||||||
|
|
||||||
|
# Boting around
|
||||||
|
|
||||||
|
I'll be using `csc22927` as the hostname throughout, since that's my bot's name
|
||||||
|
|
||||||
|
### SSH
|
||||||
|
I started by playing around with the bot over ssh. Even if mDNS isn't up, we
|
||||||
|
can use the LAN address of the duckiebot to ssh in. For example, something like
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ssh duckie@192.168.1.21
|
||||||
|
```
|
||||||
|
|
||||||
|
The password is `quackquack` by default, which is shockingly hard to type
|
||||||
|
|
||||||
|
Using ssh allows us to control docker directly! See the docker section for more
|
||||||
|
on that, though in general it's far less buggy than `dts`'s poor python code
|
||||||
|
|
||||||
|
### DTS
|
||||||
|
|
||||||
|
`dts` consistently proved to be a barrier throughout the whole process, with
some code that was awful to work with. `systemd-networkd`'s approach to
configuring the networking interfaces prevented some dts commands from working,
such as the intrinsics calibration, despite others like the extrinsics
calibration working perfectly fine
|
||||||
|
|
||||||
|
|
||||||
|
The first command I learned was a litmus test to determine if mDNS is working
|
||||||
|
(it wasn't)
|
||||||
|
|
||||||
|
```bash
|
||||||
|
dts fleet discover
|
||||||
|
```
|
||||||
|
As a hack to get around this, if `dts fleet discover` works for someone else, let them find the ip.
|
||||||
|
Then you can just use the ip address directly... unless `dts` starts getting in
|
||||||
|
the way. Luckily docker doesn't care, so generally the ip works just as well
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ssh duckie@csc22927.local ip a
|
||||||
|
```
|
||||||
|
|
||||||
|
Then came shutting down, which can use the ip address from above, instead of the
|
||||||
|
hostname
|
||||||
|
|
||||||
|
```bash
|
||||||
|
dts duckiebot shutdown csc22927
|
||||||
|
```
|
||||||
|
|
||||||
|
If this doesn't work, then use the button on the top. When that decides not to
|
||||||
|
work, pull out all 3 micro-usb cables from the main board. That'll hard-cut the
|
||||||
|
power, so it's best to not use this method
|
||||||
|
|
||||||
|
We can also access gui stuff like the camera with the command below. Note that
|
||||||
|
this actually pulls up a docker container on YOUR laptop, which communicates
|
||||||
|
with the docker container on the bot... There're a lot of containers... However,
|
||||||
|
this means that it'll still pull up the container, __even if it can't actually
|
||||||
|
connect to the bot__! Make sure you can connect using some other `dts` command
|
||||||
|
first to not spend minutes instructing a disconnected container to do things
|
||||||
|
|
||||||
|
```bash
|
||||||
|
dts start_gui_tools csc22927
|
||||||
|
rqt_image_view # This one should pull up a live camera feed
|
||||||
|
```
|
||||||
|
|
||||||
|
The live camera feed can be configured to show other ros topics, such as the
|
||||||
|
line detector included in the old lane-following demo. You can see it at the end
|
||||||
|
of [this video](https://youtu.be/rctMkwxfjC4). Notice how the lane following is
|
||||||
|
actually closer to white-line-avoidance, which was the case for the lane
|
||||||
|
following demo prior to updating
|
||||||
|
|
||||||
|
BTW, gui tools is an example of a `dts` command that doesn't work with the ip
|
||||||
|
address. It also doesn't work with mDNS enabled, if the network interface was
|
||||||
|
configured by `systemd-networkd`
|
||||||
|
|
||||||
|
Here's a complete reference of [every dts command I ran in the past 3
|
||||||
|
weeks](/raw_text/dts_history.txt)! It served as my live-updating cheatsheet
|
||||||
|
while working on this lab, through a clever `.inputrc` trick with `fzf` and an
|
||||||
|
unlimited `bash_history` file size
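
For reference, the unlimited-history half of that setup is just a couple of
shell settings. Here's a minimal sketch (my actual `.inputrc` isn't reproduced
in this post, and the fzf keybindings path varies by distro):

```bash
# ~/.bashrc -- sketch of the unlimited-history setup
export HISTSIZE=-1        # keep every command in the in-memory history
export HISTFILESIZE=-1    # never truncate ~/.bash_history
shopt -s histappend       # append to the history file instead of overwriting it

# fzf ships a Ctrl-R binding that fuzzy-searches the whole history file;
# the path below is distro-dependent
[ -f /usr/share/doc/fzf/examples/key-bindings.bash ] && \
  source /usr/share/doc/fzf/examples/key-bindings.bash
```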
|
||||||
|
|
||||||
|
### Driving around
|
||||||
|
|
||||||
|
Before getting the MacbookAir6,1, I could only use the cli version of
|
||||||
|
movement keys. The command looked like
|
||||||
|
|
||||||
|
```bash
|
||||||
|
dts duckiebot keyboard_control --cli csc22927
|
||||||
|
```
|
||||||
|
|
||||||
|
Interestingly, in the cli mode, the correct way to stop is `<enter>` not
|
||||||
|
`e<enter>`. I found this by accident, though it's certainly accurate. [Here's my
|
||||||
|
bot traveling forward for a bit](https://youtu.be/se6O96lvCgc), using the cli
|
||||||
|
mode
|
||||||
|
|
||||||
|
With the MacbookAir6,1 booting Ubuntu, I was able to get the GUI version
working! Just drop the `--cli` flag from the command above
|
||||||
|
|
||||||
|
After an update, launching a separate container, and the graphical joystick on
|
||||||
|
the MacbookAir6,1, I was among the first duckies in the class to get [a proper
|
||||||
|
lane-following demo running](https://youtu.be/w0gNg1HoaJw)!
|
||||||
|
|
||||||
|
```bash
|
||||||
|
dts duckiebot update csc22927 # This takes ~30min
|
||||||
|
dts duckiebot demo --demo_name lane_following --duckiebot_name csc22927.local --package_name duckietown_demos
|
||||||
|
ssh duckie@csc22927.local docker container list # Just to check if it's actually running the container
|
||||||
|
dts duckiebot keyboard_control --cli csc22927
|
||||||
|
```
|
||||||
|
|
||||||
|
### Dashboard
|
||||||
|
|
||||||
|
There's a dashboard! Connect to it at `http://csc22927.local` or the ip directly
|
||||||
|
in a browser. It has a live camera feed and sensor signals. Here's a dashboard
|
||||||
|
with my bot driving forward in a straight line
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/dashboard_motors_forward.avif"
|
||||||
|
alt="Motors of duckiebot driving forward, as seen from the dashboard"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Notice how the angular speed is 0. That's because it's not turning. Below is a
|
||||||
|
picture of it spinning in a circle, now with no forward velocity
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/dashboard_motors_spin.avif"
|
||||||
|
alt="Motors of duckiebot spinning in a circle, as seen form the dashboard"
|
||||||
|
/>
|
||||||
|
|
||||||
|
You can also login using your duckietown account
|
||||||
|
|
||||||
|
1. Login @ `https://www.duckietown.org/pm_login`
|
||||||
|
2. Navigate to `https://www.duckietown.org/site/your-token` and login with that
|
||||||
|
|
||||||
|
Then a lot more tabs pop up. One of them is for browsing files, though ssh
|
||||||
|
already let us do that. The new one is a VNC server. Click into the desktop tab
|
||||||
|
and wait a minute or so for it to launch. This is a full desktop running on the
|
||||||
|
duckiebot! Of course things like the gui joystick work there, even on my Waybook
|
||||||
|
that could only use the cli version otherwise
|
||||||
|
|
||||||
|
# Docker
|
||||||
|
|
||||||
|
Docker keeps making new containers, nearly daily. These can end up polluting the
|
||||||
|
system. Use `docker container list -a` to view them and `docker-system-prune(1)`
|
||||||
|
to remove them
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker system df
|
||||||
|
docker image ls
|
||||||
|
docker image prune -a
|
||||||
|
```
|
||||||
|
|
||||||
|
In general, it's better to use `docker-compose` with docker, since it's so clean
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
Alternatively you can just run the container and press `^p^q` to detach from it
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker run --rm -it <container-name>
|
||||||
|
```
|
||||||
|
|
||||||
|
You can then execute commands using new terminal connections on the running
|
||||||
|
container
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker exec <container-id> ip a
|
||||||
|
docker exec -it <container-id> bash
|
||||||
|
```
|
||||||
|
|
||||||
|
I generally find these commands useful too
|
||||||
|
|
||||||
|
```bash
|
||||||
|
systemctl start docker.service
|
||||||
|
docker ps
|
||||||
|
docker image ls
|
||||||
|
docker image rm <image>
|
||||||
|
docker container ls
|
||||||
|
docker container attach <container>
|
||||||
|
```
|
||||||
|
|
||||||
|
### Docker vision system
|
||||||
|
|
||||||
|
Found in section
|
||||||
|
[B-5](https://docs.duckietown.org/daffy/duckietown-robotics-development/out/creating_docker_containers.html),
|
||||||
|
this was undoubtedly the hardest part of this project
|
||||||
|
|
||||||
|
The instructions skip over the obvious fact that we'll need a virtual
|
||||||
|
environment for this
|
||||||
|
|
||||||
|
```bash
|
||||||
|
python3 -m venv .
|
||||||
|
```
|
||||||
|
|
||||||
|
Next we need some packages in our `requirements.txt`
|
||||||
|
|
||||||
|
```bash
|
||||||
|
source bin/activate
|
||||||
|
pip install numpy opencv-python
|
||||||
|
pip freeze > requirements.txt
|
||||||
|
```
|
||||||
|
|
||||||
|
Then we use docker to build and run on the duckiebot. Note that since this isn't
|
||||||
|
`dts` we could just as easily use the duckiebot's ip directly instead of relying
|
||||||
|
on an mDNS server
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker -H csc22927.local build -t colordetector:v4 .
|
||||||
|
docker -H csc22927.local run -it --privileged -v /tmp/argus_socket:/tmp/argus_socket colordetector:v4
|
||||||
|
```
|
||||||
|
|
||||||
|
Either way, despite 26 iterations of the code and a lot of debugging by many
|
||||||
|
members in the class, the duckiebot's camera refused to connect, even with a
|
||||||
|
fixed gst pipeline
|
||||||
|
|
||||||
|
|
||||||
|
```python
|
||||||
|
gst_pipeline = f'''nvarguscamerasrc \\
    sensor-mode={camera_mode} exposuretimerange="100000 80000000" ! \\
    video/x-raw(memory:NVMM), width={res_w}, height={res_h}, format=NV12, framerate={fps}/1 ! \\
    nvjpegenc ! \\
    appsink'''
|
||||||
|
```
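
For context, a pipeline string like this is normally handed straight to
OpenCV's `VideoCapture` with the GStreamer backend. Here's a rough sketch of
that pattern with a conventional BGR-converting tail; this is not the exact
pipeline above, and it assumes an OpenCV build with GStreamer support plus the
Jetson `nvvidconv` element:

```python
import cv2

# Illustrative pipeline: decode the Jetson camera to plain BGR frames for OpenCV
pipeline = (
    'nvarguscamerasrc sensor-mode=3 ! '
    'video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=30/1 ! '
    'nvvidconv ! video/x-raw, format=BGRx ! '
    'videoconvert ! video/x-raw, format=BGR ! '
    'appsink drop=true'
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # frame is a regular BGR numpy array if the camera answered
print(ok, None if frame is None else frame.shape)
cap.release()
```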
|
||||||
|
|
||||||
|
Thanks to [Steven Tang](https://steventango.github.io/cmput-412-website/) who
|
||||||
|
discovered there's a cli tool to verify if our gst-pipeline is correct. I
|
||||||
|
verified the following pipeline, which claimed to work, though it still wasn't
able to communicate with the duckiebot's camera from python. You'll need to use
escapes to prevent bash from interpreting things; the codeblock below already
escapes the `\"`, `\(`, and `\)`
|
||||||
|
|
||||||
|
```bash
|
||||||
|
gst-launch-1.0 nvarguscamerasrc sensor-mode=3 exposuretimerange="100000 80000000" \
|
||||||
|
! video/x-raw\(memory:NVMM\), width=1280, height=720, \
|
||||||
|
format=NV12, framerate=30/1 \
|
||||||
|
! nvjpegenc ! appsink
|
||||||
|
```
|
||||||
|
|
||||||
|
# Networking adventures
|
||||||
|
|
||||||
|
Continued from the [previous post](./pre_lab1)...
|
||||||
|
|
||||||
|
### Breakthrough 2 - Use NetworkManager instead
|
||||||
|
|
||||||
|
`dts fleet discover` and `.local` resolution fail with the `systemd-networkd` +
|
||||||
|
`iwd` combo. mDNS appears broken again. Instead use NetworkManager... except
|
||||||
|
NetworkManager doesn't see DuckieNet, so the steps look like:
|
||||||
|
|
||||||
|
1. Boot with NetworkManager configuring the device
|
||||||
|
2. Start both `systemd-networkd` and `iwd`
|
||||||
|
3. Restart `systemd-networkd` to give `iwd` configuration permissions
|
||||||
|
4. Scan for DuckieNet
|
||||||
|
5. Connect to DuckieNet
|
||||||
|
6. Send a stop signal to `systemd-networkd` and `iwd` **twice**
|
||||||
|
7. Start NetworkManager
|
||||||
|
|
||||||
|
Or in short
|
||||||
|
|
||||||
|
```bash
|
||||||
|
systemctl stop NetworkManager.service
|
||||||
|
systemctl start systemd-networkd iwd
|
||||||
|
systemctl restart systemd-networkd
|
||||||
|
iwctl station wlan0 scan && sleep 3
|
||||||
|
iwctl station wlan0 connect DuckieNet
|
||||||
|
systemctl stop systemd-networkd iwd
|
||||||
|
systemctl stop systemd-networkd iwd
|
||||||
|
systemctl start NetworkManager.service
|
||||||
|
```
|
||||||
|
|
||||||
|
### Breakthrough 3 - Give systemd-networkd full control
|
||||||
|
|
||||||
|
For the third time, there may be another issue. NetworkManager seems to start up
`wpa_supplicant.service`, regardless of it being disabled in systemd. What's
more, it still appears to run after NetworkManager is stopped. This means when we
start up both `systemd-networkd` and `iwd`, there are 3 programs attempting to
configure wlan0 at the same time. Stopping `wpa_supplicant` explicitly appears
to have brought back mDNS support through `systemd-networkd`, but only if
`systemd-networkd` is allowed to configure the device itself
|
||||||
|
|
||||||
|
Also, using `systemd-networkd` for the connection is a lot more stable than
|
||||||
|
NetworkManager, which would drop the connection every few minutes and require a
|
||||||
|
restart to fix
|
||||||
|
|
||||||
|
### Breakthrough 4 - Give systemd-resolved full DNS control
|
||||||
|
|
||||||
|
OMG. Okay, following the [archwiki's
|
||||||
|
article](https://wiki.archlinux.org/title/systemd-resolved#mDNS) for resolved,
|
||||||
|
it's clear:
|
||||||
|
|
||||||
|
1. Avahi will fight with systemd-resolved for mDNS control, so disable Avahi
|
||||||
|
2. MulticastDNS under wlan0's `[Network]` needs to be enabled. I used `=resolve`
|
||||||
|
|
||||||
|
I further found:
|
||||||
|
|
||||||
|
3. It's helpful to throw `Multicast=true` in the `[Link]` section of wlan0
|
||||||
|
4. Run `dhcpcd.service`... though still let `iwd` configure the device. Not sure why
|
||||||
|
5. In `/var/lib/iwd/*.*` put `AutoConnect=true` under `[Settings]`. Otherwise it guesses the network (both files are sketched below)
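
Put together, the wlan0 configuration described above would look roughly like
the sketch below. The file names are assumptions; this post doesn't reproduce
my actual config files:

```ini
# /etc/systemd/network/25-wlan0.network  (illustrative name)
[Match]
Name=wlan0

[Link]
Multicast=true

[Network]
MulticastDNS=resolve
# DHCP is handled by dhcpcd.service here, and iwd still configures the device

# /var/lib/iwd/DuckieNet.psk  (one file per known network)
[Settings]
AutoConnect=true
```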
|
||||||
|
|
||||||
|
We expect `networkctl` to say wlan0 is "routable" and "configuring". Journalctl
|
||||||
|
states `dhcpcd` only acquired the carrier for wlan0 4s after loading, while iwd
|
||||||
|
doesn't mention anything about wlan0, though it finished all its configs 4s prior
|
||||||
|
to `dhcpcd`'s acquisition
|
||||||
|
|
||||||
|
When booting with all 3 services enabled, restart systemd-networkd. `networkctl`
|
||||||
|
should now show both "routable" and "configured"
|
||||||
|
|
||||||
|
For some commands it works, like `dts start_gui_tools` and `dts duckiebot shutdown`,
|
||||||
|
though others are just not happy, like `dts duckiebot calibrate_intrinsics` and
|
||||||
|
`dts duckiebot keyboard_control`. I'm now borrowing a MacbookAir6,1 from the
|
||||||
|
university to get around these issues
|
||||||
|
|
---
|
||||||
|
title: 'DuckieTown - Lab 2'
|
||||||
|
description: "Camera and kinematics"
|
||||||
|
pubDate: 'Feb 13 2023'
|
||||||
|
heroImage: '../../../src/assets/duckietown/lab2/quiver_plot_sparse.avif'
|
||||||
|
---
|
||||||
|
|
||||||
|
<!-- THIS IS THE OUTLINE, NOT VISIBLE THO KEEP IT FOR REFERENCE
|
||||||
|
|
||||||
|
# Inverse Mathematics - Second lab
|
||||||
|
+ Link to intro image here
|
||||||
|
|
||||||
|
### Heartbeat ros package
|
||||||
|
- First ros package
|
||||||
|
- Read robot name from environment variable
|
||||||
|
- Name spaced using the vehicle argument
|
||||||
|
+ Path to repo
|
||||||
|
|
||||||
|
### Camera modification
|
||||||
|
- First look at actually using pre-made topics on the bot
|
||||||
|
- Remapped namespacing in a funky way through camera_demo.launch
|
||||||
|
- Used a callback function
|
||||||
|
- Basename of topic **must** be `compressed` or it won't show up in rqt_image_view
|
||||||
|
+ Submit screenshot of camera
|
||||||
|
+ Path to repo, explicity mention it's in the repo instead of a screenshot
|
||||||
|
+ Also submit a screenshot of the code?
|
||||||
|
|
||||||
|
### Robot frame stuff math
|
||||||
|
|
||||||
|
TODO \()
|
||||||
|
+ Link straight-line video
|
||||||
|
+ Path to repo
|
||||||
|
+ Image with difference in measurements
|
||||||
|
- Explain what different speed levels did
|
||||||
|
- Explain we used the `/$(arg veh)/wheels_driver_node/wheels_cmd` topic to move
|
||||||
|
- Explain how the topic above was found
|
||||||
|
+ Link video with different speed levels
|
||||||
|
- Explain the rotation task (link failed video, it's the only one we have)
|
||||||
|
- TODO: explain math about driving duckiebot in the circle
|
||||||
|
- Talk about the use of `rospy.on_shutdown()` hooks to cleanly exit
|
||||||
|
- Explain how rosbags work
|
||||||
|
+ Link script and matplotlib quiver plot of pose
|
||||||
|
|
||||||
|
## Driving robot through stages
|
||||||
|
|
||||||
|
### General architecture
|
||||||
|
|
||||||
|
1. An odometry publisher node that reads in wheel ticks
|
||||||
|
2. An odometry driver that makes decisions from the published pose
|
||||||
|
3. An LED node working as a service which the driver node communicates to
|
||||||
|
+ Link to new waddle workspace
|
||||||
|
|
||||||
|
### LED service
|
||||||
|
- Mention topic being used to publish LEDs
|
||||||
|
- Mention self-constructing the message
|
||||||
|
- How service was wrapped around this and testing
|
||||||
|
- Use of `rospy.on_shutdown()` hooks twice, in the service node and odometry
|
||||||
|
|
||||||
|
### Movement
|
||||||
|
- Constructed a simulator, by pretty much ripping out parts of the duckietown wheels node
|
||||||
|
- Created a PID controller with good dead reckoning which worked perfectly in the simulator
|
||||||
|
- ... Didn't even move when tested on the bot. Requires minimum 0.6 speed to turn
|
||||||
|
- Continued to scrap pieces of the PID controller, like the duckietown exit notice
|
||||||
|
- Turned into a hardcoded procedural program in the end
|
||||||
|
- Talk about params.json workflow optimization
|
||||||
|
+ Link to driving video
|
||||||
|
+ Insert AR image of distance measure
|
||||||
|
- Explain why it's off...
|
||||||
|
|
||||||
|
### Putting it all together
|
||||||
|
- Pretty much hardcoded, explain the steps
|
||||||
|
- Mention shutdown hooks and rosbag recorders
|
||||||
|
+ Link to uploaded rosbag file, really
|
||||||
|
+ Display matplotlib quiver of bag file from before
|
||||||
|
- Talk about the perfect circle and imperfect square, and how the circle is wrong
|
||||||
|
|
||||||
|
-->
|
||||||
|
|
||||||
|
|
||||||
|
# Inverse Mathematics - Second lab
|
||||||
|
|
||||||
|
## Part 1 - ROS basics
|
||||||
|
|
||||||
|
For this first section, we'll be referencing code mostly in the
|
||||||
|
[heartbeat-ros](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/heartbeat-ros)
|
||||||
|
workspace
|
||||||
|
|
||||||
|
### Heartbeat ros package
|
||||||
|
|
||||||
|
We kicked things off by reading about ROS's general publisher-subscriber model.
|
||||||
|
To build on the tutorial, we made a [small
|
||||||
|
package](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/heartbeat-ros/packages/heartbeat_package)
|
||||||
|
which simply sent a log message every ~50s. The publisher node publishes a
|
||||||
|
custom message. The subscriber node then receives the message and prints it with
|
||||||
|
loglevel INFO
|
||||||
|
|
||||||
|
To avoid hard coding the robot's name, while still using topic names from root,
|
||||||
|
we used an odd method to get the hostname by [launching a
|
||||||
|
subprocess](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/heartbeat-ros/packages/heartbeat_package/src/heartbeat_publisher_node.py#L9).
|
||||||
|
This works and technically is the most correct way to find a hostname, though we
|
||||||
|
used the launch-file's `arg` approach to pass in the hostname at runtime for all
|
||||||
|
of the later packages
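
The subprocess approach boils down to something like this sketch (illustrative
only; the linked publisher node is the authoritative version):

```python
import subprocess

# Grab the bot's hostname the same way `hostname` would print it,
# then build a root-level topic name from it
hostname = subprocess.check_output(['hostname']).decode().strip()
topic = f'/{hostname}/heartbeat'
```

(`socket.gethostname()` would do the same without spawning a process.)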
|
||||||
|
|
||||||
|
### Camera modification
|
||||||
|
|
||||||
|
Next up was learning how to interact with pre-made topics on the duckiebot. We
|
||||||
|
created a single node publishing two topics and subscribing to two others.
|
||||||
|
|
||||||
|
To print the image size, we subscribed to `camera_info` and set a callback to
|
||||||
|
print out a loglevel INFO message about the image's dimensions. The callback
|
||||||
|
also then republishes the string of the image dimensions to a topic called
|
||||||
|
`published_image_info`.
|
||||||
|
|
||||||
|
On a tip from Justin, who mentioned it may be useful in future labs to remove
|
||||||
|
color from the camera's image, we attempted to republish a monochrome version of
|
||||||
|
the image from the duckiebot's camera. Doing this was simply a task for opencv.
|
||||||
|
The camera callback reads the raw image into a cv2 buffer, decodes it into
cv2's intermediary representation, applies a `COLOR_BGR2GRAY` filter, re-encodes
it and publishes it to a topic called `published_compressed`.
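
A minimal sketch of that callback, assuming the standard `CompressedImage`
message and illustrative names (the real node lives in the `camera_demo_node`
package linked later in this post):

```python
import cv2
import numpy as np
from sensor_msgs.msg import CompressedImage

def camera_cb(msg, publisher):
    buf = np.frombuffer(msg.data, dtype=np.uint8)       # raw JPEG bytes from the bot
    img = cv2.imdecode(buf, cv2.IMREAD_COLOR)            # cv2's BGR representation
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # drop the color
    out = CompressedImage()
    out.header = msg.header
    out.format = 'jpeg'
    out.data = cv2.imencode('.jpg', gray)[1].tobytes()   # re-encode and republish
    publisher.publish(out)
```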
|
||||||
|
|
||||||
|
So far we've only mentioned the basenames of the topics we're publishing,
and that's because those aren't the full paths we actually publish to. For this
|
||||||
|
package, we tried to keep all our remapping in the
|
||||||
|
[camera.launch](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/heartbeat-ros/packages/camera_demo_node/launch/camera.launch)
|
||||||
|
file. Our actual source code publishes to topics like `~published_image_info`,
|
||||||
|
though that's just a descriptive placeholder for us to then remap in the launch
|
||||||
|
file. The actual topics we published to were `~/camera_info` and `~/compressed`.
|
||||||
|
The same method was used for the subscriber paths, taking advantage of the `veh`
|
||||||
|
argument at runtime to determine the duckiebot's name.
|
||||||
|
|
||||||
|
Keeping all the remapping in the launch file was helpful when we were still
|
||||||
|
learning how to use name spacing properly. Quickly modifying the paths based on
|
||||||
|
`rqt_graph` was easier when our launch file was dealing with all the paths,
|
||||||
|
instead of having to constantly modify source code.
|
||||||
|
|
||||||
|
Here's a screenshot of the node in `rqt_graph`:
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab2/custom_publisher_and_subscriber_allinone.avif"
|
||||||
|
alt="Custom camera node in rqt_graph. It publishes two outgoing topics"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Notice how we used the word `compressed` as the basename of our published
|
||||||
|
modified image's topic name. While it usually doesn't matter how topics are
|
||||||
|
named, in this case it was very important our topic name ended with
|
||||||
|
`compressed`. `rqt_image_view` is hardcoded to only consider topics ending with
|
||||||
|
`compressed`, so publishing the same image to a topic called `modified_image`
|
||||||
|
would fail to show up in `rqt_image_view`! Thanks to our classmate `KarlHanEdn`
|
||||||
|
for discovering this and posting it on the discord
|
||||||
|
|
||||||
|
Here's a screenshot of our modified topic being published (this one isn't black
|
||||||
|
and white):
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab2/custom_published_image.avif"
|
||||||
|
alt="Picture of screen with rqt_image_view streaming our topic"
|
||||||
|
/>
|
||||||
|
|
||||||
|
You can see the [source code
|
||||||
|
here](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/heartbeat-ros/packages/camera_demo_node),
|
||||||
|
though the assignment asks for a picture of the source code... so here's a
|
||||||
|
picture of where the link leads:
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab2/lab2_camera_node_code_screenshot.avif"
|
||||||
|
alt="Screenshot of 2 Chromium windows displaying source code"
|
||||||
|
/>
|
||||||
|
|
||||||
|
### Robot Kinematics
|
||||||
|
|
||||||
|
**What is the relation between your initial robot frame and world frame? How do
|
||||||
|
you transform between them?** **How do you convert the location and theta at the
|
||||||
|
initial robot frame to the world frame?**
|
||||||
|
|
||||||
|
The robot frame is always centered on the robot, so it is given by
|
||||||
|
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math1.avif" />
|
||||||
|
|
||||||
|
The initial world frame is given by
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math2.avif" />
|
||||||
|
|
||||||
|
Transforming the initial world frame to the robot frame is trivial: keep the
angle theta the same, and set `x_R = 0` and `y_R = 0`. This is equivalent to this
|
||||||
|
matrix multiplication:
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math11.avif" />
|
||||||
|
|
||||||
|
To get the initial world frame from the initial robot frame,
|
||||||
|
we keep the angle theta the same, and set `x_I = 0.32` and `y_I = 0.32`.
|
||||||
|
This is equivalent to this matrix multiplication:
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math3.avif" />
|
||||||
|
|
||||||
|
|
||||||
|
We used the following matrix multiplication to transform between the two:
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math4.avif" />
|
||||||
|
|
||||||
|
with
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math5.avif" />
|
||||||
|
|
||||||
|
Then we can update the world frame by integrating the above changes in world
|
||||||
|
frame
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math6.avif" />
|
||||||
|
|
||||||
|
We also must apply the modulo of `2 * pi` to the angle theta to keep it between
|
||||||
|
0 and `2 * pi`.
|
||||||
|
|
||||||
|
We note that the equation for getting the change in robot frame is given by
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math7.avif" />
|
||||||
|
|
||||||
|
where `d_r` and `d_l` are the integrated displacements traveled by the right and
left wheels and `l` is the distance between each wheel and the center of
rotation.
|
||||||
|
|
||||||
|
To get the integrated displacements `d_r` and `d_l`, we use the wheel encoder
|
||||||
|
ticks formula:
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math8.avif" />
|
||||||
|
|
||||||
|
where `r = 0.025` is the radius of the Duckiebot wheel and `resolution = 135`
|
||||||
|
is the number of ticks in one rotation of the wheel.
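
Putting those pieces together, the dead-reckoning update described above looks
roughly like this sketch (the value of `L` and the function names are
illustrative; the real code is in `odometry_publisher_node`):

```python
import math

R = 0.025          # wheel radius in meters
RESOLUTION = 135   # encoder ticks per full wheel rotation
L = 0.05           # assumed distance from each wheel to the center of rotation

def wheel_distance(delta_ticks):
    # arc length covered by one wheel for a change in encoder ticks
    return 2 * math.pi * R * delta_ticks / RESOLUTION

def update_pose(x, y, theta, d_l, d_r):
    d_a = (d_r + d_l) / 2            # displacement of the robot frame
    d_theta = (d_r - d_l) / (2 * L)  # rotation of the robot frame
    # integrate the robot-frame motion into the world frame
    x += d_a * math.cos(theta)
    y += d_a * math.sin(theta)
    theta = (theta + d_theta) % (2 * math.pi)
    return x, y, theta
```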
|
||||||
|
|
||||||
|
**How did you estimate/track the angles your DuckieBot has traveled?**
|
||||||
|
|
||||||
|
To update the angle theta that our DuckieBot has traveled, we used the matrix
|
||||||
|
multiplication above, which breaks down to the following equations for angle:
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math9.avif" />
|
||||||
|
|
||||||
|
where
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/lab2/math10.avif" />
|
||||||
|
|
||||||
|
**Can you explain why there is a difference between actual and desired
|
||||||
|
location?**
|
||||||
|
|
||||||
|
There are small errors that are mostly due to slippage and momentum. Since we do
|
||||||
|
dead reckoning, as the DuckieBot moves more the small errors compound. We note
|
||||||
|
that errors in the angle tend to drastically affect the bot's x, y position due
to the trigonometric functions used in the matrix. This causes the Duckiebot's
desired location to be drastically different from the actual location.
|
||||||
|
|
||||||
|
**Which topic(s) did you use to make the robot move? How did you figure out the
|
||||||
|
topic that could make the motor move?**
|
||||||
|
|
||||||
|
For our first wheel-moving demo we published into the `/$(arg
|
||||||
|
veh)/wheels_driver_node/wheels_cmd` topic. We found this topic with a mix of
|
||||||
|
`rostopic list`, `rostopic info`, and `rostopic echo` while using the joystick
|
||||||
|
to figure out which topics published what. Just as with the camera node, we took
|
||||||
|
advantage of remapping in the launch file with `arg` to let this run on
|
||||||
|
different duckiebots. We only used the wheel velocities on this topic's message
|
||||||
|
(`WheelsCmdStamped`) to guide it forward and negative velocities to guide it
|
||||||
|
backwards. However we mostly scrapped this code, since the node didn't bother
|
||||||
|
localizing itself. This'd mean if one of the motors spins slower than the other, the
|
||||||
|
bot would be completely off. It even makes it harder to guarantee we traveled
|
||||||
|
1.25m, since the two wheels had different distances traveled. While there is a
|
||||||
|
pre-built duckietown topic which gives pose, we instead opted to make a small
|
||||||
|
modification to our dead reckoning code from section 2 to record the video
|
||||||
|
linked below.
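
For reference, driving through that topic boils down to a publisher like this
sketch (hostname hardcoded here for brevity; our actual node takes it from the
launch file's `veh` arg):

```python
import rospy
from duckietown_msgs.msg import WheelsCmdStamped

rospy.init_node('forward_demo')
pub = rospy.Publisher('/csc22927/wheels_driver_node/wheels_cmd',
                      WheelsCmdStamped, queue_size=1)

cmd = WheelsCmdStamped()
cmd.vel_left = 0.4    # positive to go forward, negative to reverse
cmd.vel_right = 0.4

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    cmd.header.stamp = rospy.Time.now()
    pub.publish(cmd)
    rate.sleep()
```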
|
||||||
|
|
||||||
|
[Here's a link](https://www.youtube.com/watch?v=IpDTdR3xOlE) to our duckiebot
|
||||||
|
going forward then back. We were using a pretty high speed for this one, hence
|
||||||
|
the little jump it does when starting to reverse. We also hypothesized that part
|
||||||
|
of the problem here lies in how we immediately go in reverse gear, instead of
|
||||||
|
slowing down to a stop, then going in reverse. We use this slowing-down idea for
|
||||||
|
section 2, particularly with our turning logic.
|
||||||
|
|
||||||
|
We tried several speeds and came to an amusing conclusion. Using very slow
|
||||||
|
speeds ~0.3 and very high speeds like 1.0 resulted in about a 10-20cm difference
|
||||||
|
between the starting and final position. However using intermediary speeds like
|
||||||
|
0.7, which we thought would do best, actually performed the worst, with about a
20-40cm difference between the starting and ending positions. For the slow
case, the movement is accurate simply because the duckiebot doesn't introduce
|
||||||
|
nearly as much noise from slippage and momentum. However for high speeds, it
|
||||||
|
seems the slippage and momentum actually balances out! While the duckiebot will
|
||||||
|
drift longer from momentum, it will also read more traveled distance when its
|
||||||
|
wheels quickly slip. We were quite surprised about this, since we expected the
|
||||||
|
higher the speed the higher the noise, though our 3 empirical runs simply don't
|
||||||
|
support that conclusion
|
||||||
|
|
||||||
|
### Rotation task
|
||||||
|
|
||||||
|
The rotation task was pretty much identical to the driving forward task. We used
|
||||||
|
the same publisher (`/$(arg
|
||||||
|
veh)/wheels_driver_node/wheels_cmd`), just now we'd make one of the wheels go [in a
|
||||||
|
negative](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/odometry_node/src/odometry_driver_node.py#L158)
|
||||||
|
velocity, while the other is positive. To figure out our turn angle, we used a
|
||||||
|
pretty [simple
|
||||||
|
formula](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/odometry_node/src/odometry_driver_node.py#L174)
|
||||||
|
to measure the difference in angle. When we forgot the modulo, our bot sometimes
|
||||||
|
continued spinning forever. As you'll notice in the code, there're no guarantees
|
||||||
|
that the bot will nail its turn on the first try. Especially when we used high
|
||||||
|
velocity turns, the bot would often miss the target angle and make a complete
|
||||||
|
rotation once more before aligning up correctly. While this seems inefficient,
|
||||||
|
when we tried to make it turn in the closest direction, the back-and-forth
adjustment introduced so much slippage noise that the duckiebot's angle ended up
absurdly far off at high speeds.
|
||||||
|
|
||||||
|
In general, slow speeds worked much better for turning. Just [look at
|
||||||
|
this](https://www.youtube.com/watch?v=WniOrK1jwZs) supposed -90deg turn at max
|
||||||
|
velocity on both wheels. When we tested in the simulator, we used a turning
|
||||||
|
velocity of 0.3 and that worked perfectly. It was going to work nicely in
|
||||||
|
practice too... until we actually tried it. Turns out, the duckiebot is so
|
||||||
|
heavy, it cannot complete a 90deg turn at any velocity under 0.6. On our second
|
||||||
|
duckiebot, it sometimes managed with 0.5, though both just came to a halt when
|
||||||
|
we tried 0.4. This requirement for high-speed turns persisted as the largest
|
||||||
|
problem in getting the demo right in section 2.
|
||||||
|
|
||||||
|
### Graceful shutdown
|
||||||
|
|
||||||
|
This wasn't mentioned in the previous sections, despite being used in the
driving one: we used a `rospy.on_shutdown()` hook to make sure our nodes
would shut down correctly. This method takes in a callback function which is
called when rospy shuts down. Notably for the driving nodes, it was very
important that we stop the wheels before the node exits, otherwise they'd
spin endlessly. That also led us to write a small command to stop the
wheels
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rostopic pub -f my_msg.yml /path/to/topic msg/type
|
||||||
|
# Actually used example
|
||||||
|
rostopic pub -f stopstop /csc22927/wheels_driver_node/wheels_cmd duckietown_msgs/WheelsCmdStamped
|
||||||
|
```
|
||||||
|
|
||||||
|
Since the duckiebots occasionally still crash the script without calling
|
||||||
|
shutdown hooks, this method still ended up being a useful emergency stop for us
|
||||||
|
|
||||||
|
## Section 2 - Driving robot through stages
|
||||||
|
|
||||||
|
### General architecture
|
||||||
|
|
||||||
|
For section 2, we used 3 nodes across 2 packages. You can [view the code
|
||||||
|
here](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/waddle).
|
||||||
|
|
||||||
|
The odometry package includes two nodes. The first node called
|
||||||
|
`odometry_publisher_node` does our kinematics and localization calculations.
|
||||||
|
Each wheel is represented by its own dictionary. We mostly do calculations off
|
||||||
|
the `/{hostname}/{left,right}_wheel_encoder_node/ticks` topic, which publishes
|
||||||
|
the number of total ticks a wheel has accumulated. According to [the
|
||||||
|
documentation](https://docs.duckietown.org/daffy/duckietown-robotics-development/out/odometry_modeling.html),
|
||||||
|
this is 135 per full rotation. Since each wheel likely has a non-zero amount of
|
||||||
|
ticks when our node starts up, we start by [overwriting the number of wheel
|
||||||
|
ticks](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/odometry_node/src/odometry_publisher_node.py#L85)
|
||||||
|
taken on the first callback, so that relative changes can be calculated properly
|
||||||
|
|
||||||
|
The other package contains the LED node, which provides a service interface.
|
||||||
|
This is called from the `odometry_driver_node`, which acts as our central
|
||||||
|
"state" node
|
||||||
|
|
||||||
|
### LED service
|
||||||
|
|
||||||
|
We first implemented the LED node using the standard publisher-subscriber model
|
||||||
|
that's usually used in ROS. This was pretty simple: we just construct an
|
||||||
|
`LEDPattern` message then [publish
|
||||||
|
it](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/led_controls/src/led_controller_node.py#L50)
|
||||||
|
to the `/{hostname}/led_emitter_node/led_pattern` topic. This worked pretty
|
||||||
|
well, though we did notice in slow-motion footage that the front LEDs end up
|
||||||
|
switching noticeably faster than the back LEDs. We didn't notice any difference
|
||||||
|
with our human eyes. This approach also sometimes just went wonky. Roughly 1/4
|
||||||
|
runs ended up with a heterochromic duckiebot, at least at one of the states.
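
The publish-once version is roughly the following sketch (field names follow
the `duckietown_msgs/LEDPattern` definition we used; double-check them against
the message definition on your image version):

```python
import rospy
from duckietown_msgs.msg import LEDPattern
from std_msgs.msg import ColorRGBA

rospy.init_node('led_demo')
pub = rospy.Publisher('/csc22927/led_emitter_node/led_pattern',
                      LEDPattern, queue_size=1)
rospy.sleep(1.0)  # give the publisher a moment to connect

msg = LEDPattern()
msg.rgb_vals = [ColorRGBA(r=1.0, g=0.0, b=0.0, a=1.0)] * 5  # all 5 LEDs solid red
pub.publish(msg)
```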
|
||||||
|
|
||||||
|
In retrospect, we realized the problem is likely since we only publish once. We
|
||||||
|
should have taken advantage of the service-wrapper to set a message on our led
|
||||||
|
publisher node, then just have it continuously publish whatever the current
|
||||||
|
message is at a few Hz. That may have turned our duckiebot back to solid colors
if the first message got read poorly or something.
|
||||||
|
|
||||||
|
Now with the publisher-subscriber model in place, we needed to introduce a
|
||||||
|
service interface to this node. This took [1
|
||||||
|
line](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/led_controls/src/led_controller_node.py#L55),
|
||||||
|
though also about an hour of playing around with it, since the [official
|
||||||
|
tutorial](http://wiki.ros.org/ROS/Tutorials/WritingServiceClient%28python%29)
|
||||||
|
was a bit confusing about mixing the two models together. In the end, it looks
|
||||||
|
very similar to a subscriber from the server-side and a publisher from the
|
||||||
|
client-side, though of course the code is blocking. For our specific case, the
|
||||||
|
code works just as well with the publisher-subscriber model, though [this
|
||||||
|
community forum
|
||||||
|
post](https://answers.ros.org/question/11834/when-should-i-use-topics-vs-services-vs-actionlib-actions-vs-dynamic_reconfigure/)
|
||||||
|
seems generally in support of a service for how we're using it.
|
||||||
|
|
||||||
|
We tested our service by having the node call its own service in a while loop
|
||||||
|
while alternating colors. The bot ended up looking like a Christmas tree, though
we noticed that when the node shut down it'd leave the last color running. To address
|
||||||
|
this, we read up on [shutdown
|
||||||
|
hooks](https://wiki.ros.org/rospy/Overview/Initialization%20and%20Shutdown#Registering_shutdown_hooks)
|
||||||
|
and set one so that when the led controller node shuts down, it [turns off its
|
||||||
|
lights](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/led_controls/src/led_controller_node.py#L69).
|
||||||
|
|
||||||
|
On an amusing note, the `LEDPattern` message is written in `rgba`. Usually the
|
||||||
|
`a` means transparency, so Akemi guessed it must mean the brightness in our
|
||||||
|
context. However it turns out a lower `a` value gives "transparency" in the
|
||||||
|
sense of white saturation, so we'd get very pale colors. This leaves us
|
||||||
|
wondering if we can adjust the brightness beyond on-off, though we didn't figure
that out in this assignment.
|
||||||
|
|
||||||
|
### Movement
|
||||||
|
|
||||||
|
Getting the movement right took by far the most effort in this lab. Before we
|
||||||
|
continue, look at [how well it ended
|
||||||
|
up](https://www.youtube.com/watch?v=NFU9NcNew_w)! Except, that was from the
|
||||||
|
wrong starting position :sob:. We weren't able to get such a lucky run from the
|
||||||
|
correct starting position, though here's [our best
|
||||||
|
recording](https://www.youtube.com/watch?v=mHwQ-8XmVzc). Something seemed to
|
||||||
|
keep tripping it up on that specific starting tile, since it worked so much
|
||||||
|
better from the middle.
|
||||||
|
|
||||||
|
Before even trying to move the bot, Steven Tang decided to make a simulator so
|
||||||
|
that we could develop the code at home without the duckiebot. He pretty much
|
||||||
|
took the [duckietown wheel driver
|
||||||
|
node](https://github.com/duckietown/dt-duckiebot-interface/blob/daffy/packages/wheels_driver/src/wheels_driver_node.py)
|
||||||
|
then added [tick
|
||||||
|
publishers](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/heartbeat-ros/packages/mock_wheels_driver/src/mock_wheels_driver_node.py#L56)
|
||||||
|
for each wheel. He then added a [run
|
||||||
|
function](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/heartbeat-ros/packages/mock_wheels_driver/src/mock_wheels_driver_node.py#L114)
|
||||||
|
which simulated the bot actually moving, while updating its wheel velocities
|
||||||
|
based on what our odometry node is publishing to the
|
||||||
|
`/{hostname}/wheels_driver_node/wheels_cmd`.
|
||||||
|
|
||||||
|
This is really cool, since it let us work out all the math and dead-reckoning
|
||||||
|
ahead of time. It was working in the simulator perfectly. Then we put it on the
|
||||||
|
duckiebot and it didn't even move...
|
||||||
|
|
||||||
|
As mentioned previously, we found out the duckiebot doesn't have enough power to
|
||||||
|
turn without setting the wheel velocity magnitude to at least 0.6, at which
|
||||||
|
point so much noise is added, the calculations done in the simulator ended up
|
||||||
|
completely off.
|
||||||
|
|
||||||
|
We still retained the localization-based turning from before though. Instead of
|
||||||
|
telling the duckiebot to turn -90deg at the start, we told it to turn to 0deg in
|
||||||
|
the world frame.
|
||||||
|
|
||||||
|
In the end, our code ended up being [very
|
||||||
|
procedural](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/odometry_node/src/odometry_driver_node.py#L275).
|
||||||
|
For each state, we have a different function. In general they start off by using
|
||||||
|
a service request to change the LED color, then either move or just wait until
|
||||||
|
the state is considered over. As usually goes for procedural code like this,
|
||||||
|
manual tuning becomes really important. We needed to tune a lot, which was an
|
||||||
|
extremely slow process with the provided duckietown docker container. Every time
|
||||||
|
we changed a single parameter like the speed, the entire catkin build system
|
||||||
|
would rerun, which took about 2mins.
|
||||||
|
|
||||||
|
To accelerate this, we ended up with a very hacky [parameters
|
||||||
|
file](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/waddle/config/params.json).
|
||||||
|
This json effectively stores global variables for the workspace. Nodes read this
|
||||||
|
[file in
|
||||||
|
directly](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/packages/odometry_node/src/odometry_driver_node.py#L47)
|
||||||
|
and adjust their instance parameters. The genius behind this is how the
|
||||||
|
Dockerfile doesn't have to rebuild the entire workspace; it only needs to [copy
|
||||||
|
over the updated
|
||||||
|
param.json](https://codeberg.org/akemi/duckietown/src/commit/392ef3a55c166f4a18ada428a5793feac5ffc613/lab2/waddle/Dockerfile#L79)
|
||||||
|
file, which reduced the build time to just a few seconds. We do know that ROS
|
||||||
|
actually has a parameters system built in, which would allow modifying
|
||||||
|
parameters at runtime too, though this method was quicker and easier for this
|
||||||
|
lab. We'll be looking to use the native ROS version in the upcoming labs, since
|
||||||
|
the build time is unlikely to improve.
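
The node-side half of that pattern is tiny. A sketch (the path is whatever the
Dockerfile copies the file to, so treat it as an assumption):

```python
import json

with open('/code/catkin_ws/src/waddle/config/params.json') as f:
    params = json.load(f)

# Illustrative keys; the real file holds whatever the driver node needs tuned
speed = params.get('forward_speed', 0.4)
turn_speed = params.get('turn_speed', 0.6)
```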
|
||||||
|
|
||||||
|
Our final video [is here](https://www.youtube.com/watch?v=mHwQ-8XmVzc), the same
|
||||||
|
one from the start of this section. The final distance was 64cm when measured by
|
||||||
|
AR-ruler. About 62cm when measured by a prehistoric 90cm-stick:
|
||||||
|
|
||||||
|
<div><img
|
||||||
|
src="../../../src/assets/duckietown/lab2/lab2_final_position.avif"
|
||||||
|
alt="64cm measured distance in an AR-ruler screenshot of an iPhone"
|
||||||
|
style="width: 100%; height: 100%"
|
||||||
|
/></div>
|
||||||
|
|
||||||
|
<div><img
|
||||||
|
src="../../../src/assets/duckietown/lab2/lab2_final_dist_ar.avif"
|
||||||
|
alt="64cm measured distance in an AR-ruler screenshot of an iPhone"
|
||||||
|
/></div>
|
||||||
|
|
||||||
|
### Putting it all together
|
||||||
|
|
||||||
|
For the video above, we recorded [this
|
||||||
|
bag](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/bag-decoder/kinematics.bag).
|
||||||
|
[Here we play it back](https://youtu.be/WjkHO1CmJFQ) on a local rosmaster and
|
||||||
|
see the messages being sent live with `rostopic echo`. We then plotted it with
|
||||||
|
matplotlib in [this
|
||||||
|
ipython-notebook](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/bag-decoder/decode.ipynb),
|
||||||
|
with the resulting image here:
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab2/quiver_plot_sparse.avif"
|
||||||
|
alt="A quiver plot (end-to-end arrows) of the robot traveling in a square"
|
||||||
|
/>
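
The core of that notebook boils down to something like this sketch (the topic
name and message fields are assumptions based on our pose topic; check
`rosbag info kinematics.bag` for the real ones):

```python
import rosbag
import numpy as np
import matplotlib.pyplot as plt

xs, ys, thetas = [], [], []
with rosbag.Bag('kinematics.bag') as bag:
    for _, msg, _ in bag.read_messages(topics=['/csc22927/odometry_publisher_node/pose']):
        xs.append(msg.x)
        ys.append(msg.y)
        thetas.append(msg.theta)

xs, ys, thetas = map(np.array, (xs, ys, thetas))
plt.quiver(xs, ys, np.cos(thetas), np.sin(thetas), angles='xy')
plt.axis('equal')
plt.show()
```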
|
||||||
|
|
||||||
|
This plot shows what the duckiebot thinks it's doing, which, looking at the
video, clearly doesn't align with what's actually taking place. A quiver-plot is
|
||||||
|
nice, since it gives us a pretty good idea of what the pose is. Notably, unlike
|
||||||
|
scatter-plots, we can see the duckiebot turning on the spot in the 4 corners. We
|
||||||
|
can also see the duckiebot go backwards from the top left corner down, just like
|
||||||
|
we coded it up to do.
|
||||||
|
|
||||||
|
The initial turn, which in reality was the worst, looks as if it was nailed spot
|
||||||
|
on. The duckiebot does pick up the slight drift it has when going in a straight
|
||||||
|
line, which matches the results we saw in our forward-backward test from
|
||||||
|
section 1.
|
||||||
|
|
||||||
|
We found it amusing how far off the results for our circle were though. In the
|
||||||
|
quiver-plot, the duckiebot completed most of a perfect circle. In reality, it
|
||||||
|
did about 1.5 circles in an ellipse. This was likely a mix of slippage, the
|
||||||
|
wheel's wobble, and the angle of the wheel on the duckiebot, which was certainly
|
||||||
|
not perfectly parallel with the sides. To test out how bad it is, we modified
|
||||||
|
our program to make the duckies go in an endless circle and launched it on both
|
||||||
|
our duckiebots. [Here's a video](https://www.youtube.com/shorts/Qf5KefGTbXg).
|
||||||
|
Even in that short clip, the two robots clearly had their circles change the
|
||||||
|
amount they overlap from the start to the end, so there's actually quite a bit
|
||||||
|
of noise during the turn. In the Tuesday, Feb 6th lecture, we also learned how
turning is disastrously more noisy than going in a straight line, so the
|
||||||
|
quiver-plot's disconnection from reality makes sense.
|
||||||
|
|
||||||
|
<!-- COMMENTED STUFF STARTS HERE, IGNORE THIS
|
||||||
|
|
||||||
|
## Part 1 - Reading and preparation
|
||||||
|
|
||||||
|
### Heartbeat package
|
||||||
|
|
||||||
|
Out of interest, or rather to make the template repository useful. I made a
|
||||||
|
simple heartbeat package to indicate the bot is still running. It has two
|
||||||
|
publishers and two receivers, each in their own separate private namespace,
|
||||||
|
which publish and receive a message about once per minute
|
||||||
|
|
||||||
|
The launch file has a nice way of remapping topics for each namespace. In the
|
||||||
|
following example, the subscriber is actually listening for the topic
|
||||||
|
`~chatter`, though our publisher is publishing to `~heartbeat`. This can all be
|
||||||
|
remapped in the launch file, without needing to change the underlying code
|
||||||
|
|
||||||
|
```xml
|
||||||
|
<remap from="heartbeat_subscriber/chatter" to="heartbeat_publisher/heartbeat" />
|
||||||
|
<node
|
||||||
|
pkg="heartbeat_package"
|
||||||
|
type="heartbeat_subscriber_node.py"
|
||||||
|
name="heartbeat_subscriber"
|
||||||
|
output="screen"/>
|
||||||
|
```
|
||||||
|
|
||||||
|
Launch files have two ways to specific remaps. The one above I feel is much
|
||||||
|
worse, though might work for multiple nodes at once. I prefer a more explicit
|
||||||
|
html style:
|
||||||
|
|
||||||
|
|
||||||
|
```xml
|
||||||
|
<node
|
||||||
|
pkg="heartbeat_package"
|
||||||
|
type="heartbeat_subscriber_node.py"
|
||||||
|
name="heartbeat_subscriber"
|
||||||
|
output="screen">
|
||||||
|
<remap from="heartbeat_subscriber/chatter" to="heartbeat_publisher/heartbeat" />
|
||||||
|
</node>
|
||||||
|
```
|
||||||
|
|
||||||
|
### Emergency
|
||||||
|
|
||||||
|
`rostopic pub` sends a repeated stream of the same message to the bot. This can
|
||||||
|
be used for emergency stops
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rostopic pub -f my_msg.yml /path/to/topic msg/type
|
||||||
|
# Actually used example
|
||||||
|
rostopic pub -f stopstop /csc22927/wheels_driver_ndoe/wheels_cmd duckietown_msg/WheelsCmdStamped
|
||||||
|
```
|
||||||
|
|
||||||
|
We can also add a shutdown hook to do something similar in the code, though I'm
|
||||||
|
not sure it'll work if rospy crashes before shutting down properly
|
||||||
|
|
||||||
|
```python
|
||||||
|
def emergency_halt():
|
||||||
|
node.publish_speed(0.0)
|
||||||
|
rospy.loginfo("Sent emergency stop")
|
||||||
|
|
||||||
|
rospy.on_shutdown(emergency_halt)
|
||||||
|
```
|
||||||
|
|
||||||
|
All node types in duckietown can be found
|
||||||
|
[here](https://github.com/duckietown/dt-ros-commons/blob/59d4b0b1b565408b7238eafd02be673035771ccf/packages/duckietown/include/duckietown/dtros/constants.py#L46)
|
||||||
|
|
||||||
|
### ROS Commands
|
||||||
|
|
||||||
|
Here are translated commands between ros one and two. First line is ros1 second
|
||||||
|
line is ros2:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
rostopic list
|
||||||
|
ros2 topic list
|
||||||
|
|
||||||
|
rostopic echo /rossytopic
|
||||||
|
ros2 topic echo /rossytopic
|
||||||
|
|
||||||
|
# Get an idea of publishing rate
|
||||||
|
rostopic hz /rossytopic
|
||||||
|
ros2 topic hz /rossytopic
|
||||||
|
|
||||||
|
# TODO
|
||||||
|
rosparam TODO
|
||||||
|
TODO
|
||||||
|
|
||||||
|
# Bags from `cd bag_files` ====
|
||||||
|
TODO
|
||||||
|
ros2 bag record -o rossy_bag /rossytopic /camera/feed
|
||||||
|
|
||||||
|
TODO
|
||||||
|
ros2 bag info rossy_bag
|
||||||
|
|
||||||
|
TODO
|
||||||
|
ros2 bag play rossy_bag
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
There's also the catkin vs colcon stuff
|
||||||
|
|
||||||
|
```bash
|
||||||
|
catkin_make # Sort of... like a zeroconf version of catkin build
|
||||||
|
colcon build --symlink-install # Simlink flag is optional
|
||||||
|
```
|
||||||
|
|
||||||
|
For messages, look directly into the source code, since there is no
|
||||||
|
documentation
|
||||||
|
|
||||||
|
[For sensor_msg](http://wiki.ros.org/sensor_msgs)
|
||||||
|
[For rospy classes](https://github.com/ros/ros_comm/tree/noetic-devel/tools/rosbag/src)
|
||||||
|
|
||||||
|
### Dockerz
|
||||||
|
|
||||||
|
Dts attempts to handle everything for us, though since it's quite buggy, it'd be
|
||||||
|
nicer to use directly docker instead. Also,
|
||||||
|
|
||||||
|
Startup commands
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# On host
|
||||||
|
docker compose up -d
|
||||||
|
docker attach $(docker compose ps -q)
|
||||||
|
|
||||||
|
# Now we're in the container
|
||||||
|
cd /media/duckie_time/duckietown/lab2/<workspace>
|
||||||
|
catkin_make
|
||||||
|
source devel/setup.bash
|
||||||
|
|
||||||
|
# Ready to go. For example...
|
||||||
|
roslaunch custom_camera_node camera.launch veh:=csc22927
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
## Other references
|
||||||
|
|
||||||
|
[This UM course](https://liampaull.ca/ift6757/schedule/index.html) might be
|
||||||
|
helpful?
|
||||||
|
|
||||||
|
[Duckietown drivers](https://github.com/duckietown/dt-duckiebot-interface)
|
||||||
|
|
||||||
|
-->
|
---
|
||||||
|
title: 'DuckieTown - Lab 3'
|
||||||
|
description: "Localization through Sensor Fusion"
|
||||||
|
pubDate: 'March 05 2023'
|
||||||
|
heroImage: '../../../src/assets/duckietown/lab3/ar-ducks.avif'
|
||||||
|
---
|
||||||
|
|
||||||
|
A screenshot of our [Unit A-4 Advanced Augmented Reality
|
||||||
|
Exercise](https://docs.duckietown.org/daffy/duckietown-classical-robotics/out/cra_apriltag_augmented_reality_exercise.html)
|
||||||
|
results.
|
||||||
|
|
||||||
|
# Localization and Sensor Fusion - Third lab
|
||||||
|
|
||||||
|
## Part One - Computer Vision
|
||||||
|
|
||||||
|
### Deliverable 1: April Tag Detection and Labeling
|
||||||
|
|
||||||
|
The following video depicts our apriltag detector image topic viewed with
|
||||||
|
`rqt_image_view`, demonstrating our apriltag node detecting several apriltags and
|
||||||
|
labeling each with its bounding box and ID number.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/gAck5-vHF6U" title="YouTube video player"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
### What does the april tag library return to you for determining its position?
|
||||||
|
|
||||||
|
The april tag library returns `pose_R` with shape `(3, 3)` which is the rotation
|
||||||
|
matrix and `pose_t` with shape `(3,)` which is a translation vector of the
|
||||||
|
april tag in the `camera_optical_frame` frame.
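
A typical pose-returning detection call looks like this sketch, assuming the
`dt_apriltags` bindings; the intrinsics and tag size below are placeholders,
with the real values coming from the intrinsics calibration file and the
printed tag's side length:

```python
import cv2
from dt_apriltags import Detector

fx, fy, cx, cy = 310.0, 310.0, 320.0, 240.0  # placeholder intrinsics
tag_size = 0.065                             # placeholder tag side length in meters

detector = Detector(families='tag36h11')
gray = cv2.cvtColor(cv2.imread('frame.png'), cv2.COLOR_BGR2GRAY)
for det in detector.detect(gray, estimate_tag_pose=True,
                           camera_params=(fx, fy, cx, cy), tag_size=tag_size):
    print(det.tag_id, det.pose_t.ravel())    # tag ID and its translation in the camera frame
```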
|
||||||
|
|
||||||
|
### Which directions do the X, Y, Z values of your detection increase / decrease?
|
||||||
|
|
||||||
|
`X` corresponds to the horizontal translation of the april tag relative to the camera,
|
||||||
|
it increases when the april tag is moved towards the right of the camera and
|
||||||
|
decreases when the april tag is moved towards the left of the camera.
|
||||||
|
|
||||||
|
`Y` corresponds to the vertical translation of the april tag relative to the camera,
|
||||||
|
it increases when the april tag is moved above the camera and
|
||||||
|
decreases when the april tag is moved below the camera.
|
||||||
|
|
||||||
|
`Z` corresponds to the depth of the april tag relative to the camera, it
|
||||||
|
increases when the april tag is moved further from the camera and decreases
|
||||||
|
when the april tag is moved closer to the camera.
|
||||||
|
|
||||||
|
### What frame orientation does the april tag use?
|
||||||
|
|
||||||
|
The [april tag frame](https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide#coordinate-system)
|
||||||
|
is centered at the center of the tag, with the positive x-axis pointing
towards the right of the tag, the positive y-axis pointing towards the bottom
of the tag, and the positive z-axis pointing into the tag.
|
||||||
|
|
||||||
|
This is nicely visualized in this diagram from Figure 4.4 of [Unit B-4 Exercises
|
||||||
|
- state estimation and sensor fusion in the
|
||||||
|
Duckmentation](https://docs.duckietown.org/daffy/duckietown-classical-robotics/out/exercise_sensor_fusion.html#fig:at-lib-frame-convention-wrap).
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab3/at-frame-convention.jpg"
|
||||||
|
alt="Frame convention used by april tag library when returning pose"
|
||||||
|
/>
|
||||||
|
|
||||||
|
### Why are detections from far away prone to error?
|
||||||
|
|
||||||
|
Far away april tags have fewer pixels to detect, so they are more prone to
|
||||||
|
error in their pose estimations and to unstable detection, as can be seen in the
|
||||||
|
video with apriltag ID 94.
|
||||||
|
|
||||||
|
### Why may you want to limit the rate of detections?
|
||||||
|
|
||||||
|
April tag detection is computationally expensive, so limiting the rate of
|
||||||
|
detections can reduce Duckiebot CPU usage. Furthermore, if the image does not
|
||||||
|
change much, it can be unnecessary to recompute April tag detections.
|
||||||
|
|
||||||
|
### Learnings
|
||||||
|
|
||||||
|
From looking at the
|
||||||
|
[camera_driver](https://github.com/duckietown/dt-duckiebot-interface/blob/daffy/packages/camera_driver/)
|
||||||
|
source code, I learned that we can use
|
||||||
|
[`cv_bridge`](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)
|
||||||
|
to convert between OpenCV images and `CompressedImage` messages
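
The `cv_bridge` round-trip mentioned above looks roughly like this (sketch
only; the detection-drawing part is elided):

```python
from cv_bridge import CvBridge
from sensor_msgs.msg import CompressedImage

bridge = CvBridge()

def on_image(msg: CompressedImage) -> CompressedImage:
    bgr = bridge.compressed_imgmsg_to_cv2(msg, desired_encoding='bgr8')
    # ... run the apriltag detector and draw boxes/IDs on `bgr` here ...
    out = bridge.cv2_to_compressed_imgmsg(bgr)
    out.header = msg.header
    return out
```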
|
||||||
|
|
||||||
|
The [image_geometry](https://docs.ros.org/en/api/image_geometry/html/python/)
|
||||||
|
package was useful for undistorting raw images using the intrinsics
|
||||||
|
calibrations.
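
And a sketch of the undistortion step with `image_geometry`, assuming
`camera_info` is the `CameraInfo` message carrying the intrinsics calibration
and `raw_image` is the distorted image from `cv_bridge`:

```python
import numpy as np
from image_geometry import PinholeCameraModel

model = PinholeCameraModel()
model.fromCameraInfo(camera_info)        # CameraInfo message with the intrinsics calibration

rectified = np.empty_like(raw_image)     # output buffer for the undistorted image
model.rectifyImage(raw_image, rectified) # fills `rectified` with the undistorted image
```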
|
||||||
|
|
||||||
|
### Challenges
|
||||||
|
|
||||||
|
The `template-ros` repository has a `.dockerignore` that ignores our
additional files and directories, so we have to whitelist them like:
|
||||||
|
|
||||||
|
```.dtignore
|
||||||
|
!maps
|
||||||
|
```
|
||||||
|
|
||||||
|
After upgrading to Docker 23.0.1, `dts devel run` would error with message:
|
||||||
|
`docker: Error response from daemon: No command specified.`.
|
||||||
|
|
||||||
|
I had to [downgrade](https://docs.docker.com/engine/install/ubuntu/) to Docker
|
||||||
|
20.10.23 to get it to work again.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
export VERSION_STRING=5:20.10.23~3-0~ubuntu-focal
|
||||||
|
sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin
|
||||||
|
```
|
||||||
|
|
||||||
|
## Part Two - Lane Following
|
||||||
|
|
||||||
|
### Deliverable 2: Lane Following English Driver Style
|
||||||
|
|
||||||
|
This video shows our bot completing the full map in the English-driver style.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/lVeuNHGCy6w"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
The English-driver was implemented with a P-controller and drives on the left.
|
||||||
|
|
||||||
|
Here are some videos for other driving styles we implemented:
|
||||||
|
|
||||||
|
- [American driver](https://www.youtube.com/watch?v=K4qqWjGXsec). 0.3 velocity,
|
||||||
|
P-controller, drives on the right
|
||||||
|
- [Drunk driver](https://www.youtube.com/watch?v=Nt8Qxyd7FjQ). 0.6 velocity,
|
||||||
|
P-controller, attempts to drive on the left
|
||||||
|
|
||||||
|
### What is the error for your PID controller?
|
||||||
|
|
||||||
|
We use a multi-stage OpenCV pipeline to process the image (a rough sketch
follows the list):
|
||||||
|
|
||||||
|
1. Crop out the top `1/3` and bottom `1/5`.
|
||||||
|
2. Blur the entire image by 5 pixels in both directions
|
||||||
|
3. Apply HSV thresholds to mask out everything except the dashed yellow line.
|
||||||
|
We have two separate thresholds, one for each room. Both thresholds mask out
|
||||||
|
ducks, though the `csc229` one doesn't correctly mask out the carpet in
|
||||||
|
`csc235`.
|
||||||
|
4. Detect contours in the new black-and-white image
|
||||||
|
5. Sort contours by area. We initially considered using the top-k contours and
|
||||||
|
averaging them, though we found outlier contours can do a lot more damage
|
||||||
|
this way, so we just used the single largest one instead
|
||||||
|
6. Find the center of the largest contour
|
||||||
|
7. Draw a line from the center of the largest contour to the right (left with
|
||||||
|
English-driving) with a length of exactly the distance of the center point
|
||||||
|
from the top of the image
|
||||||
|
8. **Set the error** as the signed difference between the x-coordinate of the
|
||||||
|
line's right-most point and the x-coordinate of the center of the image.
|
||||||
|
This means the error is measured in pixels.
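
A rough sketch of that pipeline in OpenCV is below; the HSV bounds are
placeholders, not our actual per-room calibration:

```python
import cv2
import numpy as np

YELLOW_LO, YELLOW_HI = (20, 80, 80), (35, 255, 255)   # placeholder HSV thresholds

def lane_error(bgr, english=False):
    h, w = bgr.shape[:2]
    crop = bgr[h // 3 : h - h // 5]                    # drop top 1/3 and bottom 1/5
    blur = cv2.blur(crop, (5, 5))                      # 5 px blur in both directions
    hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, YELLOW_LO, YELLOW_HI)      # keep only the dashed yellow line
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)             # single largest contour only
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centre of the largest contour
    tip_x = cx - cy if english else cx + cy            # horizontal line of length cy
    return tip_x - w / 2                               # signed error, in pixels
```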
|
||||||
|
|
||||||
|
### If your proportional controller did not work well alone, what could have caused this?
|
||||||
|
|
||||||
|
Initially, our controller failed to work since we processed the images at 3 Hz.
The logic was that OpenCV is a "heavy" process, so processing at a high
frequency would just lead to pose calculations piling up in a queue. The system
suddenly started working when we changed this to 30 Hz at 0.3 velocity.
|
||||||
|
|
||||||
|
However, at 0.6 velocity, the P-term-only controller really struggled (see
|
||||||
|
[Drunk driver video](https://youtu.be/Nt8Qxyd7FjQ)). Our P-term-only controller made
|
||||||
|
the English driver look more like the drunk driver. It didn't work well since a
|
||||||
|
P term fails to consider the momentum built up by the system. At 0.3 velocity,
|
||||||
|
there isn't enough momentum to affect our controller noticeably. However, at 0.6
|
||||||
|
velocity this momentum leads to noticeable oscillation and hard turning
|
||||||
|
overshoots.
|
||||||
|
|
||||||
|
### Does the D term help your controller logic? Why or why not?
|
||||||
|
|
||||||
|
The D term was pointless at 0.3. Actually it was detrimental, since the logic
|
||||||
|
of the program got harder to debug. We did try to implement one when using 0.6
velocity; however, with 2-minute build times, we weren't able to tune it
sufficiently. An untuned D-term was far worse than a tuned P-controller,
mostly since
|
||||||
|
it kept fighting against itself on the turns, which made it end up off the
|
||||||
|
track.
|
||||||
|
|
||||||
|
### (Optional) Why or why not was the I term useful for your robot?
|
||||||
|
|
||||||
|
Given how problematic the D-term was, we never even considered adding an I-term
|
||||||
|
during this assignment. We doubt it'd be much help either, since there isn't
|
||||||
|
any steady-state error to correct against with an integral term when we're just
|
||||||
|
driving on a horizontal surface.
|
||||||
|
|
||||||
|
## Part Three - Localization using Sensor Fusion
|
||||||
|
|
||||||
|
### Deliverable 3: Record a video on your computer showing RViz: displaying your camera feed, odometry frame and static apriltag frames as it completes one circle. You can drive using manual control or lane following.
|
||||||
|
|
||||||
|
Here's a video of RViz with our Duckiebot driving in a circle. We observe that
dead reckoning gives reasonable predictions at first; however, after a few
turns the position estimated using the wheel encoders drifts significantly
from the actual position of the robot.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/-LOutfERpKI"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
### Where did your odometry seem to drift the most? Why would that be?
|
||||||
|
|
||||||
|
Most of the drifting occurs as a result of turning the Duckiebot. We learned in
|
||||||
|
class that turns introduce much more uncertainty than forward driving, as
|
||||||
|
turns involve more slippage which makes the dead reckoning angle inaccurate.
|
||||||
|
A slightly inaccurate angle quickly causes the odometry error to compound, as
dead reckoning integrates the errors over time.
|
||||||
|
|
||||||
|
### Did adding the landmarks make it easier to understand where and when the odometry drifted?
|
||||||
|
|
||||||
|
Yes, quite a bit. Especially in areas dense with landmarks, like the
intersections, we can very quickly tell how far the duckiebot has
|
||||||
|
drifted. In our video this particularly shows itself around the middle, when
|
||||||
|
the bot is an entire intersection ahead of where Rviz seems to show it.
|
||||||
|
|
||||||
|
### Deliverable 4: Attach the generated transform tree graph, what is the root/parent frame?
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="325"
|
||||||
|
src="../../../src/assets/duckietown/lab3/footprint_as_root_tf.pdf"
|
||||||
|
frameborder="0">
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
It is quite the big graph; if you need to zoom in, click
|
||||||
|
[here](../../../src/assets/duckietown/lab3/footprint_as_root_tf.pdf) to open the generated
|
||||||
|
transform tree graph. The root of this tree is
|
||||||
|
`csc22927/footprint`.
|
||||||
|
|
||||||
|
### Move the wheels and make note of which joint is moving, what type of joint is this?
|
||||||
|
|
||||||
|
The moving joints are named: `csc22927_left_wheel_axis_to_left_wheel` and
|
||||||
|
`csc22927_left_wheel_axis_to_right_wheel`. The type of joint is `continuous`, as
|
||||||
|
seen in this
|
||||||
|
[urdf](https://github.com/duckietown/dt-duckiebot-interface/blob/daffy/packages/duckiebot_interface/urdf/duckiebot.urdf.xacro#L107).
|
||||||
|
This [joint type](http://wiki.ros.org/urdf/XML/joint) makes sense as a wheel can
|
||||||
|
be thought of as a continuous hinge joint that rotates around its axis with no
|
||||||
|
bounded limits. The links `csc22927/left_wheel` and `csc22927/right_wheel` are
the frames that visually spin in Rviz.
|
||||||
|
|
||||||
|
### You may notice that the wheel frames rotate when you rotate the wheels, but the frames never move from the origin? Even if you launch your odometry node the duckiebot’s frames do not move. Why is that?
|
||||||
|
|
||||||
|
That's because our parent frame is the `csc22927/footprint` frame of reference.
|
||||||
|
Even if the entire robot and odometry is moving, the relative positions of the
|
||||||
|
duckiebot's frames to one another never change from their description in the
|
||||||
|
URDF file.
|
||||||
|
|
||||||
|
The wheels do rotate, since that relative change is related by a transform with
|
||||||
|
respect to the footprint's frame. The wheel's rotation isn't fixed relative to
|
||||||
|
the footprint's frame on two axes.
|
||||||
|
|
||||||
|
### What should the translation and rotation be from the odometry child to robot parent frame? In what situation would you have to use something different?
|
||||||
|
|
||||||
|
The transformation from the odometry child to robot parent frame is zero
|
||||||
|
rotation, zero translation (written as an identity homogeneous transformation
matrix). In other words, our robot parent frame is identical to
|
||||||
|
the odometry child frame. If they weren't the same, we'd have to actually apply
|
||||||
|
a non-trivial transformation to align their coordinate frames.
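
Written out (standard notation, nothing specific to our setup), that identity
homogeneous transform is just:

```latex
\\[ {}^{\text{robot}}T_{\text{odom}} = \begin{bmatrix} R & t \\\\ \mathbf{0}^\top & 1 \end{bmatrix} = \begin{bmatrix} I_{3} & \mathbf{0} \\\\ \mathbf{0}^\top & 1 \end{bmatrix} = I_{4} \\]
```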
|
||||||
|
|
||||||
|
### After creating this link generate a new transform tree graph. What is the new root/parent frame for your environment?
|
||||||
|
|
||||||
|
The `world` frame is the new root of our environment.
|
||||||
|
|
||||||
|
### Can a frame have two parents? What is your reasoning for this?
|
||||||
|
|
||||||
|
No, it's called a transform *tree* graph for a reason. To clarify, it is
|
||||||
|
possible to transform between any two nodes in the same tree graph, though
|
||||||
|
having two parent frames is not possible.
|
||||||
|
|
||||||
|
Consider a situation where a frame really is defined by two parents. Say it's a
|
||||||
|
4 cm x-translation relative to parent A's frame and a 2 cm x-translation relative
|
||||||
|
to parent B's frame. This immediately implies parent A's frame is -2 cm in
|
||||||
|
parent B's frame. However, since there isn't any direct dependency between them,
|
||||||
|
it'd be possible to violate this assumption by changing the child's position
|
||||||
|
relative to the two parent frames in an inconsistent way. To guarantee this
|
||||||
|
correlation, we'd have to express either parent A or B in the other's frame,
|
||||||
|
which brings us back to 1 parent per node.
|
||||||
|
|
||||||
|
### Can an environment have more than one parent/root frame?
|
||||||
|
|
||||||
|
Yes, unlike the one parent per node requirement in transform trees, having two
|
||||||
|
or more separate trees in the same environment doesn't create any implicit
|
||||||
|
assumptions, since the trees are completely disjoint. However, by doing so you
|
||||||
|
remove the ability to transform between frames in two different trees.
|
||||||
|
|
||||||
|
### Deliverable 5: Attach your newly generated transform tree graph, what is the new root/parent frame?
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="260"
|
||||||
|
src="../../../src/assets/duckietown/lab3/footprint_as_child_tf.pdf"
|
||||||
|
frameborder="0">
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
It is quite the big graph; if you need to zoom in, click
|
||||||
|
[here](../../../src/assets/duckietown/lab3/footprint_as_child_tf.pdf) to open the newly generated
|
||||||
|
transform tree graph.
|
||||||
|
|
||||||
|
The new root/parent is the `world` frame.
|
||||||
|
|
||||||
|
### Deliverable 6: Record a short video of your robot moving around the world frame with all the robot frames / URDF attached to your moving odometry frame. Show the apriltag detections topic in your camera feed and visualize the apriltag detections frames in rviz.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/6CkAkg7tt18"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
### How far off are your detections from the static ground truth?
|
||||||
|
|
||||||
|
At first it was surprisingly not bad. Up until the third (bottom left) turn, it
|
||||||
|
was within 60 cm of where it really was. However, on that third turn the odometry
|
||||||
|
failed to capture how sharp the turn actually was. This made the estimates
|
||||||
|
extremely far off from that point forward.
|
||||||
|
|
||||||
|
### What are two factors that could cause this error?
|
||||||
|
|
||||||
|
Our detections are quite far off from the ground truth. Two factors that could
|
||||||
|
cause this error are inaccurate odometry from wheel encoders and camera
|
||||||
|
transform errors arising from camera distortion.
|
||||||
|
|
||||||
|
### Challenges
|
||||||
|
|
||||||
|
One challenge we faced was actually trying to get the recording of Rviz. If we
|
||||||
|
start Rviz before our transforms are being broadcast, it just doesn't see them
|
||||||
|
and there doesn't seem to be any way to refresh Rviz. If we start it too late,
|
||||||
|
the screen recording starts with the bot having moved for several seconds
|
||||||
|
already.
|
||||||
|
|
||||||
|
Our solution to this was pure genius:
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab3/bottle_bots.avif"
|
||||||
|
alt="Big waterbottle on top of duckiebot"
|
||||||
|
/>
|
||||||
|
|
||||||
|
By holding down the bot at the start, the odometry node doesn't move at all!
|
||||||
|
This gave us the time we needed to start up Rviz; then we just took the bottle
|
||||||
|
off. The videos we present here had that initial ~30 s of just sitting there
|
||||||
|
cropped out.
|
||||||
|
|
||||||
|
Another big challenge was the latency introduced by the `deadreckoning` and
`apriltag` nodes. When we did part 2, only the lane-related nodes were running;
nothing else. However, when we enabled our 30 Hz `apriltag` node and 10 Hz
`deadreckoning` node, our lane following code gained about 3 s of latency.
3 s of latency is fatal on a turn, since the lane goes out of frame, so lane
following effectively stopped working.
|
||||||
|
|
||||||
|
We fixed this by lowering the `apriltag` rate to 1 Hz, which is why the video
|
||||||
|
stream is so much more delayed than the transforms in Rviz, and lowering the
|
||||||
|
dead reckoning node to 4 Hz. This restored the bot's ability to lane follow. We
also turned off the front LEDs so they wouldn't mess up the lane detector's
color masking, indicating apriltag detections with the two rear LEDs instead.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/0DealqGGdek"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
|
||||||
|
### Deliverable 7: Show a short video of your robot moving around the entire world (see image below) using lane following and have your sensor fusion node teleport the robot if an apriltag is found and use odometry if no apriltag is detected. Try to finish as close to your starting point as possible.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/nXkt3ZIBBx0"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
We do 3 laps in the video above to get a better sense of how little it drifts
over time. We ended up finishing very close to where we started, and much of
the remaining error can be attributed to the stochasticity of the lane-finder
node, so landmark-based localization was a great help.
|
||||||
|
|
||||||
|
### Is this a perfect system?
|
||||||
|
|
||||||
|
No, while this system is certainly better than dead reckoning, there are still
|
||||||
|
some inaccuracies in our localization. In the video above, we observe errors in
|
||||||
|
localization when our bot sporadically teleports at the top of the map. However,
|
||||||
|
using the fixed landmarks meant that even after 3 laps, it was able to localize
|
||||||
|
itself quite close to its actual position.
|
||||||
|
|
||||||
|
### What are the causes for some of the errors?
|
||||||
|
|
||||||
|
When april tags are far away, the pose estimates are not perfect and can cause
|
||||||
|
our bot to teleport to the wrong place as seen in the video when it only sees
|
||||||
|
tag 200 without tag 201. Latency in the april tag node also causes minor errors
|
||||||
|
when our bot has moved since the detection was made. Sometimes our localization
|
||||||
|
fails right after we turn, as the april tag is no longer visible, and another
|
||||||
|
april tag has not been detected yet. The bot reverts to relying on only dead
|
||||||
|
reckoning, which is especially problematic after turning for reasons described
|
||||||
|
in an earlier question.
|
||||||
|
|
||||||
|
### What other approaches could you use to improve localization?
|
||||||
|
|
||||||
|
We could have tried using multiple april tags (when visible) to improve our
|
||||||
|
localization estimate.
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab3/birds_eye_view.avif"
|
||||||
|
alt="Bird's eye view from camera"
|
||||||
|
/>
|
||||||
|
Bird's eye view obtained using a projective transformation of the image.
|
||||||
|
|
||||||
|
Another approach, which would address situations where april tags are not
visible, is to reuse our code that gets a bird's eye view of the lane markings,
fit lines to the lane markings, and compare them to a measured ground truth as
an alternative source of information for localization.
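
For reference, the bird's-eye projection in the image above boils down to an
OpenCV perspective warp; the source corner points below are placeholders, not
our calibrated values:

```python
import cv2
import numpy as np

# Four pixel corners of a known road patch in the camera image (placeholders)
SRC = np.float32([[140, 280], [500, 280], [620, 460], [20, 460]])
# Where those corners should land in the top-down view
DST = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

def birds_eye(frame):
    H = cv2.getPerspectiveTransform(SRC, DST)
    return cv2.warpPerspective(frame, H, (400, 400))
```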
|
||||||
|
|
||||||
|
## Sources
|
||||||
|
|
||||||
|
- <https://docs.duckietown.org/daffy/duckietown-classical-robotics/out/cra_basic_augmented_reality_exercise.html>
|
||||||
|
- <http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython>
|
||||||
|
- <https://github.com/duckietown/dt-duckiebot-interface/blob/daffy/packages/camera_driver/>
|
||||||
|
- <https://docs.ros.org/en/api/image_geometry/html/python/>
|
||||||
|
- <https://docs.docker.com/engine/install/ubuntu/>
|
||||||
|
- <https://github.com/duckietown/dt-core/blob/daffy/packages/complete_image_pipeline/include/image_processing/ground_projection_geometry.py#L161>
|
||||||
|
- <https://bitesofcode.wordpress.com/2018/09/16/augmented-reality-with-python-and-opencv-part-2/>
|
||||||
|
- <https://einsteiniumstudios.com/beaglebone-opencv-line-following-robot.html>
|
||||||
|
- <http://wiki.ros.org/tf2/Tutorials/Writing%20a%20tf2%20static%20broadcaster%20%28Python%29>
|
||||||
|
- <https://nikolasent.github.io/opencv/2017/05/07/Bird's-Eye-View-Transformation.html>
|
||||||
|
- <http://wiki.ros.org/urdf/XML/joint>
|
||||||
|
- <https://github.com/duckietown/dt-duckiebot-interface/blob/daffy/packages/duckiebot_interface/urdf/duckiebot.urdf.xacro#L107>
|
||||||
|
- <https://wiki.ros.org/tf2/Tutorials/Writing%20a%20tf2%20listener%20(Python)>
|
||||||
|
- <https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide>
|
143
src/content/blog/duckietown/lab4.md
Normal file
143
src/content/blog/duckietown/lab4.md
Normal file
|
@ -0,0 +1,143 @@
|
||||||
|
---
|
||||||
|
title: 'DuckieTown - Lab 4'
|
||||||
|
description: "Robots Following Robots"
|
||||||
|
pubDate: 'March 19 2023'
|
||||||
|
heroImage: '../../../src/assets/duckietown/lab4/state_machine_flowchart.avif'
|
||||||
|
---
|
||||||
|
|
||||||
|
# Don't crash! Tailing behaviour - Fourth lab
|
||||||
|
|
||||||
|
"It was just a fluke"
|
||||||
|
|
||||||
|
In this exercise we implement an autonomous safe tailing behavior on
|
||||||
|
our Duckiebot. We aim to tail another Duckiebot at a safe distance while still
|
||||||
|
following the rules of the road.
|
||||||
|
This lab was a hard sprint right from the start. Despite the overall
implementation being more straightforward than in all previous labs, we
experienced many difficulties stemming from the real-time nature of this lab.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/9q5b_eB7rlk"
|
||||||
|
title="YouTube video player"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
A video of our Duckiebot safely driving around the town behind another Duckiebot
|
||||||
|
(controlled with keyboard controls). Observe our Duckiebot taking a right-hand
turn and a left-hand turn at the intersections and driving at least two full laps
|
||||||
|
of the town.
|
||||||
|
|
||||||
|
## Strategy
|
||||||
|
|
||||||
|
Our strategy centres around a state machine. We use the following states:
|
||||||
|
|
||||||
|
- **Lane Following**: Runs lane following node. This is the default when we lose
|
||||||
|
track of the bot ahead
|
||||||
|
- **Stopping**: We approach a red line and come to a halt. While we wait, we look
|
||||||
|
for a bot ahead of us to decide which way to turn
|
||||||
|
- **Tracking**: We follow the robot in front of us
|
||||||
|
- **Blind movements**: We override the lane following to force a turn/forward
|
||||||
|
across the intersection to follow the bot based on where we observed it
|
||||||
|
turning while stopped
|
||||||
|
|
||||||
|
Our state machine starts off lane following, then switches to tracking when it
|
||||||
|
sees the bot and back when it loses it. The stopping state takes precedence over
|
||||||
|
both of these states, so the bot will unconditionally stop when seeing a red
|
||||||
|
line. The blind states only last for a few seconds each, though also get a "stop
|
||||||
|
immunity" where the bot ignores all red lines. This is important when going
|
||||||
|
through an intersection, otherwise we'd stop in the middle of the intersection
|
||||||
|
by seeing the red line from the other line.
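
A minimal sketch of these transitions (the state names and exact conditions are
ours, simplified from the actual node):

```python
from enum import Enum, auto

class State(Enum):
    LANE_FOLLOWING = auto()
    STOPPING = auto()
    TRACKING = auto()
    BLIND_MOVE = auto()

def next_state(state, sees_bot, sees_red_line, blind_done):
    if state is State.BLIND_MOVE:                 # "stop immunity": ignore red lines
        return State.LANE_FOLLOWING if blind_done else State.BLIND_MOVE
    if sees_red_line:                             # stopping takes precedence
        return State.STOPPING
    if state is State.STOPPING:                   # turn direction picked from where the bot ahead went
        return State.BLIND_MOVE
    return State.TRACKING if sees_bot else State.LANE_FOLLOWING
```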
|
||||||
|
|
||||||
|
Check out the diagram below!
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab4/state_machine_flowchart.avif"
|
||||||
|
alt="Flow chart of internal state machine"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Our LEDs sort of indicate the state... though they're very delayed sometimes due
|
||||||
|
to duckiebot driver issues, so take them with a grain of salt. Blue only happens
|
||||||
|
when the bot stops or moves backward while tracking.
|
||||||
|
|
||||||
|
Aside from the contour-masking code for the yellow lane from the previous lab,
the new sensors in this one come from:
|
||||||
|
|
||||||
|
1. Detecting the grid on the back of another Duckiebot
|
||||||
|
2. Using the time-of-flight (TOF) sensor to detect distance
|
||||||
|
|
||||||
|
We initially struggled to get the TOF to integrate well with the distance
|
||||||
|
transform we were getting from the grid-tracking-camera node. As the Duckiebot
|
||||||
|
detection node is intermittent, we had to use the TOF to fill in the gaps;
however, we noticed that the two measurements did not perfectly correspond.
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab4/camera_vs_tof.avif"
|
||||||
|
alt="Camera readings against TOF distance readings on a chart"
|
||||||
|
/>
|
||||||
|
|
||||||
|
To resolve this discrepancy, we plotted the two readings against each other and
fitted a line to the points. Using the slope of this line, we were able to
convert between both measurements, allowing us to fuse both sensors. We found
the TOF particularly important at close distances, where the camera could no
longer fully see the grid on the back. The camera provided a far greater field
of view, which mattered exactly when the TOF lost its distance readings, so the
two sensors covered for each other's weaknesses. When both were sensing, we
took the minimum to get a conservative measurement of distance.
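
The conversion itself is just a one-dimensional line fit; a sketch with
`numpy` (the variable names are ours):

```python
import numpy as np

def fit_tof_to_camera(tof_d, camera_d):
    """Fit camera_d ~ slope * tof_d + intercept from paired distance readings."""
    slope, intercept = np.polyfit(tof_d, camera_d, deg=1)
    return lambda d: slope * d + intercept   # converts a TOF reading into "camera" distance

# When both sensors report a distance, we take min(converted_tof, camera)
# as the conservative estimate described above.
```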
|
||||||
|
|
||||||
|
## Discussion
|
||||||
|
|
||||||
|
Our Duckiebot managed to follow the bot ahead very nicely in a straight line.
|
||||||
|
When it wasn't lagging, the LEDs provided a great way to introspect our state,
|
||||||
|
which made it much easier for the person doing keyboard_control on the bot in
|
||||||
|
front. With extensive tuning, our PID stopped the wobbles and kicks coming from
|
||||||
|
the P and D terms respectively.
|
||||||
|
|
||||||
|
The biggest challenge was the turns at the intersections. Here, our bot had to
keep a backlog of the latest positions it has seen of the bot ahead, while also
making sure none of those measurements are irregular or stale. We modified our
detection publisher to publish the entire transformation matrix from the camera
to the grid on the back, which let us see the angle in the pose of the grid.
Using this information, we could predict which direction the bot in front of us
turned or whether it had gone forward.
|
||||||
|
|
||||||
|
|
||||||
|
### How well did your implemented strategy work?
|
||||||
|
|
||||||
|
Our implemented strategy worked pretty well: it successfully fulfilled the
requirements for the video, and it can even reverse when the tracked Duckiebot
starts to reverse!
|
||||||
|
|
||||||
|
### Was it reliable?
|
||||||
|
|
||||||
|
Overall our strategy was fairly reliable, as it's able to drive around the town
|
||||||
|
without crashing into other Duckiebots and tail another Duckiebot autonomously
|
||||||
|
at a safe distance. Furthermore, it also follows the rules of the road by
driving within the lanes, staying on the right side of the road, stopping at
intersections, and blinking the appropriate rear LED to signal turns.
|
||||||
|
|
||||||
|
### In what situations did it perform poorly?
|
||||||
|
|
||||||
|
Sometimes, our Duckiebot detection node fails to detect the tracked bot making
|
||||||
|
a turn at an intersection. This especially happens on sharp turns, where the
|
||||||
|
node might only detect the bot in a frame right before the bot ahead turns and
|
||||||
|
assume the bot went straight. This makes us lose any information
|
||||||
|
about the angle the bot ahead turned at, which makes it hard to plan a turn,
|
||||||
|
so we default to lane following in these uncertain times.
|
||||||
|
|
||||||
|
## Reference material
|
||||||
|
|
||||||
|
- [Starting Template](https://github.com/XZPshaw/CMPUT412503_exercise4)
|
||||||
|
- [Lane Follow
|
||||||
|
Package](https://eclass.srv.ualberta.ca/mod/resource/view.php?id=6952069)
|
||||||
|
- [ros
|
||||||
|
cv_bridge](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)
|
||||||
|
- [rospy tf library](http://wiki.ros.org/tf)
|
||||||
|
- [rospy shutdown
|
||||||
|
hooks](https://wiki.ros.org/rospy/Overview/Initialization%20and%20Shutdown#Registering_shutdown_hooks)
|
||||||
|
- [duckietown/dt-duckiebot-interface/blob/daffy/packages/tof_driver/src/tof_node.py](https://github.com/duckietown/dt-duckiebot-interface/blob/daffy/packages/tof_driver/src/tof_node.py)
|
281
src/content/blog/duckietown/lab5.md
Normal file
281
src/content/blog/duckietown/lab5.md
Normal file
|
@ -0,0 +1,281 @@
|
||||||
|
---
|
||||||
|
title: 'DuckieTown - Lab 5'
|
||||||
|
description: "Machine Learning Robotics - Vision"
|
||||||
|
pubDate: 'March 26 2023'
|
||||||
|
heroImage: '../../../src/assets/duckietown/lab5/test-contour.avif'
|
||||||
|
---
|
||||||
|
|
||||||
|
# MNIST Dataset and ML basic terminologies
|
||||||
|
|
||||||
|
### Deliverable 1
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="600"
|
||||||
|
src="../../../src/assets/duckietown/lab5/deliverable-1-backprop.pdf"
|
||||||
|
frameborder="0">
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
### Deliverable 2
|
||||||
|
|
||||||
|
### What data augmentation is used in training? Please delete the data augmentation and rerun the code to compare.
|
||||||
|
|
||||||
|
A random rotation of the image between (-5, 5) degrees, followed by 2 pixels of
padding and then a random crop of the image back to 28 x 28 pixels. Rerunning
the code without data augmentation results in a lower test accuracy of 97.89%,
as opposed to 97.99% with data augmentation. This is because data augmentation
helps prevent overfitting by increasing the amount of variation in the training
data. Removing data augmentation also results in a decreased training time of
1 minute 1 second, as opposed to 1 minute 20 seconds with data augmentation, as
less processing is performed.
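
In `torchvision` terms, the augmentation described above looks roughly like
this (the MNIST mean/std normalization is the usual one from the example code):

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomRotation(5),              # random rotation in (-5, 5) degrees
    transforms.RandomCrop(28, padding=2),      # pad by 2 px, then randomly crop back to 28 x 28
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
```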
|
||||||
|
|
||||||
|
### What is the batch size in the code? Please change the batch size to 16 and 1024 and explain the variation in results.
|
||||||
|
|
||||||
|
The batch size in the code is 64. Changing the batch size to 16 results in a
|
||||||
|
decreased test accuracy of 97.94% as opposed to 97.99% with a batch size of 64.
|
||||||
|
This is because a larger batch size results in a batch gradient that is closer
|
||||||
|
to the true gradient, which allows the neural network to converge faster.
|
||||||
|
Changing the batch size to 1024 results in a decreased test accuracy of 97.53%.
|
||||||
|
This is because a larger batch size results in a less stochastic gradient, which
|
||||||
|
can result in the weights converging to a local minimum rather than a minimum
|
||||||
|
closer to the global minimum. Changing the batch size to 16 results in an
|
||||||
|
increased training time of 3 minute and 3 seconds, as we do not fully utilize
|
||||||
|
the GPU. Changing the batch size to 1024 results in a decreased training time of
|
||||||
|
44 seconds as we utilize the GPU more.
|
||||||
|
|
||||||
|
### What activation function is used in the hidden layer? Please replace it with the linear activation function and see how the training output differs. Show your results before and after changing the activation function in your written report.
|
||||||
|
|
||||||
|
ReLU. Replacing it with the linear activation function results in only 87.02%
|
||||||
|
test accuracy. This is because the activation function has to be non-linear in
|
||||||
|
order to be able to learn a non-linear function. This is really evident in the
|
||||||
|
t-SNE plots below, where using the linear activation function leads to a model
|
||||||
|
that is unable to separate the classes well.
|
||||||
|
|
||||||
|
t-SNE before changing the activation function:
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/t-SNE.avif"
|
||||||
|
alt="Clusters without changing activation function"
|
||||||
|
/>
|
||||||
|
|
||||||
|
t-SNE after changing the activation function:
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/t-SNE-linear.avif"
|
||||||
|
alt="Clusters after changing activation function"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Changing the activation function to the linear activation function results in
the same training time of 1 minute 20 seconds; this is because the ReLU
activation function is not very computationally expensive in either the forward
or the backward pass.
|
||||||
|
|
||||||
|
### What is the optimization algorithm in the code? Explain the role of optimization algorithm in training process
|
||||||
|
|
||||||
|
Adam is the optimization algorithm used in the code. Adam is similar to SGD, but
|
||||||
|
it computes adaptive learning rates for each parameter through exponential
|
||||||
|
moving averages of past gradients and the squared gradients. The role of the
|
||||||
|
optimization algorithm is to efficiently update the weights of the neural
|
||||||
|
network to minimize the loss function.
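
For reference, the standard Adam update (with learning rate alpha, decay rates
beta_1 and beta_2, and gradient g_t) is:

```latex
\\[ m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2 \\]
\\[ \hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \alpha \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} \\]
```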
|
||||||
|
|
||||||
|
### Add dropout in the training and explain how the dropout layer helps in training
|
||||||
|
|
||||||
|
Adding dropout (`p = 0.2`) increased the test accuracy to 98.33%. This is
because dropout is a form of regularization that helps prevent overfitting by
randomly dropping neurons from the neural network during training, which forces
the neural network to learn more robust features.
|
||||||
|
|
||||||
|
Adding dropout results in an increased training time of 1 minute 23 seconds as
|
||||||
|
more processing is required.
|
||||||
|
|
||||||
|
## Number Detection Node
|
||||||
|
|
||||||
|
### Deliverable 3
|
||||||
|
|
||||||
|
In the video below, the terminal on the left prints the detected number along
|
||||||
|
with an array logging all previous detections. Each entry is the detection
count for that index, so for example in the final array
|
||||||
|
|
||||||
|
```
|
||||||
|
0 1 2 3 4 5 6 7 8 9 <-- Which number it is
|
||||||
|
[2, 1, 3, 1, 1, 4, 1, 2, 3, 3] <-- Number of times we've detected it
|
||||||
|
```
|
||||||
|
|
||||||
|
5 was detected 4 times, though 1 was only detected once. We didn't penalize
double-detection in a single drive-by, nor did the route we take expose all the
digits the same number of times, so the high variance here is understandable.
You can see the position of the april tag co-residing with the detected digit,
relative to the world frame, being published right above the equals-sign
delimited message. The coordinates are the transform's `x, y, z` components,
respectively.
|
||||||
|
|
||||||
|
The camera on the right only publishes a new frame when a new detection is made,
|
||||||
|
which makes it appear really slow. We had to reduce the publishing rate of the
detection topic and the apriltag detector; otherwise the load would be too heavy
|
||||||
|
for the duckie, leading to high latency which ruined our detection frames. You
|
||||||
|
can still see the odometry guessing where it is in the rviz transforms
|
||||||
|
visualization.
|
||||||
|
|
||||||
|
After we find all the digits, we call `rospy.signal_shutdown()` and print a
very explicit message to the terminal. However, since rospy publishers sometimes
get stuck, we force our node to continue publishing `0` velocity messages,
regardless of whether rospy has already gone down. This is very important;
otherwise the wheels don't stop about half the time, though it does result in a
scary red error message at the end of the video. Don't worry, rospy was
gracefully shut down; we just didn't trust duckietown enough to listen right
away, just like the solution code for lab 3.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/t46bPy30FkM"
|
||||||
|
title="YouTube video player"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
Our strategy for detecting the numbers was to use OpenCV to extract the numbers
|
||||||
|
from the image. From the [Wikipedia page on the MNIST
|
||||||
|
database](https://en.wikipedia.org/wiki/MNIST_database), we learned that a
|
||||||
|
2-layer MLP can classify with a 1.6% error rate, so we decided to use a
simple MLP to classify the numbers. Our model flattens the 28 x 28 pixel image
into a 784 x 1 vector, which is passed through an 800 x 1 hidden layer with
`ReLU` activation, and then a 10 x 1 output layer with `softmax` activation.
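
A sketch of that architecture in PyTorch (the layer sizes follow the text;
everything else here is our choice):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),           # 28 x 28 image -> 784-dim vector
    nn.Linear(784, 800),
    nn.ReLU(),
    nn.Linear(800, 10),
    nn.LogSoftmax(dim=1),   # paired with nn.NLLLoss during training
)
```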
|
||||||
|
|
||||||
|
Our model architecture diagram created with
|
||||||
|
[NN-SVG](https://github.com/alexlenail/NN-SVG) is shown below:
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/model.avif"
|
||||||
|
alt="Model architecture"
|
||||||
|
/>
|
||||||
|
|
||||||
|
We collected 270 train images and 68 test images of the numbers by saving
|
||||||
|
images from the bot like below:
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/test-raw.avif"
|
||||||
|
alt="Example raw image"
|
||||||
|
/>
|
||||||
|
|
||||||
|
For preprocessing, we first get the blue HSV colour mask of the image to
|
||||||
|
extract the blue sticky note (`cv.inRange`).
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/test-mask1.avif"
|
||||||
|
alt="Example mask image"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Then, we find the contour (`cv.findContours`) of the blue sticky note.
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/test-contour.avif"
|
||||||
|
alt="Example contour image"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Then, we use OpenCV to get the convex hull of the contour (`cv.convexHull`).
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/test-convex_hull.avif"
|
||||||
|
alt="Example convex hull image"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Then, we retrieve the corners of the convex hull using the Douglas-Peucker
algorithm (`cv.approxPolyDP`), as suggested by this [Stack Overflow
answer](https://stackoverflow.com/a/10262750). We decided to use binary search
to adjust the `epsilon` value until we get exactly 4 corners; interestingly, we
came up with the same idea as this [Stack Overflow
answer](https://stackoverflow.com/a/55339684).
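
A sketch of that binary search (the iteration cap and starting bounds are our
choices):

```python
import cv2

def four_corners(hull, iters=30):
    lo, hi = 0.0, cv2.arcLength(hull, True)     # epsilon search bounds
    for _ in range(iters):
        eps = (lo + hi) / 2
        approx = cv2.approxPolyDP(hull, eps, True)
        if len(approx) > 4:      # too many corners: simplify more aggressively
            lo = eps
        elif len(approx) < 4:    # over-simplified: back off
            hi = eps
        else:
            return approx        # exactly 4 corners of the sticky note
    return None
```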
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/test-corners.avif"
|
||||||
|
alt="Example corners image"
|
||||||
|
/>
|
||||||
|
|
||||||
|
Using OpenCV, we calculate a perspective transform matrix to warp the
image to 28 x 28 pixels.
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/test-warp.avif"
|
||||||
|
alt="Example warp image"
|
||||||
|
width="30%"
|
||||||
|
/>
|
||||||
|
|
||||||
|
We get the black HSV colour mask of the warped image (`cv.inRange`) to extract
|
||||||
|
the number.
|
||||||
|
|
||||||
|
<img
|
||||||
|
src="../../../src/assets/duckietown/lab5/test-mask2.avif"
|
||||||
|
alt="Example mask image"
|
||||||
|
width="30%"
|
||||||
|
/>
|
||||||
|
|
||||||
|
To prevent noise from the warping, we set the left and right 2 px borders to 0.
|
||||||
|
Then, we normalize the image to have zero mean and unit variance.
|
||||||
|
|
||||||
|
We then trained an MLP using PyTorch on the MNIST dataset as required,
following [this
example](https://github.com/pytorch/examples/blob/main/mnist/main.py). Our
very simple MLP gets 97% accuracy on the MNIST test set. However, it
generalizes poorly from handwritten digits to the typed numbers, only achieving
an accuracy of 56% on our test set. Therefore, we fine-tuned the model on our
data, with a reduced initial learning rate of 0.1. After fine-tuning on our
data, our model achieves 100% accuracy on our data's test set.
|
||||||
|
|
||||||
|
For inference, we apply the same preprocessing steps as above, and then we use
`numpy` to load the weights and manually implement the forward pass of the
neural network with the equation:
|
||||||
|
|
||||||
|
```latex
|
||||||
|
\\[ y = \text{softmax}(W_2 \max(W_1 x + b_1, 0) + b_2) = [0, 0, 0.01, 0.99, 0, 0, 0, 0, 0, 0] \\]
|
||||||
|
```
|
||||||
|
|
||||||
|
Then we get the digit using the `argmax` of the output of the neural
|
||||||
|
network.
|
||||||
|
|
||||||
|
```latex
|
||||||
|
\\[ p = \text{argmax } y = 3\\]
|
||||||
|
```
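
A minimal `numpy` sketch of that manual forward pass (the weight file names
are made up):

```python
import numpy as np

W1, b1 = np.load("w1.npy"), np.load("b1.npy")   # shapes (800, 784) and (800,)
W2, b2 = np.load("w2.npy"), np.load("b2.npy")   # shapes (10, 800) and (10,)

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def predict(x):                      # x: flattened, normalized 784-vector
    h = np.maximum(W1 @ x + b1, 0)   # ReLU hidden layer
    y = softmax(W2 @ h + b2)
    return int(np.argmax(y))
```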
|
||||||
|
|
||||||
|
### How well did your implemented strategy work?
|
||||||
|
|
||||||
|
Our strategy for number detection works very well. We were able to classify all
|
||||||
|
the numbers in the video with very high accuracy.
|
||||||
|
|
||||||
|
### Was it reliable?
|
||||||
|
|
||||||
|
Our strategy was reliable. Using perspective transforms, we were able to
|
||||||
|
detect the numbers from different angles and reasonable distances consistently
|
||||||
|
and accurately.
|
||||||
|
|
||||||
|
### In what situations did it perform poorly?
|
||||||
|
|
||||||
|
Our strategy performed poorly when the numbers were very far away from the
|
||||||
|
camera or when the Duckiebot was moving. To mitigate these issues, when we get
|
||||||
|
within a certain distance from an AprilTag, we stop the Duckiebot and take a
|
||||||
|
picture of the number for classification. Some of the misclassifications in
|
||||||
|
the video were due to AprilTag detection inaccuracies, where the AprilTag
|
||||||
|
library misreports the AprilTag's position as closer than it really is.
|
||||||
|
We could mitigate this issue by slowing down the Duckiebot when it first detects
|
||||||
|
an AprilTag, and waiting for a second detection to confirm the AprilTag's
|
||||||
|
position prior to capturing the image for number detection.
|
||||||
|
|
||||||
|
## Sources
|
||||||
|
|
||||||
|
- [NN-SVG](https://github.com/alexlenail/NN-SVG)
|
||||||
|
- [Wikipedia MNIST database](https://en.wikipedia.org/wiki/MNIST_database)
|
||||||
|
- [Pytorch MNIST
|
||||||
|
example](https://github.com/pytorch/examples/blob/main/mnist/main.py)
|
||||||
|
- [Pytorch Data Random
|
||||||
|
Split](https://pytorch.org/docs/stable/data.html#torch.utils.data.random_split)
|
||||||
|
- [OpenCV Warp Perspective](https://theailearner.com/tag/cv2-warpperspective/)
|
||||||
|
- [Stack Overflow How to force approxPolyDP() to return only the best 4
|
||||||
|
corners? - Opencv 2.4.2](https://stackoverflow.com/a/10262750)
|
||||||
|
- [OpenCV Contours and Convex
|
||||||
|
Hull](https://medium.com/analytics-vidhya/contours-and-convex-hull-in-opencv-python-d7503f6651bc)
|
||||||
|
- [OpenCV Docs](https://docs.opencv.org/4.x/)
|
||||||
|
- [Stack Overflow Getting corners from convex
|
||||||
|
points](https://stackoverflow.com/a/10262750)
|
||||||
|
- [Stack Overflow Image processing - I need to find the 4 corners of any
|
||||||
|
quadrilater](https://stackoverflow.com/questions/38677434/image-processing-i-need-to-find-the-4-corners-of-any-quadrilater)
|
||||||
|
- [Stack Overflow Detect card MinArea Quadrilateral from contour
|
||||||
|
OpenCV](https://stackoverflow.com/questions/44127342/detect-card-minarea-quadrilateral-from-contour-opencv)
|
||||||
|
|
237
src/content/blog/duckietown/lab6.md
Normal file
237
src/content/blog/duckietown/lab6.md
Normal file
|
@ -0,0 +1,237 @@
|
||||||
|
---
|
||||||
|
title: 'DuckieTown - Lab 6'
|
||||||
|
description: "Final Assignment - Obstacle Course"
|
||||||
|
pubDate: 'April 16 2023'
|
||||||
|
heroImage: '../../../src/assets/duckietown/final/three_ducks_on_a_windowsill.avif'
|
||||||
|
---
|
||||||
|
|
||||||
|
<img src="../../../src/assets/duckietown/final/nine_ducks_on_two_dividers.avif" alt="Nine ducks on two dividers" style="height: 900px; margin: 0, padding: 0">
|
||||||
|
<img src="../../../src/assets/duckietown/final/six_ducks_on_two_windowsills.avif" alt="Six ducks on two windowsills">
|
||||||
|
<img src="../../../src/assets/duckietown/final/bam.avif" alt="Bam!">
|
||||||
|
|
||||||
|
## Round 1
|
||||||
|
|
||||||
|
This was our best round, where we actually managed to park help-free, but we
had some issues with lane following: our understanding of the "stay within the
correct lane" requirement assumed we could touch the yellow line, but it turns
out we can't.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/HpoDrbf7JZs"
|
||||||
|
title="YouTube video player"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
## Round 2
|
||||||
|
|
||||||
|
We did better on not touching the yellow line, but as we hastily adjusted the
|
||||||
|
tuning, the bot would lose the line and then go off the road. We also had some
|
||||||
|
issues with the Duckiebot's motors not turning as fast (likely because the
bot's battery level had gone down a bit), so parking the bot effectively was a
struggle.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/tiARxHdgdK8"
|
||||||
|
title="YouTube video player"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
## Round 3
|
||||||
|
|
||||||
|
We achieved similar results to round 2. An interesting issue is that we did not
|
||||||
|
park as well in parking stalls 2 and 3. In machine learning terms, we should
have trained harder on the testing distribution. We only had time to tune the
parking parameters for stalls 1 and 4, so the parking behaviour on stalls 2 and 3
|
||||||
|
was untested prior to the demo.
|
||||||
|
|
||||||
|
<iframe
|
||||||
|
width="100%"
|
||||||
|
height="315"
|
||||||
|
src="https://www.youtube.com/embed/XmtXdPu80Jo"
|
||||||
|
title="YouTube video player"
|
||||||
|
frameborder="0"
|
||||||
|
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||||
|
allowfullscreen>
|
||||||
|
</iframe>
|
||||||
|
|
||||||
|
## Stage 1: Apriltag Detection and Lane Following
|
||||||
|
|
||||||
|
### Implementation
|
||||||
|
|
||||||
|
Akemi mostly worked on this part. After trying to be too clever, he switched
|
||||||
|
over to a simple strategy the day before the project was due, having thought up
|
||||||
|
the approach in the shower.
|
||||||
|
|
||||||
|
I started with the lab 4 code, which Steven had set up to read apriltags a month
|
||||||
|
back. I removed all the stale timeouts and distance measures of the apriltag in
|
||||||
|
the robot frame, storing just whatever was the latest seen apriltag on the
|
||||||
|
node's instance. For red-line detection, the bot switches a flag to `True`
when it sees a sufficient "amount" of red line in its camera. Then, if the
flag is `True` and we no longer see the red line, that must mean the
duckiebot is almost perfectly positioned with its wheels on the line. At this
point, we look up the latest seen apriltag and make a hardcoded
forward/left/right movement to get through the intersection. I was sort of
surprised how
|
||||||
|
well this worked right away.
|
||||||
|
|
||||||
|
### Challenges
|
||||||
|
|
||||||
|
Most of our problems came from mass murdering the Duckiebots... We ended up with 4
|
||||||
|
in our fleet, since something we did to them kept breaking them in different
|
||||||
|
ways. Don't worry, we asked for permission or made sure the bots weren't being
|
||||||
|
used in all cases
|
||||||
|
|
||||||
|
- `csc22902`: Failed to boot; it'd get to the white light stage and just stay
  there for hours. Prior to that, it had a RAM usage issue, where Docker would just
|
||||||
|
eat up all the RAM and stop building our containers.
|
||||||
|
- `csc22920`: (We used this one for the demo). This bot was unable to build
|
||||||
|
using `dts`, requiring us to extract the docker command used by Duckietown
|
||||||
|
shell and run it directly.
|
||||||
|
- `csc22927`: Right wheel started clicking and getting stuck. We swapped the
|
||||||
|
wheel with `csc22902`'s, which fixed that issue. This bot occasionally had the high
|
||||||
|
RAM usage issue too.
|
||||||
|
- `csc22933`: RAM overflow caused docker to be unable to build at all. We could
|
||||||
|
fix this through `systemctl restart docker.service`... though that would
|
||||||
|
break the camera container until we `systemctl reboot`. The whole process
|
||||||
|
takes about 7 - 10 minutes, so we stopped using this bot
|
||||||
|
|
||||||
|
Now the problem with working with so many different Duckiebots is that the
|
||||||
|
tuning between bots was completely different. To make matters worse, tuning on a
|
||||||
|
Duckiebot with `<` 60% battery consistently gave completely wrong numbers when the
|
||||||
|
bot got fully charged again. This tuning issue resulted in our hard-coded turns
sometimes going into the wrong lane before lane following was re-enabled to
correct them, and sometimes in our bot getting stuck in the parking stage.
|
||||||
|
|
||||||
|
## Stage 2: Obstacle Avoidance
|
||||||
|
|
||||||
|
### Implementation
|
||||||
|
|
||||||
|
First, Akemi had had it with the makeshift HSV masking we were constantly
doing, so he quickly built it out into a proper software tool. *[Available
|
||||||
|
here](https://github.com/Aizuko/hsv_tool) for an unlimited time only! Get it
|
||||||
|
while it's still uncommented!*
|
||||||
|
|
||||||
|
Using this tool and the picture-saving node from lab 5, we quickly found HSV
|
||||||
|
masks that were strong in all conditions, even working in both rooms, for the
|
||||||
|
crossing duckies. We then used a Jupyter notebook to guesstimate the plan
(sketched in code after the list):
|
||||||
|
|
||||||
|
1. Get a callback from the camera and crop the image to about a 150-pixel strip,
   roughly in the middle of the image
|
||||||
|
2. Apply our HSV mask
|
||||||
|
3. Flatten the image into a single channel greyscale, using the HSV mask
|
||||||
|
4. Turn all pixels picked up by the HSV mask to 1 and all others to 0
|
||||||
|
5. Sum the pixels and compare with the threshold number we set. We found about
|
||||||
|
8000 was pretty good in the end
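
A sketch of those steps (the strip rows, HSV bounds, and threshold are the
kinds of numbers we tuned, shown here as placeholders):

```python
import cv2
import numpy as np

DUCKIE_LO, DUCKIE_HI = (10, 120, 120), (30, 255, 255)   # placeholder HSV bounds

def duckies_crossing(bgr, threshold=8000):
    strip = bgr[240:390]                              # ~150 px strip near the middle of the image
    hsv = cv2.cvtColor(strip, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, DUCKIE_LO, DUCKIE_HI)     # duckie-coloured pixels -> 255, rest -> 0
    return int(np.count_nonzero(mask)) > threshold    # pixel count vs. tuned threshold
```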
|
||||||
|
|
||||||
|
We only do this callback if the last seen apriltag is a crossing one. If this
|
||||||
|
callback returns `True`, the wheels immediately get killed. Then we scan every
|
||||||
|
second to check if there are any Duckies crossing, or rather if the sum of the
HSV mask pixels is higher than the threshold. If we get 3 consecutive 1-second
|
||||||
|
intervals saying it's safe to drive, then we go through the intersection and
|
||||||
|
switch to lane following. We used 3 intervals to fend off any possible noise
|
||||||
|
from the camera, especially important when Duckie lives are on the line.
|
||||||
|
|
||||||
|
For going around the broken Duckiebot, we again simplified the lab 4
|
||||||
|
following code to a single boolean indicating whether another bot is visible.
|
||||||
|
When it flips to `True`, the state machine initiates a sequence of completely
|
||||||
|
hard-coded movements to get around the bot, which actually ended up working
|
||||||
|
pretty well.
|
||||||
|
|
||||||
|
### Challenges
|
||||||
|
|
||||||
|
The biggest challenge was Akemi not feeling like reading the rubric. Notably he
|
||||||
|
missed the important bit about having to stop at all crossings, and not just
|
||||||
|
charging through if there aren't any Duckies in the way... Oops
|
||||||
|
|
||||||
|
We also initially tried to use English driving to get around the broken
|
||||||
|
Duckiebot, but we kept running into issues with the bot doing a 180 rotation in
|
||||||
|
its current lane and just driving us back to stage 1. The more we hard-coded
|
||||||
|
this section, the more reliable our going-around got, so in the end we ended up
|
||||||
|
with a complete hard-code that performed consistently.
|
||||||
|
|
||||||
|
## Stage 2.5: Handoff
|
||||||
|
|
||||||
|
Since our group was crunched for time, we did our parts largely separately. So
|
||||||
|
separately, in fact, that we were working in different workspaces. Since our
code worked in our own workspace, we thought up a pretty simple solution to
merge the two: a service.
|
||||||
|
|
||||||
|
Both the parking node and stage 1-2 node start up immediately, though the
|
||||||
|
parking node doesn't do anything, simply checking once a second if it's ready to
|
||||||
|
start. When the stage 1-2 node reaches the end of stage 2, it makes a service
|
||||||
|
call over to the parking node, initiating the takeover. The stage 1-2 node also
|
||||||
|
unsubscribes from all topics to reduce any unnecessary load on the duckiebot.
|
||||||
|
The shutdown for both is later signaled by the parking node, once it finishes
|
||||||
|
parking.
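
A hedged sketch of that handoff from the parking node's side (the service name
is made up; the stage 1-2 node would call it with a `rospy.ServiceProxy` once
stage 2 ends):

```python
import rospy
from std_srvs.srv import Trigger, TriggerResponse

def handle_start(_req):
    rospy.loginfo("Stage 1-2 is done, starting the parking routine")
    return TriggerResponse(success=True, message="parking started")

if __name__ == "__main__":
    rospy.init_node("parking_node")
    rospy.Service("~start_parking", Trigger, handle_start)
    rospy.spin()
```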
|
||||||
|
|
||||||
|
We initially thought we'd be able to simply shut down the stage 1-2 node at this
|
||||||
|
point, though something about duckietown made that really not work. In fact it
|
||||||
|
resulted in `csc22902` not being able to boot, from which it never recovered, so
|
||||||
|
we steered really clear of that afterwards...
|
||||||
|
|
||||||
|
## Stage 3: The Parking Lot
|
||||||
|
|
||||||
|
### Implementation
|
||||||
|
|
||||||
|
Steven initially decided to take the computer vision "dark magic" he learned in
|
||||||
|
Martin's infamous CMPUT 428, to fuse measurements the time of flight (TOF)
|
||||||
|
sensor, apriltag detector, camera, and odometry to get perfect parking every
|
||||||
|
time. We intially used the camera to both calculate our pose relative to the
|
||||||
|
april tags and by computing the vanishing-point of the yellow parking lines to
|
||||||
|
attempt to center ourselves in the lane. Unfortunately, not only is this quite
|
||||||
|
complex task, the camera sensors and apriltag detections were not reliable and
|
||||||
|
consistent enough. Our final solution ended up using __just__ the TOF sensor.
|
||||||
|
|
||||||
|
The only measurement TOF gives is a fairly accurate forward distance with only a
|
||||||
|
few degrees of field of view, which is problematic when we need to figure out
|
||||||
|
our pose in the parking lot. With some clever thinking, Steven made the bot
|
||||||
|
systematically wiggle left and right, until it detected the apriltag opposite of
|
||||||
|
the entrance with the TOF sensor! This would give a good distance estimate
|
||||||
|
relative to that apriltag. After driving to a specified distance from it, the bot
|
||||||
|
would turn towards the parking stall opposite to our desired one. Next it again
|
||||||
|
did a wiggle to find the apriltag in the opposite stall, aligning itself such
|
||||||
|
that the TOF sensor reads the minimum distance. This allowed us to tell the
|
||||||
|
apriltag apart from the wooden backboard. Once aligned, we just reversed
into the parking stall until the TOF sensor read a distance over 1.15 m (the
|
||||||
|
TOF sensor goes out of range after approximately 1.20 m). We then reversed a bit
|
||||||
|
more to make sure we were fully in the stall. This strategy was fairly robust
|
||||||
|
and reliable as long as the parameters were tuned correctly. As a bonus, it is
|
||||||
|
completely invariant to lighting conditions, so in theory, we could park well
|
||||||
|
even with the lights off.
|
||||||
|
|
||||||
|
### Challenges
|
||||||
|
|
||||||
|
A major challenge was that there was not a lot of time to tune the parking
|
||||||
|
parameters. We only had time to tune for parking stalls 1 and 4, and we were
tested on parking stall 1, which we tuned for, and parking stalls 2 and 3,
which were completely untested. If we had more time, we would have tuned for all
|
||||||
|
stalls so that we could have a more reliable parking solution.
|
||||||
|
|
||||||
|

Particularly on `csc22920`, left turns were __much__ stronger than right turns,
which meant we needed very asymmetric commands between the two directions. We
found that right turns using an omega roughly 3 times higher than the left
seemed to balance things out.
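
In code this was nothing fancier than a direction-dependent scale on omega,
along the lines of the snippet below (the gains and the sign convention, with
positive omega meaning a left turn, are illustrative rather than our actual
values):

```python
# Illustrative only: the real gains lived in the node's parameters, and the
# sign convention (positive omega = left turn) is assumed here.
LEFT_GAIN = 1.0
RIGHT_GAIN = 3.0  # csc22920 needed roughly 3x the omega to turn right


def balance_omega(omega):
    """Scale the raw angular command so left and right turns feel symmetric."""
    return omega * (LEFT_GAIN if omega >= 0 else RIGHT_GAIN)
```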

Luckily, Steven pulled off a miracle and guessed the right tuning parameters
seconds before our first demo. The routine was still struggling two minutes
before the demo, but his guessed parameters worked perfectly.

## Sources

- [Starting Template](https://github.com/XZPshaw/CMPUT412503_exercise4)
- [Lane Follow Package](https://eclass.srv.ualberta.ca/mod/resource/view.php?id=6952069)
- [ros cv_bridge](http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython)
- [rospy tf library](http://wiki.ros.org/tf)
- [rospy shutdown hooks](https://wiki.ros.org/rospy/Overview/Initialization%20and%20Shutdown#Registering_shutdown_hooks)
- [duckietown/dt-duckiebot-interface/blob/daffy/packages/tof_driver/src/tof_node.py](https://github.com/duckietown/dt-duckiebot-interface/blob/daffy/packages/tof_driver/src/tof_node.py)
- [Find Middle of Line Using Moments](https://stackoverflow.com/questions/64396183/opencv-find-a-middle-line-of-a-contour-python)
- [Multiple View Geometry in Computer Vision, Second Edition](http://www.r-5.org/files/books/computers/algo-list/image-processing/vision/Richard_Hartley_Andrew_Zisserman-Multiple_View_Geometry_in_Computer_Vision-EN.pdf)

92
src/content/blog/duckietown/pre_lab1.md
Normal file

@@ -0,0 +1,92 @@

---
title: 'DuckieTown - Pre-Lab 1'
description: 'Getting ready for CMPUT 412'
pubDate: 'Jan 11 2023'
heroImage: '../../../src/assets/duckietown/dashboard_motors_spin.avif'
---

# Week 1 - Before the first lab

Mostly did setup stuff. No programming yet

## Connecting to the robots

As it turns out, Duckietown needs support for `.local` name resolution, which
doesn't happen by default when using systemd-resolved. This
[Stack Exchange answer](https://unix.stackexchange.com/questions/43762/how-do-i-get-to-use-local-hostnames-with-arch-linux#146025)
provides a good overview

Additionally, use firewalld to unblock UDP on port 5353 (the mDNS port)

```bash
firewall-cmd --zone=public --permanent --add-port=5353/udp
firewall-cmd --reload  # --permanent rules only take effect after a reload
```

It likely won't work with `mdns_minimal`, so instead use `mdns` in
`/etc/nsswitch.conf`, and create another file `/etc/mdns.allow` with the lines

```
.local.
.local
```
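
For reference, the `hosts:` line in `/etc/nsswitch.conf` should then end up
looking something like the line below. Module order is distro-specific, so
treat it as a sketch; the only change that matters here is swapping
`mdns_minimal` for `mdns`

```
hosts: mymachines mdns [NOTFOUND=return] resolve [!UNAVAIL=return] files myhostname dns
```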

See section 4 of the
[ArchWiki](https://wiki.archlinux.org/title/avahi#Troubleshooting) for more info

## Webpage

I used the markdown notes setup I've been working on for this site the past week
and slightly reworked it to suit the DuckieTown blog you're reading right now!
This also fixed up a lot of styling problems that remained with the notes

## GUI Tools

As it turns out, I didn't have xhost installed, so of course the camera didn't
work.

```bash
please pacman -S xorg-xhost
```

With that, the camera now works at least... Use the following to get a camera
window running over XWayland. Make sure `dockerd` is running under systemd

```bash
systemctl start docker
dts start_gui_tools <addr-no-.local>
rqt_image_view # Run this in the docker container
```

## Keyboard control

Keyboard controls also work!... except only in the non-graphical (`--cli`) mode

```bash
dts duckiebot keyboard_control --cli <addr-no-.local>
```

When the shell loads, it'll ask for one of `wasde`. Sometimes it won't register
keys right away. I found that after stopping with `e`, immediately using one of
the movement keys fails, since the duckiebot doesn't seem to have registered
that it stopped. Instead, send yet another `e` and then the movements should
work

Conversely, if it's already moving with `w`, immediately using `asd` tends to
work. Additionally, if it refuses to start moving forward with `w` and the extra
`e`s aren't helping, try the opposite movement with `s`, then immediately switch
to `w`

## When things stop working

Sometimes, mDNS just doesn't feel it today with systemd-networkd. In that case,
just give up and switch over to NetworkManager

```bash
systemctl stop systemd-networkd iwd   # stop the conflicting network stack
systemctl start NetworkManager
nmtui                                 # reconnect to the network interactively
```

Once everything is running, including docker, **open a new terminal**. Otherwise
there'll be errors like "QT Plugin not installed" and "Failed to open DISPLAY".
This has to do with the old shell not picking up the updated environment
variables