Blog: update urls to public

Akemi Izuko 2023-12-30 12:42:58 -07:00
parent fa5928c178
commit 78cc506cf8
Signed by: akemi
GPG key ID: 8DE0764E1809E9FC
7 changed files with 49 additions and 49 deletions
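
Every change below is the same mechanical substitution: the old relative asset prefix `../../../src/assets/` becomes the public `/images/` path. A minimal sketch of that rewrite in Python (the blog directory and `.md` extension are assumptions, not shown in this commit):

```python
# Hypothetical sketch of the prefix rewrite, not the tool actually used for this commit.
from pathlib import Path

OLD_PREFIX = "../../../src/assets/"
NEW_PREFIX = "/images/"

# The content directory and file extension are assumptions.
for post in Path("src/content/blog").rglob("*.md"):
    text = post.read_text(encoding="utf-8")
    if OLD_PREFIX in text:
        post.write_text(text.replace(OLD_PREFIX, NEW_PREFIX), encoding="utf-8")
```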

@@ -2,7 +2,7 @@
 title: 'DuckieTown - Lab 1'
 description: "Let's get rolling!"
 pubDate: 'Jan 22 2023'
-heroImage: '../../../src/assets/duckietown/sleeping_duckies.avif'
+heroImage: '/images/duckietown/sleeping_duckies.avif'
 ---
 <p style="font-size: max(2vh, 10px); margin-top: 0; text-align: right">
@@ -132,7 +132,7 @@ in a browser. It has a live camera feed and sensor signals. Here's a dashboard
 with my bot driving forward in a straight line
 <img
-src="../../../src/assets/duckietown/dashboard_motors_forward.avif"
+src="/images/duckietown/dashboard_motors_forward.avif"
 alt="Motors of duckiebot driving forward, as seen from the dashboard"
 />
@@ -140,7 +140,7 @@ Notice how the angular speed is 0. That's since it's not turning. Below is a
 picture of it spinning in a circle, now with no forward velocity
 <img
-src="../../../src/assets/duckietown/dashboard_motors_spin.avif"
+src="/images/duckietown/dashboard_motors_spin.avif"
 alt="Motors of duckiebot spinning in a circle, as seen form the dashboard"
 />

@@ -2,7 +2,7 @@
 title: 'DuckieTown - Lab 2'
 description: "Camera and kinematics"
 pubDate: 'Feb 13 2023'
-heroImage: '../../../src/assets/duckietown/lab2/quiver_plot_sparse.avif'
+heroImage: '/images/duckietown/lab2/quiver_plot_sparse.avif'
 ---
 <!-- THIS IS THE OUTLINE, NOT VISIBLE THO KEEP IT FOR REFERENCE
@@ -136,7 +136,7 @@ instead of having to constantly modify source code.
 Here's a screenshot of the node in `rqt_graph`:
 <img
-src="../../../src/assets/duckietown/lab2/custom_publisher_and_subscriber_allinone.avif"
+src="/images/duckietown/lab2/custom_publisher_and_subscriber_allinone.avif"
 alt="Custom camera node in rqt_graph. It publishes two outgoing topics"
 />
@@ -152,7 +152,7 @@ Here's a screenshot of our modified topic being published (this one isn't black
 and white):
 <img
-src="../../../src/assets/duckietown/lab2/custom_published_image.avif"
+src="/images/duckietown/lab2/custom_published_image.avif"
 alt="Picture of screen with rqt_image_view streaming our topic"
 />
@@ -162,7 +162,7 @@ though the assignment asks for a picture of the source code... so here's a
 picture of where the link leads:
 <img
-src="../../../src/assets/duckietown/lab2/lab2_camera_node_code_screenshot.avif"
+src="/images/duckietown/lab2/lab2_camera_node_code_screenshot.avif"
 alt="Screenshot of 2 Chromium windows displaying source code"
 />
@@ -175,43 +175,43 @@ initial robot frame to the world frame?**
 The robot frame is always centered on the robot, so it is given by
-<img src="../../../src/assets/duckietown/lab2/math1.avif" />
+<img src="/images/duckietown/lab2/math1.avif" />
 The initial world frame is given by
-<img src="../../../src/assets/duckietown/lab2/math2.avif" />
+<img src="/images/duckietown/lab2/math2.avif" />
 To transform the initial world frame to the robot frame is trivial, keep the
 angle theta the same, and `x_R = 0` and `y_R = 0`. This is equivalent to this
 matrix multiplication:
-<img src="../../../src/assets/duckietown/lab2/math11.avif" />
+<img src="/images/duckietown/lab2/math11.avif" />
 To get the initial world frame from the initial robot frame,
 we keep the angle theta the same, and set `x_I = 0.32` and `y_I = 0.32`.
 This is equivalent to this matrix multiplication:
-<img src="../../../src/assets/duckietown/lab2/math3.avif" />
+<img src="/images/duckietown/lab2/math3.avif" />
 We used the following matrix multiplication to transform between the two:
-<img src="../../../src/assets/duckietown/lab2/math4.avif" />
+<img src="/images/duckietown/lab2/math4.avif" />
 with
-<img src="../../../src/assets/duckietown/lab2/math5.avif" />
+<img src="/images/duckietown/lab2/math5.avif" />
 Then we can update the world frame by integrating the above changes in world
 frame
-<img src="../../../src/assets/duckietown/lab2/math6.avif" />
+<img src="/images/duckietown/lab2/math6.avif" />
 We also must apply the modulo of `2 * pi` to the angle theta to keep it between
 0 and `2 * pi`.
 We note that the equation for getting the change in robot frame is given by
-<img src="../../../src/assets/duckietown/lab2/math7.avif" />
+<img src="/images/duckietown/lab2/math7.avif" />
 where `d_r` and `d_l` are the integrated displacement traveled by the right and
 left wheels and `l` is the distance between the wheels and the center of the
@@ -220,7 +220,7 @@ rotation.
 To get the integrated displacements `d_r` and `d_l`, we use the wheel encoder
 ticks formula:
-<img src="../../../src/assets/duckietown/lab2/math8.avif" />
+<img src="/images/duckietown/lab2/math8.avif" />
 where `r = 0.025` is the radius of the Duckiebot wheel and `resolution = 135`
 is the number of ticks in one rotation of the wheel.
@@ -230,11 +230,11 @@ is the number of ticks in one rotation of the wheel.
 To update the angle theta that our DuckieBot has traveled, we used the matrix
 multiplication above, which breaks down to the following equations for angle:
-<img src="../../../src/assets/duckietown/lab2/math9.avif" />
+<img src="/images/duckietown/lab2/math9.avif" />
 where
-<img src="../../../src/assets/duckietown/lab2/math10.avif" />
+<img src="/images/duckietown/lab2/math10.avif" />
 **Can you explain why there is a difference between actual and desired
 location?**
@@ -464,13 +464,13 @@ one from the start of this section. The final distance was 64cm when measured by
 AR-ruler. About 62cm when measured by a prehistoric 90cm-stick:
 <div><img
-src="../../../src/assets/duckietown/lab2/lab2_final_position.avif"
+src="/images/duckietown/lab2/lab2_final_position.avif"
 alt="64cm measured distance in an AR-ruler screenshot of an iPhone"
 style="width: 100%; height: 100%"
 /></div>
 <div><img
-src="../../../src/assets/duckietown/lab2/lab2_final_dist_ar.avif"
+src="/images/duckietown/lab2/lab2_final_dist_ar.avif"
 alt="64cm measured distance in an AR-ruler screenshot of an iPhone"
 /></div>
@@ -485,7 +485,7 @@ ipython-notebook](https://codeberg.org/akemi/duckietown/src/branch/main/lab2/bag
 with the resulting image here:
 <img
-src="../../../src/assets/duckietown/lab2/quiver_plot_sparse.avif"
+src="/images/duckietown/lab2/quiver_plot_sparse.avif"
 alt="A quiver plot (end-to-end arrows) of the robot traveling in a square"
 />

@@ -2,7 +2,7 @@
 title: 'DuckieTown - Lab 3'
 description: "Localization through Sensor Fusion"
 pubDate: 'March 05 2023'
-heroImage: '../../../src/assets/duckietown/lab3/ar-ducks.avif'
+heroImage: '/images/duckietown/lab3/ar-ducks.avif'
 ---
 A screenshot of our [Unit A-4 Advanced Augmented Reality
@@ -60,7 +60,7 @@ This is nicely visualized in this diagram from Figure 4.4 of [Unit B-4 Exercises
 Duckmentation](https://docs.duckietown.org/daffy/duckietown-classical-robotics/out/exercise_sensor_fusion.html#fig:at-lib-frame-convention-wrap).
 <img
-src="../../../src/assets/duckietown/lab3/at-frame-convention.jpg"
+src="/images/duckietown/lab3/at-frame-convention.jpg"
 alt="Frame convention used by april tag library when returning pose"
 />
@@ -223,12 +223,12 @@ the bot is an entire intersection ahead of where Rviz seems to show it.
 <iframe
 width="100%"
 height="325"
-src="../../../src/assets/duckietown/lab3/footprint_as_root_tf.pdf"
+src="/images/duckietown/lab3/footprint_as_root_tf.pdf"
 frameborder="0">
 </iframe>
 It is quite the big graph, if you need to zoom in, click
-[here](../../../src/assets/duckietown/lab3/footprint_as_root_tf.pdf) to open the generated
+[here](/images/duckietown/lab3/footprint_as_root_tf.pdf) to open the generated
 transform tree graph. The root of this tree is
 `csc22927/footprint`.
@@ -293,12 +293,12 @@ remove the ability to transform between frames in two different trees.
 <iframe
 width="100%"
 height="260"
-src="../../../src/assets/duckietown/lab3/footprint_as_child_tf.pdf"
+src="/images/duckietown/lab3/footprint_as_child_tf.pdf"
 frameborder="0">
 </iframe>
 It is quite the big graph, if you need to zoom in, click
-[here](../../../src/assets/duckietown/lab3/footprint_as_child_tf.pdf) to open the newly generated
+[here](/images/duckietown/lab3/footprint_as_child_tf.pdf) to open the newly generated
 transform tree graph.
 The new root/parent is the `world` frame.
@@ -338,7 +338,7 @@ already.
 Our solution to this was pure genius:
 <img
-src="../../../src/assets/duckietown/lab3/bottle_bots.avif"
+src="/images/duckietown/lab3/bottle_bots.avif"
 alt="Big waterbottle on top of duckiebot"
 />
@@ -411,7 +411,7 @@ We could have tried using multiple april tags (when visible) to improve our
 localization estimate.
 <img
-src="../../../src/assets/duckietown/lab3/birds_eye_view.avif"
+src="/images/duckietown/lab3/birds_eye_view.avif"
 alt="Bird's eye view from camera"
 />
 Bird's eye view obtained using a projective transformation of the image.

@@ -2,7 +2,7 @@
 title: 'DuckieTown - Lab 4'
 description: "Robots Following Robots"
 pubDate: 'March 19 2023'
-heroImage: '../../../src/assets/duckietown/lab4/state_machine_flowchart.avif'
+heroImage: '/images/duckietown/lab4/state_machine_flowchart.avif'
 ---
 # Don't crash! Tailing behaviour - Fourth lab
@@ -56,7 +56,7 @@ by seeing the red line from the other line.
 Check out the diagram below!
 <img
-src="../../../src/assets/duckietown/lab4/state_machine_flowchart.avif"
+src="/images/duckietown/lab4/state_machine_flowchart.avif"
 alt="Flow chart of internal state machine"
 />
@@ -76,7 +76,7 @@ detection node is intermittent, we had to use the TOF to fill in the gaps,
 however, we noticed that the two measurements did not perfectly correspond.
 <img
-src="../../../src/assets/duckietown/lab4/camera_vs_tof.avif"
+src="/images/duckietown/lab4/camera_vs_tof.avif"
 alt="Camera readings against TOF distance readings on a chart"
 />

@@ -2,7 +2,7 @@
 title: 'DuckieTown - Lab 5'
 description: "Machine Learning Robotics - Vision"
 pubDate: 'March 26 2023'
-heroImage: '../../../src/assets/duckietown/lab5/test-contour.avif'
+heroImage: '/images/duckietown/lab5/test-contour.avif'
 ---
 # MNIST Dataset and ML basic terminologies
@@ -12,7 +12,7 @@ heroImage: '../../../src/assets/duckietown/lab5/test-contour.avif'
 <iframe
 width="100%"
 height="600"
-src="../../../src/assets/duckietown/lab5/deliverable-1-backprop.pdf"
+src="/images/duckietown/lab5/deliverable-1-backprop.pdf"
 frameborder="0">
 </iframe>
@@ -53,13 +53,13 @@ that is unable to separate the classes well.
 t-SNE before changing the activation function:
 <img
-src="../../../src/assets/duckietown/lab5/t-SNE.avif"
+src="/images/duckietown/lab5/t-SNE.avif"
 alt="Clusters without changing activation function"
 />
 t-SNE after changing the activation function:
 <img
-src="../../../src/assets/duckietown/lab5/t-SNE-linear.avif"
+src="/images/duckietown/lab5/t-SNE-linear.avif"
 alt="Clusters after changing activation function"
 />
@@ -144,7 +144,7 @@ Our model architecture diagram created with
 [NN-SVG](https://github.com/alexlenail/NN-SVG) is shown below:
 <img
-src="../../../src/assets/duckietown/lab5/model.avif"
+src="/images/duckietown/lab5/model.avif"
 alt="Model architecture"
 />
@@ -152,7 +152,7 @@ We collected 270 train images and 68 test images of the numbers by saving
 images from the bot like below:
 <img
-src="../../../src/assets/duckietown/lab5/test-raw.avif"
+src="/images/duckietown/lab5/test-raw.avif"
 alt="Example raw image"
 />
@@ -161,21 +161,21 @@ extract the blue sticky note.
 (`cv.inRange`).
 <img
-src="../../../src/assets/duckietown/lab5/test-mask1.avif"
+src="/images/duckietown/lab5/test-mask1.avif"
 alt="Example mask image"
 />
 Then, we find the contour (`cv.findContours`) of the blue sticky note.
 <img
-src="../../../src/assets/duckietown/lab5/test-contour.avif"
+src="/images/duckietown/lab5/test-contour.avif"
 alt="Example contour image"
 />
 Then, we use OpenCV to get the convex hull of the contour (`cv.convexHull`).
 <img
-src="../../../src/assets/duckietown/lab5/test-convex_hull.avif"
+src="/images/duckietown/lab5/test-convex_hull.avif"
 alt="Example convex hull image"
 />
@@ -187,7 +187,7 @@ came up with the same idea as this [Stack Overflow
 answer](https://stackoverflow.com/a/55339684).
 <img
-src="../../../src/assets/duckietown/lab5/test-corners.avif"
+src="/images/duckietown/lab5/test-corners.avif"
 alt="Example corners image"
 />
@@ -195,7 +195,7 @@ Using OpenCV we calculate a perspective transform matrix to transform the
 image to 28 x 28 pixels.
 <img
-src="../../../src/assets/duckietown/lab5/test-warp.avif"
+src="/images/duckietown/lab5/test-warp.avif"
 alt="Example warp image"
 width="30%"
 />
@@ -204,7 +204,7 @@ We get the black HSV colour mask of the warped image (`cv.inRange`) to extract
 the number.
 <img
-src="../../../src/assets/duckietown/lab5/test-mask2.avif"
+src="/images/duckietown/lab5/test-mask2.avif"
 alt="Example mask image"
 width="30%"
 />

@@ -2,12 +2,12 @@
 title: 'DuckieTown - Lab 6'
 description: "Final Assignment - Obstacle Course"
 pubDate: 'April 16 2023'
-heroImage: '../../../src/assets/duckietown/final/three_ducks_on_a_windowsill.avif'
+heroImage: '/images/duckietown/final/three_ducks_on_a_windowsill.avif'
 ---
-<img src="../../../src/assets/duckietown/final/nine_ducks_on_two_dividers.avif" alt="Nine ducks on two dividers" style="height: 900px; margin: 0, padding: 0">
-<img src="../../../src/assets/duckietown/final/six_ducks_on_two_windowsills.avif" alt="Six ducks on two windowsills">
-<img src="../../../src/assets/duckietown/final/bam.avif" alt="Bam!">
+<img src="/images/duckietown/final/nine_ducks_on_two_dividers.avif" alt="Nine ducks on two dividers" style="height: 900px; margin: 0, padding: 0">
+<img src="/images/duckietown/final/six_ducks_on_two_windowsills.avif" alt="Six ducks on two windowsills">
+<img src="/images/duckietown/final/bam.avif" alt="Bam!">
 ## Round 1

@@ -2,7 +2,7 @@
 title: 'DuckieTown - Pre-Lab 1'
 description: 'Getting ready for CMPUT 412'
 pubDate: 'Jan 11 2023'
-heroImage: '../../../src/assets/duckietown/dashboard_motors_spin.avif'
+heroImage: '/images/duckietown/dashboard_motors_spin.avif'
 ---