August 2019
Climate art, hacking Autodesk, and other funemployment projects
It’s been a great summer. I quit my job in May and have been free of obligations for the last few months. Today I start a PhD in Computer Science at MIT!
It’s nice to breathe and reorient before starting up the next mountain. I spent the first half of the summer traveling in Europe, Asia, and along the West Coast. The last month or so I’ve been back home in Cambridge, relaxing and spending a lot of time outside, and casually jamming on some projects with friends.
Here are some rough notes on what I’ve been up to:
Climate change art
Climate change has been on my mind. It’s getting harder to look away.
I spent a few days with my friend Seth working on a project for the Climate Fixathon, a monthlong hackathon focused on climate change. We felt a little uneasy with the premise: what hubris, to think the climate can be “hacked” with software, especially since we only wanted to devote a few days to our project.
So we decided to make a simple art piece, trying to spread the message about climate change in a new way. We wanted to publish something that made the issue feel tangible—not just the effects, but also the causes.
The result was HQ→CO2: a site that juxtaposes Street View photos of fossil fuel headquarters with satellite imagery of places affected by climate change. The message is that there are executives in corporate headquarters—real people in buildings—making billions of dollars selling a product that’s ruining the Earth.
Climate change is a complicated problem, to say the least. Fossil fuels aren’t the only cause of it, and regulating or destroying these companies won’t magically fix things unless we adjust the rest of our society to not need so many fossil fuels. Still, these companies bear part of the responsibility for this mess we’re in, and I think it’s right to put pressure on them to be part of the solution.
The tech
Why do a side project if not to use fancy tech and over-invest in tools? 😀
Some reviews of our tech stack:
Svelte: Great concept. Reactive programming is a solid foundation, and the compiler is super useful. Fantastic tutorial. I’m excited to see where this framework goes.
WebGL w/ Regl: WebGL opens up endless creative possibilities for web design, which I find very refreshing. Shaders hurt my brain in a good way; they're such a different paradigm from what I'm used to. "Print debugging" by outputting colored pixels to the screen is odd but fun. I'd heard that raw WebGL has painful boilerplate and that Regl makes it much easier to work with; that matched our experience.
Glitch: We started the project on Glitch, collaborating in realtime, Google Docs style. I found the workflow great for two people working on a small site, and I'm convinced that live collaboration with integrated hosting has a major role to play in the future of IDEs. There's still a lot to figure out to make that experience smooth for larger projects, but having everyone work on the same live-deployed codebase is hugely simplifying: it basically eliminates "give you my code" and "deploy our code" as steps separate from editing the code.
The Glitch VSCode plugin got a bit glitchy (possibly because of coffee shop WiFi), so we ended up reverting to a classic git workflow for the latter half of the project. This felt familiar and comfortable, but it was nonetheless a step back from the awesomeness of instant deployment. Every time we wanted to share in-progress work with each other, I felt the pain.
ZEIT: We deployed on ZEIT because they're one of the hackathon sponsors. I'm still getting used to the serverless architecture and find it a bit overcomplicated at times, but maybe that's just getting over years of server-ful indoctrination? We also ran into a few annoying problems on ZEIT that felt like they resulted from too much invisible magic in the deployment process; some of their documented functionality for setting environment variables on our servers... err, serverless functions... didn't work as expected.
On the internal tooling front: I made a system for managing the data on companies and satellite locations in Airtable, and syncing it to our code repo via the Airtable API. Much nicer editing experience than manually editing a giant JSON file of data.
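For flavor, here's roughly the shape that sync script took (a sketch, not our exact code; the base ID, table name, field handling, and output path are all placeholders):

```python
# Sketch of the Airtable -> repo sync: pull every record from a table via the
# Airtable REST API and write the fields to a JSON file the site imports.
import json
import os

import requests

AIRTABLE_API_KEY = os.environ["AIRTABLE_API_KEY"]
BASE_ID = "appXXXXXXXXXXXXXX"  # placeholder base ID
TABLE_NAME = "Locations"       # hypothetical table name

def fetch_all_records():
    """Page through the Airtable API until there's no more 'offset' token."""
    url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
    headers = {"Authorization": f"Bearer {AIRTABLE_API_KEY}"}
    records, offset = [], None
    while True:
        params = {"offset": offset} if offset else {}
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        data = resp.json()
        records.extend(data["records"])
        offset = data.get("offset")
        if not offset:
            return records

if __name__ == "__main__":
    locations = [r["fields"] for r in fetch_all_records()]
    with open("src/data/locations.json", "w") as f:  # placeholder path
        json.dump(locations, f, indent=2)
    print(f"Wrote {len(locations)} locations")
```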
I also made a little tool for tweaking parameters to Google Maps API requests and seeing the results in realtime. This was super helpful for fine-tuning the imagery with a fast feedback loop, and made adding new locations about 10x faster.
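The core of that tool was just building a request URL from a dict of parameters. Something like this (the parameter values, coordinates, and output path are made up):

```python
# Sketch of the feedback loop: build a Street View Static API request from a
# dict of parameters, fetch it, and write the image to disk for preview.
import os

import requests

API_KEY = os.environ["GOOGLE_MAPS_API_KEY"]

def fetch_street_view(params, out_path="preview.jpg"):
    """Fetch one Street View image with the given parameters."""
    query = {"size": "640x640", "key": API_KEY, **params}
    resp = requests.get("https://maps.googleapis.com/maps/api/streetview",
                        params=query)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# Tweak these numbers and re-run to see how the framing changes.
fetch_street_view({
    "location": "29.7355,-95.4324",  # placeholder coordinates
    "heading": 120,  # compass direction of the camera
    "pitch": 10,     # tilt up/down
    "fov": 80,       # zoom, effectively
})
```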
Other climate-related things:
I recently signed up for Project Wren, which helps you calculate your carbon footprint and buy high-quality carbon offsets. I love how much detail they give you about exactly what offset project you’re funding.
If you have ideas on how software can help address climate change, shoot me an email or a tweet; I'd love to hear from you. I'm hoping I can partially orient my PhD research towards this topic at some point.
Also, I just gotta say: years of professional software engineering have trained me to work sustainably, but there's something to be said for a few long, unsustainable days of furious programming. Early-stage creative prototyping seems to benefit from a level of energy that's hard to reach at a sustainable pace.
Adventures in computer vision
My friend Kevin has a CNC machine with a webcam pointed at it. This seemed like a fun jumping-off point for using computer vision to help with fabrication.
We started by trying to use Python and OpenCV to automatically recognize the corners of rectangular objects. Our first demo was finding the corners of an index card taped to some cardboard. In this screenshot you can see the stages of the pipeline (a rough code sketch follows the list). Clockwise from top left:
- Input image (in black and white)
- Apply slight blur
- Detect edges with Canny edge detection
- Detect lines with Hough line transform
- Plot detected corners of the index card (intersections of lines) in a transformed coordinate space
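Here's roughly what that pipeline looks like in code (a sketch: the blur kernel, Canny thresholds, and Hough parameters are guesses, and a real version filters the line pairs before intersecting them):

```python
# Sketch of the corner-finding pipeline: blur -> Canny edges -> Hough lines ->
# line intersections. Parameter values here are illustrative, not tuned.
import cv2
import numpy as np

def find_lines(gray):
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                            minLineLength=100, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]

def intersection(l1, l2):
    """Intersection of the two infinite lines through the given segments."""
    x1, y1, x2, y2 = map(float, l1)
    x3, y3, x4, y4 = map(float, l2)
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel lines
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
lines = find_lines(gray)
corners = []
for i, a in enumerate(lines):
    for b in lines[i + 1:]:
        p = intersection(a, b)
        if p is not None:
            corners.append(p)
# In practice you'd keep only intersections of roughly perpendicular lines
# and cluster nearby points down to the four card corners.
```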
This was interesting but proved quite finicky, so we pivoted to a new task. The idea was to display live video from the CNC bed inside CAD software, at scale, to help understand how objects in the CAD design correspond to the real-world fabrication surface. Also, the CAD software was running on a Windows VM, so this involved learning a lot more about Win32 COM and VBScript than I ever expected.
Step 1: get live video displaying in Autodesk Inventor! The workflow was pretty janky:
- Mac host machine writes images from the webcam out to a file (the host-side loop is sketched below)
- The filesystem is shared with the Windows guest VM
- A Python script on the Windows VM tells Autodesk to reload the image from that file, many times per second
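The Mac-side half of that loop is only a few lines; something like this (the shared-folder path and update rate are made up):

```python
# Sketch of the host-side loop: grab webcam frames with OpenCV and overwrite a
# file in a folder shared with the Windows VM. The script on the Windows side
# repeatedly tells Inventor (over COM) to reload the image from this path.
import time

import cv2

SHARED_IMAGE_PATH = "/Volumes/shared/cnc-bed.png"  # placeholder path

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        cv2.imwrite(SHARED_IMAGE_PATH, frame)  # overwrite the same file each time
        time.sleep(0.1)  # ~10 updates per second
finally:
    cap.release()
```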
Seeing this working the first time was thrilling:
Step 2:
- User selects the corners of a known rectangular area in the image, by marking points in the CAD software.
- A Python program on the Windows VM reads the corner positions out of the CAD software and sends them to the Mac host machine by updating a file shared with the host (sketched below)
In the image below you can see the corners selected with purple crosses:
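Reading the corners back out on the Mac side is trivial once you settle on a file format. Here's a sketch assuming one "x,y" pair per line (the format and path are assumptions for illustration):

```python
# Sketch of the Mac-side half of step 2: read the four user-selected corners
# from the shared file, assuming one "x,y" pair per line (a made-up format).
import numpy as np

def read_corners(path="/Volumes/shared/corners.txt"):  # placeholder path
    """Return the four selected corners as a 4x2 float32 array."""
    with open(path) as f:
        points = [tuple(float(v) for v in line.split(","))
                  for line in f if line.strip()]
    assert len(points) == 4, f"expected 4 corners, got {len(points)}"
    return np.array(points, dtype=np.float32)
```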
Pro tip: when sending coordinates between multiple programs, spend some time understanding the coordinate systems of the various programs before you start! We spent forever debugging issues with this. Also, why can't everyone just agree on which corner is (0, 0)?
Step 3: Using OpenCV on the Mac host, perform a perspective transform on the webcam image to create a new image that looks like an “overhead” view of the rectangular area. If the rectangular area has known dimensions, this also lets us scale the image to the correct physical dimensions in the CAD software.
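With OpenCV this step is only a few lines. Here's a sketch (the rectangle dimensions and pixels-per-inch are example values, and the corners are assumed to already be converted into the webcam image's pixel coordinates, in top-left, top-right, bottom-right, bottom-left order):

```python
# Sketch of step 3: warp the webcam image so the selected rectangle becomes a
# true overhead view at a known scale (ppi pixels per inch).
import cv2
import numpy as np

def rectify(frame, corners, width_in=11.0, height_in=8.5, ppi=50):
    out_w, out_h = int(width_in * ppi), int(height_in * ppi)
    # Destination corners in the same order as the selected ones:
    # top-left, top-right, bottom-right, bottom-left.
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(corners, dst)
    return cv2.warpPerspective(frame, M, (out_w, out_h))

# Using read_corners() from the step 2 sketch above:
frame = cv2.imread("cnc-bed.png")  # placeholder filename
overhead = rectify(frame, read_corners())
cv2.imwrite("overhead.png", overhead)
```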
Here we’re using Autodesk to measure some calipers set to 2 inches. The software measurement turned out pretty accurate!
Quick overall impressions:
The OpenCV API is really fun for amateurs. It provides powerful built-in functions, but still lets you flexibly compose your own pipelines, unlike lots of neural-net-based tools.
The Windows COM system has some rough patches, but I was amazed by how much power we had to query and control every single object in the Autodesk system. It felt way more direct than the web APIs I’m used to. Is all Windows software this extensible?? (I’m guessing not)
Arithmetic viewer
I started a couple of other projects that are still in their very early days, and that I may continue developing if I have time:
My friend Eli and I spent a couple of days hacking on a UI for exploring math expressions. We ended up with a textbox where you can type in arithmetic, plus a UI that lets you hover over any sub-expression and see its result (inspired by my friend Glen's awesome project, Legible Mathematics).
It was my first time building something in Elm (pairing with friends who know other languages is a fantastic way to learn). Elm was such a good fit for this problem that I found myself repeatedly amazed. Check out the parser code: so beautiful! I’m interested in eventually growing this into some sort of structured Lisp editor. It’ll be a challenge to combine the input and output boxes and make the interaction feel intuitive.
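The core idea is small enough to sketch outside of Elm, too. Here's a rough Python version (clearly not our actual code, which is Elm) that parses an arithmetic expression and evaluates every sub-expression, which is essentially what the hover UI needs:

```python
# Sketch of the "evaluate every sub-expression" core, using Python's ast module
# and only the four basic arithmetic operators.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_subexpressions(source):
    """Return (sub-expression text, value) pairs for every node in the tree."""
    results = []

    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            value = OPS[type(node.op)](ev(node.left), ev(node.right))
        elif isinstance(node, ast.Constant):
            value = node.value
        else:
            raise ValueError(f"unsupported expression: {ast.dump(node)}")
        results.append((ast.get_source_segment(source, node), value))
        return value

    ev(ast.parse(source, mode="eval").body)
    return results

for text, value in eval_subexpressions("(1 + 2) * (3 + 4)"):
    print(f"{text} = {value}")
```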
Research explorations
I’ve also been exploring some early prototypes which I plan to eventually grow into a fuller research project on end user programming.
One concept I’m exploring is a UI for transforming nested JSON objects with live feedback. What does a good formula language look like for this sort of task when it’s coupled to an environment that shows the data?
Another kinda related idea I’m exploring is a visual environment for making interactive UIs based on reactive streams (building on tools like RxJS and RxMarbles). I’ve come across a fair amount of work on writing reactive UIs in code, and visualizing reactive code, but haven’t seen much on actually creating the reactive code in a visual environment in the first place.
This is a super rough screenshot of a prototype I'm mocking up, where you can see streams and visually manipulate them to create a UI. On the left is a mobile app UI; on the right is a series of streams derived from a stream of click events, all defined with a functional formula language in something like a spreadsheet formula editor. Still very early, but I'm excited about the potential.
If you have thoughts about any of these ideas, I’d love to chat—send me an email or tweet!
Also, now that I’m doing research full time at MIT, I’m planning to post roughly monthly updates about my work here. If you’re interested, you can follow along via Twitter or RSS.