
A "Lay" Introduction to "On the Complexity of Neural Computation in Superposition"

This is a writeup based on a lightning talk I gave at an InkHaven hosted by Georgia Ray, where we were supposed to read a paper in about an hour and then present what we learned to the other participants.

Introduction and Background

So. I foolishly thought I could read a theoretical machine learning paper in an hour because it was in my area of expertise. Unfortunately, it turns out that theoretical CS professors know a lot of math and theoretical CS results, which they reference constantly in their work; this makes their papers very hard to read even if you're familiar with the general area.

Instead of explaining much of the substantial math behind the paper, the best I can do is give an overview of the paper's setup, its contributions, and how they fit in.

Back in the olden days (2021) there was a dream that you could just open up a neural network and understand it by looking at individual neurons. For example, you might ask, "is this neuron a 'ca...
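As a rough illustration of the gap between the "one neuron, one concept" picture and superposition (this is my own toy sketch, not from the paper): in superposition, a network represents more features than it has neurons by assigning each feature a direction in activation space, so no single neuron corresponds to a single feature.

```python
# Toy sketch of superposition: 5 features share a 3-neuron activation space.
# All names here are illustrative; this is not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_neurons = 5, 3

# Each feature gets a unit-length direction in neuron-activation space.
directions = rng.normal(size=(n_features, n_neurons))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Activate features 0 and 3: the neuron activations are a sum of directions,
# so no individual neuron "is" either feature.
active = np.zeros(n_features)
active[[0, 3]] = 1.0
neuron_acts = active @ directions  # shape (3,)

# To read a feature back out, project the activations onto its direction.
# Active features score high, but other features pick up interference;
# in high dimensions, near-orthogonality makes that interference small.
readout = neuron_acts @ directions.T  # shape (5,)
print(np.round(readout, 2))
```

In three dimensions the interference is large, which is exactly why superposition only works well when the ambient dimension is high relative to how many features are simultaneously active.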

AI Alignment Forum · about 5 hours ago