Technical Milestones
Milestone 1: Music instrument [MVP]
Description: A live electronic music instrument that turns dance moves into music. The MVP only needs to support enough motion and sound for one track.
Target Completion Date: May 2025 (Completed: video)
Key Deliverables:
Stream data from the motion capture gloves into Python
Motion2music network: a basic motion-recognition AI that maps up to 10 distinct gestures to 10 distinct sounds, sufficient for playing one track (a sketch of this pipeline follows this list)
Music engine state machine logic
Deep integration with DAW (Ableton)
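A minimal Python sketch of this pipeline (glove data in, MIDI note out toward Ableton), assuming Rokoko Studio's JSON-over-UDP custom streaming; the packet schema, the port, and the trivial classifier are made up for illustration and are not the real Motion2music network.

    # Minimal pipeline sketch: glove data in over UDP, MIDI note out to Ableton.
    # The JSON schema, the port, and the classifier are assumptions for
    # illustration; the real Motion2music network replaces classify().
    import json
    import socket

    import mido  # needs a backend such as python-rtmidi

    GESTURE_TO_NOTE = {i: 60 + i for i in range(10)}  # 10 gestures -> 10 notes

    def classify(frame):
        # Stand-in gesture classifier: bucket one finger-flexion value (0..1)
        # into a gesture id 0-9.
        return min(int(frame["fingers"][0] * 10), 9)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 14043))  # port is an assumption

    midi_out = mido.open_output()  # default MIDI output; route it to Ableton
                                   # via a loopback/virtual MIDI port

    while True:
        data, _ = sock.recvfrom(65536)
        gesture = classify(json.loads(data))
        midi_out.send(mido.Message("note_on", note=GESTURE_TO_NOTE[gesture]))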
-------------------
Milestone 2: Synced visuals [Design only]
Description: Create a short video to demonstrate how an animated projection can mimic the DJ dancer.
Target Completion Date: June 2025 (Completed: video)
Key Deliverables:
Wear a full-body motion capture suit to demonstrate that live streaming to a rigged character in the desktop app is possible
Use existing footage to illustrate the commercial potential of synced visuals
-------------------
Milestone 3: [redacted]
-------------------
Milestone 4: Full audiovisual performance
Description: Create a sensational, stage-worthy performance combining dance, sound, and light. Perform in San Francisco to build reach.
Target Completion Date: November 2025
Key Deliverables:
Replace Rokoko mocap gear with that from Manus-Meta
Integrate Mudra EMG wristband
Integrate T3 smart watch controller
Integrate gaming projector or lasers
Find extremely intuitive dance-sound-light mappings (a sketch of one such mapping follows this list)
Practice!!!
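To make the mapping deliverable concrete, here is a hypothetical sketch in which one gesture drives both a sound (a MIDI note toward Ableton) and a light (an OSC color message toward whatever ends up controlling the projector or lasers). The mapping values, OSC address, host, and port are all illustrative assumptions; mido and python-osc are one possible pairing of libraries.

    # Hypothetical dance-sound-light mapping: one gesture drives both a MIDI
    # note (sound) and an OSC color message (light). All values, addresses,
    # and ports below are illustrative assumptions.
    import mido
    from pythonosc.udp_client import SimpleUDPClient

    # gesture id -> (MIDI note, RGB light color)
    MAPPING = {
        0: (60, (255, 0, 0)),   # e.g. fist      -> C4, red
        1: (62, (0, 255, 0)),   # e.g. open palm -> D4, green
        2: (64, (0, 0, 255)),   # e.g. point     -> E4, blue
    }

    midi_out = mido.open_output()               # routed to Ableton
    light = SimpleUDPClient("127.0.0.1", 7700)  # host/port assumed

    def trigger(gesture_id):
        note, (r, g, b) = MAPPING[gesture_id]
        midi_out.send(mido.Message("note_on", note=note))
        light.send_message("/light/color", [r, g, b])  # OSC address assumed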
-------------------
Milestone 5: [redacted]
-------------------
Milestone 6: [redacted]
-------------------
Milestone 7: Bidirectional online learning
Description: Implement true online learning, whereby the AI continually improves at interpreting the user's motion while the instrument is played, without separate training and inference stages.
Target Completion Date:
Background:
Online learning is an as-yet unsolved problem in machine learning. The difficulty is that when the “answers” to the problem being predicted change over time, it is not possible to hold out a test dataset with which to verify that the retrained model is still accurate. This applies doubly to real-time (e.g. neurotech) applications, where learning and utilization should be parts of the same process, in such a way that a test dataset is not needed at all. For AI to feel like part of the self, it must learn like the self; the human brain proves that this is possible.
Intensive online learning research is already being done by experts in the field, which raises the question of why we should have any better a chance at solving it. I postulate that not only do we have a chance, but that we may be ideally positioned to solve online learning. The reason is simply that some things can only be discovered with the right tools: if you don’t have a submarine, you will never reach the Mariana Trench, and if you don’t have a spaceship, you won’t land on the moon. For the problem at hand, discovering the solution to online learning requires a tool which (1) inherently lends itself to learning and improving on both the user’s and the AI’s side; (2) has an excellent UI/UX, since a sufficiently subpar interface may well make human learning impossible even before online learning is attempted; and (3) has the AI learn something that closely mimics what the human learns (ideally the two learning processes should almost mirror each other), because this automatically elucidates their similarities and differences and thus highlights where online learning falls short.
The alternative, trying to solve online learning theoretically, is pointless due to the massive number of unknowns, especially the unknown unknowns. Instead, one should build a real-world application as if the problem of online learning were already solved, since doing so constrains the problem enough that the remaining challenges become clear. Reaching the point where one truly does not know how to continue invariably highlights which problems really need to be solved, and often shows that many conceptual questions turn out to be mere engineering issues.
Right now we might try to answer questions such as:
Is needing separate training and inference stages really a problem, given that they can be smartly parallelized?
Are current learning algorithms fundamentally incapable of dealing with data drift?
Can we “cheat” the online learning problem by adding a variable which the system could not have known, and then figuring out how to compute that variable after all?
and more. But when actually building the system, we might find that completely different questions are the relevant ones, and finding the right questions is almost always much harder than finding the right answers.
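Still, the first question can already be made concrete. Below is a minimal sketch of interleaved learning and utilization, assuming scikit-learn’s SGDClassifier and its partial_fit update: every frame is used for inference and then immediately becomes a training sample, so there is no separate training stage and no held-out test set. The feature vectors, the feedback signal, and trigger_sound are illustrative stand-ins, not our actual design.

    # Interleaved learning and utilization: predict on each motion frame, then
    # immediately update on it, so there is no separate training stage and no
    # held-out test set. Features, labels, and trigger_sound are stand-ins.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    N_GESTURES = 10
    model = SGDClassifier(loss="log_loss")
    classes = np.arange(N_GESTURES)

    def get_motion_frame():
        return np.random.rand(1, 32)  # stand-in for live mocap features

    def get_user_feedback():
        # Stand-in for a correction signal, e.g. the dancer repeating or
        # overriding a gesture when the wrong sound was triggered.
        return np.random.randint(N_GESTURES)

    def trigger_sound(gesture):
        print("play", gesture)  # stand-in for the music engine

    fitted = False
    while True:
        x = get_motion_frame()
        if fitted:
            trigger_sound(model.predict(x)[0])    # utilization ...
        y = np.array([get_user_feedback()])
        model.partial_fit(x, y, classes=classes)  # ... and learning, same loop
        fitted = True

The point is not this particular model but the shape of the loop: prediction and the weight update live in the same iteration, which is what lets questions like the parallelization one be tested empirically rather than debated.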
Key Deliverables:
To be figured out when we get there
-------------------
Milestone 8: [redacted]