Support for sequence, navigation, and multimedia
brylie opened this issue · 6 comments
Feature
Make it possible for users to create rich media experiences through direct manipulation. These experiences would allow non-linear navigation, exploration, and evolution.
User stories
As a creator
I would like to create a rich multimedia experience
so that I can integrate new media elements into my narrative
As a creator
I would like to add navigational possibilities to my creation
so that people can interact with and explore my idea through sequence and flow
Collaboration
The p5.js project may offer a good toolkit for canvas rendering, animation, sound, and other multimedia objects for inclusion in an Apparatus project. Specifically, issue #189 may prove a good starting point for collaboration.
Thanks for making the "introduction", @brylie. I have great respect for p5 and the people behind it, especially in how they organize together as a community for learning. I agree that we share an aspiration towards making dynamic media accessible to everyone, and especially visual thinkers. I'm not sure exactly what a collaboration would entail, but I'd be happy to discuss if people have ideas.
As for the two user stories you mention:
Apparatus should definitely support more multimedia, images and videos at least. Images should not be very hard to integrate. Really the only tricky part is how images would be packaged with saved Apparatus diagrams (e.g., as base64 blobs, or as references to external URLs). I think video has a lot of potential as well, since the frame of the video would be one of its attributes. You could thus create layered "slit scans" via spreads, etc. You could do things like this Etoys experiment.
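To make the packaging question concrete, here is a minimal sketch (plain browser JavaScript) of the two options; the `diagram.assets` shape is invented for illustration and is not Apparatus's actual save format:

```js
// Option 1: inline the image as a base64 data URL, so the saved diagram
// is self-contained (at the cost of file size).
function embedImage(diagram, file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => {
      diagram.assets = diagram.assets || {};
      diagram.assets[file.name] = reader.result; // "data:image/png;base64,..."
      resolve(diagram);
    };
    reader.onerror = reject;
    reader.readAsDataURL(file); // encodes the file contents as base64
  });
}

// Option 2: store only an external reference, keeping the saved file small
// but tying the diagram to the remote URL staying available.
function referenceImage(diagram, url) {
  diagram.assets = diagram.assets || {};
  diagram.assets[url] = { type: "external", url };
  return diagram;
}
```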
Adding sequence and navigation is also a goal for Apparatus, though it will be longer in coming. Currently the only way to interact with an Apparatus diagram is through drag and drop. You cannot, for example, create a checkbox or a link. To do this would require being able to listen for events (mousedown, on tick, etc.) and then perform stateful updates of variables/attributes. I have ideas for this, but it requires careful design work so as not to overcomplicate the interface/programming model.
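As a purely hypothetical illustration of that model (Apparatus has no event API today), the shape of "listen for an event, then update state" might look like this:

```js
// A tiny event bus: listeners react to events and perform explicit,
// stateful updates of variables; everything else can stay pure dataflow.
const state = { checked: false, tick: 0 };
const listeners = [];

function on(eventName, handler) {
  listeners.push({ eventName, handler });
}

function dispatch(eventName) {
  for (const l of listeners) {
    if (l.eventName === eventName) l.handler();
  }
}

// A "checkbox" is just a mousedown listener that toggles a variable...
on("mousedown", () => { state.checked = !state.checked; });

// ...and an "on tick" listener advances time-dependent attributes.
on("tick", () => { state.tick += 1; });

dispatch("mousedown"); // state.checked is now true
dispatch("tick");      // state.tick is now 1
```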
Thanks for the response.
In what ways might Apparatus build on the p5.js library/ecosystem, i.e., how might p5 serve as a foundation for additional direct-manipulation objects?
Also, what are your thoughts on managing 'state' using a data flow model and functional reactive programming?
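To make that question concrete, here is a toy dataflow cell: derived values recompute automatically when their inputs change. The names are invented for illustration and don't correspond to Apparatus internals or any particular FRP library.

```js
// A source cell holds a value and notifies subscribers when it changes.
function cell(value) {
  const subscribers = new Set();
  return {
    get: () => value,
    set: (v) => { value = v; subscribers.forEach((fn) => fn()); },
    subscribe: (fn) => subscribers.add(fn),
  };
}

// A derived cell recomputes whenever any of its input cells changes.
function derive(inputs, compute) {
  const out = cell(compute(...inputs.map((c) => c.get())));
  const recompute = () => out.set(compute(...inputs.map((c) => c.get())));
  inputs.forEach((c) => c.subscribe(recompute));
  return out;
}

const radius = cell(10);
const area = derive([radius], (r) => Math.PI * r * r);
radius.set(20); // area.get() now reflects the new radius automatically
```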
Over the weekend, I went to a hackathon in Helsinki. My team, which consisted of non-coders aside from myself, wanted to make an interactive hypermedia narrative about refugee experiences in the EU. While the team was planning the narrative sequence, which had branching choices and variables, I discovered Twine.
Twine has a flowchart interface that makes it simple to design branching narrative sequences, and it supports variables and template logic.
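As a rough sketch of that system model in JavaScript (the data shapes are invented for illustration, not Twine's actual structures), a story is a graph of passages with branching links and shared variables:

```js
// Story variables shared across passages.
const vars = { metVolunteer: false };

const passages = {
  start: {
    text: "You arrive at the border crossing.",
    links: ["talkToVolunteer", "keepWalking"], // branching choices
  },
  talkToVolunteer: {
    text: "A volunteer explains the asylum process.",
    enter: () => { vars.metVolunteer = true; }, // set a story variable
    links: ["camp"],
  },
  keepWalking: {
    text: "You continue alone.",
    links: ["camp"],
  },
  camp: {
    // Template logic: the displayed text can depend on story variables.
    text: () =>
      vars.metVolunteer
        ? "At the camp, you look for the volunteer you met."
        : "At the camp, you know no one.",
    links: [],
  },
};

// Visiting a passage runs its side effects and renders its text.
function visit(name) {
  const passage = passages[name];
  if (passage.enter) passage.enter();
  return typeof passage.text === "function" ? passage.text() : passage.text;
}

visit("talkToVolunteer");
console.log(visit("camp")); // "At the camp, you look for the volunteer you met."
```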
What do you think about extending Twine's system model with direct-manipulation graphics via Apparatus?
Would it work if Apparatus diagrams were embeddable? If so, what kind of API would embedded diagrams need in order to effectively integrate with the Twine (or other) system?
Twine simply uses HTML, CSS, and JavaScript, with an optional templating language.
If Apparatus works with Web Platform APIs, embedding would work fine. Embedded creations would simply use web standards such as SVG, CSS, and Web Components, and would align with other excellent projects such as D3.js, p5.js, and Three.js.
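One way such an embedding API could work is via an iframe plus postMessage, sketched below; the message names and URL are hypothetical, not an actual Apparatus API:

```js
// Hypothetical contract between a host page (e.g. a Twine story) and an
// embedded diagram, using only web standards (iframe + postMessage).
const frame = document.createElement("iframe");
frame.src = "https://example.com/diagram.html"; // placeholder URL for a hosted diagram
document.body.appendChild(frame);

// The host page drives an attribute of the embedded diagram...
function setAttribute(name, value) {
  frame.contentWindow.postMessage({ type: "set-attribute", name, value }, "*");
}

// ...and listens for values the diagram publishes back.
window.addEventListener("message", (event) => {
  if (event.data && event.data.type === "attribute-changed") {
    console.log(`${event.data.name} = ${event.data.value}`);
  }
});

// Wait for the iframe to load before sending the first message.
frame.addEventListener("load", () => setAttribute("speed", 2));
```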
Right, I am just watching the Alan Kay presentation where he demonstrates the dropping balls experiment you show above. Really cool how these direct manipulation interfaces can accelerate thought :-)