Friday, December 23, 2022

space as interaction

Spatial interaction. Using perspective warp to get more accurate positional data. When tracking position in a video recording, the output data is subject to the same perspective as the source footage. By warping the perspective to a more top-down view we can get a more accurate position.
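A minimal sketch of the warp itself, assuming a 3×3 homography has already been estimated from four point correspondences between the footage and the floor plane (e.g. with OpenCV's getPerspectiveTransform):

```ts
// Apply a 3x3 homography H (row-major) to a tracked image-space point,
// mapping it into the top-down "floor" coordinate system.
function warpPoint(H: number[], x: number, y: number): [number, number] {
  const w = H[6] * x + H[7] * y + H[8];
  return [
    (H[0] * x + H[1] * y + H[2]) / w,
    (H[3] * x + H[4] * y + H[5]) / w,
  ];
}
```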

Spatial forms by compressing rooms down to organic crystal shapes. Track how people move through a space and use the position data to transform and distort an abstract representation of the space. When a spectator sees the distortion, they can feel the presence of people inside the original space (or something like this).
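One way this could work, sketched in Three.js: tracked floor positions bulge an icosahedron outward wherever people are standing. The normalized input positions and the falloff constant are assumptions:

```ts
import * as THREE from "three";

// Hypothetical: normalized (0..1) floor positions from the tracking step.
type TrackedPoint = { x: number; y: number };

const geometry = new THREE.IcosahedronGeometry(1, 4);
const base = (geometry.attributes.position.array as Float32Array).slice();

// Push vertices outward where tracked people are nearby, so the crystal
// bulges toward the occupied parts of the room.
function distort(points: TrackedPoint[], strength = 0.3): void {
  const pos = geometry.attributes.position as THREE.BufferAttribute;
  for (let i = 0; i < pos.count; i++) {
    const bx = base[i * 3], by = base[i * 3 + 1], bz = base[i * 3 + 2];
    let influence = 0;
    for (const p of points) {
      // Map the tracked point from 0..1 into the mesh's -1..1 footprint.
      const dx = bx - (p.x * 2 - 1);
      const dz = bz - (p.y * 2 - 1);
      influence += Math.exp(-4 * (dx * dx + dz * dz)); // gaussian falloff
    }
    const s = 1 + strength * influence;
    pos.setXYZ(i, bx * s, by * s, bz * s);
  }
  pos.needsUpdate = true;
  geometry.computeVertexNormals();
}
```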

Monday, August 15, 2022

Interactive 3D scene

Simple render + image composite. All the objects are modelled in Cinema 4D and ZBrush except for the girl. Everything is composited together in Cinema 4D, where the models and textures are baked and exported as glTF files.

The scene is then recreated in Three.js to make it semi-interactive. A live demo is in the link below. NOTE: the loading time is currently very slow, as the models / textures could be optimised much more. It is also only cursor based; it could, for example, be made gyroscope based for phone interaction as well.

https://holo-scene.netlify.app/
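For reference, a minimal sketch of the cursor-based setup in Three.js; the glTF file name is a placeholder:

```ts
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 5;
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Load the baked scene exported from Cinema 4D ("scene.glb" is a placeholder).
new GLTFLoader().load("scene.glb", (gltf) => scene.add(gltf.scene));

// Cursor position drives a slight camera offset for the parallax effect.
const target = new THREE.Vector2();
addEventListener("pointermove", (e) => {
  target.set((e.clientX / innerWidth) * 2 - 1, (e.clientY / innerHeight) * 2 - 1);
});

renderer.setAnimationLoop(() => {
  // Ease toward the target so the motion feels smooth.
  camera.position.x += (target.x * 0.5 - camera.position.x) * 0.05;
  camera.position.y += (-target.y * 0.5 - camera.position.y) * 0.05;
  camera.lookAt(0, 0, 0);
  renderer.render(scene, camera);
});
```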

immersive memories

The thought behind this project was to be able to enter past memories through photos. It is less about telepresence and more a way to allow people to connect with the past. This was made in Cinema 4D and After Effects and is simply a conceptual video. To develop it further, I think it would be possible to recreate something similar using depth maps.

It is currently very linear. Given, for example, a 360° photo as input it could be far more immersive, but since most people don't have a 360° camera, I think the current version is more accessible.

Choose a photo. Then use HiFill or any other image inpainting model to remove the subjects / objects in focus from the image.

Keep two copies of the image, one with the subject, one without. Then use a depth estimation model, e.g. MiDaS v2, to calculate a depth map.

Use the image as a texture on a plane and use the depth map for vertex displacement, as in the sketch below.
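A minimal Three.js sketch of that last step, assuming the photo and a precomputed depth map are saved as files (the paths are placeholders):

```ts
import * as THREE from "three";

const loader = new THREE.TextureLoader();
const photo = loader.load("photo.jpg"); // the inpainted / original photo
const depth = loader.load("depth.png"); // depth map, e.g. from MiDaS

// A densely subdivided plane so the displacement has vertices to push.
const geometry = new THREE.PlaneGeometry(4, 3, 256, 192);
const material = new THREE.MeshStandardMaterial({
  map: photo,
  displacementMap: depth, // brighter depth values push vertices forward
  displacementScale: 0.8, // tune to taste
});
const plane = new THREE.Mesh(geometry, material);

// MeshStandardMaterial is lit, so the scene needs a light as well.
const light = new THREE.AmbientLight(0xffffff, 1);
```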

Wednesday, July 20, 2022

step counter experiment

Tried to mess around a little bit with the step counter.

Progress bars are very user friendly, but there is so much fun stuff that could be done to show progression. However, I do think it comes at the expense of "readability" / effectiveness. As much fun as this is to play around with, I think it takes too much focus away from the other parts of the application / distracts way too much.

I think at the core of this is that gyroscope / device motion is such a nice and easy way to add some extra interaction that also feels super nice, e.g. just a slight color shift moving around like when you tilt iridescent foil. I just ended up going overboard with the idea.
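A minimal sketch of that kind of tilt-driven color shift, using the browser's deviceorientation event:

```ts
// Shift the background hue with device tilt, like tilting iridescent foil.
window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  // beta: front-back tilt (-180..180), gamma: left-right tilt (-90..90)
  const raw = (e.beta ?? 0) + (e.gamma ?? 0);
  const hue = ((raw % 360) + 360) % 360; // wrap into 0..360
  document.body.style.background = `hsl(${hue}, 80%, 60%)`;
});
```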

It is currently super sensitive to demonstrate how the progression works: it adds 25 steps at a time instead of one by one. The accelerometer is also too sensitive, so the movement ends up being a little chaotic; it should be more subtle.

Also, shaking doesn't really do anything currently. I think it would be much nicer with a proper fluid sim or something resembling metaballs, where the blob breaks into a bunch of smaller spheres before lumping back together into one.

Below is a live demo. To activate the accelerometer, click on the step number (the big 250 in the upper left corner). It has only been tested on iPhone, so I'm not completely sure it works on other phones.

https://step-experiment.netlify.app/
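The click is needed because iOS 13+ only exposes motion data after a permission prompt triggered by a user gesture. A minimal sketch, with the element and handler names being hypothetical:

```ts
// Hypothetical element / handler names.
const stepNumberEl = document.querySelector(".step-number")!;
function onMotion(e: DeviceMotionEvent): void {
  console.log(e.accelerationIncludingGravity);
}

stepNumberEl.addEventListener("click", async () => {
  // iOS 13+ gates motion data behind a permission prompt that can only
  // be requested from inside a user gesture like this click.
  const DME = DeviceMotionEvent as unknown as {
    requestPermission?: () => Promise<"granted" | "denied">;
  };
  if (typeof DME.requestPermission === "function") {
    if ((await DME.requestPermission()) !== "granted") return;
  }
  window.addEventListener("devicemotion", onMotion);
});
```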

Sunday, July 10, 2022

glsl focus

Using depth maps to control dynamic focus. The focal point is currently way too narrow, as nothing ever really seems to be fully in focus. Also, the depth map has a different frame rate than the actual video, which results in a mismatch where you can see an "invisible cone".
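A sketch of the depth-driven focus as a Three.js ShaderMaterial, with the in-focus band exposed as a focusRange uniform so it can be widened; the video and depth textures are assumptions:

```ts
import * as THREE from "three";

const vertexShader = /* glsl */ `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

// Blur radius grows with distance from the focal depth; widening
// focusRange keeps a broader slice of the image sharp.
const fragmentShader = /* glsl */ `
  uniform sampler2D videoTex;
  uniform sampler2D depthTex;
  uniform float focalDepth; // depth value that should be sharp (0..1)
  uniform float focusRange; // width of the in-focus band
  uniform vec2 texelSize;
  varying vec2 vUv;

  void main() {
    float depth = texture2D(depthTex, vUv).r;
    float blur = clamp(abs(depth - focalDepth) / focusRange, 0.0, 1.0);
    vec3 color = vec3(0.0);
    // Cheap 3x3 box blur, scaled by the blur amount.
    for (int x = -1; x <= 1; x++) {
      for (int y = -1; y <= 1; y++) {
        vec2 offset = vec2(float(x), float(y)) * texelSize * blur * 4.0;
        color += texture2D(videoTex, vUv + offset).rgb;
      }
    }
    gl_FragColor = vec4(color / 9.0, 1.0);
  }
`;

// Assumed inputs: the source video and its pre-rendered depth-map video.
const videoTexture = new THREE.VideoTexture(
  document.querySelector<HTMLVideoElement>("#video")!
);
const depthTexture = new THREE.VideoTexture(
  document.querySelector<HTMLVideoElement>("#depth")!
);

const material = new THREE.ShaderMaterial({
  uniforms: {
    videoTex: { value: videoTexture },
    depthTex: { value: depthTexture },
    focalDepth: { value: 0.5 },
    focusRange: { value: 0.35 }, // widen for a broader in-focus band
    texelSize: { value: new THREE.Vector2(1 / 1920, 1 / 1080) },
  },
  vertexShader,
  fragmentShader,
});
```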

Try making the depth map control something other than focus, e.g. displacement?