Splogging?: A Spatial Web of Empathy

Q: Why is a spatial web page listed at the top of my Strata Cut site?

In 2014, I co-founded YOUar.io, a convergent spatial relocalization software company, with Ray Di Carlo, Carlos Calica, Oliver Daniels (my son), and most importantly George Blikas – who has grown into an amazing CTO for the company. While this domain is generally known as AR (augmented reality), our patents include a 2014 invention that realizes how to align offsite VR (remote) content to interact with onsite AR (local) content. So it is really an XR (both combined) platform for exchanging position-to-position spatial content.

The computer vision team deploys a unique API. It enables any device (Android, iOS, drone, robot, IoT, machine vision, etc.) to SEE the same spatial position as any other device, REGARDLESS OF BRAND. This is a two-level Master/Slave computer vision (CV) system of sparsely triangulated alignments and relationships. The higher-level CV function is a lightweight guide, so common places seen by different devices can be shared across all of those specific devices. This brings disparate, uncoordinated CV into common alignment – across different kinds of hardware – all without bogging anything down.
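To make the two-level idea a little more concrete, here is a minimal sketch in TypeScript. It assumes a hypothetical shared “master” layer of sparse anchors in a common world frame and a device-local “slave” tracker; the types and the `localToWorldOffset` function are illustrative inventions, not the actual YOUar API, and a real system would solve for full rotation, translation, and scale rather than a simple translation average.

```typescript
// Hypothetical sketch of the two-level idea: a lightweight shared "master"
// layer holds sparse anchors in a common world frame, while each device's
// own CV stack (the "slave" layer) tracks the same physical anchors in its
// local frame. Names and math here are illustrative, not YOUar's SDK.

type Vec3 = { x: number; y: number; z: number };

interface Anchor {
  id: string;        // stable identifier shared across devices
  worldPos: Vec3;    // position in the common (master) frame
}

interface LocalObservation {
  anchorId: string;  // which shared anchor this device re-recognized
  localPos: Vec3;    // where the device's own tracker places it
}

// Estimate the translation that maps this device's local frame onto the
// shared frame, averaged over every anchor both layers agree on.
// (A real solver would also estimate rotation and scale.)
function localToWorldOffset(
  anchors: Map<string, Anchor>,
  seen: LocalObservation[]
): Vec3 | null {
  const matched = seen.filter(o => anchors.has(o.anchorId));
  if (matched.length === 0) return null; // no shared anchors in view
  const sum = matched.reduce(
    (acc, o) => {
      const w = anchors.get(o.anchorId)!.worldPos;
      return {
        x: acc.x + (w.x - o.localPos.x),
        y: acc.y + (w.y - o.localPos.y),
        z: acc.z + (w.z - o.localPos.z),
      };
    },
    { x: 0, y: 0, z: 0 }
  );
  const n = matched.length;
  return { x: sum.x / n, y: sum.y / n, z: sum.z / n };
}
```

The point of the division of labor is that the heavy per-device CV stays local, while only a lightweight set of shared anchors has to agree across brands.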

The YOUar SDK can orient otherwise dissimilar computer vision systems to accurately ‘see’ the same place in real time. And this is deployable at PLANET SCALE. Uniquely, sharing spatial information happens within the app, so nothing is sent over the WEB – a more secure data method, even granting that HTTPS is 99.9999% secure. Data security in person-to-person (P2P) transactions is a fundamental rights issue of our AI-enhanced future, and the same goes for our agency and self-determination in the spatial web.

‘Accurate’ in AR is situational and condition-specific. Our goal is 1 cm alignment with well-set-up ‘trackables’ (a term of art for any marker, image, or feature set aligned to optical capture of real-life geometry, shape, contrast, and color). In real life, consumers use this tech in less-than-ideal ways, so alignment has edge cases that drift more than designed. But the more you use it (re-see the trackable), the more accurate the alignment becomes. The more others re-see the shared trackable, the more accurate your device becomes as well. And yet, near-field or close-up AR responds differently than far-away AR – and often needs to be ‘tuned’ to specific use cases and needs.
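As a rough illustration of why re-seeing helps, here is a small sketch (again hypothetical, not the SDK itself): each new sighting of a trackable is folded into a running average, so the noise in any single observation matters less and less.

```typescript
// Illustrative sketch (not the YOUar SDK) of why re-seeing a trackable
// tightens alignment: every new sighting is folded into a running average,
// so error in any one observation is progressively averaged out.

interface TrackableEstimate {
  x: number; y: number; z: number; // estimated position (meters)
  sightings: number;               // how many times it has been re-seen
}

function refine(
  est: TrackableEstimate,
  obs: { x: number; y: number; z: number }
): TrackableEstimate {
  const n = est.sightings + 1;
  // Incremental mean: the estimate moves 1/n of the way toward the latest
  // observation, so later sightings nudge it less and less.
  return {
    x: est.x + (obs.x - est.x) / n,
    y: est.y + (obs.y - est.y) / n,
    z: est.z + (obs.z - est.z) / n,
    sightings: n,
  };
}
```

With noisy but unbiased observations, the averaged estimate tightens roughly with the square root of the number of sightings – which is the intuition behind “the more you re-see it, the better it gets.”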

As AR glasses improve and become affordable, PERSISTENT content placed at locations can stay and never leave. This allows folks to plant their image, video, object, Unreal or Unity animation, text, audio, haptic, IoT, or vibratory experience – you name it – in places where others can see and respond. On any rural sidewalk or in Times Square.

AR will be the natural heads-up-display eye interface by which we see and change the behavior of our robots, our drones, our IoT sensors, our metaverse fantasies, and our games. A second life in real life, overlaid on the physical world.

VR space will interact as if it were there on location with the real content. So remote, offsite use of content for AR places will oddly become the dominant interaction with the real world, but the real overlay in local space is still where the function and magic of this future will derive from.

And one of the most interesting SOCIAL uses of this upcoming phenomenon is SPLOGGING – i.e., spatial blogging. As you are there (or anywhere public), you leave your content, and others leave their spatial responses or additional AR content. Influencers and creators will leave a ‘trail’ of site-specific, location-accurate content anywhere they go. With glasses on, the community of created and placed persistent content just appears to us, right where others put it.

A whole new conversation among the human race is about to begin, driven by the usefulness and attraction of a worldscape UI/UX beyond phone screens. This permanence of populated places and spaces (local and remote), inhabited by digital ghosts and vector-responsive apparitions, will grow. Marinate on what the next decade begins to look like. An INTERNET OF PLACES AND SPACES (IoPS) will emerge to network and connect us deeper / further / farther than what virtual content alone has done so far – and remember, nothing has been the same since the nascent internet became publicly useful only 30 years ago.

Q: What does this have to do with Strata Cut anyway?

I have always been a dimensional thinker – and Strata Cut has developed my spatial-time calculating brain. The same things that led me into using distortions of shape as a programmed way to reveal sliced animation led me to see how a dimensionally connected world is upon us, and that AR (XR) will take off when ‘head-mounted displays’ (HMDs – basically AR glasses) become ubiquitous, at a price point and functionality curve that makes them everyday items people can use. YOUar is designed to create a convergent spatial answer (democratically available to all, outside of silos and walled gardens) to this new shared overlay reality we are about to embark upon. In the meantime, it is great for mobile phones right now, just as it is.

Both space-over-time blocks and the AR/VR spatial web are technologies of natural EMPATHY. Strata Cut demands we think of all shapes and actions as connected in time. As we make strata cuts, we cannot help but see the world differently, and think differently about events in the world. We are time blind (born without a sense of time). We understand time through things that move or change: that event happened, because what was one thing in the past is now different. Then clocks emerged to rule our lives, and we compute spaces, and all things are tied in relative sequence to all other things. Our time blindness is cured by these clock ‘glasses.’ But Strata Cut literally (viscerally) shows us that time flow is a real, concrete, physical thing. The brain retrains to understand that all actions are connected backward, forward, and sideways. This is an art form that creates natural EMPATHY. The art forces the practitioner to develop inherent sympathy with other human time-based experiences, and in this way puts us beyond our own egos, and into seeing the world as completely connected.

The spatial web will profoundly connect us to the physical world. What layer of spatial information we choose, and what layers we share, will be planted in a world-altering way. Sploggers will make commentary and human storytelling in these otherwise normal places. It will join us more tightly to strangers and their experiences at that very same spot. It’s another natural EMPATHY technology coming. When you participate, you naturally must think more deeply about others, and the places they have traveled before you. It will be like walking a mile in another human’s shoes.

I could go on, but two of the most dominant kinds of empathy tech will be ASI and BCIs. Artificial Semi-Intelligence is data-driven. It seeks all data from all things to better build its models. If you participate in the rewards of other people’s data, you must give your own. And non-logical (fascist fantasy) outcomes will become more and more obvious to all who participate. You have to give to get. Those who give the most will get the most = EMPATHY!

AGIs are a decade away and will grow along a ‘spectrum’ of just how ‘self-driven’ any such machine intelligence becomes. Still, I am optimistic that AGIs will have automatic empathy with the data creators from whom they get all their information. It is BAKED into the ‘share all data, improve all outcomes’ bargain that AGI promises. Why would it cut itself off from the human data pipe? The stories of EVIL AGI are really centralized fantasies … that CHINA or BAD ACTORS will use the tech for bad reasons. But doing so means they also cut themselves off from all the data, and can only use what dictators can get ahold of for bad purposes. This is a real concern, but in the long run, empathy tech is stronger than the evil use of that tech.

BCI is short for ‘Brain-Computer Interface’ – Neuralink, from the Sith Lord Elon, is an example. We can already share simple command understanding without words. We will soon share a basic vocabulary of thoughts without words. Then entire patterns of thought and images and feelings will be shared without words.

In short order, we will be able to literally record one person’s experience and re-transmit it inside the brain of another. We will record animals’ experiences, and see what it feels like to be a horse or a pig. We will record those who die, and feel what death is like. Add AGI, and the spatial web, and the Strata Cut-inspired advanced forms of shape-over-time engineering that are coming … add all this up …

= EMPATHY is coming, along with an economy and society of abundance. I could ramble on more about all the other critical-mass technologies that are converging toward the so-called singularity. However, all that is for another blog post, on another day.

For now, just stay the course. Let’s simply not blow ourselves up. And in a short time – peace, love, and the dissolution of boundaries are coming to us all. A POST-TERRAIN world awaits.

Q: Why does that thought scare our private little egos so much?