
Data-collecting website and iOS app (Sep 2017)
In this network R1RA finds its physical expression and thus becomes a simulated reality, meaning it becomes “to a degree indistinguishable from true reality”, or virtual reality. The purpose of this technological setup is to retrain visual expectations in humans by intentionally condensing the circumstances of a rhythmic simultaneity of private and social visual expectations of the collectively perceived reality.
The wearables shoot first-person gaze photos each time participants blink simultaneously and display, on the screens of the wearables, the imagery none of them would otherwise have seen. The network thus makes this imagery persistently recur as the participants' expectations of the external world, so that over time these expectations become consciously detected, accumulate density through their rhythmic alignment and evolve into a distinguishable reality.
– Why do the cameras take photos rather than videos?
– The choice of a static image over a motion recording responds to the brevity of a regular blink. This network is the first stage of the R1RA prototype. By “multiplying” the unseen reality of its users, it gives “weight” to the “borders” of the external world. By showing it to the users, it allows them to “re-affirm their existence”. It gradually “pushes out” natural expectations through growing quantitative density, changing the rhythm of private formulation, which brings closer a qualitatively new peak of simultaneity.
In the spirit of experiments on quantum entanglement, the main experiment will involve durational use of an AR version of the glasses in an unpopulated location. Minimizing the number of agents of the external world per distance will create circumstances under which the participants circulate a selected type of expectations and generate new perceptions which “will evolve” into R1RA.
An example of a technologically advanced implementation of this project could be smart contact lenses recording everything their wearer sees and replaying it on a screen inside their eyelids. Such lenses are rumoured to appear on the market under Samsung’s 2014 patent called “Gear Blink”. Their purpose is to overcome the limitations in image quality of AR glasses. While it is unknown if or when they will appear on the consumer market, this allows us time to speculate about the implications of their durational use.
In even further scaling up, such an environment can potentially be created with the use of the TMS neuroimaging decoding technology developed by researchers at UC Berkeley and Tokyo University. Similarly to AI machine learning and the more closely related technology of e-telepathy, its protocol is based on collecting descriptive feedback about what a subject is viewing and mapping associations onto actively reacting brain regions. The essential difference of this method from AR lenses would be the indistinguishably realistic quality of the visual experience. The images of expectations would be embedded directly into the “mind’s eye”. This could potentially mean witnessing a real rather than a metaphorical effect of the network on visual perception.
Experiments
(*as of 2017 unless noted otherwise)
R1RA IOT network is an experimental platform which explores the relationship between visual perception and our co-perceived (perhaps, i.e. co-created) external reality. It comprises sensory wearables, a data-analysis website and a phone app connecting the two. It conducts experiments in creating a communicative trans-reality by condensing the simultaneous blinks of its participants – or rather what they simultaneously and consistently don’t see. The purpose is to imagine as precisely as possible distance communication with no gadgets or implants.
Attentional illusions are a complex of gaps in visual perception – limitations of brain processing causing unnoticed misperceptions of the external world. One of them, the attentional blink, constitutes the rhythm of encoding a stimulus into consciously accessible representations and is thus a binding border between the perceived realities. This is one of the reasons why the network wearables detect blinks.
It hosts a variety of rotational uses. Below they are listed on a “realistic to sci-fi” scale:
Neurophysiological function 1: retraining of visual expectations.
a. Sci-fi. Form a layer of visually perceived reality and use it as an ethereal field for communication akin to visual or sensorial telepathy;
b. Enhance sensitivity to the rhythm of decision-making, which in turn may allow more conscious choosing and sequencing of behavioral patterns. This may become a new design practice – design of personality: designers would create various sequences of behavioral patterns which could be used as a palette for creating new personalities for their human clients. When applying this method to AI, it might have advantages over generic machine learning precisely due to its ratio between specificity and limits.
c. From a neurobehavioral standpoint, it could be curious to know whether this process may be used as a tool in treating psychological disorders where unpleasant memories affect the formation of expectations about the future.
vs. other tech. Its psychological and cognitive effects are different from VR simulations, since in VR a physical person has a digital avatar, whilst in the case of R1RA one’s physical body is the avatar. Could this potentially imply the possibility of studying the effects of the suggested retraining in a comparatively more natural setting and, for the wearer, a deeper penetration of the skill being exercised?
Hardware, iteration 1
A box full of blink-detecting LED glasses (Raspberry Pi Zero + Raspberry Pi camera + Raspberry); with a pre-programmed 0.4 seconds as the threshold for a blink and 0.2 seconds as the threshold for simultaneous blinks; each programmed in Python; all three connected to their own Raspberry Pi Waveshare 2.8” display and to each other via a local Wi-Fi network; and one connected to a GPIO master-controller with mini HDMI and HDMI outputs:
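The simultaneity logic described above (blinks from different wearables counted as simultaneous when they fall within the 0.2-second threshold) can be sketched in plain Python. This is an illustrative reconstruction, not the network's actual code: the function name and the event format are assumptions.

```python
def find_simultaneous_blinks(events, window=0.2):
    """Group blink events that fall within `window` seconds of each other.

    `events` is a hypothetical list of (participant_id, timestamp_seconds)
    tuples, as the wearables might report them over the local Wi-Fi network.
    A group counts as "simultaneous" only when it contains blinks from
    at least two distinct participants.
    """
    events = sorted(events, key=lambda e: e[1])
    groups, current = [], []
    for pid, t in events:
        # Close the current group once the next blink is outside the window.
        if current and t - current[0][1] > window:
            if len({p for p, _ in current}) >= 2:
                groups.append(current)
            current = []
        current.append((pid, t))
    if current and len({p for p, _ in current}) >= 2:
        groups.append(current)
    return groups
```

For example, blinks at 0.00 s and 0.15 s from two different wearers would form one simultaneous group, while a lone blink at 1.00 s would be discarded.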
Photos (Sep 2017)

Hardware, iteration 2
(Sep 2020, post-thesis)




The wearables are intended to resemble a triangle-shaped faux-piercing or a pince-nez. The sensors sit on the nose of the experiment participant and the battery lies on the shoulders like a necklace. So far only the internals are ready. Since fitting the sensors may be tricky, the intention for their outer shell is to 4D print an algae-based kit for forming custom-size wearables. We thus also hope to reduce the overall cost of the wearables. Once one set of experiments is over, the outer shells may be biodegraded and the internals may be passed on to the subsequent set of experiment participants.
Experimental protocols
Scenarios involve subjects each wearing the glasses, either in the same or different locations, either aware or unaware of each other. Once they simultaneously blink within a range of 0.4 seconds, the cameras of all the subjects shoot photos from the first-person gaze and forward them to their mini displays and to their Instagram or Snapchat accounts, which are in turn used to collect data about temporal, location and visual correlations, and from there to the database website, which provides the means to track and analyse the received information.
– Aren’t there other ways to enhance sensitivity to the borders of the perceived worlds?
– Somewhat similar abilities to those described in the R1RA scenario are being achieved by the famous cyborg Moon Ribas, whose electronic implant gives her the ability to feel the Earth's seismic activity. The method of external augmentation doesn’t align with my understanding of the holistic principles of the rearrangement of factors affecting our perceptual capabilities, and feels somewhat outdated once cross-examined with the prospect of self-sufficient forms of interaction with the realities we share. At the same time, such attempts to enhance perception have no choice but to be mechanic, because they choose to contribute to the “today”, where they impact a single agent of reality, rather than to a fantasised metacognitive utopia. The example of popularization they set is empowering in the face of having to contend with artificial intelligence. From the perspective of R1RA, however, they seem to embed limitations into social expectations about the flexibility of perception.
– Does the location of participants matter? Or is this about quantum-like entanglement?
The Zone – a phantasmagoric space depicted by Andrey Tarkovsky in his 1979 movie Stalker – has many interpretations. One is that it is a metaphor for the postulates of life, where causes and effects of events are apparent by being temporally condensed and thus sensorially exaggerated. Both mental and physical events have equal densities which inter-flick in such close proximity that one can hear the thoughts of another. The temporal bending of the space makes the perceived distance from point A to point B impossible to determine, and imaginings instantaneously imprint in physical reality. If these features of the Zone are physical translations of the simple rules of life, they are irresistibly reminiscent of the superposition principle of quantum mechanics. In this sense, the Zone reminds me of R1RA. The multiplicity of particles is orchestrated by a single wave function (rhythm in R1RA), which allows quantum entanglement to happen. By multiplication it can then form new wave functions and result in a Hilbert space (R1RA). It is as if the Zone were a collision of gradations of perceived natural realities, similar to R1RA but existing outside space and time rather than racing to become, as R1RA does.
Experimenting with simultaneity of blinks in relation to location
Attend to experimental protocols which allow collecting data about the amount and frequency of simultaneous blinks in relation to the distance between the participants:
Participants in different locations (precise locations TBD), which at this point are settled by available arrangements rather than by metaphoric ratio overlays of the number of participants and their distance (e.g. an equal number of participants in each highly populated city).
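One minimal way to tabulate the blink-frequency-versus-distance data this protocol would produce can be sketched as follows. Everything here is an assumption for illustration: the per-pair record format, the function names, and the choice of 1000 km distance bins are hypothetical, not part of the project's design.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def blink_rate_by_distance(pair_stats, bin_km=1000):
    """Bin participant pairs by distance and average their simultaneous-blink counts.

    `pair_stats` is a hypothetical list of (distance_km, blink_count) tuples,
    one per participant pair. Returns {bin_start_km: mean_count}.
    """
    bins = {}
    for d, count in pair_stats:
        key = int(d // bin_km) * bin_km
        bins.setdefault(key, []).append(count)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}
```

For instance, pairs at 100 km and 200 km would land in the same 0–1000 km bin, and their simultaneous-blink counts would be averaged together, letting one compare near pairs against distant ones.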
– Wouldn’t it make sense to create this network in a virtual reality?
– The avoidance of VR technology is deliberate. In the scenario of R1RA, virtual worlds are elements equal to natural illusions. Hence, embedding R1RA into a fraction of the external world which “is” its equal would contradict its logic and limit public understanding of it. It is essential for the network to be experienced within our “habitual” environment, which consists of the whole spectrum of gradations. “Habitual” experience of the external world is fragmented, and the purpose of the network is to amplify not its single element but the borders between all of the gradations.
Ongoing developments
- Re-programming of the temporal allowance for simultaneity of blinks to fit the increasing number of participants;
- Building a database for collecting and storing information about the amount of simultaneous blinks;
- Adding a geolocation feature to the wearables and adjusting the database system to collect and store information about the amount of simultaneous blinks in relation to the locations of the network's participants;
- Making an AR DIY 3D-printable model of the wearables. They will be more engaging by allowing users to see the photos taken during blinks as an overlay on the natural image. They will also be simple to replicate and thus facilitate the growth of the network;
- Releasing the blink-detecting protocol and the model of the wearables as open source.
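The planned database of simultaneous blinks with geolocation could be sketched, for instance, in SQLite. This is a hypothetical schema: none of the table or column names come from the project, and it only illustrates the kind of records the developments above call for.

```python
import sqlite3

# Hypothetical schema for the planned blink database (illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE participants (
    id   INTEGER PRIMARY KEY,
    name TEXT,
    lat  REAL,   -- geolocation reported by the wearable
    lon  REAL
);
CREATE TABLE simultaneous_blinks (
    id       INTEGER PRIMARY KEY,
    occurred REAL,              -- Unix timestamp of the shared blink
    window_s REAL DEFAULT 0.2   -- temporal allowance in effect at the time
);
CREATE TABLE blink_members (    -- which participants shared each blink
    blink_id       INTEGER REFERENCES simultaneous_blinks(id),
    participant_id INTEGER REFERENCES participants(id)
);
""")

# Record one simultaneous blink shared by two participants.
conn.execute("INSERT INTO participants VALUES (1, 'A', 52.37, 4.90)")
conn.execute("INSERT INTO participants VALUES (2, 'B', 48.86, 2.35)")
conn.execute("INSERT INTO simultaneous_blinks VALUES (1, 1505000000.0, 0.2)")
conn.executemany("INSERT INTO blink_members VALUES (1, ?)", [(1,), (2,)])
n, = conn.execute("SELECT COUNT(*) FROM blink_members WHERE blink_id = 1").fetchone()
```

Storing participant geolocation alongside each shared blink is what would allow the distance/frequency correlations described in the experimental protocols to be queried later.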