Clock (time) synchronization — Part 2


Various methods of time synchronization

In our previous article, we gave you a brief introduction to clock synchronization and mentioned that this is essential to most systems using clocks, including bio-signal analysis. Today, we’d like to go a little deeper into the topic and cover some different methods of clock synchronization. Let’s find out how it actually works!

Time measurements and causes of error

Before we consider how time synchronization actually works, let's look at where errors in the synchronization process come from. The sources of clock synchronization error in a wireless system can be classified into four types of delay: send time, access time, propagation time, and receive time. In a wired network, access time is excluded, since problems such as channel interference are rare (H. Kopetz & W. Schwabl, 1989).

Among these, propagation time is the most important. Send time and receive time are processing times within each device, so their contribution is almost negligible at the accuracy typically required for clock synchronization. (There are, of course, ways to reduce them further, for example by handling the synchronization process in the lowest layer of the stack.) Propagation time, however, cannot be measured directly, because the sender's and receiver's clocks are not yet synchronized. Protocols like NTP (Network Time Protocol) were designed to work around this obstacle.
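As a rough illustration, the four delay components can be modeled as a simple structure. This is a Python sketch with made-up values; the type and field names are ours, not from the cited report:

```python
from dataclasses import dataclass

@dataclass
class MessageDelay:
    """Delay components of one message, in seconds (hypothetical values)."""
    send: float         # sender-side processing before transmission
    access: float       # waiting for the channel (absent on wired links)
    propagation: float  # time on the medium; unmeasurable until clocks agree
    receive: float      # receiver-side processing after arrival

    def total(self) -> float:
        # End-to-end latency is the sum of the four components.
        return self.send + self.access + self.propagation + self.receive
```

Of the four, only `propagation` cannot be measured locally at one end, which is why the protocols below have to estimate it indirectly.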

Basic time synchronization algorithm: Network Time Protocol

NTP is regarded as a basic algorithm for solving clock synchronization problems. In fact, many algorithms developed afterward are modifications of NTP adapted to the characteristics of the communication medium.

In NTP, one device is designated as the master clock (server), and it sends messages to all slaves (clients) to synchronize their clocks. Each slave then calculates its local time and its drift relative to the master clock. The problem with this process is that a propagation delay exists. Say, for example, that the master sends a PTP (Precision Time Protocol) message signaling 1:00:00 pm over a network with a transmission time of 1 second. The slave, however, will receive the message at 1:00:01 pm. A correction is therefore required for the network latency, which can be done as follows.

First, we define the offset as the difference between the slave clock and the master clock. Let s(t) be the slave clock and m(t) the master clock at time t; then the offset o(t) is

o(t) = s(t) - m(t)

To calculate this, the master and slave exchange two messages.

Figure 2. Two-way message exchange between master (M) and slave (S)


1. M: sends a sync message at T1

2. S: receives the sync message at T1′

3. S: sends a delay request at T2

4. M: receives the delay request at T2′

Through this process, the master learns all of T1, T1′, T2, and T2′. (T3 and T3′ are additionally required for the slave to know the offset as well.) Let the one-way transmission delay be d (so the round-trip time, RTT, is 2d), and assume the delays in the master -> slave and slave -> master directions are equal. Then:

T1′-T1 = o + d

T2′-T2 = -o + d

Using these, the offset o can be obtained as follows.

o = ½{(T1′-T1) - (T2′-T2)}

If the transmission delay is not constant, it is recommended to repeat the process several times and use the average value, or to exchange timestamps as often as possible without affecting the actual data transfer.
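The two equations above can be solved in a few lines. Here is a minimal Python sketch; the function names and timestamp values are ours, for illustration only:

```python
def estimate_offset(t1, t1p, t2, t2p):
    """One two-way exchange: returns (offset o, one-way delay d).

    t1  -- master clock when the sync message is sent (T1)
    t1p -- slave clock when the sync message arrives (T1')
    t2  -- slave clock when the delay request is sent (T2)
    t2p -- master clock when the delay request arrives (T2')
    Assumes the delay is the same in both directions.
    """
    # T1' - T1 = o + d  and  T2' - T2 = -o + d, so:
    offset = ((t1p - t1) - (t2p - t2)) / 2
    delay = ((t1p - t1) + (t2p - t2)) / 2
    return offset, delay


def estimate_offset_averaged(exchanges):
    """Average the offset over several exchanges when the delay jitters."""
    offsets = [estimate_offset(*e)[0] for e in exchanges]
    return sum(offsets) / len(offsets)
```

For example, if the slave runs 5 s ahead and the one-way delay is 1 s, an exchange with T1 = 0, T1′ = 6, T2 = 10, T2′ = 6 recovers o = 5 and d = 1.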

Video, screen synchronization

For instruments that measure biosignals as well, the clock offset relative to the server must be measured to obtain correct timestamps for the signal. However, it is not easy to apply an algorithm such as NTP directly when the clock (time) of the equipment itself is unknown or its logic is inaccessible.

In this case, you can measure the clock offset by generating a signal with an accurately known timestamp. For example, a photodiode can be used to measure the offset of an electroencephalography (EEG) device. This method, of course, requires a light-emitting source capable of recording the exact event time, such as a computer monitor that can switch its color suddenly from dark to white. When the photodiode, connected to an EEG electrode, receives the light, it generates an electric signal that is recorded together with its timestamp.

Figure 3. Time synchronization with photodiode


Figure 4. Time synchronization with video


By comparing the timestamp of the light emission with the timestamp recorded when the electrical signal was received, you can verify the time difference between the two devices and the synchronization accuracy.
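A minimal sketch of that comparison, assuming you already have matched lists of flash and detection timestamps (the function name and values are hypothetical):

```python
def clock_offset_from_events(emit_ts, detect_ts):
    """Mean offset of the EEG recorder's clock relative to the stimulus PC.

    emit_ts   -- flash timestamps on the stimulus PC's clock
    detect_ts -- photodiode-spike timestamps on the EEG recorder's clock
    The k-th flash must correspond to the k-th detected spike.
    """
    diffs = [d - e for e, d in zip(emit_ts, detect_ts)]
    return sum(diffs) / len(diffs)  # the spread of diffs indicates jitter
```

Averaging over several flashes, as with NTP exchanges, reduces the effect of jitter in either device.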

To Conclude

Now you might have a sense of how the clock offset is calculated and how synchronization is applied in practice. I hope this article helps you understand the process in real cases, such as biosignal analysis or related experiments.


H. Kopetz and W. Schwabl, “Global time in distributed realtime systems,” Technical Report 15/89, Technische Universität Wien, 1989.


Unveil User’s Architectural Preference in VR


In our previous stories, we have repeatedly discussed the many ways virtual reality (VR) can be used as a tool to explore the minds of users. It becomes extraordinarily powerful when combined with technologies such as eye tracking, EEG analysis, and neuroimaging, all of which try to understand and unveil things originally hidden deep inside the human mind. In particular, our latest research review, “Combination of Virtual Reality and Eye Tracking: Explore the Mind of Consumers,” was about the development of a gaze-based assistant system in a virtual supermarket. The proposed system was able to provide individualized recommendations based on a consumer's preference, which was determined by the customer's attention level.

How can we uncover the deep psychology behind preference?

The results of the previous research are inspirational: they show not only that a virtual supermarket can induce quite active interaction between human and technology, but also that a customer's in-the-moment preference can be measured and fed back into the virtual environment. Yet one issue remains. Why did a consumer stare at a particular product longer than at others? Did he like it more, or did he strongly dislike it? What feeling, what true reaction, occurred in his mind as he searched through the many granola choices? Eye-tracking data alone, though powerful, still cannot tell us what specifically people have in their minds.

The combined use of VR and EEG: Explore preference


Preference, or liking, can be rephrased as an “affective response,” and it belongs to the class of human emotions far too complicated to be defined and determined by superficial gaze information alone. In contrast, our brain, which actively reacts to all sorts of stimuli, contains much information about what we see and how we feel. This week, therefore, we look into a study that investigated the deeper nature of preference in virtual reality (VR) using electroencephalography (EEG): “Affective response to architecture — investigating human reaction to spaces with different geometry.”

Investigating emotional response to spaces

The field of architecture is one of the most prominent areas dealing with the interaction between humans and their environment. As people react differently to the various spaces they enter, architects must be sensitive to those feelings in order to construct a space that is not only suitable for its use but also attractive to the minds of its users. In other words, searching for the right way to design an architectural space is an enduring but fundamental task for most architects. Many people assume it is the designer's responsibility and ability to figure out the perceptual and cognitive influence of architectural space on people. However, much more can be identified with the help of scientific measurement and analysis than with an individual's insight alone. Hence, the research aimed to investigate the emotional and cognitive reactions generated by various types of spaces through the quantification and measurement of EEG.

To achieve this objective, the research team conducted the experiment in two phases. In the first stage, the study centered on observing human behavior in a virtual environment by analyzing the participants' self-assessment results. But why did they choose VR in the first place? When designing an experimental setup, there is always a trade-off between keeping control of experimental variables and presenting a realistic environment. Virtual reality allows researchers to manipulate experimental controls while keeping design features constant, so the team chose the virtual environment as a substitute for reality to overcome this trade-off. How, then, did they design the virtual environment to observe human reactions to different types of architectural spaces?

Figure.1. Plan and sections of the four designed VR spaces

They built four types of virtual environments: a square symmetrical space (Sq); a round-domed symmetrical space (Ro); a sharp-edged asymmetrical space with tilted surfaces, i.e. walls and ceiling (Sh); and a curvy asymmetrical space with rounded, smooth, cornerless surfaces (Cu). The primary reason for designing four different spaces this way was to examine how people feel about interiors with complex forms featuring breaks and curves (Sh and Cu), as compared with simple structures (Sq and Ro).

Figure.2. Upper left, external view of the four designed VR spaces

The participants were asked to enter each of the four spaces, walking via joystick; they passed through the corridor, opened the door, explored the space, and left when they finished. Afterwards, they filled out a questionnaire about their experience in each space and rated their preference for it on a 5-point Likert scale.

In the second stage of the experiment, a new framework for examining humans' physiological responses to architectural geometry was adopted. With the participants wearing a wireless EEG device, the same trial from the first stage was run again to analyze their brain activity. In other words, the subjects walked through and explored the spaces as before, but this time wearing an Emotiv EPOC headset.

VR experiment with SURVEY versus EEG

The results of the two experiments turned out to be complementary. The first experiment suggested that there were differences in what people felt about each space in terms of efficiency, aesthetics, safety, pleasantness, and level of interest. In addition, the responses suggested that participants with no expertise in design tend to have different space preferences from those who work as designers.

Fig.3. Experiment 1

So how did the second experiment, the enhanced version of the first with a reinforced analysis methodology, turn out? The participants' brainwaves directly confirmed the differing reactions to the spaces indicated by the first experiment. More notably, however, the EEG examination offered an additional insight.

The figure below illustrates the NPC 1 and NPC 2 mapping for one participant; the dots of each color indicate the four kinds of spaces. The first graph is based on a 10-second recording window, while the second focuses on the first 2 seconds of exposure to a space. Comparing the two graphs, the different reactions to each space are better distinguished in the early time window. That is, adaptation and the emotional response to an area occur within a short period. This finding is also in line with eye-tracking studies showing that viewers of an artwork spend their first 2 seconds sweeping the image and grasping its overall gist.

Fig.4. Experiment 2

In a nutshell, the experiments conducted in virtual reality provided a better understanding of affective responses to architectural space, which can consequently contribute to building designs that users favor. Furthermore, the use of EEG can show different physiological reactions more explicitly. Compared with the analysis of subjective survey results, brainwaves allow researchers to get real-time information about what happens in users' minds while they explore and adjust to a particular space.

Explore user mind with EEG in VR

To sum up, even identical experiments will yield qualitatively different results and contributions depending on the analysis methodology. To gain a more profound understanding of how humans feel about, think about, and react to their surroundings, it is crucial to carefully collect and investigate physiological data. Electroencephalography, with its relatively high applicability, can be a proper choice for many researchers.

If you are interested in running your research in VR and want to understand users' brain activity in that environment, visit our website for information about our newly released product, LooxidVR. This mobile-based VR headset is the world's first to provide an interface for both the brain and the eyes through its embedded EEG sensors and an eye camera.


In addition, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.


  1. Affective response to architecture — investigating human reaction to spaces with different geometry | Architectural Science Review
  2. Visual Interest in Pictorial Art During an Aesthetic Experience | Spatial Vision
  3. In the Eye of the Beholder: Employing Statistical Analysis and Eye Tracking for Analyzing Abstract Paintings | Proceedings of the 20th ACM International Conference on Multimedia


Combination of Virtual Reality and Eye Tracking: Explore the Mind of Consumers


What product information do you think is most effective at inducing a consumer's purchase behavior? If you were a promotion manager at a granola company, how would you try to understand your potential consumers' purchase behavior and the deep psychology behind it?

Eye tracking: Keeping track of consumer attention


Eye tracking, the sensor technology that enables a device to measure exactly where and when people's eyes are focused, is known to provide a better understanding of consumers' visual attention. People tend to stare longer and more often at objects they are interested in, and their visual path reveals much about their cognitive flow. Carefully investigating consumers' visual logs (in other words, their eye-tracking data) might therefore give those looking for ways to promote particular products significant insight. From the consumer's perspective, the general public might also enjoy a far better shopping experience with timely recommendations based on their eye-gaze information.

But how can we track consumer attention in the real world?

Recall your own shopping experience. You enter a supermarket, stand in front of a shelf stuffed with the product category you were looking for, skim through several products, and finally put one of them into your cart. In fact, this whole purchase decision happens within seconds. It is therefore highly important for retail researchers to investigate consumers' natural attentional process “in situ.”

Most current research, however, even when analyzing eye-tracking data, is undertaken in laboratory settings. The laboratory environment makes it easy to exercise experimental controls and investigate exactly what you want to know. On the other hand, keeping tight experimental control inevitably leads to low ecological validity, and if ecological validity is poor, even a well-analyzed result may be worthless, since we cannot guarantee that similar effects would occur in the wild. This trade-off between control and ecological validity has long been a serious issue for many researchers.

Virtual reality mobile eye tracking: A new research opportunity

Fortunately, the advent of virtual reality (VR) is extending this trade-off frontier for existing research. VR not only allows various levels of experimental control but also makes it possible to build a shopping experience that feels real. Such an experimental environment places researchers at a point where experimental control and ecological validity can be combined near-optimally, so with the help of VR, eye-tracking technology can be used far more effectively to capture users' visual attention with better reliability. This week's research, “Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research,” reviews how mobile eye tracking can be used in virtual reality and discusses the pros and cons of applying eye-tracking technology in different experimental environments. In particular, the paper focuses on three kinds of environments: a conventional 2-D monitor-based setting, virtual reality, and the real environment. It also proposes an experiment in a virtual-reality setting to discuss the validity of using mobile eye tracking in VR to study consumer behavior.

Figure.1. Interacting in a virtual reality

First of all, the paper sets up a list of criteria and rates the relative superiority or inferiority of the three experimental settings on each criterion. The resulting ratings, shown in the table below, can serve as a useful guideline for deciding which equipment to use and how to design eye-tracking experiments. As we can read from the table, “desktop eye tracking,” compared with “mobile eye tracking in the field,” has a relative advantage in the criteria concerned with experimental control (“Ease of creating/using non-existing stimuli”, “Ease of controlling and randomizing treatment and extraneous factors”, “Naturalness of the eye tracking task”, “Ease of analyzing and reacting to respondent’s attention and behavior in real time”, “Ease of generating large sample sizes”, “Ease of obtaining retailer permission to record”, “Ease of data preparation”, “Reliability of AOI coding”, “Reproducibility of experimental setting”). In contrast, “mobile eye tracking in the field” rates better than “desktop eye tracking” on the criteria concerning ecological validity (“Realism of stimulus display” and “Realism of interaction”).

Table.1. Criteria for deciding which environment to use — eye tracking specific criteria are highlighted in grey

What about “mobile eye tracking in virtual reality”? Interestingly, it appears to be the compromise that combines the relative advantages of both sides (“desktop eye tracking” and “mobile eye tracking in the field”). It scores highly on almost every criterion where “desktop eye tracking” outperforms “mobile eye tracking in the field.” What is more, unlike “desktop eye tracking,” it also receives improved scores for “Realism of stimulus display” and “Realism of interaction.” Although it still has to tackle cost-effectiveness and further technological requirements concerning realistic visualization and a convincing presentation of the setting, mobile eye tracking in VR is expected to open up many new research opportunities.

Fig.2. Trade-off between experimental control and ecological validity

Observing shopper behavior with eye tracking data in a virtual supermarket

Here is one of the new studies that adopted eye tracking in virtual reality in a new field: shopper research. To demonstrate how mobile eye tracking in virtual reality can help answer unresolved questions in retail research, the team designed a virtual store to test whether additional product information can change consumers' final purchase decisions.

In the virtual supermarket, designed to create a realistic shopping experience, several shelves were filled with assortments of granola and baking-mixture products. The supermarket was presented in a virtual-reality lab equipped with the front projection screen of a CAVE environment, and respondents went through the experiment wearing SMI eye-tracking glasses. They completed three successive stages. In the first stage, they had to choose their most preferred product out of the 20 on the shelf. Then the same set of products reappeared, with a red frame highlighting the initially chosen product. Soon after, six recommended products were highlighted with a blue frame. A pop-up bubble with additional product information also appeared next to any product at which the respondent gazed for more than 200 ms. Finally, the subjects were asked whether they would stay with their initial product choice.

Fig.3. Example scenes of the virtual supermarket
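The 200 ms dwell rule for the pop-up bubble can be sketched as a per-frame update like the following. The paper does not publish its implementation, so the structure and names here are our own illustration:

```python
DWELL_THRESHOLD = 0.2  # seconds; the 200 ms rule from the study

def update_dwell(state, product_id, t):
    """Per-frame gaze-dwell tracker.

    state      -- dict holding the currently fixated product and the time
                  fixation on it began (mutated in place)
    product_id -- product under the gaze ray this frame (None = no product)
    t          -- current time in seconds
    Returns the product whose info bubble should be shown, or None.
    """
    if product_id != state.get("target"):
        # Gaze moved to a different product (or away): restart the timer.
        state["target"] = product_id
        state["since"] = t
        return None
    if product_id is not None and t - state["since"] >= DWELL_THRESHOLD:
        return product_id  # dwelled long enough: show the info bubble
    return None
```

A real system would additionally latch the bubble once shown instead of re-triggering it every frame, but the core dwell logic is the same.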

The results showed that some subjects changed their preference during the stages. In other words, their decisions were affected by the additional information provided in VR, which implies both that the virtual supermarket induced quite active interaction between human and technology, and that such an experimental setting is useful for testing and observing consumer responses.

Soon, when eye-tracking technology is integrated into everyday electronic devices, far more innovation in retail research and in people's shopping experiences will become possible. For instance, a gaze-based assistant system that provides individualized recommendations based on a consumer's preference might change our expectations of what shopping should be in the future.

Try out your research with virtual reality and eye tracking

Although the paper focuses mainly on shopping, such a gaze-based assistant system that reflects the user's real-time preference in a virtual environment can be widely used in many areas where exploring people's minds matters. If you are still uncertain of its validity, check out the available technology that has successfully combined virtual reality with eye tracking, and try it out on whatever you want to investigate. A great deal of valuable but so far hidden information, such as consumers' complicated in-store decision processes and the interior-design elements that significantly influence people's mood, would be within your reach.

If you are interested in using a brain-and-eye interface in virtual reality, visit our website for information about our newly released product, the world's first mobile VR headset with an interface for both the eyes and the brain.




  1. Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research | Journal of Business Research
  2. How Eye Tracking Works | Blue Kite Insight
