DR. FAN: "That's a good question. I guess we should ask Tim Cook today if he still feels that way. Google Glass has pivoted to Enterprise, and as everyone knows we (Kopin) are a major supplier to Google Enterprise Edition (Glass). Following that pivot, wearables slowed down for about 18 months and many became hesitant. There are always challenges to being a pioneer, and we should expect setbacks – progress is not always linear, but that doesn't mean it is wrong to explore. The work Google Glass has done has greatly advanced both the understanding of the market for head-worn wearables and the technologies that are key to increasing that market’s size. Should people go back and think about whether the product was wrong? Does it mean there is no need for Hands-Free? Is Augmented Reality not needed?
If you ask Apple today, you will find they are big on AR. AR is even more important than VR as far as they are concerned. Therefore, AR is needed. If you ask Apple if Hands-Free is needed, they will definitely say yes. So, again, the need is there. The question is, can such a device be put on the head and work as AR glasses, or smart AR glasses? Is that viable?
A head-mounted device is the most difficult to design: head sizes differ and individual interpupillary distances vary widely. There is also an aesthetic element: devices need to look good and fashionable, or no one will wear them. Fashion is why people are going to the band or smartwatch now. Eventually you still have to design headwear for AR, but head-mounted wearables are the most difficult area to succeed in right now for these – and many more – reasons. I give a lot of credit to Google, RealWear, Vuzix and Recon Instruments, as they are real pioneers; they are trying different things and building knowledge in the space.
My view is that, to realize the promise of smart glasses – and I mean smart glasses with large FOVs, holograms and everything you have dreamed of – we are still some time away. But many think that head-mounted wearables can replicate the experience of holding a phone and walking around looking for Pokémon. Attempts are being made in labs, and somebody will find that sweet spot of a device that people can accept.
Personally, I think our Solos – our smart glasses built in collaboration with the U.S. Cycling Team for professional and amateur training – are an excellent early product example. But the question remains: do we have the perfect solution? Perhaps not, but I think it is probably the best in the world today, and we have to see where we can improve – until consumers say “Aha, this is acceptable! This I can use every day!” Once that happens, I see a rush, with people saying “smart glasses are finally here.”
Returning again to Christensen’s theory, one cannot build a very complicated, Hands-Free smart glass device without trial and error or even growing pains. Keep it simple. Retain the features that make it Hands-Free, but strip out unnecessary features to instead focus on making the basics useful, while also aesthetically pleasing (as well as lightweight on the head).
I bring up weight because another factor is comfort. Those 360-degree, fully occlusive avionics AR helmets – while coveted by Consumers – are very heavy. Their users are also sitting in a chair, a pilot’s seat, and believe me those pilots are much stronger than the average person.
In conclusion to your question, there is a human-to-machine relationship involved that human evolution needs to adapt to. You cannot assume that humans can keep up as quickly as necessary with the technology we are capable of designing. There are technologies that, in their current form, humans may not be able to accept."
Following my successful interview with Andy Lowery of RealWear in May of this year, I approached Kopin about a similar opportunity with their CEO, Dr. John CC Fan. Dr. Fan is an industry veteran, having been involved in the creation of the HBT transistor now used in all smartphones and the original AMLCD micro display technology for DARPA, along with many other technologies and IP that will be key to the success of head-worn wearables.
Dr. Fan agreed to spend some time with me to discuss Kopin’s views on the current state of Augmented Reality (AR) and the market opportunities opening up for the company in both enterprise and consumer verticals.
"On the display technology front, Kopin has a unique transmissive AMLCD architecture. There is also the reflective LCOS display, which I know Kopin also has through subsidiary ForthDD, and of course now OLED is becoming important. Is transmissive AMLCD, with advantages like low cost, durability and power efficiency, something the market is ready to line up for, and is there immediate applicability? Or will the market jump to OLED? Do you see transmissive AMLCD as still having its time in the sun for head-worn wearables?"
DR. FAN: "Pupil itself is a disruptive technology, addressing multiple optics criteria for wearables. You want see-through, but the image has to be very bright so that it can be used anywhere – even in sunshine. The optics should be very efficient so the power consumption is low. You also have to address focus change issues, because when you view an image, going from the display to the real world (and back again), your eye has to re-focus. This constant need to re-focus can cause eye strain.
"Having used the Solos product with Kopin’s Pupil optics, I was amazed at how small the optical module is and that it appears to be see-through because your brain fills in some of the detail behind the display in some way. It's actually quite an incredible optical system."
The reason Pupil optics are so challenging is that they are so tiny. You are talking about a microscopic light guide that provides an image directly to the human pupil, using very little light and very low energy.
Pupil optics focus the image at a tiny dot at your pupil. Because only part of your eye’s pupil is used to view the image in the display, the rest of your pupil can still see everything else in your field of view. Your eyes merge the two image sources into one. Therefore, our Pupil optics appear as see-through AR.
WHAT I TOOK AWAY FROM OUR CONVERSATION
DR. FAN: "I believe your last comment is correct. I think voice is coming, with voice engines becoming increasingly advanced, whether from Microsoft, Nuance or some of the Chinese voice engines.
We’ve focused our Whisper audio product on wearables and near-field noise cancellation. The technology combines at least two microphones, one near your mouth and one away from your mouth, sampling the acoustic environment about 16,000 times per second from the microphone near your mouth. In addition to being a microphone technology, Whisper is also an AI system, able to differentiate what is noise and what is the voice signal coming from your mouth.
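To make the dual-microphone idea concrete: near-field speech arrives much louder at the mouth microphone than at the reference microphone, while far-field noise reaches both at similar levels, so frequency bins where the two signals look alike can be attenuated. The following is a minimal illustrative sketch of that principle only – it is not Kopin's Whisper algorithm, and the function name and parameters are my own assumptions.

```python
import numpy as np

def two_mic_noise_suppress(near, far, frame=512, ratio_gate=1.5):
    """Crude dual-microphone noise suppression sketch.

    near : samples from the microphone close to the mouth
    far  : samples from the reference microphone away from the mouth

    Per frequency bin, keep content where the mouth mic clearly
    dominates (speech) and attenuate bins where both mics see
    similar energy (shared far-field noise).
    """
    out = np.zeros(len(near), dtype=float)
    for start in range(0, len(near) - frame + 1, frame):
        n = np.fft.rfft(near[start:start + frame])
        f = np.fft.rfft(far[start:start + frame])
        # Gain ~1 where |near| >> |far| (speech bin), ~0.1 otherwise (noise bin)
        gain = np.where(np.abs(n) > ratio_gate * np.abs(f), 1.0, 0.1)
        out[start:start + frame] = np.fft.irfft(n * gain, n=frame)
    return out
```

A real system would use overlapping windows, smoothed gains, and a learned voice/noise model rather than a hard threshold, but the core asymmetry between the two microphones is the same.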
While our Whisper technology is highly advanced, headset makers are still in their early development stages. We expect multiple headsets to be released in the next few months also incorporating Whisper. We believe there is a huge need out there – certainly within the wearables headset market.
While we have focused our initial market entry on head-worn wearables, we have also solved the problem of far-field noise cancellation. Now the question is, how do we package it? That is a big market you are talking about now – you can go to Amazon Echo, to automobiles, everywhere. So now we will seek out opportunities for Whisper in far-field applications.
One thing is certain: if we had not solved the near-field noise cancellation problem for headsets with Whisper, we would not have solved the far-field problem. We keep investing in R&D and keep moving. New technology takes time, especially disruptive technology. You have to have persistence and patience. Whisper is coming, the benefits are there. Once we get a design win for far-field, and people discover how well it works, they cannot get enough of you. It's like our avionics AR technology: once people test-drove our products, we achieved more than a 90% share of the market!
Far-field noise cancellation is a big market. If Whisper achieves design wins, this could be a grand slam for Kopin."
Dr. John CC Fan, CEO of Kopin Corporation
"I have to ask about Whisper. I continue to follow this technology, and have read all the patents for Whisper and follow Dashen Fan’s work. But I can’t figure out why we haven't yet seen a design win. It seems like Whisper should offer significant improvements in the area of noise cancellation, but I have yet to see a product with a Whisper design. Whisper could be bigger than Kopin’s display business eventually in terms of units, correct?"
BY: DERRICK ZIERLER
Going back to our military developments, in working with soldiers we’ve learned that there is competition going on in the user’s field of view: you are simultaneously looking at infinity and also looking at a very close screen. As an experiment, try looking at something in the distance and then quickly look at your phone – your eyes need time to adjust. Pupil optics address this issue. Basically, Kopin’s Pupil optics minimize focal point changes and eyestrain, resulting in optics that are more relaxed than other architectures.
Kopin began shipping Solos smart glasses with Pupil this past Spring for cyclists, and the user feedback is very strong. Our users say they don't even notice the optics now – the optical module is that transparent to the user. Newer versions of Solos will incorporate improved Pupil optics with an even better overall experience."
DR. FAN: "This is a very good question. As you know, we also have OLED now (and we believe it is the best OLED in the world). We obviously continue to like the LCD micro display – the most important reason being brightness. LCD is extremely bright, several orders of magnitude brighter than what OLED can do.
Now, in what cases will you need high brightness? A case in point is avionics, an environment where the horizon (with the sun shining) becomes so bright that you need an extremely high brightness image to visually compete. In addition, your see-through optics can be quite inefficient. Designers of avionics helmets experience both issues – bright sunlight and inefficient optics – making an extremely high brightness display critical. The LCD micro display is the architecture that satisfies these requirements.
Now, there are some drawbacks with LCD: while very bright, its contrast does not match the very high contrast of OLED. Also, LCD is not as fast as OLEDs when it comes to switching. When you watch TV today via LCD, you do not see any blurring, but it is still not as fast as OLED.
So it really depends on the application you are looking at for the display. We always ask, what are you using it for, how bright do you need it to be, and what level of contrast do you require. Historically, LCD was the choice.
Of course, different micro display architectures have their proponents. Each camp is in constant competition to improve, to keep competing technologies from eroding its market position. So as LCD improves, its contrast improves. Similarly, OLED is improving its brightness. The race goes on and on. OLED has very good contrast ratios and high speeds, but it is neither bright enough for many applications nor yet cost effective for mass production. The only company that can mass produce them in large quantities is Sony.
I think you saw our announcement about a new fab for OLED-on-Si micro displays. Without that fab, you can’t bring low cost into the supply chain. It’s interesting: for OLED on glass you are seeing more and more factories being built – but they are different types of factories than what you need to produce OLED on silicon. You need a dedicated factory for OLED on silicon.
When it comes to OLED micro display capacity, Sony is number one. But, when it comes to technology and display resolution, Kopin is number one. We have entered into an agreement with a group that is currently building a factory which is expected to be on line in 18 months, and to be the largest OLED factory in the world. With it, we expect to solve the market’s (low cost) OLED micro display supply chain problem.
Additionally, I think there will be an overlap of LCD and OLED, as each architecture gets better.
For the reflective LCOS, the optics are complicated. As a result of the more complicated optics, what you would wear on your head with LCOS would be bulky. In addition, it is slow and typically shows color break-up, or a rainbow effect.
This is where our wholly owned subsidiary, Forth Dimension Displays, comes in. ForthDD makes ferroelectric LCOS (FLCOS) displays, which are extremely fast, do not have the rainbow effect, and are very suitable for use in AR. In fact, ForthDD has supplied displays for 100+ degree field of view, full color, high resolution AR HMDs for training for 10 years. We are currently focusing ForthDD’s LCOS on non-head-worn applications like 3D metrology, but we are also selling these types of displays for VR training. An example would be VR training for the military, like simulating a landing on an aircraft carrier or driving a tank. Our customers, using ForthDD FLCOS displays, actually build very complicated VR and training systems. We are very experienced in the VR area. FLCOS switches maybe 100 times faster than traditional LCOS.
What we provide is very unique. Our AMLCD micro display is transmissive, and we are the only ones in the world who can make a transmissive micro display that is also extremely bright. We produce FLCOS at very high resolutions with very fast switching. As for OLED, Kopin today offers the highest frame rate, high resolution technology. And, for each of these architectures, we will soon have high volume manufacturing capabilities. This means that we are not just a technology developer, but also a high volume supplier of all types of micro displays."
Thank you for your time Dr. Fan!
"How do you see the AR opportunity unfolding? I see two paths: (1) a heads-up display that does not fully occlude your field of view or try to overlay everything on top of the real world; this is what has been shown with Kopin’s Pupil design; or (2) other concepts that fully occlude a user’s field of view and try to create the true AR vision we have seen in concepts. I see the first approach as a viable and practical approach to AR right now. How do you see AR evolving in the next five years? Do we enter the cycle with the heads-up AR approach and then end up with fully occlusive overlay in say, five years? Or is it perhaps ten years before we see full occlusive overlay optical AR?"
I have distilled the conversation into a true Q&A format and augmented it with some of my own observations. Enjoy!
"Could you offer some closing comments? Perhaps frame up the Kopin growth story: for the next five years, what does the opportunity look like for Kopin?"
DR. FAN: "My feeling is that Kopin has delivered, and continues to deliver, disruptive technology. A historical example is our pioneering HBT transistor, developed using nanotechnology (created in the 90’s during our MIT and DARPA days) and still inside all current smartphones. We sold that business a few years ago and moved all of our capital into wearables, because we believe the next wave after smartphones is wearables.
We are now telling the market what we have: Whisper, Pupil optics, Solos, and more. Kopin is transforming from product development to commercialization in the wearables markets similar to what we did with the HBT transistor."
KOPIN SOLOS Glasses with Pupil optics view
"When Google Glass first came out a few years ago, Tim Cook of Apple famously said that Smart Glasses would not work. Are you seeing senior executives at major tech companies, like Apple, changing their view yet, or do people still believe that smart eyewear won't work? Do they still see Glass as a failure, despite the recent pivot to enterprise? Are we seeing opinions change at senior levels so that someone will take a chance and create a viable smart glass form factor with a brand on it?"
DR. FAN: "There are several ways AR can work. Most AR used today is tablet or smartphone based – we call this video see-through AR, where you have a live video image on the device and you overlay computer-generated images, like (for example) Pokémon. It's not optical see-through, but a video re-creation of a scene with overlays. That’s what consumers are prepared to use today and what they are used to. Although you have to direct your eyes away from what you are looking at in the real world, your brain is able to adjust. It is similar to driving a car while glancing at a GPS screen. We’ve adapted to that little screen that provides the additional information we need.
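The video see-through approach described above comes down to blending a computer-generated overlay onto a live camera frame. Here is a generic per-pixel alpha-blending sketch of that idea – not any particular product's renderer; the function name and array shapes are my own assumptions.

```python
import numpy as np

def composite_overlay(frame, overlay, alpha):
    """Video see-through AR in miniature: blend a computer-generated
    overlay onto a camera frame using a per-pixel alpha mask.

    frame, overlay : HxWx3 uint8 arrays (camera image, rendered graphics)
    alpha          : HxW floats in [0, 1]; 0 = camera only, 1 = overlay only
    """
    a = alpha[..., None]  # add a channel axis so it broadcasts over RGB
    blended = (1.0 - a) * frame + a * overlay
    return blended.astype(np.uint8)
```

In a real pipeline this runs per video frame, with the alpha mask coming from the renderer (e.g. where a virtual character occludes the scene), which is what makes it a video re-creation rather than optical see-through.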
Now, with AR, you have to decide what is acceptable as a Consumer product and as an Enterprise product. If it is a Consumer product and too complicated, it will not be used. Enterprise permits more sophisticated devices, as people can be trained.
As you know, Kopin has a very long history with the military, 40 years actually, going back as early as the MIT days. Take for example, infrared military gun sights, which are a simple form of AR and which Kopin has been designing for decades. Kopin has also been involved in the military avionics helmet market, which is a highly sophisticated form of AR. Complex AR devices, like the avionics helmet for the F-35, require extremely complicated technologies to allow a 360-degree view and cost hundreds of thousands of dollars.
Why do I bring this up? Because there are currently companies that want to do full AR with 360-degree views just like the F-35 helmet – for the consumer market. But military users go through extensive training because the system is quite complex, requiring adjustments in thought patterns to integrate augmented information with real information. Frankly, consumers won’t do this, as the “cognitive load” is too heavy. In addition to being fatiguing, it can in fact be quite dangerous if the user is not facile enough to integrate real-world and augmented information quickly. You may have to fast forward ten or fifteen years for this level of technical sophistication in a Consumer AR device.
For complex AR devices to work, the brain has to be trained. With fully occlusive AR, there is constant cognitive competition for the user: what you see, what’s real, what’s not real. Also, your focus point keeps changing. Your brain has to determine what is important vs. what is not important. How are you going to walk down the street that way? Is the world ready for it? It might be, but the human brain has to be ready. Human evolution may not move that fast to handle such multi-tasking.
This said, I like what these companies are doing with fully occlusive AR. I think they are pioneers, and the world needs pioneers to explore and open up new territories. But for immediate products, we are looking for simpler systems. Now, what are these simpler systems?
I am sure you are familiar with Clay Christensen’s theory of disruptive innovation, described in his 1997 book, “The Innovator’s Dilemma.” So what is the disruption that is occurring in the tech industry, and specifically AR headsets? It is Hands-Free. To Kopin, the disruptive innovation is a truly Hands-Free UI, so you do not have to hold a device in your hand.
When you move to Hands-Free, you do not necessarily have the same functions as the class of devices you are disrupting. But to Kopin, the genius in Christensen’s theory is that you don’t need to try to incorporate all existing functions in Hands-Free devices (while maintaining low costs due to volume). Try to meet all these criteria and you will fail. You need to focus on the disruption – which is Hands-Free. Hands-Free provides the capabilities that enable users to function or work better, or simply enjoy the device more.
Now, the intent of Hands-Free is to remove the need for touch, or at least not require touch all of the time. So you need an additional, new interface. What is that new user interface? Is it gestures or voice? Think about the environments where Hands-Free is most useful, like the outdoors or in active environments. If your chosen UI is Voice, and you take these dynamic environmental elements into consideration, you will also need noise cancellation.
If Voice is your chosen Hands-Free UI, you also need voice commands. But Voice as a UI engine cannot tolerate distortion, otherwise commands become confused and unrecognizable by the voice recognition system. This requires a special type of filter that allows voice commands to work in very noisy environments.
Another critical consideration is that, visually, whatever you are viewing via a Hands-Free device needs to work indoors and outdoors. The display should also be small and (hopefully) see-through or “see around.” Also, the screen should be able to both align with and move out of your line of sight as needed. In other words, you have a so called “second screen” that is there when you want to see it, but can be easily moved away or adjusted, so that it does not interfere with your focal point.
It is worth mentioning that our presence in second-screen wearables (e.g. RealWear, Vuzix, Glass and more) is very high. In fact, I can’t think of any serious product that doesn’t use a Kopin LCD.
So, taking these multiple criteria into account – simple vs. complex, Hands-Free UI selection, and visual adaptability – these are the elements I see impacting AR near term. Humans have accepted the concept of AR, we know the technology is here, and we see the value in Hands-Free, whether it's for military, Enterprise, or even Consumer markets. We’re even at the stage where, for example, a pair of well-designed smart glasses connected by Bluetooth to the phone in your pocket lets us engage with our device while our eyes look up and ahead – instead of down at a screen in our hand. These are all short-term realities and considerations.
But as for our long-term view, many of the features people see – like fully immersive optical see-through AR experiences – raise issues. Fully immersive optical see-through AR will be possible, but we need to ask, how are humans going to use it given our limited ability to multi-task and how long will it take to train our brain to use this technology safely and without confusion?