All of our thoughts are with our friends and colleagues in Ukraine.
Last month in this space, writing in mid-February, I touched on my recent hospital stay for neurosurgery, and the many ways in which optical instruments are used in operating rooms, throughout hospitals and in the wider world for humanity’s benefit. Six weeks later, as I write in early April, the world situation has changed almost unimaginably.
Today, all of our thoughts are with our friends and colleagues in Ukraine, and, like everyone, I sincerely hope peace can come soon to the region. Like many organizations, Optica has issued statements condemning the invasion. Our society has also led an effort by scientific organizations to encourage government assistance for Ukrainian scientists displaced by the crisis. Optica has taken practical steps as well, including direct outreach to members of the optics community in Ukraine and surrounding countries, financial support for student chapters, and efforts to match Ukrainian members of our community with work and academic opportunities (see optica.org/statement_on_ukraine).
As scientists and engineers, we instinctively want to provide solutions to problems. But is there anything the methods and tools of optics and photonics can do to stop war? Many of us work with advanced information technologies such as quantum computing and artificial intelligence (AI). Can such cutting-edge systems and algorithms find answers that bring peace?
In “supervised” deep learning, humans label data to be used as training sets for machine-learning algorithms, which become “smarter and smarter” as they encounter more and more data. The human beings who amass and label the data are the ultimate “supervisors” of this process. But in the absence of good data, deep learning cannot learn anything. And, unfortunately, there is very little data, and few good historical examples, to guide the successful end of wars. Humans, sadly, can be cruel and fickle—and any deep-learning solution to war would share those negative characteristics of its supervisors.
There is another approach to AI, a form of “unsupervised” learning, in which the algorithm learns not from previously catalogued and labeled data, but from the new data actually being analyzed. This approach is familiar in optics. Mathematically driven methods—such as blind deconvolution, autoregressive modeling, super-resolution methods based on singular-value decomposition, and non-negativity constraints—can be used to recover or reconstruct an original image from heavily distorted detected data, all without supervision or training. We might say that supervised learning is inductive, while unsupervised learning is deductive.
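As a deliberately minimal sketch of such training-free recovery, the Richardson–Lucy iteration below deconvolves a blurred one-dimensional signal using only the detected data and a known point-spread function; no training set is involved, and its multiplicative updates keep the estimate non-negative at every step. The signal, PSF, and iteration count here are illustrative assumptions, not taken from the column.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Iterative deconvolution: multiplicative updates preserve
    non-negativity automatically, and no training data is needed."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Hypothetical sharp "scene": two point sources of different brightness.
truth = np.zeros(64)
truth[20], truth[45] = 1.0, 0.7

# Gaussian blur kernel standing in for an instrument's PSF.
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
psf /= psf.sum()

blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Because every update multiplies the current estimate by a non-negative factor, the non-negativity constraint mentioned above is built in rather than imposed as a separate projection step.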
Recently, I used this latter type of unsupervised learning to develop a laser-beam-scanning method for Raman-scattering confocal microscopy. It finds the optimal path for the laser beam to scan the sample, without previous data collection and training: the “supervisor” is the image of the sample itself being analyzed.
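The column does not describe the algorithm itself, so the snippet below is only a hypothetical toy illustrating the general idea of the sample “supervising” the scan: scan positions are ordered greedily by the intensity of a coarse preview image, so acquisition effort concentrates where the signal is, with no prior training data. The function name and the preview array are my own inventions for this sketch.

```python
import numpy as np

def intensity_guided_path(preview):
    """Return scan positions ordered brightest-first, so the
    preview image of the sample itself guides the scan path."""
    order = np.argsort(preview.ravel())[::-1]        # brightest first
    rows, cols = np.unravel_index(order, preview.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# Hypothetical coarse preview of a sample.
preview = np.array([[0.1, 0.9, 0.2],
                    [0.8, 0.05, 0.3],
                    [0.0, 0.6, 0.4]])

path = intensity_guided_path(preview)
# path[0] is the brightest pixel's (row, col) position
```

Here the ordering is derived entirely from the data being analyzed, which is the unsupervised spirit of the approach; a real scanning method would of course also account for beam travel time and dwell constraints.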
Do these concepts have anything to say about the current crisis? Perhaps only that at a time of war and chaos, humans—like an unsupervised algorithm in AI—will need to find solutions not from previous data, but by learning from the current, evolving situation. As I wrote last month, I remain impressed by photonic technology’s power to benefit humanity. But I have not found a technology—photonic or otherwise—to end conflicts, calm anger, bring peace. For that, we will need not artificial intelligence, but true, human intelligence.
I would welcome your thoughts on this message or other topics at email@example.com.