Inside NTT Research’s push to commercialize deep tech


April 16, 2026


Since opening in Silicon Valley in 2019, NTT Research has operated as a long-horizon science lab, a dedicated arm of Japan’s telecommunications giant NTT Group, which invests more than $3 billion annually in global R&D. Now in its seventh year, the lab was built as a research subsidiary insulated from quarterly pressure and product roadmaps. Unlike startups or typical corporate innovation teams, NTT Research is a wholly owned entity focused on seeding advances in computing, security, and healthcare that can later fold into NTT’s global infrastructure and enterprise services.


Many of these efforts take five to fifteen years to approach commercialization, a timeline now under strain as AI compresses development cycles and markets reward speed. The question has sharpened: what is the value of discovery if it never leaves the lab?

NTT Research is trying to answer that with what it calls “NTT Research 2.0,” a dual-engine model that maintains long-horizon science while pushing discoveries toward market. President and CEO Kazu Gomi frames the shift as inevitable. At its center is Scale Academy, a new incubator designed to spin out companies from lab breakthroughs. Its first test case, SaltGrain, is a zero-trust data security platform built on attribute-based encryption, a concept first proposed in 2004 that has largely remained theoretical. The effort reflects a broader challenge: turning deep research into viable companies without losing the rigor that produced it in the first place.

In a conversation with Fast Company, Gomi discusses how to operationalize advanced science, what sets Scale Academy apart from traditional incubators, and how to judge when emerging technologies are ready for the real world. This conversation has been edited for length and clarity.

NTT Research has long operated on a 5–15 year horizon for breakthroughs. With NTT Research 2.0, you’re introducing a more market-driven, startup-like approach. How do you decide when a technology is ready to move from the lab to commercialization, and how do you balance avoiding premature launches with not letting viable ideas sit too long?

This shift didn’t happen overnight; we have actually been building this business incubation capability behind the scenes for over a year. What you are now seeing with Research 2.0 is the formalization of something that was already taking shape internally. At a practical level, the key realization was that after seven years of fundamental research, we now have several technologies that are approaching, or in some cases already at, a point where there is clear commercial potential. And it would be a shame to let those opportunities sit idle.

So operationally, we have introduced a new layer, a business incubation function, that selectively picks up technologies that show near-term product viability. The idea is not to change how research is done, but to create a parallel path that can take these technologies to market in a more structured way.

In terms of avoiding risks such as pushing immature technologies too early or letting mature ones sit too long, the way I think about it is not through rigid stage gates, but through separation of roles and clarity of intent. The research team continues to focus on fundamental discovery without pressure from product timelines. Meanwhile, the incubation team evaluates technologies through a completely different lens: market readiness, customer relevance, and business viability. The decision of when something is ‘ready’ is less about a fixed checklist and more about whether we can see a credible path to real-world deployment and value creation.

The most important structural choice we made was not to convert researchers into business operators. Instead, we built Scale Academy as a completely separate team, bringing in people from outside who think in terms of markets, customers, and revenue. That separation ensures we don’t compromise the integrity of either side. The research team is not rushed, and the incubation team is not constrained by academic thinking.
Many big-tech companies have incubators or venture arms. What makes Scale Academy structurally different? Is it truly operating with startup-like independence in incentives and governance, or is it still shaped by being inside a large enterprise? And how do you define success: venture creation, revenue, or building a repeatable commercialization engine?

Where we differentiate is the starting point. Scale Academy is not sourcing ideas from product teams or incremental innovation; it is directly connected to a very strong basic research foundation. That means the technologies we are working with are often fundamentally new, sometimes even ahead of market demand, which gives us a different kind of leverage.

In terms of governance and constraints, yes, we do have the advantage of NTT as a large parent providing funding and stability. But at the same time, we are very conscious that no company can succeed in isolation today. So one of the core principles we are building into Scale Academy is ecosystem participation. When we spin out companies, we don’t intend to own everything; we want to bring in other partners, investors, and players who are relevant to that market. Being part of a broader ecosystem is critical to scaling these upcoming technologies.

As for success metrics, it’s still evolving, but I don’t want to reduce it to just the number of startups launched. That would be too simplistic. What matters more is whether we can create a repeatable, effective process: identifying the right technologies, applying the right business thinking, and building ventures that can become self-sustaining as quickly as possible. Of course, revenue and profitability will be important at the individual company level, but success for me is whether this becomes a sustainable engine that consistently translates deep research into real businesses.

Research thrives on patience and uncertainty, while startups demand speed and market validation. How do you reconcile those two fundamentally different operating models without compromising either? And what cultural or organizational shifts were required within NTT Research to support both discovery and deployment?

This is probably the most challenging aspect of Research 2.0. Trying to merge those directly would create conflict, so the key is clear separation with controlled collaboration. The incubation team needs technical depth and continued support from the researchers. But this interaction has to be carefully managed. Too much overlap risks distracting the research team; too little collaboration risks weakening the product.

Culturally, I do see a shift happening, particularly in motivation. Many researchers have expressed a desire to see their work used in the real world. With Scale Academy, we can now offer that pathway. It becomes an additional incentive, not replacing the academic mission, but complementing it. We can say, ‘You’ve created something valuable; do you want to explore how it might be used?’ But if we don’t manage the boundaries and interactions properly, we could fail on both fronts, neither achieving strong research nor successful commercialization. So this is something we are actively watching and adjusting as we go.

Attribute-based encryption (ABE) has existed for years but never really broke into mainstream enterprise deployment. Why is it viable now? What changed in terms of performance, scalability, or real-world readiness? And does SaltGrain truly redefine zero-trust by embedding policy into the data itself, or is it an evolution of existing approaches shaped by the demands of AI?
Technologies like ABE often require a long maturation period. Over the past decade, ABE has become significantly more stable and practical from a technical standpoint. So the technology itself is now ready. However, the more important factor is the market timing. What has changed dramatically in the last one to two years is the rise of AI, especially agentic AI. We are entering a world where more and more AI agents are being deployed across enterprises, and these agents require access to large volumes of data to function effectively.

That creates a fundamental tension. On one hand, organizations need to provide as much data as possible to train these agents. On the other hand, much of that data contains sensitive information: personal data, financial details, internal records that cannot simply be exposed. So companies are stuck in a dilemma: either risk leaking sensitive data or restrict access and limit the effectiveness of AI.

This is where SaltGrain comes in. We are combining ABE with additional capabilities to address this specific problem. For example, we are developing classification engines that can automatically scan documents, identify sensitive information, and categorize it into different levels of sensitivity. Once that is done, ABE allows us to selectively mask or encrypt those parts of the data while leaving the rest accessible.

Another key shift is that we are no longer designing this system primarily for human users. Increasingly, the ‘viewer’ of the data is an AI agent. So we are fine-tuning the system with that assumption in mind. Different agents can be given different levels of access based on policy, all enforced at the data level. The core idea of embedding policy into the data itself aligns strongly with zero-trust principles. But what makes it new is the context: applying it to AI-driven environments where the scale, speed, and nature of access are fundamentally different. Right now, I don’t see many practical solutions in the market addressing this problem.

We’re entering a world where AI agents, not just humans, access and act on enterprise data, creating new security risks. How does your data-centric model address that reality, and can policies embedded at the data level scale across complex, real-world workflows? And how should enterprises think about post-quantum readiness today: urgent priority or longer-term transition?

Recent security incidents, like large-scale data breaches where entire document repositories are exposed, show why data-centric security is important. In traditional models, once data is stolen, it is essentially compromised. But with ABE, the protection stays with the data itself. Even if a file is copied or leaked, the access control policies remain embedded, so sensitive information is still protected. For us, zero-trust data security means not relying on perimeter defenses alone and securing the data wherever it goes.

In terms of scaling this to AI-driven environments, I think we are still in the early stages. We can point to scenarios where this approach would have significantly reduced the impact of past breaches, but we are still building real-world use cases to quantify that impact more precisely. At the same time, if enterprises can trust that their sensitive data will remain protected, even when shared with agentic AI systems, they will be much more willing to use that data for training and operations. That’s a critical enabler for AI adoption and innovation.
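The pattern Gomi describes, classifying segments of a document by sensitivity and then encrypting each segment with its access policy attached so the policy travels with the data, can be sketched in a few dozen lines. The Python below is a conceptual mock, not SaltGrain’s implementation: real ABE enforces the policy cryptographically against the attributes baked into a decryption key, whereas this sketch approximates it with a clearance check gating a symmetric cipher. Names such as SensitivityLevel, classify_segment, and AgentCredentials are invented for the example.

```python
# Conceptual sketch of policy-embedded ("data-centric") protection.
# NOT real ABE: a plain clearance check gates a symmetric key here,
# which illustrates only the data flow, not the cryptography.
from dataclasses import dataclass
from enum import IntEnum
import re

from cryptography.fernet import Fernet  # pip install cryptography


class SensitivityLevel(IntEnum):        # hypothetical sensitivity tiers
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2


def classify_segment(text: str) -> SensitivityLevel:
    """Toy stand-in for the classification engine Gomi describes."""
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):  # looks like an SSN
        return SensitivityLevel.CONFIDENTIAL
    if "internal" in text.lower():
        return SensitivityLevel.INTERNAL
    return SensitivityLevel.PUBLIC


@dataclass
class ProtectedSegment:
    """A segment whose access policy travels with the encrypted payload."""
    policy: SensitivityLevel  # embedded policy, copied wherever the data goes
    ciphertext: bytes


@dataclass
class AgentCredentials:       # hypothetical stand-in for an agent's attributes
    name: str
    clearance: SensitivityLevel


fernet = Fernet(Fernet.generate_key())


def protect(document: str) -> list[ProtectedSegment]:
    """Classify each line, then encrypt it with its policy attached."""
    return [
        ProtectedSegment(classify_segment(seg), fernet.encrypt(seg.encode()))
        for seg in document.split("\n")
    ]


def read_for_agent(segments: list[ProtectedSegment],
                   agent: AgentCredentials) -> str:
    """Decrypt only what the agent's attributes satisfy; mask the rest."""
    lines = []
    for seg in segments:
        if agent.clearance >= seg.policy:
            lines.append(fernet.decrypt(seg.ciphertext).decode())
        else:
            lines.append("[REDACTED]")  # payload stays protected
    return "\n".join(lines)


doc = ("Quarterly summary: revenue grew 12%.\n"
       "Internal note: pending reorg.\n"
       "Employee SSN: 123-45-6789")
segments = protect(doc)
support_bot = AgentCredentials("support-bot", SensitivityLevel.PUBLIC)
audit_agent = AgentCredentials("audit-agent", SensitivityLevel.CONFIDENTIAL)
print(read_for_agent(segments, support_bot))  # sensitive lines masked
print(read_for_agent(segments, audit_agent))  # full view
```

The structural point survives even in this toy version: because ProtectedSegment carries its policy, a copied or leaked segment is still just ciphertext plus a policy label to anyone lacking a key and a satisfying clearance.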
The first release of SaltGrain is not post-quantum ready, and that is intentional. Today’s systems are still largely based on pre-quantum cryptography, so we want to deliver value immediately rather than wait. However, in parallel, our research team has already developed post-quantum versions of ABE. The challenge has been performance. Early implementations were too computationally heavy to be practical. Through collaboration between the research team and the incubation team, we spent two years refining those algorithms, adjusting assumptions, and optimizing them to reduce computational requirements while maintaining security. Now we have a version that is much more practical. So our roadmap is to deploy the current solution, demonstrate value, and then transition to post-quantum readiness over the next couple of years. We want to show the market that we not only understand where things are going, but that we have a concrete path to get there.

Research is an increasingly crowded space, with hyperscalers, startups, and governments all investing heavily in AI, quantum, and next-gen infrastructure. Where does NTT truly differentiate? Is it the depth of research, system-level integration, or long-term capital? And if Scale Academy succeeds, does it redefine the role of a corporate research lab in the AI era, or is this still an experiment in balancing deep science with commercialization?

I see this as a management challenge. Until now, NTT Research has been completely focused on fundamental research, with a clear mandate of producing strong scientific work and publishing impactful papers. That has been successful and has built a very strong foundation. What changes with Research 2.0 is that we are adding another dimension. We are not replacing the research mission, but we are expanding it. Our strength lies in the depth of our research, combined with our ability to now connect that to real-world applications. Many companies participate in the Silicon Valley ecosystem, but not all of them come in with the same level of deep, fundamental innovation.

I am particularly interested in leveraging this model across the broader NTT organization. Many technologies are being developed in our labs globally, including in Tokyo. Not all of them will be suitable for commercialization, but some of them could be very strong candidates. If Scale Academy proves successful, we can bring those ‘crown jewels’ into this process and take them to global markets more effectively.

At this stage, this is still an experiment, but it is a very intentional one. We are not trying to prove that deep science and commercialization are easy to combine; they are not. But we believe that with the right structure, it is possible to create a system where both can thrive. And from what I see internally, there is strong support for this direction. There is a sense of excitement, but also eagerness to see how it develops.
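The migration Gomi outlines, shipping on today’s cryptography and then moving to post-quantum ABE once it is performant, is at bottom a crypto-agility problem. Below is a minimal sketch of one common approach, assuming a versioned envelope format; the suite names and the pq_abe_v1 placeholder are hypothetical, not SaltGrain identifiers, and Fernet merely stands in for whatever scheme each suite would actually use.

```python
# Minimal "deploy now, swap to post-quantum later" envelope sketch.
# Each ciphertext records which algorithm suite produced it, so data can be
# re-encrypted to a newer suite without breaking the storage format.
# Suite names below are hypothetical placeholders.
import json

from cryptography.fernet import Fernet  # pip install cryptography

KEYS = {
    "classical_v1": Fernet.generate_key(),
    "pq_abe_v1": Fernet.generate_key(),  # placeholder for a PQ scheme
}


def seal(plaintext: bytes, suite: str) -> bytes:
    """Encrypt and tag the result with its algorithm suite."""
    token = Fernet(KEYS[suite]).encrypt(plaintext)
    return json.dumps({"suite": suite, "token": token.decode()}).encode()


def open_envelope(envelope: bytes) -> bytes:
    """Look up the suite recorded in the envelope, then decrypt."""
    obj = json.loads(envelope)
    return Fernet(KEYS[obj["suite"]]).decrypt(obj["token"].encode())


def migrate(envelope: bytes, new_suite: str) -> bytes:
    """Re-encrypt under a newer suite, e.g. during a post-quantum rollout."""
    return seal(open_envelope(envelope), new_suite)


sealed = seal(b"policy-protected record", "classical_v1")
upgraded = migrate(sealed, "pq_abe_v1")
assert open_envelope(upgraded) == b"policy-protected record"
```

Because every ciphertext names the suite that produced it, existing data can be re-encrypted in the background while new data is written under the post-quantum suite from day one, which is one plausible reading of the two-year transition Gomi describes.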

Fast Company
