Blog & News
We post to our blog every couple of weeks, sharing insightful articles from our engineers as well as company news and our opinions on recent industry topics. Subscribe to our mailing list to get great content delivered straight to your inbox.

Accelerating the Future Panel Discussion: Key Takeaways from Industry Leaders
The semiconductor industry's journey toward fully autonomous manufacturing is underway, driven by advanced technologies and strategic investment. Staying ahead in smart manufacturing technologies has become paramount for global competitiveness. This topic was the focal point of a recent panel discussion webinar hosted by Jamie Potter, Flexciton CEO & Cofounder. The panel featured industry leaders representing fabs and suppliers: Matthew Johnson, VP of Wafer Fab Operations at Seagate; Patrick Sorenson, Industrial Engineer at Microchip Technology; Francisco Lobo, CEO of Critical Manufacturing; and Madhav Kidambi, Technical Marketing Director at Applied Materials.
Survey Insights: Where Are We Now?
The discussion opened with a presentation of the findings from Flexciton's inaugural Front End Manufacturing Insights survey, conducted among fabs in the US, Europe, and Asia. Key takeaways included:
- A majority of respondents see autonomous manufacturing as achievable within the next decade.
- Data standardization and integration remain major barriers, delaying scalable solutions.
- Cloud computing, IoT, and mathematical optimization are the top three advanced technologies that fabs have adopted so far.
These insights laid a strong foundation for a lively discussion, highlighting the shared vision while addressing divergent strategies and challenges.
Insights from Industry Experts
Pragmatism Over Perfection in Data Models
Francisco Lobo emphasized the importance of starting with what’s available when building scalable solutions.
“Instead of building a complete model from scratch, leverage existing standards and your MES infrastructure. Begin with a pragmatic approach and evolve as you learn.”
This iterative strategy ensures companies can start deriving value early, without waiting years for a perfect model to be developed.
Strategic Investments In Downturns
While many fabs postpone investments during downcycles, Matthew Johnson emphasized that smart manufacturing investments should be continuous rather than cyclical. He highlighted the strategic advantage of such an approach:
“In down cycles, you often need these solutions the most. For example, using smart manufacturing to scale metrology tools through sampling can significantly stretch your existing resources without capital-heavy investments.”
His insight underscores how downturns provide a window to refine processes for long-term gains.
Getting Leadership Buy-in
Securing leadership support for smart manufacturing investments remains challenging when benefits aren't immediately apparent. Patrick Sorenson shared that the ROI justification was easier during the recent upcycle:
"If we just get a few more lots out of the fab when we have more demand than capacity, that will pay for itself."
In other scenarios, he suggested focusing on demonstrating benefits through yield improvements, capital avoidance, or labor efficiency.
Industry Alignment on the Vision
Madhav Kidambi observed a growing consensus around the end goal of autonomous manufacturing, even as companies differ in their pathways:
“The vision of Lights Out manufacturing is clear, but strategies are evolving as companies learn how to justify and sequence investments to sustain the journey.”
Ecosystem Collaboration and The Path Towards Autonomy
A key theme emerging from the discussion was the importance of collaboration between suppliers and fabs. This includes:
- Open platforms and integration capabilities
- Standardized data protocols
- Partner ecosystems for specialized solutions
- Shared innovation initiatives
As the industry progresses toward autonomous manufacturing, success will depend on:
- Maintaining continuous investment in smart technologies
- Taking pragmatic approaches to data integration
- Developing clear ROI frameworks
- Fostering collaboration across the ecosystem
- Building upon existing systems and standards
As Matthew Johnson of Seagate concluded,
"Fab operation is really a journey of continuous improvement, and the pursuit of smart technologies is a fundamental tenet of our strategy to ensure that we meet the objectives as an organization."
Watch the Full Webinar
The conversation is packed with actionable insights on overcoming barriers, achieving quick wins, and navigating the complexities of smart manufacturing adoption. Don’t miss out—click here to watch the full discussion recording.

Innovate UK invests in breakthrough technology developed by Flexciton and Seagate
London, UK – 1 Oct – Flexciton, a UK-based software company at the forefront of autonomous semiconductor manufacturing solutions, is excited to announce investment from Innovate UK in a strategic collaboration with Seagate Technology’s Northern Ireland facility. Innovate UK, the UK’s innovation agency, drives productivity and economic growth by supporting businesses to develop and realize the potential of new ideas. As part of their £11.5 million investment across 16 pioneering projects, this collaboration will help develop and demonstrate cutting-edge technology to boost semiconductor manufacturing efficiency and enhance the UK’s role in the global semiconductor supply chain.
Jamie Potter, CEO and Cofounder of Flexciton, commented:
"We are thrilled to partner with Seagate Technology to bring yet another Flexciton innovation to market. By combining our autonomous scheduling system with Flex Planner, we are enhancing productivity in semiconductor wafer facilities and driving greater adoption of autonomous manufacturing."
The partnership aligns directly with the UK government’s National Semiconductor Strategy, which seeks to secure the UK’s position as a key player in the global semiconductor industry. Flexciton’s contribution to this strategy is not just a testament to its cutting-edge technology but also highlights the company’s role in reinforcing supply chain resilience and scaling up manufacturing capabilities within the UK.
Flex Planner: A breakthrough solution for chip manufacturing
At the heart of this project is Flex Planner, the first closed-loop production planning solution for semiconductor manufacturing with the ability to control the flow of WIP in a fab over the next 2-4 weeks, autonomously avoiding dynamic bottlenecks, reducing cycle times, and improving on-time delivery performance.
Supporting the UK's semiconductor growth
The UK government’s investment in semiconductor innovation underlines its commitment to fostering cutting-edge solutions that bolster the sector’s growth. The UK semiconductor industry is projected to grow from £10 billion to £17 billion by 2030, with initiatives like this collaboration driving the innovation necessary to achieve these goals.
Flexciton’s partnership with Seagate exemplifies how collaboration between technology innovators and manufacturers can lead to transformative advances in the industry. The funding from Innovate UK enables both companies to develop and test solutions that not only enhance productivity but also position the UK as a critical link in the global semiconductor ecosystem.
About Flexciton
Flexciton is pioneering autonomous technology for production scheduling and planning in semiconductor manufacturing. Leveraging advanced AI and optimization technology, we tackle the increasing complexity of chipmaking processes. By simplifying and streamlining wafer fabrication with our next-generation solutions, we enable semiconductor fabs to significantly enhance efficiency, boost productivity, and reduce costs. Empowering manufacturers with unmatched precision and agility, Flexciton is revolutionizing wafer fabrication to meet the demands of modern semiconductor production.
For media inquiries, please contact: media@flexciton.com

The Pathway to the Autonomous Wafer Fab
Over the next 6 years, the semiconductor industry is set to receive around $1tn in investment. The opportunities for growth – driven by the rapid rise of AI, autonomous and electric vehicles, and high-performance computing – are enormous. To support this anticipated growth, over 100 new wafer fabs are expected to emerge worldwide in the coming years (Ajit Manocha, SEMI 2024).
However, a significant challenge looms: labor. In the US, one-third of semiconductor workers are now aged 55 or older. Younger generations are increasingly drawn to giants like Google, Apple and Meta for their exciting technological innovation and brand prestige, making it difficult for semiconductor employers to compete. In recent years, the likelihood of employees leaving their jobs in the semiconductor sector has risen by 13% (McKinsey, 2024).
To operate these new fabs effectively, the industry must find an answer. The Autonomous Wafer Fab, a self-optimizing facility with minimal human intervention and seamless production, looks increasingly likely to be the solution chipmakers need. This vision, long held by the industry, now needs to be accelerated due to current labor pressures.
Thankfully, rapid advancements in artificial intelligence (AI) and Internet of Things (IoT) mean that the Autonomous Wafer Fab is no longer a distant dream but an attainable goal. In this blog, we will explore what an Autonomous Wafer Fab will look like, how we can achieve this milestone, the expected outcomes, and the timeline for reaching this transformative state.
What will an Autonomous Wafer Fab look like?
Imagine a wafer fab where the entire production process is seamlessly interconnected and self-regulating, free to make decisions on its own. In this autonomous environment, advanced algorithms, IoT, AI and optimization technologies work in harmony to optimize every aspect of the manufacturing process. From daily manufacturing decisions to product quality control and fault prediction, every step is meticulously coordinated without the need for human intervention.
Key features of an Autonomous Wafer Fab:
Intelligent Scheduling and Planning: The heart of the autonomous fab lies in its scheduling and planning capabilities. By leveraging advancements such as Autonomous Scheduling Technology (AST), the fab has the power to exhaustively evaluate billions of potential scenarios and guarantee the optimal course for production. This ensures that all constraints and variables are considered, leading to superior outcomes in terms of throughput, cycle time, and on-time delivery.
Real-Time Adaptability: An autonomous fab is equipped with sensors and IoT devices that continuously monitor the production environment. These devices can feed real-time data into the scheduling system, allowing it to dynamically adjust schedules and production plans in response to any changes or disruptions.
Digital Twin: Digital Twin technology mirrors real-time operations by storing masses of data from sensors and IoT devices. This standardized data schema allows for rapid introduction of new technologies and better scalability. Moreover, by simulating production processes, it helps to model possible scenarios – such as KPI adjustments – within the specific constraints of the fab.
Predictive maintenance: Predictive maintenance systems will anticipate equipment failures before they occur, reducing downtime and extending the lifespan of critical machinery. This proactive approach ensures that the fab operates at peak efficiency with minimal interruptions. Robotics will carry out the physical maintenance tasks identified by these systems, and when human intervention is necessary, remote maintenance capabilities will allow technicians to diagnose and address issues without being on-site.
The Control Room: In an autonomous fab, decision-making is driven by data and algorithms. The interconnected system can balance trade-offs between competing objectives, such as maximizing throughput while minimizing cycle time, with unparalleled precision. That said, critical decisions such as overall fab objectives may still be left to humans in the “control room”, who could be on the fab site or 9000 km away…
How can we get there?
Achieving the vision of an Autonomous Wafer Fab requires a multi-faceted approach that integrates technological innovation, strategic investments, and a cultural shift towards embracing automation. Here are the key steps to pave the way:
A Robust Roadmap: All fabs within an organization need to have a common vision. Key milestones need to be laid out to help navigate each fab through the transition with clear actions at each stage. SEMI’s smart manufacturing roadmap offers an insight into what this could look like.
Investing in Novel Technologies: The pivotal step towards autonomy is investing in the latest technologies, including AI, machine learning, AST, and IoT. These technologies form the backbone of the autonomous fab, enabling intelligent planning and scheduling, real-time monitoring, and adaptive control.
Data Integration and Analytics: A crucial aspect of autonomy is the seamless integration of data from various sources within the fab. By harnessing big data analytics, fabs can not only gain deep insights into their operations, but they will have the correct data in place to support autonomous systems further down the line.
Developing Skilled Workforce: While the goal is to minimize human intervention, the semiconductor industry will still require skilled professionals who can manage and maintain advanced systems. Investing in workforce training and development to fill the current void is essential to ensure a smooth transition.
Collaborative Ecosystem: Even the biggest chipmakers are unlikely to reach the autonomous fab on their own. Collaboration with technology providers, research institutions, and industry partners will be key. Sharing knowledge and best practices can accelerate the development and deployment of autonomous solutions.
Pilot Programs and Gradual Implementation: Transitioning to an autonomous fab should be approached incrementally. Starting with pilot programs to test and refine technologies in a controlled environment will help identify challenges and demonstrate the benefits. Gradual implementation allows for continuous improvement and adaptation.
How will fabs benefit?
The transition to an Autonomous Wafer Fab promises a multitude of benefits that will revolutionize semiconductor manufacturing:
Enhanced Efficiency: By optimizing production schedules and processes, autonomous fabs will achieve higher throughput and better resource utilization. This translates to increased production capacity and reduced operational costs.
Better Quality: Advanced process control and real-time adaptability ensure consistent product quality, minimizing defects and rework. This leads to higher yields and greater customer satisfaction.
Reduced Downtime: Predictive maintenance and automated decision-making reduce equipment failures and production interruptions. This results in higher uptime and more reliable operations.
Improved Flexibility: Autonomous fabs can quickly adapt to changing market demands and production requirements. This flexibility enables manufacturers to respond rapidly to customer needs and stay competitive in a dynamic industry.
Cost Savings: The efficiencies gained from autonomous operations lead to significant cost savings. Reduced labor intensity, lower material waste, and optimized energy consumption contribute to a more cost-effective production process.
Sounds great, but when will it become a reality?
The journey towards an Autonomous Wafer Fab is well underway, but the timeline for full realization varies depending on several factors, including technological advancements, industry adoption, and investment levels. However, significant progress is expected within the next decade.
Short-Term (1-3 Years):
- Implementation of pilot programs and continual adoption of AI, IoT, AST and other advanced technologies.
- Incremental improvements in scheduling, process control, and maintenance practices.
Medium-Term (3-7 Years):
- Broader adoption of autonomous solutions across the industry.
- Enhanced data integration and analytics capabilities.
- Development of a skilled workforce to support autonomous operations.
Long-Term (7-10 Years and Beyond):
- Full realization of the Autonomous Wafer Fab with minimal human intervention.
- Industry-wide standards and best practices for autonomous manufacturing.
- Continuous innovation and refinement of autonomous technologies.
Conclusion
The pathway to the Autonomous Wafer Fab is a transformative journey that holds immense potential for the semiconductor industry. By embracing advanced technologies, fostering collaboration, and investing in the future workforce, fabs can unlock unprecedented levels of efficiency, quality, and flexibility. Autonomous Scheduling Technology, as a key pillar, will play a crucial role in this evolution, driving the industry towards a future where production is seamless, self-optimizing, and truly autonomous. The vision of an Autonomous Wafer Fab is not just a distant possibility but an imminent reality, poised to redefine the landscape of semiconductor manufacturing.
Now available to download: our new Autonomous Scheduling Technology White Paper
We have just released a new White Paper on Autonomous Scheduling Technology (AST) with insights into the latest advancements and benefits.
Click here to read it.

Switching to Autonomous Scheduling: What is the Impact on Your Fab?
In the fast-paced world of semiconductor manufacturing, efficient production scheduling is crucial for chipmakers to maintain competitiveness and profitability. The scheduling methods used in wafer fabs can be classified into two main categories: heuristics and mathematical optimization. Both methods aim to achieve the same goal: to provide the best schedules within their capabilities. However, because they utilize different problem-solving methodologies, the outcomes are dramatically different. Simply put, heuristics generate solutions by making decisions based on if-then rules predefined by a human, while optimization algorithms search through billions of possible scenarios to automatically select the best one.
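To make the contrast concrete, here is a minimal, illustrative Python sketch (a toy example of ours, not Flexciton's production algorithm): a priority-based if-then rule versus an exhaustive search over every possible sequence on a single tool, both scored by priority-weighted completion time. The lot data is made up.

```python
# Illustrative only: a toy single-tool sequencing problem, not Flexciton's production code.
from itertools import permutations

# (lot_id, processing_hours, priority_weight) -- hypothetical data
lots = [("A", 6, 3), ("B", 1, 2), ("C", 2, 2)]

def heuristic_sequence(lots):
    """If-then rule: always run the highest-priority lot first."""
    return sorted(lots, key=lambda lot: -lot[2])

def weighted_completion_time(sequence):
    """Sum of priority-weighted completion times for a given sequence."""
    clock, total = 0, 0
    for _, hours, weight in sequence:
        clock += hours
        total += weight * clock
    return total

def optimized_sequence(lots):
    """Exhaustive search: evaluate every possible sequence and keep the best."""
    return min(permutations(lots), key=weighted_completion_time)

print(weighted_completion_time(heuristic_sequence(lots)))  # rule-based result: 50
print(weighted_completion_time(optimized_sequence(lots)))  # optimal result: 35
```

With three lots the exhaustive search is trivial; at fab scale, with thousands of lots, tools, and constraints, the same idea only becomes tractable through mathematical optimization and decomposition techniques.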
Autonomous Scheduling Technology (AST) features mathematical optimization combined with smart decomposition, allowing the quick delivery of optimal production schedules. Whether you are a fab manager or industrial engineer, the experience and results of applying Autonomous Scheduling in your fab are fundamentally different compared to a heuristic scheduler.
Here's how switching to AST can impact your fab.
Consistent and Superior KPIs Guaranteed
Autonomous Scheduling Technology (AST) evaluates all constraints and variables in the production process simultaneously, ensuring optimal decision-making. Unlike heuristic schedulers, which require ongoing trial and error with if-then rules, AST allows the user to balance trade-offs between high-level fab objectives. With its forward-looking capability, it can assess the consequences of scheduling decisions across the entire production horizon and generate schedules that guarantee that the fab's global objectives are met. The tests we have conducted against a heuristic-based scheduler have proven that Autonomous Scheduling delivered superior results. Book a demo to find out more.
Never miss a shipment
One of the most critical aspects of fab operations is meeting On-Time-Delivery deadlines. With AST, schedules are optimized towards specific fab objectives, ensuring that production targets align with business goals. Mark Patton, Director of Manufacturing at Seagate Springtown, confirmed that adopting Autonomous Scheduling in his fab allowed him to:
"improve our predictability of delivery by meeting weekly customer commits. With a lengthy cycle time build, this predictability and linearity has been key to enabling the successful delivery and execution of meeting commits consistently."
Reduced workload (by at least 50%)
The reactive nature of heuristic-based schedulers places a significant burden on industrial engineers, who must constantly – and manually – tune rules and adjust parameters. To ensure these systems run optimally, fab managers must dedicate at least one industrial engineer to working full-time on maintaining them. With AST, the workload is significantly reduced due to the system's ability to optimize schedules autonomously (without human intervention). This means there will be no more firefighting when the WIP profile changes. This reduction in labor intensity frees up engineers to engage in value-added activities.
Reduced rework, improved yield
Some areas of a fab are notoriously challenging to optimize. For example, the diffusion and clean area is home to very complex time constraints, also known as timelinks. When timelinks are violated, wafers either require rework or must be scrapped. Either way, it's a considerable cost for a fab. Autonomous Scheduling Technology is highly effective at managing conflicting KPIs with its multi-objective optimization capabilities. AST dynamically adjusts to changes in the fabrication process to consistently eliminate timelink violations whilst maximizing throughput.
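For illustration, a timelink is essentially a queue-time window between two coupled steps. The short Python sketch below (hypothetical field names and limits, not Flexciton's internal model) shows the kind of check a scheduler must keep satisfied when it plans furnace start times after a clean:

```python
# Illustrative sketch of a timelink (queue-time window) check between two coupled steps,
# e.g. clean end -> furnace start in diffusion. The 4-hour limit is an assumption.
from datetime import datetime, timedelta

TIMELINK_LIMIT = timedelta(hours=4)  # hypothetical maximum allowed wait between the coupled steps

def violates_timelink(clean_end: datetime, furnace_start: datetime) -> bool:
    """True if the wafer waited longer than the allowed queue-time window."""
    return furnace_start - clean_end > TIMELINK_LIMIT

# A scheduler must plan the furnace start early enough that the window is never exceeded;
# otherwise the lot needs rework or is scrapped.
clean_end = datetime(2024, 1, 1, 8, 0)
print(violates_timelink(clean_end, datetime(2024, 1, 1, 11, 0)))  # False: within the window
print(violates_timelink(clean_end, datetime(2024, 1, 1, 13, 0)))  # True: window exceeded
```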
Confidence in Balancing Trade-offs
With its ability to look ahead, Autonomous Scheduling Technology can predict the consequences of different trade-off settings. This capability is particularly valuable when balancing competing objectives like throughput and cycle time. Users of legacy schedulers would typically move sliders to adjust the settings and wait a considerable amount of time to assess whether the adjustments generate the desired scheduling behavior. If not, further iterations are required, and the process repeats. In contrast, AST can evaluate billions of potential scenarios and determine the optimal balance between conflicting goals. For example, it can predict the exact impact of prioritizing larger batches over shorter cycle times, allowing fab managers to make informed decisions with confidence. This strategic foresight ensures that the best possible trade-offs are made, optimizing the whole fab to meet overarching objectives.
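As a simplified illustration of how such a trade-off can be encoded, the toy Python snippet below scores two made-up scheduling candidates with a weighted objective; changing the weights flips which candidate wins, which is effectively the decision a fab manager makes when adjusting a trade-off setting. The numbers and weights are assumptions, not outputs of our scheduler.

```python
# Illustrative only: how a weighted objective encodes a throughput vs cycle-time trade-off.
# The candidate numbers below are made up for the example.
candidates = [
    {"name": "large batches", "avg_batch_size": 6, "avg_cycle_time_h": 30},
    {"name": "small batches", "avg_batch_size": 3, "avg_cycle_time_h": 22},
]

def score(candidate, w_batch, w_cycle):
    # Larger batches are rewarded, longer cycle times penalised.
    return w_batch * candidate["avg_batch_size"] - w_cycle * candidate["avg_cycle_time_h"]

for w_batch, w_cycle in [(5.0, 1.0), (1.0, 1.0)]:
    best = max(candidates, key=lambda c: score(c, w_batch, w_cycle))
    print(f"weights (batch={w_batch}, cycle={w_cycle}) -> choose: {best['name']}")
```

The difference with AST is that it evaluates the consequences of such weight choices across the whole production horizon before committing to a schedule, rather than relying on slider adjustments and waiting to observe the outcome.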
Conclusion
In an industry where efficiency and precision are paramount, Autonomous Scheduling Technology provides a distinct competitive advantage. It equips fabs with the tools to consistently outperform legacy systems, streamline operations, and ultimately drive greater profitability. By investing today in upgrading their legacy scheduling systems to Autonomous Scheduling Technology, wafer fabs are not only optimizing their current operations but also taking an important step toward the autonomous fab of the future.
Now available to download: our new Autonomous Scheduling Technology White Paper
We have just released a new White Paper on Autonomous Scheduling Technology (AST) with insights into the latest advancements and benefits.
Click here to read it.

The Flex Factor with... Lio
Meet Lio, a driving force behind client success as Flexciton's Technical Customer Lead. Discover more about her keen eye for collaboration and passion for innovation in this edition of The Flex Factor.
Tell us what you do at Flexciton?
I’m a Technical Customer Lead.

What does a typical day look like for you at Flexciton?
My days are incredibly busy and pass quickly as I collaborate with the customer team and other teams at Flexciton, making rapid progress day by day. My focus revolves around ongoing customer work, such as our work at Renesas (analyzing their adherence, checking the Flex Global heat map, and listening to feedback from the client). Additionally, I often work on live demos and PoC projects. The nature of my tasks varies depending on the project stage, ranging from initial data analysis and integration to final stages where I collaborate with sales on deliverables and the story of the final report. While consistently moving forward with projects and meeting weekly targets, we concurrently establish our working methods and standardize processes to improve efficiency for future projects. For lunch, I usually go to Atis, my go-to place for fresh and nutritious meals. People in the office call it a salad, but I consider it the best healthy lunch with the highest ROI.
What do you enjoy most about your role?
I find the most enjoyment in witnessing the impact our product has on customers who need it. It's fulfilling to see their reactions when they share challenges, and I appreciate understanding how Flexciton can collaborate with them, providing that extra element for improvement.
If you could summarize working at Flexciton in 3 words, what would they be?
Creative, Fast, Collaborative.
Given the fast-paced evolution of technology, what strategies do you recommend for continuous learning and skill development in the tech field?
Stay closely connected to the client side. Understanding the technology they're developing and their current tech level (MES and other systems) provides insights into their readiness for Flexciton.
In the world of technology and innovation, what emerging trend or development excites you the most, and how do you see it shaping our industry?
The semiconductor industry's rapid evolution and diversity are fascinating. The competition between TSMC and Samsung Foundry in advanced GAA (gate-all-around) technology is particularly intriguing. While Samsung claims to be ahead, industry voices suggest a bluff with poor yields. The competition is ongoing, and I wonder if TSMC will maintain its lead or if there will be a paradigm shift in the industry.
Tell us about your best memory at Flexciton?
Meeting the Renesas team at their fab in Palm Bay and witnessing one of their operators' reactions to our app was a memorable experience. Kodi, a talented young manufacturing specialist, was genuinely impacted by our technology, which was amazing to see in person. After returning home, he even had a piece of code named after him by Amar.
Do you think you have what it takes to work at Flexciton? Visit our careers page to browse our current openings.

Harnessing AI's Potential: Revolutionizing Semiconductor Manufacturing
The dominant technological theme of the year is unmistakably clear: artificial intelligence (AI) is no longer a distant future, but a transformative present. From the startling capabilities of conversational ChatGPT to the seamless navigation of autonomous vehicles, AI is demonstrating an unprecedented ability to manage complexity and enhance decision-making processes. This wave of innovation raises the question: how can the semiconductor industry, which stands at the heart of technological progress, leverage AI to navigate its own intricate challenges?
Complexity-driven Challenges
Semiconductor wafer fabs are marvels of modern engineering, embodying a complexity that rivals any known man-made system. These intricate networks of toolsets and wafer pathways require precision and adaptability far beyond the conventional methods of management. The task is made harder still by current challenges: a protracted shortage of skilled labor, increasingly advanced product designs, and the ever-present volatility of the supply chain.
The latest generation of products is the pinnacle of complexity, with production processes that involve thousands of steps and incredibly intricate constraints. This complexity is not just a byproduct of design; it is an inherent challenge in scaling up production while keeping costs within reasonable limits.
The semiconductor supply chain is equally complicated and often susceptible to disruptions that are becoming all too common. In this context, the requirement for skilled labor is more pronounced than ever. Running fab operations effectively demands a workforce that's not just technically skilled but also capable of innovative thinking to solve problems of competing objectives, improve processes, and extract more value. No small task in an environment already brimming with complexity.
The Need for AI in Semiconductor Manufacturing
As we delve into Industry 4.0, we find ourselves at a crossroads. The software solutions of today, while advanced, are not the panacea we once hoped for. The status quo has simply reshuffled the problems we face; we've transitioned from relying on shop floor veterans' tacit knowledge and intuition to a dependency on people who oversee and maintain the data in digital systems. These experts manning the screens are armed with MES, reporting, and legacy scheduling software, all purporting to streamline operations. Yet, the core issue remains: these systems still hinge on human intelligence to steer the intricate workings of the fabs.
At the core of these challenges lies a common denominator: the need for smarter, more efficient, and autonomous systems that can keep pace with the industry's rapid evolution. This is precisely where AI enters the frame, poised to address the shortcomings of current Industry 4.0 implementations. AI is not just an upgrade—it's a paradigm shift. It has the capability to assimilate the nuanced knowledge of experienced engineers and operators working in a fab and translate it into sophisticated, data-driven decisions. By integrating AI, we aim to break the cycle of displacement and truly solve the complex problems inherent in wafer fabs management. The potential of AI is vast, ready to ignite a revolution in efficiency and strategy that could reshape the very fabric of manufacturing.
Building AI for the Semiconductor Industry
Flexciton is the first company to build an AI-driven scheduling solution on the back of many years of scientific research and successfully implement it in a semiconductor production environment. So how did we do it?
Accessing the Data
The foundation lies in data – clean, accessible, and comprehensive data. Much like the skilled engineers who intuitively navigate the fab's labyrinth, AI requires a map – a dataset that captures the myriad variables and unpredictable nature of semiconductor manufacturing.
Despite the availability of necessary data within fabs, it often remains locked in silos or relegated to external data warehouses, making it difficult to access. Yet, partnerships with existing vendors can unlock these valuable data reserves for AI applications.
Finding People Who Can Build AI
The chips that enable AI are designed and produced by the semiconductor industry, but the AI-driven applications are developed by people who are not typically found within the sector. They gravitate toward powerhouses like Google and Amazon or deep-tech companies working on future-proof technologies. This reveals a broader trend: the allure of semiconductors has diminished for the emerging STEM talent pool, overshadowed by the glow of places where state-of-the-art tech is being built. Embracing this drift, Flexciton planted its roots in London, a nexus of technological evolution akin to Silicon Valley. This strategic choice has enabled us to assemble a diverse and exceptional team of optimization and software engineers representing 22 nationalities among just 43 members. It's a testament to our commitment to recruiting premier global talent to lead the charge in tech development, aiming to revolutionize semiconductor manufacturing.
AI Needs Cloud
The advent of cloud computing marks a significant milestone in technological evolution, enabling the development and democratization of technology based on artificial intelligence. At the core of AI development lies the need for vast computing power and extensive data storage capabilities. The cloud environment offers the ability to rapidly provision resources at a relatively low cost. With just a few clicks, a new server can be initialized, bypassing the traditional complexities of hardware installation and maintenance typically handled by IT personnel.
Furthermore, the inherent scalability of the cloud means that not only can we meet our current computing needs but we can also seamlessly expand our resources as new technologies emerge. This flexibility provides collaborating fabs with the latest technology while avoiding the pitfalls of significant initial investment in equipment that requires regular maintenance and eventually becomes obsolete.
Security within the cloud is an area where misconceptions abound. As a cloud-first company, we often address queries about data security. It's crucial to understand that being cloud-first does not mean that we hold your data ourselves. In fact, your data is securely stored in Microsoft Azure data centers, which are bastions of security. Microsoft's commitment to cyber security is reflected in its employment of more than 3,500 security professionals dedicated to keeping those data centers robust, offering peace of mind that often surpasses the security capabilities of private data centers.
Effective Deployment of AI in Fabs
The introduction of AI-driven solutions within a fab environment entails a significant change in existing processes and workflows and often results in decision-making that diverges from the traditional. This can unsettle teams and requires a comprehensive change management strategy. Therefore, the implementation process must be planned as a multifaceted endeavor, deeply rooted in human collaboration.
A successful deployment begins with assembling the right team—a blend of industrial engineers with intimate knowledge of fab operations, and technology specialists who underpin the AI infrastructure. This collective must not only include fab management and engineers but also those who are the lifeblood of the shop floor—individuals who intimately understand the fab's heartbeat.
When it comes to actual deployment, the process is iterative and data-centric. Setting clear objectives is pivotal. The AI must be attuned to the fab's goals—be it enhancing throughput or minimizing cycle times. Often, the first output may not align with operational realities—a clear indication of the AI adage that the quality of input data dictates the quality of output. It is at this juncture that the expertise of fab professionals becomes crucial, scrutinizing and correcting the data, and refining the schedules until they align with practical fab dynamics. With objectives in place and a live scheduler operational, the system undergoes rigorous in-fab testing.
Change management is the lynchpin in this transformative phase. The core of successful AI adoption is rooted in the project team's ability to communicate the 'why' and 'how'—to educate, validate, and elucidate the benefits of AI decisions that, while novel, better align with overarching business goals and drive performance metrics forward.
Making AI Understandable and Manageable
The aversion to the enigmatic 'black box' is universal. In the world of fabs, it can be a barrier to trust and adoption: operational teams must feel empowered to both grasp and guide the underlying mechanisms of AI models.
We made a considerable effort to refine our AI scheduler by incorporating a feature that lets users influence the objective the scheduler is tasked to achieve and understand the decisions behind each schedule. Once a schedule is created, engineers can inspect and interrogate those decisions to understand why the scheduler made them.
Case Studies: Success Stories of AI Deployment
I firmly believe that we are on the cusp of a transformative era in semiconductor manufacturing, one where AI-driven solutions will yield unprecedented benefits. To illustrate this, let's delve into some practical case studies.
The first involves implementing Flexciton's AI scheduler within the complex diffusion area of a wafer fab—a zone notorious for its intricate processes. We aimed to achieve a trifecta of goals: maximize batch sizes, minimize rework, and significantly reduce reliance on shop floor decision-making. The challenge was magnified by the fab's limited IT and IE resources at the time of deployment. Partnering with an existing vendor whose systems were already integrated and had immediate access to essential data facilitated a rapid and efficient implementation with minimal engagement of the fab's IT team. This deployment led to remarkable improvements: clean tools saw 25% bigger batches, and rework in the diffusion area was slashed by 36%.
Another case study details a full fab deployment, where the existing rules-based scheduling system was replaced with Flexciton's AI scheduler. The goal was to enhance capacity and reduce cycle times. The deployment was staged, beginning with simpler areas starting with metrology tools, through the photolithography area and eventually scaling to the entire fab, yielding a global optimization of work-in-process (WIP) flow. The result was a significant increase in throughput and a staggering 75% reduction in manual flow control transactions, a testament to the AI's ability to autonomously optimize WIP flow and streamline operations.
The Autonomous Future of Semiconductor Manufacturing
In closing, the semiconductor industry stands on the precipice of a new era marked by autonomy. AI technology, with its capacity to make informed decisions without human input, has demonstrated not only the potential for improved KPIs but also a significant reduction in the need for human decision-making. The future of semiconductor manufacturing is one where AI-driven solutions consistently deliver superior production results, alleviating the human workload and steering fabs towards their objectives with unprecedented precision and efficiency.
As we embrace this autonomous future, it becomes clear that the integration of AI in semiconductor manufacturing is not just an enhancement of the status quo but a reinvention of it. With each fab that turns to AI, the industry moves closer to realizing a vision where technology and human ingenuity converge to create a landscape of limitless potential.
Author: Jamie Potter, CEO and Cofounder, Flexciton

The Flex Factor with... Will
Introducing Will, Lead Backend Engineer at Flexciton. Explore his daily tasks, ranging from crafting backend architecture to overseeing the codebase and managing technical debt in this month's edition of The Flex Factor.
Tell us what you do at Flexciton?
I am a lead backend engineer and the software development practice lead. My work involves designing the backend architecture, managing the codebase structure and technical debt, pushing for best practices across the wider engineering team and contributing features to my delivery team.

What does a typical day look like for you at Flexciton?
I usually start my morning by scanning through the production logs from our deployments and seeing if anything looks suspect and in need of an investigation. From there it will depend on what I am focused on for that week, which tends to vary a fair amount. The majority of my time is spent coding features or doing large scale design work. Some days I get to spend refactoring and restructuring our codebase; occasionally I will get to work in the devops or optimisation space, which I always look forward to. In any given week there will be a handful of ongoing projects at various stages, from architectural designs to software development practice work that needs to be structured and prioritised. No day goes by without me writing at least some code, but there is a fair amount of admin work to do as well.
What do you enjoy most about your role?
The diversity of the work I get to do. My work often overlaps with optimisation and devops, so I can find myself speaking to lots of different people throughout the day. There are many opportunities to dive deeper into a topic with various team members willing to support you. Since joining I have worked with terraform, CI pipelines, infrastructure, hardware configuration, optimisation, frontend, customer deployments, database optimisation and management, the application backend and much more.
If you could summarise working at Flexciton in 3 words, what would they be?
Collaborative, Challenging, Diverse.
What emerging technology do you believe will have the biggest impact on our lives in the next decade?
I think the next decade is going to be made great by lots of smaller contributions made across technology from both hardware and software. I don’t have much hope for AGI / useful AGI this decade but there is a lot going on to be excited about. From a hardware perspective we have companies making huge progress in designing chips specifically for model training, and at the other end of the spectrum more companies are putting satellites into orbit to enable global access to high speed internet. AI has fuelled the search for stable structures of proteins and crystals, pushing the frontiers of new medicines and treatments, as well as material science. Memory safety in programming languages has started to draw attention from governments too, with languages like Rust (and potentially Hylo in the future) likely to lead the way for memory-safe applications. It will be interesting to see how the landscape changes over the next few years and see companies start to shift their codebases over.
What’s the best piece of advice you’d give to someone starting a career in the tech industry today?
I think the best piece of advice would be to throw away any notion of imposter syndrome from the start. Programming, and tech in general, is massive, and it's certainly true that the more you know, the more you realise you do not know. Everyone will take a different path throughout their career and find themselves being an expert in one topic and (momentarily) hopeless in another. When the topics that you know nothing about come along, it's best to embrace that and start finding opportunities to learn. It is important to convince yourself that while you may not be able to learn everything, you could learn anything, and to find joy in accruing that knowledge as you progress in your career. Bearing this in mind, I would say come into tech because you love it and because you want to learn. There is such a good community across programming languages and industries that anyone who wants to learn can easily find help.
Tell us about your best memory at Flexciton?
I can’t think of one great memory that stands out, but what makes Flexciton great is all the little things that happen week after week such that by Sunday evening, I am looking forward to speaking with my team in Monday standup.

C is for Cycle Time [Part 2]
Part 2
In the first part of 'C is for Cycle Time', we explored the essence of cycle time in front-end wafer fabs and its significance for semiconductor companies. We introduced the operating curve, which illustrates the relationship between fab cycle time and factory utilization, as well as the power of predictability and the ripple effects cycle time can have across the supply chain.
In part 2, we will explore strategies to enhance cycle time through advanced scheduling solutions, contrasting them with traditional methods. We will use the operating curve, this time to demonstrate how advanced scheduling and operational factors, such as product mix and factory load, can significantly impact fab cycle time.
How wafer fabs can improve cycle time
By embracing the principles of traditional Lean Manufacturing, which are essentially focused on reducing waste in production, fabs can effectively reduce cycle time [1]. Here are a few strategies that can help improve fab cycle time:
- Improving maintenance strategies: for example, moving from reactive to more proactive maintenance can improve cycle time through fewer breakdowns and more predictable tool availability [2].
- As noted in part 1, minimizing wasted time in batch formation and reducing the frequency of rework due to defects improves cycle time.
- Purchasing faster tools, although this can be a time-consuming and costly undertaking. In-facility expansion may take up to a year, while the commencement of a new facility could extend to three years [3].
- Establishing optimal batching in diffusion poses a considerable challenge, given the intricate process constraints within the diffusion area, such as timelinks, as we’ve explained in a recent blog.
- Balancing cycle time of hot lots with average fab cycle time. Fabs often assign higher priority to hot lots, which can negatively impact the average cycle time of production lots [4].
- Developing the skills of existing operators and expediting the onboarding process for new operators could be another means of reducing variability in production, thus impacting cycle time.
The implementation of an advanced AI scheduler can facilitate most of the strategies noted above, leading to an improvement in cycle time with significantly less effort demanded from a wafer fab compared to alternatives such as acquiring new tools. In the next sections we are going to see how this technology can make your existing tools move wafers faster without changing any hardware!
Applying an advanced AI scheduler to improve cycle time
In this section, we delve into how an advanced AI scheduler (AI Scheduler) can maintain factory utilization while reducing cycle time.
First let’s define what an AI Scheduler is. It is an essential piece of fab software whose core engine is powered by AI techniques such as mathematical optimization. It possesses the ability to adapt to ongoing real-time changes in fab conditions, including variations in product mixes, tool downtimes, and processing times. Its output decisions can achieve superior fab objectives, such as improved cycle time, surpassing the capabilities of heuristic-based legacy scheduling systems. More aspects of an advanced AI scheduler can be found in our previous article, A is for AI. The AI Scheduler optimally schedules fab production in alignment with lean manufacturing principles. It achieves this by optimally sequencing lots and strategically batching and assigning them to tools.
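For a flavour of what "optimally assigning lots to tools" means in practice, here is a deliberately small sketch using the open-source PuLP library. The lots, tools, and processing hours are made up, and a real scheduler handles far more constraints (batching, timelinks, setups, sequencing) than this toy model; the point is simply that the assignment is chosen by solving a model rather than by applying a fixed rule.

```python
# Illustrative lot-to-tool assignment model (PuLP + bundled CBC solver); data is hypothetical.
import pulp

lots = ["L1", "L2", "L3", "L4"]
tools = ["T1", "T2"]
# Hypothetical processing hours per (lot, tool) combination.
hours = {("L1", "T1"): 3, ("L1", "T2"): 4,
         ("L2", "T1"): 2, ("L2", "T2"): 2,
         ("L3", "T1"): 5, ("L3", "T2"): 3,
         ("L4", "T1"): 4, ("L4", "T2"): 6}

model = pulp.LpProblem("lot_to_tool_assignment", pulp.LpMinimize)
assign = pulp.LpVariable.dicts("assign", (lots, tools), cat="Binary")
max_load = pulp.LpVariable("max_tool_load", lowBound=0)

# Each lot is processed on exactly one tool.
for lot in lots:
    model += pulp.lpSum(assign[lot][tool] for tool in tools) == 1

# max_load bounds every tool's workload; minimizing it balances the line.
for tool in tools:
    model += pulp.lpSum(hours[(lot, tool)] * assign[lot][tool] for lot in lots) <= max_load

model += 1.0 * max_load  # objective: minimize the heaviest tool load
model.solve(pulp.PULP_CBC_CMD(msg=False))

for lot in lots:
    chosen = next(tool for tool in tools if pulp.value(assign[lot][tool]) > 0.5)
    print(f"{lot} -> {chosen}")
```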
Figure 5 shows an example of how an AI Scheduler can successfully shift the cycle time from the original operating curve closer to the theoretical operating curve. As a result, cycle time is now 30 days at 60% factory utilization. This can be accomplished by enhancing fab efficiency through measures such as minimizing idle times, reducing rework, and mitigating variability in operations, among other strategies. In the next sections, we will show two examples, in metrology and diffusion, of how cycle time is improved with optimal scheduling.
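For readers who want to experiment with the operating curve itself, a common textbook-style approximation relates the x-factor (cycle time divided by raw process time) to utilization via a variability coefficient. The short Python snippet below uses assumed numbers, not the figures above, to show how reducing variability shifts the curve down, which is the effect an AI Scheduler aims for.

```python
# Illustrative operating-curve calculation, using a common queueing-style approximation:
# x_factor = 1 + alpha * u / (1 - u), where alpha captures overall fab variability.
# All numbers here are assumptions for the example, not data from the article's figures.

def cycle_time_days(utilization, raw_process_time_days=10.0, alpha=2.0):
    """Approximate fab cycle time for a given factory utilization (0 < u < 1)."""
    x_factor = 1.0 + alpha * utilization / (1.0 - utilization)
    return raw_process_time_days * x_factor

# Lowering variability (alpha) shifts the curve down: shorter cycle time at the same utilization.
for alpha in (3.0, 1.5):
    print(f"alpha={alpha}: CT at 60% utilization = {cycle_time_days(0.60, alpha=alpha):.1f} days")
```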

Reducing queuing times and tool utilization variability in metrology
Many wafer fabs employ a tool pull-system for dispatching. In this approach, operators typically decide which idle tool to attend to, either based on their experience or at times, randomly. Once at the tool, they then select the highest priority lots from those available for processing. A drawback of this system is that operators don't have a comprehensive view of the compatibility between the lots awaiting processing, those in transit to the rack, and the tools available. This limited perspective can lead to longer queuing times and underutilized tools, as is evident in Figure 6.
An AI Scheduler addresses these inefficiencies. By offering an optimized workflow, it not only shortens the total cycle time but also minimizes variability in tool utilization. This in turn indirectly improves the cycle time of the toolset and overall fab efficiency. For example, Seagate deployed an AI Scheduler to photolithography and metrology bottleneck toolsets that were impacting cycle time. The scheduler reduced queue time by 4.3% and improved throughput by 9.4% at the photolithography toolset [5]. In the metrology toolset, the AI Scheduler reduced variability in tool utilization by 75% which resulted in reduced cycle time too, see Figure 7 [6].


Improving cycle time and optimal batching in diffusion
Diffusion is a toolset that poses operational complexities due to its intricate batching options and several coupled process steps between cleaning and various furnace operations [7]. Implementing an AI Scheduler can mitigate many of these challenges, leading to reduced cycle time:
- Strategic Batching: optimally formed batches can reduce total cycle time in diffusion (see the simple sketch after this list). To maximize the benefit of an AI Scheduler, the fab should provide good-quality data.
- Automated Furnace Loading: Typically, diffusion loading is accomplished via a pull-system from the furnace. This means that operators would revisit the cleaning area to manually pick the best batches, based on upcoming furnace availability. This approach often demands substantial resources and time, thereby increasing cost or cycle time. The AI Scheduler curtails this time considerably, freeing up operators for other essential tasks, which indirectly may reduce cycle time elsewhere.
- Reduction of Timelink Violations: A recent pilot implementation of an AI Scheduler in diffusion at a Renesas fab underscored its effectiveness. As displayed in Figure 8, timelink violations were significantly reduced. This minimizes the necessity for rework, further cutting down the cycle time, as explained earlier in the article.
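As a simple illustration of the batching inputs involved, the toy Python snippet below greedily groups recipe-compatible lots into batches, taking the most urgent timelinks first. The data, batch size, and rule are assumptions for the example; an optimization-based scheduler would weigh batch size, timelink risk, and furnace availability jointly rather than applying a fixed rule like this.

```python
# Toy sketch of recipe-compatible batch formation in diffusion; illustrative only.
from collections import defaultdict

MAX_BATCH_SIZE = 4  # hypothetical furnace capacity in lots

# (lot_id, recipe, hours_left_before_timelink_expires) -- made-up data
lots = [("L1", "R1", 2.0), ("L2", "R1", 5.0), ("L3", "R2", 1.5),
        ("L4", "R1", 3.0), ("L5", "R2", 6.0), ("L6", "R1", 0.5)]

def form_batches(lots):
    """Group lots by recipe, most urgent timelink first, and fill batches up to the size limit."""
    by_recipe = defaultdict(list)
    for lot in sorted(lots, key=lambda lot: lot[2]):  # most urgent timelink first
        by_recipe[lot[1]].append(lot[0])
    batches = []
    for recipe, lot_ids in by_recipe.items():
        for i in range(0, len(lot_ids), MAX_BATCH_SIZE):
            batches.append((recipe, lot_ids[i:i + MAX_BATCH_SIZE]))
    return batches

print(form_batches(lots))
# e.g. [('R1', ['L6', 'L1', 'L4', 'L2']), ('R2', ['L3', 'L5'])]
```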

Maximizing the value of the AI Scheduler by integrating with other applications
In the above examples of photo, metrology and diffusion toolsets, the AI Scheduler can support operators in achieving consistently high performance. To enhance the efficiency of the scheduling system in fabs predominantly run by operators with minimal AMHS (Automated Material Handling Systems) presence, pairing the scheduler with an operator guidance application, as detailed in one of our recent blogs on user-focused digitalisation, can be a valuable approach. Such software suggests the next task an operator should execute.
The deployment of an AI Scheduler should focus on bottleneck toolsets - specifically, those that determine the fab's cycle time. Reducing the cycle time of a toolset will be inconsequential if that toolset is not a bottleneck. Consequently, fabs should consider the following two approaches:
- Ensure the deployment of the AI Scheduler on the most critical toolsets to effectively address dynamic bottlenecks. This ensures that as bottlenecks shift, the AI Scheduler can promptly reduce the cycle time of the newly identified bottlenecked toolset. By doing so, fabs can consistently maintain a low cycle time.
- The introduction of a global (or a fab wide) application layer – such as a solution that looks across all the toolsets and all lots across the whole line – can help coordinate all deployed AI Schedulers. This application should indicate which toolsets are bottlenecks and it should also adjust lot priorities or production targets per toolset to ensure a smooth flow across the line. The interaction between global applications and local scheduling applications can be seen in recent papers [9] [10].

Dealing with dynamic changes in the fab and understanding trade-offs between competing objectives
Another factor to consider is that the actual operating curve of the fab is moving constantly based on changes in the operating conditions of the fab. For example, if the product mix changes substantially, this may impact the recipe distribution enabled in each tool and subsequently, the fab cycle time vs factory utilization curve would shift. The operating curve can also change if the fab layout changes, for example when new tools are added.
In Figure 9, we show an example wherein the cycle time versus factory utilization curve for product mix A shifts upward after a change to a new product mix B. This signifies an increased cycle time in the fab due to the change in product mix (and factory utilization was slightly reduced under these new conditions). An autonomous AI Scheduler, as described by Sebastian Steele in a recent blog, should be able to understand the different trade-offs. For example, in Figure 10, the AI Scheduler could hold utilization at the same level as before (60%) under product mix B, but cycle time would rise to 50 days (10 days more than with product mix A). Alternatively, the user can customize this trade-off so that the fab returns to the original 40-day cycle time under product mix B, but at a lower utilization of 57%.

Trade-offs between different objectives at local toolsets may impact the fab cycle time. Consider the trade-offs in terms of batching costs versus cycle time. For instance, constructing larger batches might be crucial for high-cost operational tools such as furnaces in diffusion and implant. However, this approach could lead to an extended cycle time for the specific toolset and, consequently, an overall increase in fab cycle time.
Tool availability and efficiency significantly affect cycle time, akin to the influence of product mix on operating curves. If tools experience reduced reliability over time, the operating curve may shift upward, resulting in a worse cycle time for the same utilization. While the scheduler cannot directly control tool availability, strategically scheduling maintenance and integrating it with lot scheduling can positively impact cycle time. A dedicated future article will delve into this topic in more detail.
Conclusion
The cycle time conversation has been enriched by the introduction of the AI Scheduler, bringing a paradigm shift in how we perceive and manage the dynamics of front-end wafer fabs. As highlighted in our exploration, these schedulers do more than just automate – they optimize. By understanding and predicting the nuances of operations, from tool utilization to lot prioritization, advanced AI schedulers provide a roadmap to not just manage but optimize cycle time, considering alternative trade-offs. In future articles we will talk about how scheduling maintenance and other operational aspects can be considered in a unified and autonomous AI platform – the next revolution, we believe, after the innovations of the Arsenal of Venice, Ford, and Toyota.
Author: Dennis Xenos, CTO and Cofounder, Flexciton
References
- [1] James P. Ignizio, 2009, Optimizing Factory Performance: Cost-Effective Ways to Achieve Significant and Sustainable Improvement 1st Edition, McGraw-Hill, ISBN 978-0-07-163285-0
- [2] Lean Production, 2023, TPM (Total Productive Maintenance), URL.
- [3] Ondrej Burkacky, Marc de Jong, and Julia Dragon, 2022, Strategies to lead in the semiconductor world, McKinsey Article, URL.
- [4] Philipp Neuner, Stefan Haeussler, Julian Fodor, and Gregor Blossey, 2023, Putting a Price Tag on Hot Lots and Expediting in Semiconductor Manufacturing. In Proceedings of the Winter Simulation Conference (WSC '22). IEEE Press, 3338–3348.
- [5] Robert Moss, Dennis Xenos, Tina O’Donnell, 2023, Deployment of an Advanced Photolithography Scheduler at Seagate Technology, IFORS News, Volume 18, Issue 1, ISSN 2223-4373, pp. 8–10, URL.
- [6] Robert Moss, 2022, Ever-decreasing circles: how iterative modelling led to better performance at Seagate Technologies. Euro 2022 Conference, Finland, URL.
- [7] Thomas Beeg, 2023, Impact of “time links” or controlled queue times, Factory Physics and Automation, URL.
- [8] Jamie Potter, 2023, Fab scheduling is now so complex that it needs next-generation intelligent software, Silicon Semiconductor Magazine, Volume 44, Issue 2, pp. 26-29, URL.
- [9] I. Konstantelos et al., 2022, "Fab-Wide Scheduling of Semiconductor Plants: A Large-Scale Industrial Deployment Case Study," 2022 Winter Simulation Conference (WSC), Singapore, pp. 3297-3308, doi: 10.1109/WSC57314.2022.10015364.
- [10] Félicien Barhebwa-Mushamuka. 2020, Novel optimization approaches for global fab scheduling in semiconductor manufacturing. Other. Université de Lyon. English. ⟨NNT : 2020LYSEM020⟩. ⟨tel-03358300⟩

C is for Cycle Time [Part 1]
This two-part article aims to explain how we can improve cycle time in front-end semiconductor manufacturing through innovative solutions, moving beyond conventional lean manufacturing approaches. In part 1, we will discuss the importance of cycle time for semiconductor manufacturers and introduce the operating curve to relate cycle time to factory utilization. Part 2 will then explore strategies to enhance cycle time through advanced scheduling solutions, contrasting them with traditional methods.
Part 1
Why manufacturers care about cycle time
Cycle time, the time to complete and ship products, is crucial for manufacturers. James P. Ignizio, in Optimizing Factory Performance, noted that top-tier manufacturers like Ford and Toyota have historically pursued the same goal to outpace competitors: speed [1]. This speed is achieved through fast factory cycle times.
This emphasis on speed had tangible benefits: Ford, for instance, could afford to pay workers double the average wage while dominating the automotive market. The Arsenal of Venice's accelerated ship assembly secured its status as a dominant city-state. Similarly, fast factory cycle times were central to Toyota’s successful lean manufacturing approach.
Semiconductor manufacturers, meanwhile, grapple with extended cycle times that can often span 24 weeks [2]. This article focuses on manufacturing processes in front-end wafer fabs, as their contribution to the end product – such as a chip or hard disk drive head – spans several months, whereas back-end processes can be completed in a matter of weeks [3]. The principles discussed, however, apply equally to back-end fabs.
Why Short Cycle Times Matter for Front-end Wafer Fabs
- Revenue acceleration: The quicker products reach customers, the faster revenue streams in. However, quantifying the precise financial impact due to cycle time is intricate and beyond this article's scope.
- Competitive advantage: Reducing cycle time for R&D wafers accelerates product launches; more than 20% of a front-end fab's production line can be devoted to R&D wafer testing and iteration. Swift deliveries also enhance a company's reputation, leading to more contracts. At the 2022 Winter Simulation Conference, Micron highlighted their rapid advancements: maturing 30% quicker in DRAM (five months ahead of the previous node) and 20% quicker in NAND (a year faster than the prior node). See Figure 1.
- Agility in market responsiveness: A fab with shorter cycle times can swiftly adjust to market fluctuations, whether that is a surge in demand or a shift in product preferences, such as changes in product mix. It can also respond faster to changes in customer requirements.
- Risk mitigation: The shorter the cycle time, the quicker a fab can respond once defects have been detected as it takes less time to perform rework.
- Inventory management: Lower cycle times reduce the amount of work-in-progress (WIP) sitting in buffers or racks (intermediate stock) or at the end of the production line. This not only frees up tied-up capital; with less WIP in the fab, wafers also move more quickly, as shown in the later section introducing the operating curve.

Achieving Predictable Cycle Time
Less variability in cycle time helps a wafer fab achieve better predictability in the manufacturing process. Predictability enables optimal resource allocation; for instance, operators can be positioned at fab toolsets (known as workstations) based on the workload anticipated from cycle time predictions. Recognizing tools' idle periods also allows for better maintenance scheduling, which in turn reduces unplanned maintenance. In an upcoming article (Part 2), we'll explore how synchronizing maintenance with production can further shorten cycle times.
Measuring and monitoring the cycle time improves overall fab performance
Measuring and monitoring cycle times helps identify deviations from the expected range of variability. This, in turn, promptly highlights underlying operational issues, facilitating quicker issue resolution. It also assists industrial engineers in pinpointing bottlenecks, enabling a focused analysis of root causes and prompt corrective actions.
Supply chain stakeholders often underestimate the impact of cycle time
In the semiconductor industry, cycle time plays a pivotal role in broader supply chain orchestration:
- A predictable cycle time informs suppliers when to provide fresh batches of raw materials.
- Furthermore, it influences the downstream processes of Assembly & Test Operations (back-end facilities). Back-end facilities with a cycle time of less than a week gain enhanced predictability, allowing for more effective allocation of capacity and resources.
- Predictable cycle times will also inform safety inventory levels, freeing capital and optimizing storage space.
Cycle time is one component of a product's total lead time (which also includes procurement, transportation, and so on). Total lead time can therefore be reduced by reducing the long cycle times of front-end wafer fabs. A reliable cycle time also nurtures trust with suppliers, laying the foundation for favorable partnerships and agreements. In essence, cycle time is not just about production; it's the heartbeat of the semiconductor supply chain ecosystem.
Understanding how cycle time impacts product delivery times is essential for the semiconductor industry. In some analyses, cycle time is confused with capacity; for example, the authors of a McKinsey article state that “Even with fabs operating at full capacity, they have not been able to meet demand, resulting in product lead times of six months or longer” [4]. On the contrary, in a fab operating at full capacity, product lead times increase precisely because the average manufacturing cycle time increases.
How to measure cycle time
Fab Cycle Time
The fab cycle time metric defines the time required to produce a finished product in a wafer fab. The general cycle time term is also used to measure the time required to complete a specific process step (e.g. etching, coating) in a toolset, known as process step cycle time. The fab cycle time consists of the following time components as can be seen in Figure 2:
- Value-added processing time, which is the time taken to transform or assemble the unfinished product, which is a wafer in our case.
- Non-value-added processing time includes the time taken for inspection and testing, as well as the time for transferring the wafer between different steps.
- Time to prepare the products for processing: the time operators or tools need to form a batch, i.e. to select which lots should be bundled together for processing.
- Queue time: the time an unfinished wafer spends waiting to be processed because the required tool is busy, either processing another batch or undergoing maintenance.

To measure and monitor cycle time, wafer fabs must track transactional data for each lot, capturing timestamps for events like the beginning and completion of processing at a tool. This data is gathered and stored by a Manufacturing Execution System (MES). Such transactional information can be utilized for historical operations analysis or for constructing models to forecast cycle times influenced by different operational factors. This foundation is crucial for formulating the operational curve of the fab, which we'll delve into in the subsequent part of this blog. As outlined in an article by Deenen et al., there are methods to develop data-driven simulations that accurately predict future cycle times [3].
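As a rough illustration of how such transactional data can be turned into cycle time components, here is a minimal Python sketch. The event names, lot IDs and timestamps are hypothetical and do not reflect a real MES schema; the point is simply that track-in/track-out timestamps let us split total cycle time into processing and waiting portions.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical MES-style lot events: (lot_id, event, timestamp)
events = [
    ("LOT01", "track_in",  datetime(2023, 9, 1, 8, 0)),
    ("LOT01", "track_out", datetime(2023, 9, 1, 9, 30)),
    ("LOT01", "track_in",  datetime(2023, 9, 1, 12, 0)),
    ("LOT01", "track_out", datetime(2023, 9, 1, 13, 0)),
]

def cycle_time_breakdown(events):
    """Split each lot's cycle time into processing time and waiting time.

    Processing time = sum of (track_out - track_in) per step.
    Waiting time    = everything else between the first track_in and the
                      last track_out (queueing, transport, batch formation).
    """
    by_lot = defaultdict(list)
    for lot, event, ts in events:
        by_lot[lot].append((ts, event))

    results = {}
    for lot, evs in by_lot.items():
        evs.sort()
        total = (evs[-1][0] - evs[0][0]).total_seconds() / 3600.0
        processing = 0.0
        track_in = None
        for ts, ev in evs:
            if ev == "track_in":
                track_in = ts
            elif ev == "track_out" and track_in is not None:
                processing += (ts - track_in).total_seconds() / 3600.0
                track_in = None
        results[lot] = {
            "total_h": total,
            "processing_h": processing,
            "waiting_h": total - processing,
        }
    return results

print(cycle_time_breakdown(events))
# {'LOT01': {'total_h': 5.0, 'processing_h': 2.5, 'waiting_h': 2.5}}
```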
Fab operating curve: Fab Cycle Time versus Factory Utilization

As mentioned earlier, historical data can be used to generate the operating curve of a fab, which describes cycle time in relation to factory utilization. Figure 3 plots fab cycle time in days against fab utilization (%), where utilization is defined as the WIP divided by the total capacity of the fab.
We have found this method useful in understanding the fundamental principles of cycle time. The operating curve helps to explain how factory physics impact fab KPIs such as cycle time and fab utilization by showing the changes in the operating points:
- The horizontal line, representing the sum of raw process times (known as the theoretical cycle time), envisions a scenario with zero queuing time in the fab. Moving right on the x-axis illustrates the impact of queuing time on cycle time: as more WIP is introduced into the fab, the accumulation of queuing time becomes inevitable.
- The ideal operating curve represents the operation of a wafer fab assuming zero waste. It defines the minimum achievable cycle time for each fab load; the gap between this curve and the theoretical cycle time is due to real-life variability that cannot be eliminated completely, e.g. unplanned maintenance and inconsistent tool processing times.
- Cycle time tends towards infinity as fab utilization approaches 100%.
- The actual operating curve, cycle time versus factory utilization, represents the current fab's operation including all its inefficiencies, such as excessive inventory, variability in operations, idle times, poor batching and rework.
- Both curves assume average or constant values for the fab's operational parameters, for example a fixed number of installed tools, an average availability of each tool and of labor, and a constant product mix.
- The actual operating curve describes the impact on cycle time of loading the fab with more WIP, as shown in Figure 4. Fab management can use this information to decide on the trade-off between cycle time and factory utilization, since higher fab utilization is associated with higher throughput (i.e. number of wafers per unit of time).

In Figure 3, you can see that the current fab cycle time is 40 days when the factory utilization is at 60%. Theoretically, we could reduce the cycle time to 22 days. The difference between these two points is due to the inefficiencies that contribute to the factory cycle time as explained in the introduction of this section. In Part 2 of this blog, we will explore the various types of inefficiencies and examine how innovation can shift the operating curve to achieve lower cycle times while maintaining the same fab utilization.
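To make the shape of the operating curve concrete, here is a minimal sketch using a simple queueing-style approximation, CT ≈ RPT * (1 + alpha * u / (1 - u)), where alpha lumps together fab variability. This is not the model behind Figure 3; the raw process time and alpha values are illustrative and merely chosen so that, at 60% utilization, the two curves land near the 22-day and 40-day figures quoted above.

```python
def cycle_time(utilization, raw_process_time_days, alpha):
    """Approximate fab cycle time on an operating curve.

    utilization           -- fab utilization as a fraction (0 < u < 1)
    raw_process_time_days -- theoretical cycle time (zero queueing)
    alpha                 -- variability factor; higher alpha = worse curve
    """
    u = utilization
    return raw_process_time_days * (1 + alpha * u / (1 - u))

RPT = 15.0  # illustrative theoretical cycle time in days

for u in (0.5, 0.6, 0.7, 0.8, 0.9, 0.95):
    ideal = cycle_time(u, RPT, alpha=0.3)   # low-variability "ideal" curve
    actual = cycle_time(u, RPT, alpha=1.1)  # curve with real-life inefficiencies
    print(f"u={u:.0%}  ideal={ideal:5.1f} d  actual={actual:5.1f} d")
```

Running the loop shows both curves rising slowly at moderate utilization and exploding as utilization approaches 100%, which is exactly the trade-off fab management has to weigh.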
Summary
In summary, cycle time is not merely a production metric but the very pulse of the semiconductor manufacturing and supply chain. It governs revenues, shapes market responsiveness, and is pivotal in driving innovation. By understanding its nuances, semiconductor companies can not only optimize their operations but also gain a competitive edge. And while we've scratched the surface on its significance, the question remains: how can we further reduce and refine it? In part 2 of the C for Cycle Time blog, we will discover innovative techniques that promise to revolutionize cycle time management in wafer fabs.
Author: Dennis Xenos, CTO and Cofounder, Flexciton
References
- [1] James P. Ignizio, 2009, Optimizing Factory Performance: Cost-Effective Ways to Achieve Significant and Sustainable Improvement 1st Edition, McGraw-Hill, ISBN 978-0-07-163285-0
- [2] Semiconductor Industry Association, 2021, Blog, URL
- [3] Deenen, P.C., Middelhuis, J., Akcay, A. et al., 2023, Data-driven aggregate modeling of a semiconductor wafer fab to predict WIP levels and cycle time distributions. Flex Serv Manuf J. https://doi.org/10.1007/s10696-023-09501-1
- [4] Ondrej Burkacky, Marc de Jong, and Julia Dragon, 2022, Strategies to lead in the semiconductor world, McKinsey Article, URL.

Security and the Cloud: Should We Really Keep Everything On-prem?

Security and the Cloud: Should We Really Keep Everything On-prem?
Welcome to a nuanced exploration of pivotal considerations surrounding cloud adoption in the context of wafer fabrication. For those reading sceptically, uncertain about the merits of cloud integration, or perhaps prompted by concerns about lagging behind competitors, this blog endeavours to shed light on the key areas of relevance.
Introduction
For those reading this blog, the chances are you (or perhaps your boss) remain unconvinced about the merits of cloud adoption, yet are open to participating in the ongoing debate. Alternatively, there might be a concern of falling behind industry peers, perhaps heightened by recent security incidents such as the hacking of X-Fab. By the end of this short article, you will have gained valuable insights into the significant areas of cloud security, with the anticipation that such information will contribute to a more informed decision-making process.
Firstly, this is about using a cloud service, not running your own systems in the cloud. There are good arguments for that too, but that’s not what this article is about. So, the areas deemed worthy of exploration within this context include:
- Security - It is a paramount concern that influences the reluctance of fabs to embrace cloud services, often accompanied by apprehension about entrusting sensitive data to cloud platforms.
- Type of service - If you’re running it on-premise, you’re probably managing it. Would you prefer to be buying software or a service?
- Cost - One of the supposed benefits of the cloud is less overall running cost. How much truth is in that?
- Reliability and criticality - What about backups and disaster recovery and failover and downtime and maintenance and…?
Recognising the complexity of these topics, we aim to take a segmented approach, with this blog dedicating its focus to the critical factor of security. Subsequent entries promise a comprehensive discussion on the remaining aspects.
Security
We’re going to start with a simple one. Is your fab in any way connected to the internet? If you’re genuinely air-gapped, then it's reasonable to assume you already have a high level of security. But if you’re not actually air-gapped, then you could well improve your security by using a cloud service rather than running that service on-prem. It's not instantly obvious, perhaps, but let us explain.
My fab is connected to the internet already
The most compelling argument for this is a simple one. Microsoft, AWS, IBM and Google all run reputable, professional public clouds, and it's fair to say they take similarly rigorous approaches to cybersecurity; any service running on one of them benefits from that foundation.
Microsoft alone employs around 3,500 cybersecurity professionals to maintain the security of Azure, and these providers collectively spend heavily on cybersecurity improvements. That's an awful lot more person-hours on security than most teams could ever apply themselves, and every single one of those professionals is contributing to the security of a system running in their cloud.
“Aha!”, you say, “that tells me that the underlying public cloud infrastructure that the service is running on is probably as secure as anything connected to the world could be, but that doesn't mean that the service running on it is, right?” And yes, that’s a fair concern. As one of those service providers, we can confirm that we do not employ 3500 cybersecurity professionals. But because we run our service on Azure, we don’t need to. More than half our fight is already done for us and the remainder is a lot easier. For example:
- Staying on top of security patches: Not something we need to worry about. Azure does almost all of it for us. The only thing it doesn’t do is update our own code dependencies, so we have Snyk do that for us.
- Anti-virus: Don’t want viruses in your systems? Tick. That’s an easy built-in feature.
- Secure data: All storage is encrypted without us having to do anything special and we can do the same with data in transit.
- Secret management: There's a whole secret-management infrastructure, designed by those 3,500 cybersecurity professionals, that we can use right off the shelf (see the sketch after this list).
- Access management: Active Directory – a directory service built by Microsoft – is built into everything, so we just have to define what the roles should be and who has access. There aren't many roles, and only a few very trusted people.
- Security auditing: Every action in every piece of infrastructure is logged by default and we can feed that all in very easily to our security monitoring systems to identify any suspicious activity.
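To give a flavour of what "off the shelf" means for the secret-management point above, here is a minimal sketch using Azure's Python SDK. The vault URL and secret name are hypothetical, and this illustrates the general pattern rather than describing our own configuration: no credentials ever appear in application code.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault URL. DefaultAzureCredential resolves a managed identity,
# environment variables, or a developer login at runtime, so no secret is
# stored alongside the code or configuration.
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Retrieve a secret by name; rotation and access policies live in Key Vault,
# not in the application.
db_password = client.get_secret("db-password").value
```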
In discussing the ease of these security measures, perhaps we’ve been slightly frivolous. However, despite the casual tone, the implementation of security measures when using cloud technologies is notably simpler when compared with organisations that manage their own hardware.
My fab is air-gapped right now
On the other hand, maybe you’re a fab that is actually air-gapped. You’ve got a solid on-site security team and excellent anti-social-engineering measures. Why introduce any risk? Fair question. We’d argue that this is going to become an increasingly challenging position to hold, and that now may be the time to get ahead of the problem. Tools on your shop floor are already getting more modern, with virtualised metrology and off-site telemetry feeds for predicting failure rates using machine learning. Some of these systems simply can’t be run on site, and you’ll increasingly have to do without the more advanced aspects of your tooling to maintain your air gap. Over time this will take its toll, and your competitors will begin to pull away.
At this point it’s worth mentioning that SEMI has put together standards in the cybersecurity space. These address risks like bringing tools into your network with embedded software on them as well as defining how to set up your fab network to secure it, while still enabling external communication. We’d suggest that you should treat a cloud service no differently. It is entirely possible to use a managed service, in the cloud, connected to your fab, while still relying on purely outbound connectivity from your fab, leaving you entirely in control of what data is provided to the service and what you do with any data made available by that service in return.
In summary
If you’re already “internet-enabled” in your fab, then we’d argue that using a reputable public cloud service is actually more secure than running that same service on-prem.
If you’re completely offline, we’re not going to argue that using a cloud service is more secure than not connecting to the internet. What we are arguing, though, is that at some point you’re going to have to connect anyway, so you’re better off getting on top of this now rather than waiting until the market forces you into it.
Author: Ray Cooke, VP of Engineering at Flexciton

User-focused Digitalisation: Empowering Wafer Fab Operators with Intelligent Software

User-focused Digitalisation: Empowering Wafer Fab Operators with Intelligent Software
For many years, my career has been deeply rooted in the ever-changing world of manufacturing–an industry where progress relies on innovation. Throughout my professional journey, I have been immersed in this dynamic sector, focusing on creating bespoke software solutions for manufacturing and logistics while seamlessly integrating third-party solutions into established workflows. My experience has afforded me the opportunity to witness first-hand the profound changes that digitalisation and automation have brought to the manufacturing landscape. As technology and manufacturing processes have become more closely intertwined, the operational dynamics of production have been reshaped.
Like any successful partnership, the marriage of manufacturing and technology requires a strong foundation built on trust, mutual understanding, respect, and a shared ambition to support each other's growth and empowerment. However, these transformative shifts have brought along their fair share of challenges and concerns that continue to echo around the manufacturing world.
Embracing servant software in the manufacturing landscape
A few years ago, I collaborated with a couple of value stream managers as we scoured the market for various digital products, seeking the optimal solution to integrate with our in-house developed material requirements planning (MRP) system.
One significant concern was the fear of adopting software that was too intrusive. In an industry where precision and control are paramount, the idea of software delving too deeply into our operations was disconcerting. Even worse was the fear of getting locked into specific technologies: deeply integrated software poses a risk because it is so costly to replace, which can limit our capacity to adapt and evolve in tandem with the industry. We wanted automation and the ability to forecast incoming work. Our aim was to prevent defects and misjudgments, all the while ensuring that we retained control over our manufacturing processes. And importantly, we were adamant about not compromising our quality standards.
The reality is that the market for manufacturing-oriented software is littered with solutions that are cumbersome, inflexible, and expensive. When I joined Flexciton as a Senior Product Manager, I was pleasantly surprised to discover a refreshing departure from the norm in Flexciton's product philosophy.
It evokes the concept of “servant software”. Similar to the idea of servant leadership–where a leader prioritises the well-being, growth and empowerment of team members–servant software aims to streamline processes, simplify tasks, and provide solutions that cater to the users' requirements and preferences.
As a foundational principle, servant software is as flexible and adaptable as a meticulously tailored suit. This quote summarises the concept:
Upgrade your user, not your product. Don’t build better cameras — build better photographers.
— Kathy Sierra
In the challenge of digitising semiconductor wafer fabs, Flexciton aspires to play a pivotal role in cultivating highly skilled operators and managers—individuals who are empowered by our technology rather than being replaced by it.
Automation for enrichment, not alienation
Picture Josh, a Senior Fab Operator in the diffusion area, who has been working for five years in a manually operated wafer fab. Half of his workday is consumed by the arduous task of sifting through a colossal spreadsheet that meticulously logs all the lots in progress, each with its own unique characteristics. He sits at his desk, constantly toggling between this spreadsheet and another monitor displaying the real-time status of the tools.
Jotting down notes on a piece of paper, Josh ventures into the tangible world of the fab. There, he confronts the actual events unfolding. He asks himself, "Is this an actuality? Are these lots genuinely ready for processing? Can I really preload this tool?" Realisation strikes: "No, they are still in transit, and I cannot proceed with this batch," or "I can’t preload this tool yet; a few minutes are still left." Josh retreats to his desk to recalibrate his plans once more.
When operators are liberated from repetitive and inefficient tasks, they can harness their cognitive abilities to identify improvement opportunities, propose innovative solutions, and implement process enhancements, directing their efforts towards value-added activities that demand uniquely human qualities. This empowerment not only enhances job satisfaction but also drives a culture of ownership and accountability.
Embracing Lean Management principles
Servant software aligns seamlessly with the principles of lean management, a philosophy that champions efficiency through the elimination of waste and continuous improvement. Lean management is not just about operational optimization; it emphasises a shift in mindset, encouraging all levels of an organisation to work cohesively towards shared objectives. By integrating servant software within this framework, manufacturers can elevate their workforce's role away from simply executing tasks and towards contributing to the bigger picture.
Operators typically concentrate their efforts within their designated areas of responsibility, striving to optimize operations by carefully managing various tasks. They work diligently to maintain a delicate balance among tools, ensuring workloads are efficiently allocated, changeovers are optimized, and maintenance and process control activities are accommodated. Even within a confined production area, this manual juggling of numerous constraints and variables presents a considerable challenge, a topic we explored further in our article on autonomous scheduling.
A new way to schedule the fab is the key. But what’s in it for the operators? What is the impact on their daily work? Our software aims to provide operators with a tool that leads them to take the right action at precisely the right moment. It ensures that tasks are executed with impeccable timing, neither prematurely nor delayed, considering not only the current status of the WIP (work in progress) and the tools they are responsible for, but also the potential effect of their actions on the following production stages.
This goes beyond optimizing individual areas; instead, it is designed to harmonise the entire manufacturing process. By avoiding over-optimization of one area, we prevent potential bottlenecks or resource shortages elsewhere in the workflow, resulting in a balanced, easily monitored, and controllable production process.
The fab in your pocket!
Our operators' tools are integral to the Flexciton application ecosystem, where every component is integrated and consistent. From analytics and scheduling to automated tuning, and extending to the practical, hands-on actions of our operators—such as loading or unloading tools or conducting Statistical Process Control (SPC) tasks—our system comprehensively covers all aspects. Therefore, Josh can simply glance at his portable device to discern the next best action to perform or be notified when something urgently requires his attention.
Our primary goal is to provide operators with the essential information they need, without overwhelming them. This information is easily accessible on portable devices, ensuring its effectiveness from the very first day an operator steps into the fab.
Operators—now armed with useful insights and empowered by automation—can expand their contributions beyond their individual roles, engaging in more value-adding tasks. The result is a collaborative ecosystem where every individual becomes a key player in achieving fab-wide targets and goals.

A customer-centric product philosophy
In delivering software solutions for the semiconductor industry, our mission revolves around achieving an optimal balance, thereby cultivating a modern, flexible, and customer-centric product philosophy. Our platform, while robust, maintains a deep respect for operational boundaries, ensuring that our customers are not confined to rigid models.
Instead, it functions as a dynamic tool that enriches adaptability and innovation, and grants users complete control over their manufacturing processes. By adhering to these core principles and relentlessly pursuing software that empowers without overwhelming, we unlock the full potential of a harmonious synergy between technology and manufacturing, propelling progress forward without concessions.
Author: Valentina Vivian, Senior Product Manager at Flexciton

The Flex Factor with... Jannik

The Flex Factor with... Jannik
Please give a warm welcome to Jannik, our next team member to sit in the hot seat. In this edition of The Flex Factor, find out how Jannik juggles being both an optimization engineer and customer lead, as well as what gets him excited in the world of tech.

Tell us what you do at Flexciton?
I’m an optimization engineer and technical customer lead working in the customer team. As an optimization engineer, I work on our models and the general back-end code to make sure we create optimal schedules that meet the client’s requirements.
As a customer lead, I speak to our clients to understand their unique challenges, so that I can translate them into requirements for our solution and liaise with our team to prioritise the right bits of work we want to get done.
What does a typical day look like for you at Flexciton?
To start my day I like to check in with my clients, to make sure their apps are working as expected and there are no queries waiting to be handled. Other than that, there is no such thing as a typical day.
Some days will be full of programming to create solutions for new problems we encounter, or to iron out bugs that made their way into the code during previous work. Other days might have lots of meetings to align our work with the engineering & product teams, or to speak with our customers and technology partners.
What do you enjoy most about your role?
My role has loads of connections within the company, which means I get to work with many super smart people to achieve our goals. I also really enjoy learning about the many different challenges our clients face and creating solutions for them, and occasionally I get to visit clients and peek inside the cleanroom, which never fails to amaze me.
If you could summarise working at Flexciton in 3 words, what would they be?
Challenges, curiosity, intelligence.
If you could have dinner with any historical figure, living or deceased, who would it be, and why?
Sebastião Salgado, the Brazilian photographer. Not only is he an inspirational photographer, he must also be full of stories and life lessons from many years of travelling and reforesting his family's farm land.
In the world of technology and innovation, what emerging trend or development excites you the most, and how do you see it shaping our industry?
It’s a very broad trend, but it’s amazing to see AI solutions spreading to more and more people and helping them in their daily lives. You’d think an industry like semiconductors is at the forefront of this, but we can see that there is still a lot of hidden potential which we can hopefully help to unlock over the next few years by replacing some of the legacy technology.
Tell us about your best memory at Flexciton?
This one is really tough because I love all the small moments here, from having a super technical discussion amongst engineers to finding out a new fun fact about each other over some drinks.
If I have to pick a single moment, it would be our surfing lesson near Albufeira during last year’s team trip. It was just loads of fun trying it out (and failing) together.
We're hiring! To see what vacancies we have available, check out our careers site.

B is for Batching

B is for Batching
Welcome back to the Flexciton Tech Glossary Series: A Deep Dive into Semiconductor Technology and Innovation. Our second entry of the series is all about Batching. Let's get started!
A source of variability
Let's begin with the basics: what exactly is a batch? In wafer fabrication, a wafer batch is a group of wafers that are processed (or transported) together. Efficiently forming batches is a common challenge in fabs. While both logistics and processing wrestle with this issue, our article will focus on batching for processing, which can be either simultaneous or sequential.

Simultaneous batching is when wafers are processed at the same time on the same machine. It is very much inherent to the entire industry, as most of the machines are designed for handling lots of 25 wafers. There are also process types – such as thermal processing (e.g. diffusion, oxidation & annealing), certain deposition processes, and wet processes (e.g. cleaning) – that benefit from running multiple lots in parallel. All of these processes get higher uniformity and machine efficiency from simultaneous batching.
On the other hand, sequential batching refers to the practice of grouping lots or wafers for processing in a specific order to minimise setup changes on a machine. This method aims to maximise Overall Equipment Effectiveness (OEE) by reducing the frequency of setup adjustments needed when transitioning between different production runs. Examples in wafer fabrication include implant, photolithography (photo), and etch.
Essentially, the entire process flow in wafer manufacturing has to deal with batching processes. To give a rough idea: a typical complementary metal-oxide semiconductor (CMOS) architecture in the front end of the line involves batching in up to 70% of its value-added steps. In a recent poll launched by FabTime on the top cycle time contributors, the community placed batching at number 5 [1], behind tool downs, tool utilisation, holds, and one-of-a-kind tools. Batching creates lot departures in bursts, and hence it inherently causes variability in arrivals downstream. Factory Physics states that:
“In a line where releases are independent of completions, variability early in a routing increases cycle time more than equivalent variability later in the routing.” [2]
Successfully controlling this source of variability will inevitably result in smoother running down the line. However, trying to reduce variability in arrival rates downstream can lead to smaller batch sizes or shorter campaign lengths, affecting the effectiveness of the batching machines themselves.
The many complexities of batching
In wafer fabs, and even more so in those with high product mix, batching is particularly complicated. As described in Factory Physics:
"In simultaneous batching, the basic trade-off is between effective capacity utilisation, for which we want large batches, and minimal wait to batch time, for which we want small batches.” [2]
For sequential batching, changing over to a different setup of the machine will cause the new arriving lots to wait until the required setup is available again.
In both cases, we’re talking about a decision to wait or not to wait. The problem can easily be expressed mathematically if we’re dealing with single-product manufacturing and a low number of machines to schedule. However, as one can imagine, the higher the product mix, the more possible setups and machines there are; the problem complexity increases and the size of the solution space explodes. That's not all: other factors can come into play and complicate things even more. Four examples are:
- Timelinks or queue time constraints: a maximum time in between processing steps
- High-priority lots: those that need to move faster through the line for any reason
- Downstream capacity constraints: machines that should not get starved at any cost
- Pattern matching: when the sequence of batching processes needs to match a predefined pattern, such as AABBB
Strategies to deal with batching
Historically, the industry has relied on batching policies: common rules of thumb that can essentially be split into ‘greedy’ and ‘full batch’ policies [3]. Full batch policies require lots to wait until a full batch is available. They tend to favour effective capacity utilisation and cost factors, while negatively impacting cycle time and variability. Greedy policies don’t wait for full batches and favour cycle time; they assume that when utilisation levels are high, there will be enough WIP to make full batches anyway. For sequential batching on machines with setups, common rules include minimum and maximum campaign length, which have their own counterpart configurations for greedy versus full batching [3].
Batch formation, whether sequential or simultaneous, involves far more complex decisions than loading a single lot into a tool, as it requires determining which lots can be grouped together. Compatibility between lots must be considered, and practitioners must also optimize how long lots already on the rack should wait for new arrivals, all with the goal of maximising batch size [4].

Industrial engineers face the challenge of deciding the best strategy for loading batch tools, such as those in the diffusion area. FabTime articles [4][5] compare the impact of the greedy versus the full (or near-full) batch policy. The greedy heuristic reduces queuing time and variability but may not be cost-effective. Full batching is cost-effective but can be problematic when operational parameters change. For instance, if a tool's load decreases (it becomes less of a bottleneck), a full batch policy may increase cycle time and overall fab variability. On the other hand, a greedy approach might cause delays for individual lots arriving just after a batch is loaded, especially critical or hot lots with narrow timelink windows. Adapting these rules to changing fab conditions is essential.
In reality, these two policies are extreme settings in a spectrum of possible trade-offs between cost and cycle time (and sometimes quality). To address the limitations of both the greedy and full batch policies, a middle-ground approach exists. It involves establishing minimum batch size rules and waiting for a set duration, X minutes, until a minimum of Y lots are ready for batching. This solution usually lacks robustness because the X and Y values depend on various operational parameters, different recipes, product mix, and WIP level. As this rule-based approach incorporates more parameters, it demands greater manual adjustments when fab/tool settings change, inevitably leading to suboptimal tool performance.
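To make the middle-ground rule concrete, here is a minimal Python sketch of how such a rule is often expressed; the parameter values (the X-minute wait, the Y-lot minimum and the furnace capacity) are hypothetical. The last example also hints at the rigidity described above: lots below the minimum can sit on the rack indefinitely unless an engineer retunes the numbers.

```python
from dataclasses import dataclass

@dataclass
class BatchRule:
    min_lots: int        # Y: minimum lots before dispatching early
    max_wait_min: float  # X: longest the oldest lot is allowed to wait
    full_batch: int      # furnace capacity in lots

def should_dispatch(rule: BatchRule, lots_waiting: int, oldest_wait_min: float) -> bool:
    """Middle-ground rule between 'greedy' and 'full batch'.

    Dispatch immediately when a full batch is available; otherwise dispatch
    once at least `min_lots` are waiting and the oldest lot has waited longer
    than `max_wait_min` minutes.
    """
    if lots_waiting >= rule.full_batch:
        return True
    if lots_waiting >= rule.min_lots and oldest_wait_min >= rule.max_wait_min:
        return True
    return False

rule = BatchRule(min_lots=4, max_wait_min=45, full_batch=6)
print(should_dispatch(rule, lots_waiting=6, oldest_wait_min=0))    # True: full batch
print(should_dispatch(rule, lots_waiting=4, oldest_wait_min=50))   # True: waited long enough
print(should_dispatch(rule, lots_waiting=3, oldest_wait_min=120))  # False: still below minimum
```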
In all of the above solutions, timelink constraints are not taken into consideration. To address this, Sebastian Knopp [6] recently developed an advanced heuristic based on a disjunctive graph representation. The model's primary aim was to reduce the problem size while incorporating timelink constraints. The approach successfully tackled real-life industrial cases, albeit of unknown problem size.
Over the years, the wafer manufacturing industry has come up with various methodologies to help deal with the situation above, but they give no guarantee that the eventual policy is anywhere near optimal and their rules tend to stay as-is without adjusting to new situations. At times, this rigidity has been addressed using simulation software, enabling factories to experiment with various batching policy configurations. However, this approach proved to be resource-intensive and repetitive, with no guarantee of achieving optimal results.
How optimization can help master the batching problem
Optimization is the key to avoiding the inherent rigidity and unresponsiveness of heuristic approaches, helping to effectively address the batching problem. An optimization-based solution takes into account all batching constraints, including timelinks, and determines the ideal balance between batching cost and cycle time, simultaneously optimizing both objectives.
It can decide how long to wait for the next lots, considering the accumulating queuing time of the current lots and the predicted time for new lots to arrive. No predetermined rules are in place; instead, the mathematical formulation encompasses all possible solutions. With a user-defined objective function featuring customised weights, an optimization solver autonomously identifies the optimal trade-off, eliminating the need for manual intervention.
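As a deliberately tiny illustration of this idea (not Flexciton's formulation), here is a sketch using the open-source PuLP library: a single furnace decides whether to start a batch now or wait for predicted arrivals, with the trade-off expressed purely through two objective weights rather than a rule. The lot data, weights and variable names are hypothetical, and a real fab-scale model adds timelinks, recipes, many tools and the decomposition discussed below.

```python
import pulp  # open-source MILP modelling library (ships with the CBC solver)

# Hypothetical data: lots heading for one batch furnace.
# ready[i] = minutes from now until lot i reaches the rack (0 = already there).
ready = {"L1": 0, "L2": 0, "L3": 0, "L4": 25, "L5": 40}
capacity = 4                              # furnace capacity in lots
starts = sorted(set(ready.values()))      # candidate batch start times
w_queue, w_batch = 1.0, 100.0             # user-defined objective weights

m = pulp.LpProblem("batch_start_decision", pulp.LpMinimize)
y = pulp.LpVariable.dicts("start_at", starts, cat="Binary")    # when to start
x = pulp.LpVariable.dicts("in_batch", ready, cat="Binary")     # which lots to load
z = pulp.LpVariable.dicts("pair", [(i, t) for i in ready for t in starts], cat="Binary")

m += pulp.lpSum(y[t] for t in starts) == 1            # choose exactly one start time
m += pulp.lpSum(x[i] for i in ready) <= capacity      # respect furnace capacity
for i in ready:
    # A lot can only be loaded if the chosen start is not before its arrival.
    m += x[i] <= pulp.lpSum(y[t] for t in starts if t >= ready[i])
    for t in starts:
        # Linearisation: z[i, t] = 1 exactly when lot i is loaded AND the start is t.
        m += z[(i, t)] <= x[i]
        m += z[(i, t)] <= y[t]
        m += z[(i, t)] >= x[i] + y[t] - 1

queue_time = pulp.lpSum(max(t - ready[i], 0) * z[(i, t)] for i in ready for t in starts)
batch_size = pulp.lpSum(x[i] for i in ready)
m += w_queue * queue_time - w_batch * batch_size      # trade waiting against batch fill

m.solve(pulp.PULP_CBC_CMD(msg=False))
start = next(t for t in starts if y[t].value() > 0.5)
loaded = [i for i in ready if x[i].value() > 0.5]
print(f"start at t={start} min with lots {loaded}")
# With these weights the solver waits until t=25 to load a full batch of four lots;
# lowering w_batch makes it start immediately with the three lots already at the rack.
```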
The challenge with traditional optimization-based solutions is the computational time when the size and complexity of the problem increase. In an article by Mason et al.[7], an optimization-based solution is compared to heuristics. While optimization outperforms heuristics in smaller-scale problems, its performance diminishes as problem size increases. Notably, these examples did not account for timelink constraints.
This tells us that the best practice is to break the overall problem down into smaller problems and use optimization to maximise the benefit. At Flexciton, advanced decomposition techniques are used to break down the problem and find a good trade-off between the loss of optimality relative to the original problem and the NP-hard complexity of solving it whole [8].
Many practitioners aspire to attain optimal solutions for large-scale problems through traditional optimization techniques. However, our focus lies in achieving comprehensive solutions that blend heuristics, mathematical optimization, like mixed-integer linear programming (MILP), and data analytics. This innovative hybrid approach can vastly outperform existing scheduling methods reliant on basic heuristics and rule-based approaches.
Going deeper into the solution space
In a batching context, the solution space represents the numerous ways to create batches with given WIP. Even in a small wafer fab with a basic batching toolset, this space is immense, making it impossible for a human to find the best solution in a multi-product environment. Batching policies throughout history have been like different paths for exploring this space, helping us navigate complex batching mathematics. Just as the Hubble space telescope aided space exploration in the 20th century, cloud computing and artificial intelligence now provide unprecedented capabilities for exploring the mathematical world of solution space, revealing possibilities beyond imagination.
With the advent of these cutting-edge technologies, it is now a matter of finding a solution that satisfies the diverse needs of a fab, including cost, lead time, delivery, quality, flexibility, safety, and sustainability. These objectives often conflict, and ultimately, finding the optimal trade-off is a business decision, but the rise of cloud and AI will enable engineers to pinpoint a batching policy that is closest to the desired optimal trade-off point. Mathematical optimization is an example of a technique that historically had hit its computational limitations and, therefore, its practical usefulness in wafer manufacturing. However, mathematicians knew there was a whole world to explore, just like astronomers always knew there were exciting things beyond our galaxy. Now, with mathematicians having their own big telescope, the wafer manufacturers are ready to set their new frontiers.
Authors
Ben Van Damme, Industrial Engineer and Business Consultant, Flexciton
Dennis Xenos, CTO and Cofounder, Flexciton
References
[1] FabTime Newsletter: Issue 24.03
[2] Wallace J. Hopp, Mark L. Spearman, Factory Physics: Third Edition. Waveland Press, 2011
[3] Lars Mönch, John W. Fowler, Scott J. Mason, 2013, Production Planning and Control for Semiconductor Wafer Fabrication Facilities, Modeling, Analysis, and Systems, Volume 52, Operations Research/Computer Science Interfaces Series
[5] FabTime Newsletter: Issue 9.03
[6] Sebastian Knopp, 2016, Complex Job-Shop Scheduling with Batching in Semiconductor Manufacturing, PhD thesis, l’École des Mines de Saint-Étienne
[7] S. J. Mason, J. W. Fowler, W. M. Carlyle & D. C. Montgomery, 2005, Heuristics for minimizing total weighted tardiness in complex job shops, International Journal of Production Research, Vol. 43, No. 10, 15 May 2005, 1943–1963
[8] S. Elaoud, R. Williamson, B. E. Sanli and D. Xenos, Multi-Objective Parallel Batch Scheduling In Wafer Fabs With Job Timelink Constraints, 2021 Winter Simulation Conference (WSC), 2021, pp. 1-11

Maximising Wafer Fab Performance: Harnessing the Cloud's Competitive Edge

Maximising Wafer Fab Performance: Harnessing the Cloud's Competitive Edge
To cloud, or not cloud, that is the question.
Some might consider the opening statement a tad flippant in borrowing Hamlet's famous soliloquy. Yet the internal struggle our hero feels agonising over life and death holds a certain likeness to the challenges faced by Fab Managers today. Businesses live and die by their decisions to either embrace or disregard new innovations to gain a competitive edge, and nowhere is this truer than in the rough and tumble world of semiconductor manufacturing; Fairchild, National Semiconductor and S3 are just a few of those who did not last [1][2][3].
Semiconductor manufacturing has had a long history of innovating, tweaking, and tinkering [4], so it’s somewhat surprising that the sentiment towards cloud uptake has been weaker in the semiconductor industry compared to the wider market [5]. This article aims to explore some of the potential benefits of cloud adoption to better equip Fab Managers with the motivation to take another look at the cloud question.
Recap: What are the different types of Cloud?

Cloud computing encompasses public, private, and hybrid models. The public cloud (think Azure, AWS, Google Cloud and so on) offers rental of computational services over the internet, while the private cloud replicates cloud functionality on-premises. However, private clouds require a significant upfront investment, ongoing maintenance costs and a skilled in-house IT team to manage and maintain the infrastructure, making it a less appealing option for smaller firms. Hybrid cloud blends on-site and cloud resources for flexible workloads, segregating the most sensitive workloads to on-premise environments for the greatest control; however, control does not necessarily mean security, which will be discussed in a later article!
Understanding the benefits of cloud
1. The Latest Tech
Embracing the latest cloud technology offers wafer fab facilities, not just their parent organisations, a direct path to heightened capabilities in their manufacturing processes through the use of digital and smart manufacturing technologies. By harnessing advanced computational power for real-time analytics, optimization [6], and machine-learning defect detection [7], fabs can maximise all their fundamental KPIs, ultimately leading to better business outcomes. McKinsey estimates that, compared to other semiconductor activities, manufacturing has the most to gain from the AI revolution (Fig. 1), and a key technology enabling this will be the vast computational power of the cloud [8].

Case Study: The Latest Tech Driving Improvements in Fab KPIs
Seagate achieved a 9% increase in moves by utilising Flexciton’s cloud native platform and cutting-edge autonomous scheduling.
2. Redundancy, Scaling, Recovery and Updates
It is true that some of these technologies can be provided on-premises; however, cloud computing, in general, reduces downtime through redundancy, automated scaling, and disaster recovery mechanisms, ensuring seamless operation even during hardware failures or unexpected traffic spikes. Some estimates suggest that downtime can cost firms an eye-watering $1 million to $5 million per hour, depending on their size and sector [9]. Disaster recovery services run in the public cloud have demonstrated potential cost savings of up to 85% compared with private options [10]. It is easy to speculate that for critical wafer fab infrastructure, the cost of downtime could be significantly higher.
Furthermore, the number of wafers processed within a fab can cause computational traffic spikes during busy periods for some applications. On-premises deployments would need to account for this, even if the resource is not in use all the time, which can add to inefficiencies, while public cloud can elastically scale down, meaning you only pay for what you use.
Lastly, on-premises systems without the ability to be monitored and updated remotely are often many versions behind, prioritising perceived stability; yet research has shown that increasing the rate of software iteration improves stability and resilience rather than weakening it [11]. Without the convenience of remote updates, legacy systems can become entrenched, with employees on the shop floor hesitant to embrace change for fear of disrupting critical infrastructure and incurring the expense of upgrading IT infrastructure. This sets in motion a self-reinforcing cycle in which the costs and risks of transitioning grow over time, ultimately resulting in significant productivity losses as users continue to rely on technology from decades past.
3. Specialisation and Comparative Advantage
Stepping back from the fab and taking a holistic view of the semiconductor manufacturing organisation reveals compelling economic arguments, both on macro and micro scales, for embracing cloud.
Allowing cloud providers to specialise in cloud computing while wafer fab manufacturers focus solely on wafer fabrication benefits the latter by freeing them from the complexities of managing IT infrastructure. [12] This collaboration allows wafer fab manufacturers to allocate their resources towards core competencies, leading to increased operational efficiency and superior wafer production.
Simply put, fabs do not build the complex tools they need to make their products, such as photolithography equipment; they purchase and utilise them in ways others can’t to produce market leading products. Why should utilising the tools of the cloud be any different?
On a macro level, the argument for specialisation also applies through comparative advantage [13]. Different continents and countries have comparative advantages in certain fields: Asia has long been a world leader in all kinds of manufacturing thanks to its vast populations [14], while the United States has a tertiary education system that is the envy of the world; institutions like Stanford and MIT are household names across the globe, and this has provided the high technical skills needed to be the home of the technology start-up. Utilising cloud technology and other distributed systems allows firms to take the best of what both regions have to offer: high-tech manufacturing facilities from Singapore to Taiwan, combined with the latest technology from Silicon Valley or perhaps London. Through the cloud, Fab Managers and organisations can leverage a single advanced technology across multiple fabs within complex supply chains. This eliminates the need for costly and experienced teams to travel across the globe or for managing multiple teams in various locations with varying skill sets, all while locating facilities and offices where the best talent is.
In brief, semiconductor firms' fate could rest on one pivotal decision: adoption of cloud. This choice carries the promise of leveraging cutting-edge technology, fortifying resilience, and reaping a multitude of advantages. Notably, by transitioning to cloud-native solutions, Fab Managers can usher their organisations into an era of unparalleled competitiveness, all while enjoying a range of substantial benefits. Among these benefits, for example, is cloud-native architecture like Flexciton’s, promising lower cost of ownership and zero-touch maintenance for fabs. We will delve deeper into the crucial aspect of security in one of our upcoming blogs, providing a comprehensive understanding of how cloud-native solutions are actually key to safeguarding sensitive data and intellectual property, rather than compromising it. In this era of constant innovation, embracing the cloud is more than just an option; it’s becoming a strategic imperative.
Author: Laurence Bigos, Product Manager at Flexciton
References
[3] S3 Graphics: Gone But Not Forgotten | TechSpot
[4] Miller, C. (2022). Chip War: The Fight for the World's Most Critical Technology. Scribner.
[5] Flexciton | Blog & News | Is Fear Holding Back The Chip Industry’s Future In The Cloud?
[6] Flexciton | Resources | Seagate Case Study 2.0
[8] Applying artificial intelligence at scale in semiconductor manufacturing | McKinsey
[9] Know Key Disaster Recovery Statistics And Save Your Business (invenioit.com)
[11] Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps. IT Revolution Press.
[12] Specialization Definition (investopedia.com)
[13] What Is Comparative Advantage? (investopedia.com)
[14] Why China Is "The World's Factory" (investopedia.com)

Autonomous Scheduling: A Tale of Three Taxis

Autonomous Scheduling: A Tale of Three Taxis
At Flexciton, we often talk about how autonomous scheduling allows wafer fabs to surpass the need for maintaining many rules to enable the behaviours they want at different toolsets. I would like to offer an analogy to show how significant the difference is.
Navigating the City
Imagine you are a passenger in a taxi. Your driver is a local; they know every road like the back of their hand and know the best routes to avoid likely problems. They can be flexible and effective, but have to spend a long time thinking about how to get to your destination. They also can’t know about the traffic on each potential route, and for new destinations they may require some trial and error before they find a good way of getting there. Worst of all, though they might have accumulated some great stories from their years of driving, it’s only thanks to those many years that they can navigate with any level of mastery.
Now imagine you have a very basic robotic driver; this driver is so mechanical that it has a hard-coded rule for every single road and junction: “If I’m at junction 20, I wait exactly thirty seconds and then I turn left.” This rule has come from an engineer performing a time study based on traffic levels six months ago. The driver has no knowledge of local events happening (for example, if it turns out that there is no oncoming traffic right now), and doesn’t even change its decisions when you need it to navigate to a new destination!
Meanwhile, when local conditions change at all (gaps in oncoming traffic at junction 20 are now every twenty seconds on average!) an engineer needs to manually change that parameter in the robot’s logic. And if the overall conditions change everywhere, or a new destination is desired, every rule needs to be retuned.

Finally, imagine a truly autonomous taxi. This taxi has a navigation system that knows where the traffic is, assesses the speed of every potential route, reacts to changes in conditions, and can get you to exactly where you want to go. In fact, all you have to do is tell it the destination; then you can sit back and relax, knowing it will get you there in the shortest possible time.
Navigating the Fab
While many wafer fabs have moved away from relying purely on tribal knowledge of manufacturing specialists on the fab floor, the scheduling problem in semiconductor factories is so difficult that, until recently, the hard-coded robotic taxi driver was the state-of-the-art. These solutions ask industrial engineers to manually tune thousands of rules to achieve intelligent behaviour, and they must be continuously re-tuned as fab conditions change.
A common scheduling challenge is deciding when to allow wafers into a timelink (or queue time loop) at diffusion. A timelink is the maximum amount of time that can elapse between two or more consecutive process steps, and some schedulers will simply limit the number of lots allowed within the timelinked steps at any one time. Others will just use a priority weight given to all timelinked lots, so that they are more likely to move through the loop without violating their time limit. Both of these rules are manually tuned and can’t react to the conditions of that particular moment in time, leading either to rework or scrap, or unnecessarily high cycle times.
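For illustration only, the two rule types just described (a cap on lots inside the loop, and a fixed priority boost for timelinked lots) might be written like this, with entirely hypothetical numbers that an engineer would have to tune by hand and keep re-tuning as conditions drift:

```python
# Hypothetical, manually tuned timelink rules.
MAX_LOTS_IN_LOOP = 12        # rule 1: cap on WIP inside the timelinked steps
TIMELINK_PRIORITY_BOOST = 5  # rule 2: extra dispatch weight for timelinked lots

def can_enter_loop(lots_in_loop: int) -> bool:
    """Admit a lot into the queue-time loop only while the WIP cap allows it."""
    return lots_in_loop < MAX_LOTS_IN_LOOP

def dispatch_priority(base_priority: float, in_timelink: bool) -> float:
    """Give timelinked lots a fixed priority boost at downstream steps."""
    return base_priority + (TIMELINK_PRIORITY_BOOST if in_timelink else 0.0)
```

Neither function knows how fast the downstream furnaces are actually running today, which is exactly why such rules lead to rework, scrap, or unnecessarily high cycle times.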
Another typical example from a commonly-used heuristic scheduler is the application of minimum batch size rules at diffusion areas. A typical rule might be “wait for a minimum batch size of x, unless y minutes have elapsed, in which case dispatch whatever is at the rack.” Many fabs will set up this rule for every furnace-operation combination, which could mean ~3,000 manually tuned parameters just for one rule at one toolset.
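A rough sketch of why this adds up so quickly follows; the furnace and operation counts are made up purely to land near the ~3,000 figure above.

```python
# Hypothetical toolset: one "min batch size x, unless y minutes have elapsed"
# rule per furnace-operation combination, each with its own tuned values.
furnaces = [f"FURN{n:02d}" for n in range(1, 31)]       # 30 furnaces
operations = [f"OPER{n:03d}" for n in range(1, 101)]    # 100 diffusion operations

rules = {
    (furnace, operation): {"min_batch_size": 4, "max_wait_min": 60}
    for furnace in furnaces
    for operation in operations
}
print(f"{len(rules)} furnace-operation combinations to tune and re-tune by hand")
# 3000 furnace-operation combinations to tune and re-tune by hand
```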
Meanwhile, when micro conditions change, for example daily WIP level fluctuations, these tuned parameters cannot react. And worse, when macro conditions such as overall market demand change, it becomes very hard for the whole fab to pivot quickly, because every rule needs re-tuning manually. Despite the theory that these rules can be set once every few months, in practice most fabs end up re-tuning them continuously, even daily, in order to maintain reasonable performance, with the predictable impact that has on industrial engineering resources!
Optimized scheduling, however, does away with these rules entirely and directly calculates the optimal schedule to improve your chosen objectives. In the timelink example, it doesn’t need to rely on guessing how many lots can be allowed into the loop - it just calculates the optimal schedule for the multiple steps involved, ensuring no timelink violations will occur.
But still, how do you get the scheduler to do what you want?
A New Paradigm for Tuning
If you have read any of our previous articles, you may be aware that optimization-based scheduling uses objectives such as “minimise queue time” and “maximise batch size” to calculate the optimal schedule. In fact, on most of our toolsets we only use ~2-3 objective weights, and by setting these you can achieve the balance and results you want.
Even this, however, is not truly autonomous.
We’ve been working to bring forward a new paradigm: letting you choose the fab-level outcome you want directly - like setting the destination for the taxi. If you know you want to prioritise achieving higher throughput, you can just specify that and Flexciton’s autonomous scheduler will automatically figure out what the optimization objective needs to be to achieve it.
What does this mean? It means you directly control the fab outcome you want to achieve, rather than guessing what toolset-level behaviours will produce the fab-level KPIs you want.
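To illustrate the direction of this idea (and only that; this is not how Flexciton’s scheduler is actually implemented), here is a toy sketch in which a fab-level throughput target is translated into toolset-level objective weights by a simple feedback loop. `evaluate_schedule` is a hypothetical stand-in for running the optimizer, or a simulation of it, with a given pair of weights.

```python
def tune_objective_weights(target_moves_per_day, evaluate_schedule,
                           initial_weights=(1.0, 1.0), step=0.1, max_iters=50):
    """Conceptual sketch only: search for the batching-vs-queue-time weight balance
    that hits a fab-level throughput target. `evaluate_schedule` is a hypothetical
    callable that runs the optimizer (or a simulation of it) with the given weights
    and returns the predicted moves per day."""
    w_batch, w_queue = initial_weights
    for _ in range(max_iters):
        predicted = evaluate_schedule(w_batch, w_queue)
        if abs(predicted - target_moves_per_day) <= 0.01 * target_moves_per_day:
            break  # close enough to the requested fab-level outcome
        if predicted < target_moves_per_day:
            w_batch += step   # short on throughput: favour fuller batches
        else:
            w_queue += step   # over-shooting: favour shorter queue times
    return w_batch, w_queue
```

The user states the outcome; the weights follow.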
Orders of Magnitude
So when we speak about autonomous scheduling, we are referring to this new paradigm where you can choose the outcome you want, and Flexciton automatically does the rest. Instead of ~3,000 manually tuned parameters for just one of many rules at one toolset, just pick your desired KPI trade-off, and we automatically set the handful of objective weights that drive the optimization engine.
The result is that no industrial engineering resource needs to be dedicated to tuning; instead, this valuable resource can be redeployed to higher-value tasks. It also enables consistently high performance across changes in fab conditions, and makes it easier to pivot the entire fab’s direction when market conditions change.
This is how Flexciton’s scheduler is powerful enough to let you set the destination, and go.
Author: Sebastian Steele, Product Manager at Flexciton

A is for AI

A is for AI
We are excited to introduce the Flexciton Tech Glossary Blog Series: A Deep Dive into Semiconductor Technology and Innovation.
In an ever-evolving semiconductor industry, understanding the nuances of new technologies and the transformative potential of artificial intelligence and optimization is paramount. The Flexciton Tech Glossary Blog Series is designed to shed light on specific technologies and innovations, offering insights into how these advancements can revolutionise semiconductor manufacturing operations.
Each article in this series will delve into a distinct theme, aiming to equip any practitioner in the industry, from industrial engineers and manufacturing experts up to VP-level professionals, with the knowledge to integrate these innovations into their daily operations.
Beyond our in-house expertise, we’re excited to collaborate with industry experts, inviting them to contribute and enrich our series with their specialised knowledge and experience. Join us on this enlightening journey as we explore the frontiers of the semiconductor industry from A-Z.
AI will transform the semiconductor industry
Artificial Intelligence (AI) has become a transformative force in various sectors, driving a global wave of innovation and automation. Seemingly overnight, systems like ChatGPT that harness the primary human interface – natural language – have revolutionised how we interact with technology. In a similar vein, generative art technologies have reinvented our relationship with creativity, making it more accessible than ever before. These remarkable systems have acquired their capabilities through learning, fueled by training on vast amounts of data. This ongoing revolution prompts the question: what is the next frontier to be conquered?
Beyond the novel consumer applications leading the charge, the implications of AI in specialised fields, such as semiconductor manufacturing, are equally profound. Estimates place the earnings already achieved by AI across the semiconductor value chain at over $5 billion. The range of applications is immense and spans activities at all levels. From informing capital allocation, to demand forecasting, fab layout planning, and right down to chip design, AI can enable automation and increase efficiency. Semiconductor manufacturing, in particular, has been identified as the function presenting the most attractive opportunities, where the potential savings have been calculated to be over $10 billion in just the next few years [1].
Impact at all levels
The semiconductor industry is facing several challenges where AI can make a significant impact. These span all the industry’s key activities: long-term capacity planning, research & design, sales, procurement and, of course, manufacturing. Some use cases that are increasingly gaining traction are:
- Supply Chain Optimization: Predictive analytics can forecast demand, optimize inventory levels, and enhance the overall efficiency of the supply chain [2].
- Automated Material Handling Systems (AMHS): AI-driven cognitive robotics within the AMHS can automate material transportation throughout the plant [3] and support production planning that accounts for the AMHS [4].
- Predictive Maintenance: AI can predict when equipment is likely to fail or require maintenance, reducing downtime and increasing overall equipment efficiency [5].
- Defect Detection: Advanced image recognition algorithms can identify defects in wafers at an early stage, ensuring higher yields and reducing wastage [6].
- Virtual Metrology: AI can be deployed to estimate a product’s quality directly from production process data. This enables real-time quality monitoring without additional measuring steps [7].
- Process Control: AI can analyse vast amounts of data to optimize the manufacturing process, ensuring the best conditions for each step and improving the overall quality of the chips (e.g. tool matching) [8].

In this article, we focus on AI’s potential to automate scheduling within a semiconductor wafer fab and improve key metrics: increase the throughput of manufacturing lines, reduce cycle times and improve on-time delivery. But first, we step back and define both intelligence and artificial intelligence.
Defining intelligence
Defining intelligence has been a long-standing challenge, with various perspectives offered. A widely-accepted definition, which broadly aligns with the context of semiconductor applications, is as follows:
Intelligence is the ability to accomplish complex goals.
As suggested by Max Tegmark [9], intelligence is not universal but depends on the defined goal. As such, there are many possible types of intelligence. Extending this concept further, intelligence can be characterised according to the following features.
- Goal type: Intelligence can be technical (problem-solving), social (interaction), or creative (idea generation).
- Skill level: This is typically categorised as below/equivalent/super-human level. This determines whether we aim to match the performance of a human or surpass it.
- Scope: Narrow intelligence specialises in a specific task, while broad intelligence encompasses a wide range of tasks like human intelligence.
- Autonomy: Intelligence can operate with varying degrees of independence, from human-guided to fully autonomous.
In semiconductor scheduling, a super-human performance level is necessary to sift through billions or even trillions of candidate solutions to derive optimal decisions, whilst adhering to complex constraints. Focusing on the narrow scope of scheduling allows the system to specialise, thereby optimizing its performance for these specific requirements. The technical nature of the task calls for a solution that exploits its strictly technical aspects to achieve super-human performance. Finally, a system with high autonomy and no need for human intervention is desired in such a dynamic environment.

Three important facets of Artificial Intelligence
AI involves creating models and machines that mimic human intelligence, including learning, reasoning, and decision-making.
Learning is an important aspect of AI, relying on a model’s ability to iteratively refine its internal parameters until it can accurately capture underlying patterns. Machine Learning is the cornerstone approach for learning from data, and techniques in this category range from simple models like Linear Regression to complex Deep Learning networks.
Reasoning involves drawing inferences based on established rules and facts, mimicking the human ability to logically connect information. It can aid in tasks like medical diagnosis (see, for example, Med-PaLM 2, Google’s generative medical LLM) or legal case analysis.
Decision-making encompasses action exploration and problem-solving. Action exploration deals with determining actions through interaction with an environment, which can vary from well-defined scenarios, like a chess game, to unstructured situations, like driving a car. Problem-solving, on the other hand, focuses on finding solutions to clearly defined problems with specific objectives and constraints. This can involve simple tasks like sorting or more intricate challenges such as route planning, resource allocation, and scheduling. Optimization and mathematical programming are often employed in these contexts.

Five Crucial Factors When Selecting AI for Production Scheduling
Production scheduling involves making optimal choices to coordinate resources, tasks, and time to meet production goals. It requires handling well-defined parameters and constraints, along with specific objectives like maximising throughput or achieving on-time delivery. As such, it is best suited to rigorous and well-structured AI methods that focus on optimal and feasible decision-making, such as mathematical programming.
Nevertheless, good production scheduling can involve some aspects of learning and reasoning as well. Learning can be useful when some of the parameters are not well defined or static. For example, estimating transfer times between different locations of a fab may depend on various parameters, necessitating the use of a prediction model that has learned from past data. In terms of reasoning, a good decision-making approach should allow some degree of introspection from the user. Contrary to black box approaches, such as deep neural networks, mathematically formal methods such as Mixed Integer Linear Programming (MILP) enable transparency and explainability.
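As a small illustration of the learning aspect, the snippet below fits a simple regression to estimate transfer times from a handful of made-up features. In a real fab the model would be trained on MES transport logs; the feature choice and data here are purely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative (made-up) history: [distance_m, hour_of_day, wip_in_corridor] -> minutes.
# In practice the features and training data would come from the fab's MES logs.
X = np.array([
    [120, 9, 35],
    [300, 14, 80],
    [80, 2, 10],
    [250, 18, 60],
])
y = np.array([4.5, 11.0, 2.5, 9.0])  # observed transfer times in minutes

model = LinearRegression().fit(X, y)

# The scheduler can then consume the prediction as a parameter
# instead of a fixed constant that someone has to keep re-tuning.
estimate = model.predict(np.array([[200, 10, 50]]))[0]
print(f"Estimated transfer time: {estimate:.1f} min")
```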
Choosing the right AI technique for production scheduling in semiconductor manufacturing involves navigating the intricate balance among five crucial characteristics, each vital in this high-stakes field:
Optimality refers to the ability of an AI technique to reach and prove that the true optimal solution has been found. In a complex environment such as a semiconductor fab, where small improvements can have significant cost or time implications, optimality is of paramount importance.
Feasibility is about ensuring that the solution found truly abides by the constraints of the problem. Semiconductor fabs are bounded by many constraints, including machine capacity, human resources, and time windows. An AI solution must respect these constraints while optimizing the schedule.
Speed is crucial as it directly impacts the responsiveness of the system. Semiconductor manufacturing is a dynamic environment with constantly changing states. Therefore, the selected AI technique must be able to provide fast and accurate solutions to adapt to these changing conditions.
Explainability refers to the ability of an AI technique to provide insights into how it arrived at a given solution. In a high-stakes environment like a semiconductor fab, explainability helps build trust in the system, enables troubleshooting, and allows for more effective human-AI collaboration.
Flexibility refers to the technique’s applicability across a wide range of possible scenarios and system changes. This attribute highlights the capability of an AI method to be fully autonomous and require minimal human supervision and intervention. Within the context of a semiconductor plant, this quality is indispensable, especially as complexity grows and specialised personnel are spread thinner across other functions.
Different AI techniques fare differently on these dimensions. Rule-based systems offer high explainability and feasibility but may lack optimality, especially in complex scenarios. Unforeseen changes in a fab’s state may require rule adjustments or even entirely new ones, affecting flexibility. Heuristic approaches can provide acceptable solutions quickly, but typically cannot provide optimality or feasibility guarantees. Reinforcement learning can potentially offer high levels of optimality and speed, but at the cost of explainability, the risk of infeasibility, and the need for extensive tuning.
In contrast, mathematical programming techniques, such as MILP, can offer an excellent balance. They provide guaranteed feasibility, while the distance to true optimality can be easily computed. They offer explainability in terms of how decisions are made based on the objective function and constraints. Although computational complexity can be an issue, they can greatly benefit from advanced decomposition methods, and are well complemented by heuristic methods [10].
In the context of semiconductor fab scheduling, where feasibility, optimality, and explainability are particularly important, mathematical programming techniques can be a superior choice for AI implementation. Their deterministic nature and the rigour of their mathematical foundations make them a highly reliable and robust choice for such high-stakes, complex operational problems.
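To ground these terms with a concrete, if deliberately tiny, example: the sketch below uses the open-source PuLP modeller (chosen purely for illustration; it says nothing about Flexciton’s own stack) to assign three hypothetical lots to two tools while minimising the makespan. The constraints guarantee feasibility, the solver can prove optimality or report the remaining gap, and the model itself is transparent enough to explain why each assignment was made.

```python
import pulp

lots = {"L1": 4, "L2": 3, "L3": 5}   # hypothetical processing times in hours
tools = ["T1", "T2"]

prob = pulp.LpProblem("toy_tool_assignment", pulp.LpMinimize)

# x[l][t] = 1 if lot l is assigned to tool t
x = {l: {t: pulp.LpVariable(f"x_{l}_{t}", cat="Binary") for t in tools} for l in lots}
makespan = pulp.LpVariable("makespan", lowBound=0)

# Feasibility as a hard guarantee: every lot is processed on exactly one tool.
for l in lots:
    prob += pulp.lpSum(x[l][t] for t in tools) == 1

# The makespan must cover the total load placed on each tool.
for t in tools:
    prob += pulp.lpSum(lots[l] * x[l][t] for l in lots) <= makespan

prob += makespan  # objective: minimise the makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("Status:", pulp.LpStatus[prob.status], "| makespan:", pulp.value(makespan))
for l in lots:
    assigned = [t for t in tools if x[l][t].value() == 1]
    print(l, "->", assigned[0])   # the assignment is directly inspectable, i.e. explainable
```

A real fab-scheduling model is orders of magnitude larger, which is where the decomposition and heuristic techniques mentioned above come in.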
Going beyond with AI

Today, AI in semiconductor manufacturing stands at a critical point. With the increasing complexity of semiconductor processes and the escalating demand for efficiency and quality, the need for effective AI solutions has never been greater. As evidenced in many large companies’ roadmaps, AI is regarded as a key enabling technology of the future [11]. Companies that do not devote resources to a comprehensive AI strategy risk being left behind.
As we delve deeper into the era of AI-driven manufacturing, the nuanced roles of different AI techniques will become more and more apparent. Machine learning approaches bring novel capabilities for learning and predicting from data: yield improvement and predictive maintenance are very promising paths. When it comes to autonomously and reliably scheduling and planning operations in a fab, an exact optimization approach, such as MILP, becomes the key to unlocking peak performance.
Authors:
Ioannis Konstantelos, Principal Optimization Engineer at Flexciton
Dennis Xenos, CTO and Cofounder at Flexciton
References
[1] McKinsey & Company, Scaling AI in the sector that enables it: Lessons for semiconductor-device makers, April 2021. Link
[2] Mönch, L., Uzsoy, R. and Fowler, J.W., 2018. A survey of semiconductor supply chain models part I: semiconductor supply chains, strategic network design, and supply chain simulation. International Journal of Production Research, 56(13), pp.4524-4545.
[3] Lee, T.E., Kim, H.J. and Yu, T.S., 2023. Semiconductor manufacturing automation. In Springer Handbook of Automation (pp. 841-863). Cham: Springer International Publishing.
[4] Mohammadi, M., Dauzère-Pérès, S., Yugma, C. and Karimi-Mamaghan, M., 2020. A queue-based aggregation approach for performance evaluation of a production system with AMHS. Computers & Operations Research, 115, 104838. https://doi.org/10.1016/j.cor.2019.104838
[5] Çınar, Z.M., Abdussalam Nuhu, A., Zeeshan, Q., Korhan, O., Asmael, M. and Safaei, B., 2020. Machine learning in predictive maintenance towards sustainable smart manufacturing in industry 4.0. Sustainability, 12(19), p.8211.
[6] Ishida, T., Nitta, I., Fukuda, D. and Kanazawa, Y., 2019, March. Deep learning-based wafer-map failure pattern recognition framework. In 20th International Symposium on Quality Electronic Design (ISQED) (pp. 291-297). IEEE.
[7] Dreyfus, P.A., Psarommatis, F., May, G. and Kiritsis, D., 2022. Virtual metrology as an approach for product quality estimation in Industry 4.0: a systematic review and integrative conceptual framework. International Journal of Production Research, 60(2), pp.742-765.
[8] Moyne, J., Samantaray, J. and Armacost, M., 2016. Big data capabilities applied to semiconductor manufacturing advanced process control. IEEE transactions on semiconductor manufacturing, 29(4), pp.283-291.
[9] Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence, 2018
[10] S. Elaoud, R. Williamson, B. E. Sanli and D. Xenos, "Multi-Objective Parallel Batch Scheduling In Wafer Fabs With Job Timelink Constraints," 2021 Winter Simulation Conference (WSC), Phoenix, AZ, USA, 2021, pp. 1-11, doi: 10.1109/WSC52266.2021.9715465.
[11] Bosch, Humans and machines team up in the factory of the future, October 2021. Link

The Flex Factor with... Nitin

The Flex Factor with... Nitin
Meet Nitin, our Senior DevOps engineer and security guru. Keep reading to learn a bit more about him and what it's like to work in DevOps at Flexciton.

Tell us what you do at Flexciton?
I’m a Senior DevOps Engineer in the Platform Engineering Team (PET), which involves building highly available, scalable infrastructure to support various customer deployments, building infrastructure as code, and supporting existing environments. I also have an important role to play as Security Lead, ensuring that infrastructure and application development follows standard security principles, and dealing with and resolving security incidents and threats.
What does a typical day look like for you at Flexciton?
A typical day starts with looking at my Slack channels to see if there are any issues with infrastructure or customer environments, and checking high-priority security incidents raised by our systems. After that we have a scrum meeting to discuss how we are dealing with current tasks and any hurdles, and get ready for another challenging day. If I have a dedicated security day, I’ll be looking at our security board in SecureFrame and fixing failing security tests, as we aim to get our SOC 2 certificate soon. I’ll also be looking at our SIEM dashboard, checking security threats and building smart alerts to help us catch issues before they become problems. If I am on DevOps tasks, I will be helping the PET team with building infrastructure for new customers, getting best practices in place, and making sure that our development team can build infrastructure.
What do you enjoy most about your role?
What I enjoy most about my role at Flexciton is the variety of tasks and responsibilities that come with being a Senior DevOps and leading the Security work. As we are aiming for a SOC2 certification, I love the challenges I am facing, the various ways I need to consider before making infrastructure changes, and keeping myself always on top of security incidents to make sure I am not missing anything.
From a DevOps point of view, I really enjoyed working on various initiatives and tech debt items, including the logical grouping of infrastructure and the validation of Azure resources: why, when and how we will use them.
If you could summarise working at Flexciton in 3 words, what would they be?
Grit, innovation, challenges.
What career advice would you give to your younger self?
“Never settle down”.
Keep pushing yourself.
If you could swap jobs with anyone for a day, who would they be and why?
I’d love to be an Astronaut. Watching our planet and enjoying the beautiful views of the galaxy. It wouldn't be easy to acquire the skills I'd need, but given a chance I'd give it a try!
Tell us about your best memory at Flexciton?
My favourite memory was when we recently celebrated the 7th anniversary of Flexciton on a boat. We floated around the Thames and enjoyed the lovely views across the river.

The Flex Factor with... Sully

The Flex Factor with... Sully
Meet Sully, the Bucket Brigade team's backend wizard, as he shines in the spotlight for July's edition of The Flex Factor. Discover more about the diverse challenges he tackles during his day-to-day and the valuable career advice he wishes he had known earlier.

Tell us what you do at Flexciton?
I’m the Bucket Brigade team's backend developer, whereas the others are optimisation engineers, a frontend developer, a product owner, and some riff raff. Though it’s not a hard separation, this means I tend to do the backend work for our team that isn’t very optimisation focused.
What does a typical day look like for you at Flexciton?
Every day starts off with standup during which we are entertained by bad puns by our team scrum lord, Charlie, and acquire strange German words as sprint names from our German team member, Jannik. During stand up, I often say that I think I'll get something done that day, but then have to say I'm not done with it yet at least one additional morning. The rest of the day usually consists of working solo, doing some code reviews, and not going to too many meetings, unlike some colleagues who've made poor life choices and have frequent meetings. I usually leave between 6 and 7 PM, which is okay because I roll into the office pretty late every morning. I also make great use of the free snacks and colas.
What do you enjoy most about your role?
I like that we have a variety of different kinds of problems to solve. It's engaging and there are many things we need to do and improve so there are always ways to find new challenges.
If you could summarise working at Flexciton in 3 words, what would they be?
Live. Laugh. Optimize (with a z).
What career advice would you give to your younger self?
Focus on skills that will follow you from role to role and across companies, rather than company or product-specific skills and knowledge.
If you could swap jobs with anyone for a day, who would they be and why?
I’d maybe swap with a fighter pilot. I feel like on the day I’m there I could improvise and manage to get up in the air, have fun, and figure out how the ejector seat works. Since I’m gone the next day I won’t have to deal with the consequences.
Tell us about your best memory at Flexciton?
I had a lot of fun in Albufeira. Though my memory is hazy, I’m pretty sure it was lots of fun. If I have that memory in my subconscious somewhere, I bet it is good.

Come and Visit Our Booth at SEMICON West This July!

Come and Visit Our Booth at SEMICON West This July!
From 11–13 July 2023, Flexciton will be returning to the cosmopolitan city of San Francisco for the latest edition of SEMICON West. And this time, we’ll be joining the Techworks / NMI members zone, where we will have our own stand – located at booth 945.
Drop by and visit our stand if you would like to meet with a member of our team to learn more about what autonomous scheduling could do for your fab. We can discuss how Flexciton can help you achieve your fab's unique objectives, how our scheduler can deal with the constraints you have and even conduct a live product demo session tailored to you. Or if you would prefer to just drop by for a warm drink and a chat, we'd be more than happy to have you!
The theme for this year’s event is “Building a Path Forward”, with a particular focus on the key challenges affecting the global microelectronics industry: supply chain disruptions, climate change, and talent shortages – all of which need addressing to enable a $1T semiconductor industry.
SEMICON West provides a pathway for attendees to engage, learn, and conduct business under 3 key industry priorities that will be integrated into the Keynotes as a daily theme. We’re excited to be a part of the event once again.
Key details:
11–13 July 2023
Flexciton located at booth 945
Event located at the Moscone Center, San Francisco
Register for the event here: https://semi.swoogo.com/SEMICON_West_2023
Interested in learning more about what Flexciton can do for your fab? We’re hosting a free-to-join webinar with our partner, FabTime, on 27 June. Find out more and register by following this link: https://bit.ly/flexciton_webinar

The Flex Factor with... Felipe

The Flex Factor with... Felipe
Join Felipe as he shares his typical day at Flexciton, highlights the most rewarding aspects of his role and offers valuable career advice in this month’s edition of The Flex Factor.

Tell us what you do at Flexciton?
I’m an Optimization Engineer, a role that combines mathematical optimization and software engineering. At work, you’ll find me working on new components of our optimization model, thinking of and/or implementing improvements, and fixing the occasional bug that appears from time to time. In general, it involves understanding the semiconductor manufacturing process and writing and maintaining production code to incorporate mathematical optimization into our software, so that we can deliver the best schedules for our clients.
What does a typical day look like for you at Flexciton?
I treat myself to a cappuccino before anything else (I always regret it at the end of the month, it’s an expensive addiction), then I’m ready for our daily stand-up. That’s when the team meets to discuss priorities, the status of ongoing work, and whether there are any blockers and how to sort them. After that, it’s a mix of coding (new features, improvements, bug fixes, etc.), discussing the design of a new implementation with another team member, and doing code reviews. From time to time, I also present something in our knowledge transfer sessions, and I’ve been onboarding new starters on the topic of optimization.
What do you enjoy most about your role?
We deal with very complex problems, so it really is a mix of challenging and exciting work, all done within a friendly and supportive environment! Learning a lot and having fun ends up being a byproduct.
If you could summarise working at Flexciton in 3 words, what would they be?
Interesting, fun, challenging.
If you could give one piece of career advice to someone, what would it be?
Tasks that initially seem daunting and make you doubt your knowledge and expertise are often the ones that will make you grow.
Tell us about your best memory at Flexciton?
I guess I’ll have to pick more than one here. It may sound cliche or cringe, but the first day was one of my best memories. Moving abroad for a new job and to do something for the first time is quite an intimidating experience. So it was a great feeling when I had a warm welcome on my first day. Everyone was friendly, open and looked super smart.
Apart from that:
- Team Trip to Albufeira.
- Visiting a wafer fab of one of our clients and watching the manufacturing process that I’d been discussing and modelling every day, but had never seen in real life until then.
- And of course, when I won the office contest for best Halloween costume although I was the only person wearing one…
Are you interested in working for Flexciton? Head over to our careers page to check our current vacancies or connect with us.

Is It Time to Redefine the UK's Role Within the Semiconductor Industry?

Is It Time to Redefine the UK's Role Within the Semiconductor Industry?
A missed opportunity
Rishi Sunak’s recent unveiling of the UK’s £1bn ($1.3bn) semiconductor strategy was always bound to provoke a reaction from critics. In an attempt to improve research and development and bolster international cooperation, the UK announced it will partner with Japan as part of its strategy. The aim of this collaboration is to foster knowledge sharing, increase expertise, and mitigate supply chain risks. The obscurity of the government’s strategy – as well as the delay from the original announcement date of autumn last year – tells me that they are very much still figuring this out. It appears the next step is to employ an advisory panel to help decipher what the actual actions will be before autumn of this year, a full year after the original date. Fundamentally, though, I think the UK has got this one right. It’s too late for us to start throwing huge amounts of money at building fabs, since we simply don’t have the capital or the resources to create our own security of supply. Instead, it’s much more beneficial for us to focus on specialisms that could make us globally relevant to the supply chain. However, what I’m less convinced about is the government's understanding of the areas of expertise we already possess.
Let’s look at where the UK is particularly strong, as with a limited budget, focusing on creating that specialism makes sense. The obvious one here is chip design, which was detailed in the strategy unveiling last week. ARM has been at the forefront of this market for many years and, along with the spin-offs coming from the University of Cambridge, it’s a sector where the UK could be considered a world leader. Other nations, such as China, have been offering a great deal more support to their design companies for many years now, so it makes sense to match them if we want to remain competitive. Another obvious one is innovative new software and technology, which is not detailed in the government’s strategy. The skills shortage means that emerging technology like artificial intelligence will soon have to play a more central role in wafer fabs as they transition towards smart factories. We have a faster-growing tech hub here than anywhere else in Europe, putting the UK in a prime position to establish itself as a global leader in smart manufacturing technology. Yet even with this opportunity sitting directly under their noses, I don't think the government has yet realised its potential.
For those who are unfamiliar, smart manufacturing refers to the integration of advanced technologies like artificial intelligence and automation into manufacturing processes and systems. It has the potential to transform traditional factories into intelligent, data-driven environments that enable much higher levels of efficiency with fewer skilled people required. Now, smart manufacturing is still very much an emerging field. At this point, only a handful of leading-edge manufacturers are concerned with it and even fewer have begun actually adopting it. But the current challenges faced by the industry, such as the skills shortage, are making its importance ever-more apparent.
The talent pool on our doorstep
The government seems to think that the best way to solve the skills shortage is to invest in the education of relevant fields. There’s no doubt that this will help somewhat, but it’s going to take a very long time. What they fail to take into consideration is that working in semiconductors used to be one of the most exciting prospects for skilled engineers. In some cases, it still may be, but now it has to compete with working for companies like Google or Apple. So as the demand for people rises with the construction of new fabs and tech companies continue to attract graduates, it’s going to be a challenge to attract the level of talent the industry needs in the time it needs it. As many of the vanguard wafer fabs are realising, a quicker and more realistic approach to solving the skills shortage is implementing smart manufacturing technologies.
The key component of smart manufacturing is software. The tech startup ecosystem here in London has a value of over £250bn ($314bn), which is over triple that of the next largest in Europe. The UK government is well aware that novel technology is a domain that the UK – and London in particular – is well positioned to become a leader in. But it seems they haven’t yet figured out that our strengths in this area could be applied to our semiconductor strategy.
The talent pool of software and data engineers we have access to here in London rivals that of anywhere else in the world. It’s one of the main drivers behind the capital’s success as a tech hub. With support from the government, this abundance of skilled engineers and software companies could be harnessed to create a specialism in smart manufacturing technology. Many of the disruptive technologies that will be used in wafer fabs over the next 20 years will come from outside the traditional semiconductor supply chain, and many of the companies behind them could already be operating in London today. All of this means that the foundations for this new specialisation are already laid, giving the UK a head start to become a global leader in smart manufacturing.
To conclude, the UK's semiconductor strategy reveals both missed opportunities and potential for growth. While the government's collaboration with Japan and investments in chip design are steps in the right direction, our potential with emerging technologies seems overlooked. The UK's thriving tech hub, particularly in London, presents a pool of software companies and skilled engineers that could be leveraged to establish the country as a leader in smart manufacturing technology. By embracing smart manufacturing, the UK can help address the skills shortage, drive efficiency in the industry, and secure a position of relevance in the global semiconductor supply chain. However, it remains crucial for the government to recognise and harness these existing strengths to fully realise the potential for growth and competitiveness in the semiconductor industry.
Author: Jamie Potter, CEO and Cofounder

Scheduling as a Cornerstone of the Smart Factory [Part 2]

Scheduling as a Cornerstone of the Smart Factory [Part 2]
In Part 1 of this blog, we focused on use cases where lots are scheduled on tools, and on how advanced scheduling gives users the ability to optimize future decisions as well as real-time ones. When we say "advanced," we are referring to autonomous, optimization-based solutions. Our emphasis was primarily on how scheduling can enhance productivity in a fab today. In Part 2, however, we’ll delve further into its potential for fabs in the not-too-distant future.
Previously, I discussed how task lists are typically associated with human workers. However, it is worth noting that task lists can also be applied to automated systems such as automated guided vehicles (AGVs) and automated material handling systems (AMHS) with the use of an advanced scheduler. With task lists, an advanced scheduler can not only determine which lot is assigned to which tool and when, but also which operator – or robot – will be serving the tool. There’s a whole set of new opportunities that arise with that, as humans and robots, just like tools, have a limited capacity that can be optimally utilised. It’s clear then that the possibilities for advanced scheduling go beyond the stand-alone Industry 4.0 applications and have the potential to integrate vast amounts of fab data into a holistic system.
One of the use cases of such a holistic system is described later on in this blog as a type of ‘digital twin’, but the capabilities of an advanced scheduling system go beyond that. With a digital twin concept, the human is still very much inside the cockpit. An advanced scheduling system, on the other hand, is more like an autopilot, augmenting the capabilities of other systems and taking control of manufacturing decisions when necessary. As such, advanced scheduling is a cornerstone of the so-called ‘smart factory’. Let’s try to understand the huge array of benefits it can bring. First, we’ll cover a couple of use cases that can benefit the manufacturers. Second, we’ll share some thoughts on how advanced scheduling aligns with the idea behind Industry 5.0 and how the technology can serve ourselves as humans.
- WIP Transport Scheduling
Once a lot is intelligently scheduled, we know when to process it and on which tool. The lot can be transported to that tool’s specific staging rack just before it gets processed. This enables fabs to eliminate waste by optimizing transport capacity, reducing the likelihood of a lot being transported at half capacity only for it to wait in a queue. Transport scheduling also enables splitting logistics and processing workflows: some workers focus on keeping the tools running, while others focus on getting the lots to the tools in time. Multi-cleanroom fabs will make better use of their capacity in areas that, for logistical reasons, are not preferred – which means no more remote idle machines waiting for a lot that doesn’t arrive.
- Dynamic Capacity Models
With better control of lot processing, intra-fab logistics, and workforce planning, we get a more realistic view of the true capacity of a factory. We call it a dynamic capacity model, resembling the idea of a digital twin of a production plant. A dynamic capacity model better reflects the current state, loading and dynamics in a factory, as opposed to the static capacity models commonly used. Until now in wafer fabs, dynamic capacity models have at best been approximated by what-if scenarios in simulation models, but the potential goes beyond that. When playing around with different scenarios – e.g. when to plan maintenance or shutdowns, which availability increase has the most impact on the whole factory, what’s the effect of frequent product mix changes, what lead times to expect and so on – it should allow factories to better judge the impact of their decisions. Optimization can even help by not only interpreting the outcome, but suggesting the best decision for a fab’s goals (a toy what-if comparison is sketched after this list).
- Multi-factory Models & Supply Chain Planning
Eventually, dynamic capacity models could scale to corporate level in multi-factory models. Further up, these models could feed into supply chain planning software. During the supply chain crisis, it was striking to see how disconnected sales and operations planning cycles in semiconductors were from the actual operational challenges of factories. Part of this was because of models that didn’t properly comprehend the actual situation a factory was in. Fabs were treated as black boxes with a simple input and output signal, but just because you have promised your customers an earlier delivery date, it doesn’t mean it will happen automatically. You need a driver towards that new target, and that’s where advanced scheduling software helps, by optimizing towards shorter lead times. Its integration into dynamic capacity models and supply chain planning software would lead to more reliable input for inventory and order fulfilment optimization engines. This translates into lower inventory costs and better delivery performance for a company.
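As promised above, here is a deliberately simplified what-if comparison. The numbers and the single-bottleneck assumption are purely illustrative; a genuine dynamic capacity model would be driven by the live schedule and the full toolset, not one static rate.

```python
# Toy what-if comparison with invented numbers: weekly output of a fab whose
# throughput is limited by a single bottleneck toolset.
BOTTLENECK_RATE_WPH = 120          # wafers per hour while the toolset is available
HOURS_PER_WEEK = 24 * 7

for availability in (0.95, 0.90, 0.80):   # e.g. different maintenance/shutdown scenarios
    weekly_out = BOTTLENECK_RATE_WPH * HOURS_PER_WEEK * availability
    print(f"availability {availability:.0%}: ~{weekly_out:,.0f} wafers/week")
```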
Eventually, we want technology to help us overcome the challenges we face as humans. From what has been written so far, this blog might give the impression that this technology primarily serves profitability. But becoming a smart factory doesn’t necessarily conflict with a human-centric approach. Industry 5.0 is the theoretical concept that’s been introduced for exactly that. It counters the illusion that the future of manufacturing is one in which humans play a minor role. Instead, we should embrace the capabilities of new technologies as well as those of humans, and find synergies to make the best of both worlds. While Industry 4.0 can do a great job of automating repetitive tasks or making sense of masses of data, humans have the advantage of interpreting context better, needing fewer data points to understand a situation, and being able to make value trade-offs. Humans will not miraculously disappear from the factory shop floor, so we’ll benefit from thinking about how these advanced technologies can harmoniously coexist with people and yield mutually beneficial outcomes.

- Staff Workload Balancing
The obvious fear with advanced scheduling is that operators and technicians will turn into de facto robots, where only adherence matters in the drive to get more out of the workforce. Let’s turn that thought upside down: what if the same work could be better distributed amongst the team by offloading peaks to underloaded co-workers? Advanced scheduling can better predict and hence properly distribute work, aligned with an individual's availability and level of training. The workflow itself – the number and order of actions to perform – can also be streamlined to lower physical and mental workload.
- Training, Evaluation & Individual Productivity
With detailed production schedules, any lack of staff or training becomes directly visible and quantifiable. Hiring and training programs could become more timely and data-driven, just as annual evaluations will become less subject to biases of the manager. Even on-the-spot productivity can be monitored and optimised. This may sound like a “Big Brother” concept, but compare it with the advancement of sports analytics and medicine in the last decade. Professional athletes don’t complain about data integrity and privacy issues, because (1) it’s part of their job and (2) it helps them in what they want to achieve. If athletes ignore their data, they simply don’t reach the top anymore. Similarly, the fourth and fifth industrial revolution will bring staffing to higher levels of productivity, not because they are squeezed out more, but because the data will reveal where there’s room for improvement or when a red line is about to be crossed.
Given the increasing scale and complexity described above, significant computational power and data storage capabilities will be necessary. This makes it likely that cloud-based technology will be adopted to facilitate the transition to smart factories. Although many fabs are currently far from achieving smart factory status, it is clear that the industry is moving in this direction. Therefore, factory managers must acknowledge that the transition to becoming a smart factory is not just a concern for the future and must be implemented within a realistic timeframe. The foundations for this transition, including employee readiness, are already being established today. And given the use cases discussed, let there be no doubt that advanced scheduling will play an integral part in the next generation of wafer fabs.
Author: Ben Van Damme, Industrial Engineer and Business Consultant

The Flex Factor with... Amar

The Flex Factor with... Amar
On this month's edition of The Flex Factor, we're introducing Amar. Solutions engineer by day and the front man of Flexciton's band by night, find out a bit more about him and what he does for the team.
Tell us what you do at Flexciton?
It would be easier to say what I don’t do! Being on the Customer Team means I get involved with all aspects of the business, from explaining the technicalities of the Flexciton software in the very first call with a new client to working on new features that need to be implemented for a live trial in a wafer fab. Right now I’m working on a dynamic capacity model to simulate a client’s wafer fab under different scenarios in order to measure the impact of changing tool availability.

What does a typical day look like for you at Flexciton?
I’ll usually have multiple projects at once, so sometimes I’ll be running code overnight to be more efficient. This means I first check on the results of whatever I was running and figure out what needs to be changed. The next thing I tend to do is work on the other projects I’m on, like PoCs or Live Pilots. For PoCs this might be working on modelling new tool constraints and running simulations, and for Pilots we might need to add a new feature to accommodate a fab’s specific needs. Oftentimes I’ll also have client calls so I can show them what I’ve done and explain how we can help their fab improve KPIs.
What do you enjoy most about your role?
The variety is great, no two days are the same. I love being able to talk to new clients and figure out how best to apply the Flexciton solution to their needs. Most clients have vastly different KPI goals for their fabs, and so it’s a super interesting challenge to work out how Flexciton can be set up to run their fab optimally.
What’s one thing you’re learning now or learned recently?
I’ve been teaching myself to play the guitar for the past few years. The Flexciton team has started their own mini band, and it’s been super fun playing with such talented musicians. I feel like I’m learning at a much faster rate thanks to the impromptu jam sessions.
If you could swap jobs with anyone for one day, who would it be and why?
I’d love to work in sports analytics. I’ve been a football fan my whole life, and I reckon if I had the data I’d be able to figure out why my beloved Liverpool FC are performing so poorly this season.
Tell us about your best memory at Flexciton?
I think I’ll have to copy everyone else and say the team trip last year, being able to show off my football skills in the rain on the beach in Albufeira was great, even if I did get stuck in the sand a few times.
Interested in joining the Flexciton team? Take a look at our current vacancies to see if there's a role fit for you.

Is It Possible to Improve Performance and Be More Energy Efficient?

Is It Possible to Improve Performance and Be More Energy Efficient?
The semiconductor industry needs to become more sustainable in a world of increasing demand – optimization holds the key.
An energy-intensive industry
Over the past decade, industries worldwide have had to tackle the issue of sustainability as a matter of increasing urgency, with the need to mitigate greenhouse gas emissions becoming a major factor in companies’ operations and processes. For many, this has been a difficult accommodation to make, because it has forced them to change the way they work. On the other hand, addressing the climate crisis has acted as a catalyst for the transition to more innovative and efficient methods of production and manufacture.
The semiconductor industry is at the heart of our modern technological society, and as recent supply chain issues have demonstrated, any slowdown in production has a significant knock-on effect on multiple other sectors. Given its centrality, and the need to maintain output, sustainability hasn’t always been as high on the agenda of chip companies as it should have been. However, ignoring the serious environmental impact that the semiconductor industry has on the world around it is no longer tenable.
According to McKinsey, “large semiconductor fabs use as much as 100 megawatt-hours of energy each hour, which is more than many automotive plants or oil refineries do.” They also point out that an average fab will use as much power in a year as 50,000 homes – that’s enough electricity to run a small town. And as the demand for semiconductors continues to grow, and the production process becomes ever more sophisticated, on its current trajectory, the industry’s energy usage is only set to increase.
Electricity production is of course a major source of CO2 emissions, with the majority of power stations still fuelled by hydrocarbons such as coal and gas. It’s an inescapable fact that using more energy means emitting more greenhouse gases – so reducing energy consumption is an obvious way for the semiconductor industry to improve its environmental footprint. But given the demand outlined above, how is this realistically possible?
KPIs vs the green agenda
If we look at where the energy in fabs is actually being used, a solution becomes clearer. McKinsey calculates that approximately 55% of a fab's energy consumption comes from running tools. Given that many of the machines on a fab’s production line are in use 24/7, this perhaps isn’t that surprising. And as I noted earlier, chip companies aren’t about to reduce this level of operation just to become more environmentally friendly – for both them and other industries, the stakes are too high to do anything that would threaten production.
This is reflected in the KPIs that fabs work to, which are primarily based on cycle time, throughput and yield – energy consumption has historically been very much a secondary consideration. However, production doesn’t completely exist in a bubble, and as with any company, the bottom line is the ultimate driver of operations. Cost reduction is just as important as revenue generation. McKinsey notes that, depending on local electricity tariffs, energy consumption can account for up to 30% of a fab’s operating costs. And with the price of energy continuing to rocket, that figure is almost certain to increase.
So given that energy is a factor that affects both production costs and sustainability, reducing consumption is edging its way up the fab’s table of priorities. Yet what if you could address both those primary KPIs and the need to be greener at the same time?
Optimizing primary goals and energy consumption
Optimization technology is the key. It’s a point I keep returning to, but if chip companies are to deal with the challenges that the future is sure to keep throwing at them, then they have to start adopting best-in-class smart manufacturing practices and software.
To properly optimize the way in which the fab works, we have to first understand exactly what the state of the entire WIP is in real time. By mapping the current state of the fab’s operations, it’s possible to identify where bottlenecks are occurring due to sub-optimal scheduling. And in our experience of working with different fabs, the tools where queues usually occur are involved in the most energy-intensive stages of the production process – for example, photolithography, diffusion furnaces and other processes in the cleanroom.
Using optimization software to reduce bottlenecks by improving how wafers move through energy-intensive tools, the fab’s primary KPIs can be met and energy consumption at these tools can be reduced. For example, doing more moves with fewer tools at the photo stage means that it’s possible for some tools to be left idle. Or doing the same moves but with fewer batches at the furnace stage means fewer energy-intensive furnace runs.
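A back-of-the-envelope sketch makes the furnace example tangible. The figures below are invented purely for illustration and assume a run draws roughly the same energy regardless of how full the furnace is:

```python
# Back-of-the-envelope illustration with invented figures. It assumes a furnace run
# draws roughly the same energy whether the batch is half full or completely full.
energy_per_run_kwh = 400
lots_to_process = 24

for batch_size in (4, 6, 8):
    runs = -(-lots_to_process // batch_size)      # ceiling division
    print(f"batch size {batch_size}: {runs} runs, {runs * energy_per_run_kwh} kWh")
# The same WIP gets processed in every case, but fuller, better-scheduled batches
# need fewer energy-intensive runs: the same moves for less energy.
```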
At the tail end of last year, I gave a presentation at the Smart & Green Manufacturing Summit, as part of Semicon Europa 2022, that outlined how our scheduling optimization software has already achieved these goals in the real world.
Optimization technology can also be used to directly control the energy consumption of less busy tools as well. As long as those areas that are prone to bottlenecks are running efficiently, and all primary KPIs are being met, tools in other areas can be optimized specifically for energy conservation – for instance, powered down because the scheduling technology has identified that they aren’t required or don’t have to be operated at their maximum rate.
Taking sustainability seriously
Most of the larger semiconductor companies are looking at ways to be greener and meet net zero goals, although carbon offsetting is currently playing more of a role than making manufacturing processes more energy efficient. However, some companies are addressing the issue head-on and looking at ways to control their fabs’ energy use – for example, Flexciton, Imperial College London and the Technical University of Munich are currently working together on a project to map Infineon’s energy consumption, with the aim to better understand how smarter decisions can help the company reduce its carbon emissions.
The demand for semiconductors is only going to grow in the decades ahead, but a reckoning over the industry’s attitude to sustainability will be reached much sooner than that unless energy reduction becomes one of its key priorities. Yet by working smarter and re-evaluating their production processes, it’s absolutely possible for companies to improve throughput and yield while at the same time being more energy efficient.
Author: Jamie Potter, CEO and Cofounder

Investigating Operational Decisions and Their Impact on Energy Efficiency in Wafer Fabs

Investigating Operational Decisions and Their Impact on Energy Efficiency in Wafer Fabs
Chipmakers will encounter major challenges before the end of the decade in their quest to achieve stringent emissions goals. Achieving these targets will require a concerted effort from the industry to explore new and innovative methods of reducing their energy consumption and adopting cleaner energy sources.
In light of this, we are working together with Thorsten Greil from the Technical University of Munich, Nilay Shah from Imperial College London, and Hans Ehm from Infineon Technologies on an energy efficiency initiative to explore innovative approaches for reducing the carbon impact of the semiconductor sector. The objective of this undertaking is to make a valuable contribution towards realising a more environmentally-friendly future.
One low-cost opportunity to reduce emissions that does not require drastic capital investment is operational efficiency, where reducing the energy consumed in production is considered a top priority. Together, we are inviting qualified students from the Technical University of Munich and Imperial College London to participate in the project and complete their master’s thesis on the following topic: global virtual factory simulation for energy efficiency. Our findings will be presented at the Winter Simulation Conference 2023.
Project Objective
Our hypothesis is that the energy consumption – and subsequently the gas emissions – of a wafer fab can be reduced through improvements in operational efficiency, such as production scheduling. Previous studies have considered chemical alternatives, where Infineon investigated the use of alternative gases with less impact on the climate. Similarly, potentially significant savings can also be made through smarter and more environmentally friendly daily operational decisions.
We want to discover what operational decisions could be taken at fab level to reduce CO2 emissions, without drastic investment or damaging productivity. For example, how can we incorporate CO2 emissions targets into production scheduling? And is it possible to reduce CO2 emissions whilst improving cycle time?
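One simple way to frame that question, sketched below with purely hypothetical weights and units rather than the project's actual formulation, is to extend the scheduling objective with an emissions term, so the optimizer trades cycle time off against the carbon associated with the energy a candidate schedule would consume.

```python
def schedule_cost(avg_cycle_time_h, energy_kwh,
                  grid_kg_co2_per_kwh=0.25,   # assumed grid carbon intensity
                  w_cycle_time=1.0, w_co2=0.05):
    """Hypothetical weighted objective (lower is better). With w_co2 = 0 this reduces
    to a conventional cycle-time-only objective; increasing it steers the optimizer
    towards schedules that, for example, fill furnace batches more fully or shift
    flexible work to lower-carbon periods."""
    co2_kg = energy_kwh * grid_kg_co2_per_kwh
    return w_cycle_time * avg_cycle_time_h + w_co2 * co2_kg

# Example: compare two candidate schedules for the same WIP.
print(schedule_cost(avg_cycle_time_h=52.0, energy_kwh=180_000))   # baseline
print(schedule_cost(avg_cycle_time_h=51.5, energy_kwh=150_000))   # better-batched alternative
```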
Procedure
The solution approach to the above objective will be supported by the following methods:
- Familiarisation with semiconductor manufacturing including cleanroom and facility environment characteristics with a focus on energy consumption and operational decisions.
- Literature review in the field of energy efficiency and scheduling to present the current state.
- Embedding energy-related aspects in the fab simulator tool developed by Flexciton, based on energy and emission information provided by Infineon.
- Investigation of smarter, coordinated operational decisions in the fab to reduce carbon emissions.

The Flex Factor with... Jamie

The Flex Factor with... Jamie
Say hello to Jamie, one of Flexciton's frontend developers. From watering his cactus to perfecting the user experience of our application, find out what he does during his day-to-day in this month's edition of The Flex Factor.
Tell us what you do at Flexciton?
At Flexciton, I have a few different roles. My main role is as a frontend developer, which means I work on creating and improving the user interface of the app. But I also have other responsibilities - I lead the frontend engineering practice, and I'm in charge of guiding discussions and decisions about the frontend architecture. It keeps me busy, but I enjoy the variety!

What does a typical day look like for you at Flexciton?
A typical day for me at Flexciton starts off with some personal tasks - I check on my trusty cactus (Mr Prickles) and give him some water. After that, I open up my personal Trello board and plan out my day, which usually involves working on engineering tasks or frontend related work.
Next up is the daily stand up meeting, where I update the team on what I've accomplished and what I plan to work on for the rest of the day. Then it's time to dive into work - if I'm working on an engineering task, that usually means writing some code and asking for help from resources like StackOverflow or ChatGPT. I like to listen to some ambient and melodic tunes while I work.
If it's a practice day, I'll probably be working on tickets to address existing tech debt or putting together plans for the UI architecture. Around lunchtime, I take a break and grab some food - on Tuesdays, we have a team lunch that gives us a chance to chat with other members of Flexciton outside of our immediate team. After lunch, it's back to work until it's time to call it a day.
What do you enjoy most about your role?
What I enjoy most about my role at Flexciton is the variety of tasks and responsibilities that come with being a frontend developer and leading the frontend engineering practice. I love delivering features that provide real value to the users of our app, and there's nothing better than seeing someone using a feature I've helped build. The idea of making a user's experience even just a tiny bit better is highly motivating.
On a daily basis, I get to work on both technical and creative aspects of the app - whether it's writing code for engineering tickets, collaborating on designs, or planning out the UI architecture. I find it really satisfying to see the tangible results of my work and how it contributes to the overall success of the company.
Additionally, being part of a team that is collaborative, supportive, and always striving for improvement makes my job even more enjoyable. I appreciate that I have the opportunity to learn from my colleagues and contribute my own ideas to help move the company forward. Overall, the combination of technical challenges, a positive team environment, and the ability to make a real impact on our users is what makes me look forward to coming to work every day.
If you could do it all over again, would you pursue your same career?
Absolutely, I would pursue the same career again without hesitation. Before transitioning to frontend development, I worked as an electrician for five years. Although I gained valuable experience and skills during that time, I knew that it wasn't the right long-term career path for me.
Switching to frontend development was a challenging but rewarding decision. I've been able to build upon my previous technical knowledge and apply it in new and exciting ways. I feel incredibly lucky to have found a career that I truly enjoy, and I never take that for granted.
Being a frontend developer allows me to combine my technical skills with my creativity and problem-solving abilities. I find the work to be constantly engaging, and I'm always learning new things. The fact that I'm able to contribute to the success of a company and make a positive impact on its users is incredibly fulfilling.
I know that not everyone has the opportunity to pursue a career they enjoy, so I feel grateful every day for the path that led me to frontend development. Looking back, I can confidently say that I made the right decision, and I'm excited to see where this career will take me in the future.
If you could summarise working at Flexciton in 3 words, what would they be?
Rewarding, challenging, fun
Tell us about your best memory at Flexciton?
My first day at Flexciton is definitely one of my fondest memories. Starting a new job can be daunting, but I was immediately struck by the friendly and welcoming atmosphere in the office. My colleagues went out of their way to make me feel comfortable and part of the team from the get-go. I knew from that first day that I had made the right choice in joining this company.
Since then, I've had the opportunity to work on some truly exciting projects and collaborate with some incredibly talented people. Every day presents new challenges and opportunities for growth, and I'm constantly motivated to improve and learn.
But it's not all work and no play at Flexciton. One of the highlights of the year for me is our company trips. After working hard all year, it's a much-needed break to spend some quality time with my colleagues and enjoy some downtime. It's always a great bonding experience, and I come back feeling re-energized and ready to tackle whatever challenges lie ahead.
Overall, I feel incredibly fortunate to be part of such a supportive and dynamic team at Flexciton. From my first day to now, it's been a truly memorable and rewarding journey, and I can't wait to see what the future holds.

Scheduling as a Cornerstone of the Smart Factory [Part 1]

Scheduling as a Cornerstone of the Smart Factory [Part 1]
One of the consequences of the pandemic has been an incentive to deglobalise, as regions suffered from supply chain issues and geopolitical dependencies. Significant delivery issues in the chip industry – and in particular wafer manufacturing – have had a negative impact on the global economy. However, onshoring this high-technology industry will bring its own challenges, expertise and cost efficiency to name a couple. Zooming in a bit closer on so-called wafer fabs, we can distinguish two types of factories: the legacy and smaller fabs serving niche markets with older technology nodes, and the cutting-edge giga-factories, recently built or in the making. Both types have different problems to tackle, but one key component of their roadmap could be surprisingly similar.
The newest fabs have well-integrated automated systems, but operating them efficiently on such a scale is a challenge of its own. The older factories have the downside of being less automated, but they realise the need to become more efficient in energy consumption, labour cost and capacity utilisation. In both situations, digital transformation is coming to the rescue. Industry 4.0 is no longer a buzzword; it has become a matter of regional technological sovereignty.
The fundamental building block of Industry 4.0 is data: an asset that is present in abundance in wafer fabs. So what is preventing these factories from levelling up? The answer is simple, the solution is not: complexity. It’s an inherent part of wafer manufacturing, stemming from increasingly high numbers of process steps, job-shop factory types, re-entrant flows, product diversity, sensitivity to quality issues and so on.
The problem with complex systems is that there’s so much variability and interaction that it's hard to get actionable insights from data. Instead of accepting the stochastic and complex nature of the fab, factories can better control it by using advanced production scheduling to understand in which order lots get processed, on which tool and – the most important difference when compared with common rules-based approaches – when they get processed. To begin with, this can be employed in certain bottleneck areas; once you do it for the entire factory, you get a holistic picture of what is going to happen. Sounds great, doesn’t it? But how exactly will this benefit your fab? To explain, let’s place production scheduling in a couple of recognisable use cases.

- Lot-Tool Assignments
Wafer manufacturing has complicated recipe-tool qualification matrices within a group of tools that perform similar processes. The weaker tools can process fewer recipes than the stronger ones. We want to avoid stronger tools “stealing” lots away from the weaker tools, because it leaves fewer lots for the weaker tools to process, thereby wasting capacity. The same is true for faster and slower tools: while faster tools are preferred, pushing all the WIP through the faster tools leaves the slower tools underutilised. Advanced schedulers allow for better anticipation of incoming WIP and superior use of available capacity for weak and slow tools. The bigger and more complex the matrix grows, the harder it is to find the optimal processing of WIP. On top of the scheduling itself, mathematical programming helps to optimize lot-to-tool assignments over time (a minimal sketch of this idea follows after this list). This results in a capacity booster, similar to putting a turbocharger on an engine: it’s the same engine, but with more power.
- Reducing Timelink Violations
Process steps with timelinks are common in wafer manufacturing to control the maximum amount of time a wafer spends between two or more process steps. If a timelink is violated, the wafer requires rework – or worse still, scrappage. A system that avoids timelink violations requires the ability to intelligently plan into the future. And that’s exactly what an advanced scheduler does. It has been proven to drastically reduce timelink violations, even in the most complex of scenarios.
- Improving Batching Efficiency
Batching is a complex decision-making process, since it involves estimating lot arrivals and weighing how waiting longer trades off against running smaller batches. Predicting lot arrivals is difficult in such a complex environment, and trading off wait time against batch efficiency is even harder because the costs and gains are not always clear. Determining and automating this process is well within an advanced scheduler’s remit. Once the algorithm is tuned, it makes the most efficient decision and, perhaps even more importantly, it generates consistent output.
- Optimizing Changeover Decisions
Another use case related to lot arrivals is changeover decisions. One toolset with different machine setups can serve multiple different toolsets down the line – a bit like a waiter in a restaurant serving multiple tables. Waiters have to make sure no table is without food or drink, and to do that, they visit the tables regularly to ask for any orders. For machines, however, you can’t switch the setup too often because it only increases non-productive time. Preferably, you also plan setup changeovers at a time when planned or predicted downtime for the machine occurs, to reduce downtime variability. Put simply, it’s a decision on when to switch over from the type A process to the type B process on a tool. An advanced scheduler can solve that equation, finding the optimal point in time. Schedulers are better at this than human reasoning or rule-based logic, as reasoning over the time dimension is exactly what they are designed for.
- Flow Control and Line Balance
Line balancing is – even for experienced manufacturing engineers – difficult to grasp. One can intuitively understand what it means, but how do you define “balanced” in the first place? Even if you can, it is beyond the capabilities of a human brain to manually and continuously make the decisions that control it – and, once the line is out of balance, to recover it. Again, considering the time dimension is a crucial aspect of what advanced schedulers offer, which enables them to recover faster from unforeseen circumstances and maintain better risk control for generating continuous output.
- Operator Task Lists
Unlike dispatch lists, which only give the order in which to process lots, advanced schedulers can also tell you when a lot is supposed to start and finish processing on a tool. Combine that information with which operators are serving which tools, and you can move away from tool-centric dispatch lists towards operator-centric task lists. With a handheld device, that could even allow you to send push notifications when urgent intervention is needed. It can reduce idle time on tools that have no available operator. Even more so, it can allow for an entire rethink of the workflows operators are used to.
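To make the lot-to-tool assignment idea more concrete, below is a deliberately tiny sketch of how such a problem can be posed as a mixed-integer program. All lots, tools, qualification entries and processing times are invented for illustration, and the sketch assumes the open-source PuLP package; a production scheduler models far more than this (time, batching, timelinks, setups and so on).

```python
# Toy example: assign lots to qualified tools so that load is balanced between a
# "weak" tool (few qualified recipes, slower) and a "strong" tool (fully qualified).
# A naive rule that always prefers the strong tool would leave the weak tool idle.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

lots = ["L1", "L2", "L3", "L4"]
tools = ["weak", "strong"]
qualified = {                                 # recipe-tool qualification matrix (1 = allowed)
    ("L1", "weak"): 1, ("L1", "strong"): 1,   # recipe A lots
    ("L2", "weak"): 1, ("L2", "strong"): 1,
    ("L3", "weak"): 0, ("L3", "strong"): 1,   # recipe B lots: strong tool only
    ("L4", "weak"): 0, ("L4", "strong"): 1,
}
proc_time = {"weak": 2.0, "strong": 1.0}      # hours per lot on each tool

prob = LpProblem("lot_tool_assignment", LpMinimize)
x = {(l, t): LpVariable(f"x_{l}_{t}", cat=LpBinary) for l in lots for t in tools}
makespan = LpVariable("makespan", lowBound=0)

for l in lots:
    # every lot is processed exactly once, and only on a tool it is qualified for
    prob += lpSum(x[l, t] for t in tools if qualified[l, t]) == 1
    for t in tools:
        if not qualified[l, t]:
            prob += x[l, t] == 0

for t in tools:
    # the total load on each tool must fit within the makespan we are minimising
    prob += lpSum(proc_time[t] * x[l, t] for l in lots) <= makespan

prob += makespan                              # objective: finish the whole set as early as possible
prob.solve()

for (l, t), var in x.items():
    if var.value() and var.value() > 0.5:
        print(f"{l} -> {t}")
```

In this toy instance the optimal assignment places one of the recipe-A lots on the weak tool, finishing everything in three hours, whereas pushing all WIP through the strong tool would take four – a miniature version of the capacity gain described above.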

So far in this blog, we’ve focused on scheduling use cases where lots are scheduled on tools, leading to higher throughput on tools, toolsets or the entire factory. All these use cases can also be addressed by improving rule-based dispatching strategies, but what advanced scheduling offers is the ability to optimize future decisions rather than just reacting in real time. With that comes better visibility of what will happen in the factory, and it also leaves opportunities for re-organising workflows and freeing up resources. In part 2 of this blog, we’ll begin to look at the future and what could happen when we integrate even further. Enter Industry 5.0.
Author: Ben Van Damme, Industrial Engineer and Business Consultant
Part 2 is now live.

Flexciton’s Software Trial at Renesas Tackles One of the Most Complex Aspects of Fab Scheduling

Flexciton’s Software Trial at Renesas Tackles One of the Most Complex Aspects of Fab Scheduling
Timelink constraints are one of the most complex issues to handle in fab scheduling. They define the maximum allowed time between steps in the production of a wafer. Correct scheduling of timelinks is critical to helping minimise the risks of oxidation or contamination. This can happen when a wafer is queuing outside of a tool for too long, resulting in scrappage or rework that damages profitability. Renesas Electronics asked Flexciton to see if its intelligent scheduling software could improve this aspect of scheduling in the diffusion area of its wafer fab.

What makes timelink constraints very hard to schedule is their interdependence. For example, by moving from step one to step two, the wafer enters the first timelink. When moving from step two, the wafer enters a second timelink which lasts until step four. However, there can also be a third timelink constraint – known as a nested timelink – between step three and step four which overlaps the second timelink constraint (see Fig. 1). Therefore, step three has to be scheduled in a way that allows both the second and third timelink constraints to be adhered to. The example discussed covers just a few steps but, in reality, there could be hundreds of steps and many overlapping time constraints that need to be continually considered. This creates one of the most complex scheduling problems seen in a wafer fab, and any violation of the timelinks has a negative financial impact.
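As a purely illustrative aid (all step times, durations and limits below are invented and far simpler than a real recipe), the snippet shows how a candidate schedule can be checked against overlapping timelinks like those in Fig. 1, and how delaying a single step can violate two of them at once.

```python
# Toy check of a candidate schedule against nested/overlapping timelink constraints.
# Here each timelink caps the time a wafer waits between the end of one step and the
# start of a later step; real fabs also use start-to-start and end-to-end variants.
start = {1: 0.0, 2: 1.5, 3: 4.0, 4: 9.0}       # candidate start times (hours)
duration = {1: 1.0, 2: 1.0, 3: 1.5, 4: 1.0}    # processing times (hours)
end = {s: start[s] + duration[s] for s in start}

# (from_step, to_step, max_elapsed_hours); the third link is nested inside the second
timelinks = [(1, 2, 2.0), (2, 4, 6.0), (3, 4, 2.5)]

for frm, to, limit in timelinks:
    elapsed = start[to] - end[frm]
    status = "OK" if elapsed <= limit else "VIOLATED"
    print(f"timelink {frm}->{to}: {elapsed:.1f}h elapsed (limit {limit:.1f}h) {status}")
```

Because step four starts late in this made-up schedule, both the second timelink and the nested third one are breached at the same time – exactly the kind of knock-on effect a scheduler has to anticipate across hundreds of steps.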
The most commonly used scheduling approach is based on heuristics: a set of if-then operational rules that have been manually programmed and require constant maintenance. This is a relatively simplistic methodology that has hardly changed over the past two decades and thus cannot effectively solve today’s much more challenging scheduling problems. In modern-day fabs, very complex, multi-dimensional problems arise daily, and existing heuristic approaches don’t have the built-in intelligence to look ahead to future steps.
Flexciton’s next-gen scheduling software is the only solution on the market that is able to do this. It pairs powerful mathematical optimisation technology with smart decomposition techniques to work out solutions with complete autonomy. It can generate an optimised production schedule within a few minutes by searching through billions of scenarios to select the best possible one. Importantly, its intelligent algorithms consider the knock-on effects that one change can have on all the other constraints in the fab – including timelinks. This iterative process ensures that the schedule is continually updated to allow for any changes in fab conditions or business objectives.
The software was run in a simulation environment that replicated the way Flexciton’s scheduler would have run live at the Renesas fab. The results showed that timelink violations could be reduced by a significant 29%. Further improvements were possible: a 22% reduction in the number of batches and an 11% reduction in queue time, despite these two KPIs being in conflict (see Fig. 2). They conflict because decreasing the number of batches naturally means increasing the number of wafers in each batch, which in turn increases the queue time for each batch as operators wait for new wafers to arrive at the tool before processing them together.
Currently, most fabs have no knowledge of the arrival times of future lots, so operators can sometimes wait unnecessarily to maximise a batch size, causing more wafers to queue and damaging productivity. Uniquely, the Flexciton scheduler can see how lots are moving through time and can thus optimise the trade-off between the number of batches and queue time to achieve the impressive gains seen on these conflicting KPIs.
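To give a feel for the trade-off being optimised, here is a tiny, purely hypothetical calculation (batch capacity, process time and arrival prediction are all invented, and the real scheduler weighs many lots, tools and constraints simultaneously): should a furnace batch start now with the lots already queued, or wait for a predicted arrival to fill the batch?

```python
# Toy comparison of two batching decisions at a furnace-style tool.
PROCESS_TIME = 6.0      # hours per furnace run, largely independent of batch size
QUEUED_LOTS = 3         # lots already waiting at the tool

def start_now(arrival_in):
    """Run the queued lots immediately; the predicted lot needs a second run later."""
    total_completion = QUEUED_LOTS * PROCESS_TIME           # first batch finishes at t=6
    second_run_start = max(PROCESS_TIME, arrival_in)        # whichever is later: tool free or lot arrived
    total_completion += second_run_start + PROCESS_TIME     # completion time of the late lot
    return total_completion, 2 * PROCESS_TIME               # (total lot completion hours, tool hours used)

def wait_for_full_batch(arrival_in):
    """Hold the batch until the predicted lot arrives, then run all four lots together."""
    finish = arrival_in + PROCESS_TIME
    return (QUEUED_LOTS + 1) * finish, PROCESS_TIME

for arrival in (1.0, 5.0):
    print(f"lot predicted in {arrival}h -> start now: {start_now(arrival)}, "
          f"wait for it: {wait_for_full_batch(arrival)}")
```

With a lot predicted to arrive in one hour, waiting wins on both total completion time (28 vs 30 lot-hours) and tool time (one run instead of two); with the arrival five hours away, waiting costs far more completion time (44 vs 30), so starting now is better. Which side of the line a real decision falls on depends entirely on how well arrivals can be predicted, which is why visibility of future lots matters so much.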
Renesas were impressed with the simulation figures. Jay Maguire, Engineer at Renesas, commented, “Flexciton was able to show us several specific decisions we could have done differently to improve batching and cycle time. We are pursuing a live trial of the Flexciton software.”

Jamie Potter, Flexciton’s co-founder and CEO, explained, “The key differentiator of our approach is that our software has the intelligence to predict what may happen in the future based on the current state of a fab (or WIP in a fab). It searches for the best solution amongst billions of possibilities to continuously keep finding the optimal schedule that meets the KPIs to maximise a fab’s productivity and profitability. Humans and heuristics just can’t do that.”

Has the EU Chips Act Failed Before It's Started? Industry Strategy Symposium 2023

Has the EU Chips Act Failed Before It's Started? Industry Strategy Symposium 2023
The big theme at this year’s SEMI Industry Strategy Symposium (ISS) conference was ‘How does Europe fulfil its ambition by 2030?’. Behind it sits an ambitious target: reaching a 20% share of the global semiconductor market by 2030 whilst building a more resilient industry ecosystem. This is a huge challenge, especially when one considers that the global semiconductor market is forecast to reach $1tn by 2030; a 20% share of this would mean $200bn in just seven years. For perspective, the global market currently sits at around $600bn, which means Europe’s present-day 8% share is worth roughly $48bn. Breaking it down like this reveals the magnitude of the challenge: Europe must more than quadruple its slice of the market in absolute terms, even as the pie itself grows along the way. Looking solely at Europe’s rate of growth in the market over recent years compared with the rest of the world, I can tell you that this target is infeasible. But before we conclude, there are several aspects to consider.
First, what is going to drive this extraordinary growth? Second, why has the EU – and indeed the US, which currently claims 10% of global semiconductor manufacture – set these targets? And finally, what is being planned to achieve them?
- Semiconductors are a major driving force in the global economy and the EU clearly recognises this, but perhaps not to the full extent. Almost every modern innovation utilises electronics in some form or another, with the obvious mega trends over the past few years being computers and smartphones. Looking forward, applications that are likely to drive demand further include smart mobility, 5G, AI, IoT, quantum computers, 6G and so on. All of them need increasing amounts of leading-edge chips to handle everything from data capture to cloud processing in order to enable devices and systems to make smarter decisions.
- For years we have been enjoying a tightly integrated, global supply chain with wafers and chips crisscrossing the globe as the journey from silicon wafer to packaging to test is done in stages in centres of speciality in different countries.
Covid and the ongoing geopolitical tensions between the US and China mean that this model is being redefined to be more robust. The ideal solution is to onshore all stages of the manufacturing process, i.e., keeping it all within US borders or within EU countries. These targets aren’t really about taking a larger share of the global manufacturing pie; rather, they aim to increase the proportion of semiconductors manufactured onshore, providing resilience against disruptions to the supply chain of devices that are essential for a healthy economy. China is already on its way there: it currently makes 16% of its chips onshore, with ambitions to increase this to 70% in the future – highlighting massive potential growth in this market.
- Semiconductor manufacturing requires wafer fabs. The vast majority of fabs today are in Asia, with big players such as TSMC, Samsung and UMC. The challenge with these ambitious targets is that building a new state-of-the-art fab for today’s advanced nodes costs billions of dollars, requires a skilled labour force and takes several years once planning permission is granted. Then there are all the other packaging and test facilities to be built from scratch and staffed. The skilled labour needed to manage this doesn’t currently exist in the EU, so it’s clear that setting up fully onshore manufacturing capabilities will take considerable time, money and expertise.
Expertise can be fast-tracked by partnering with existing fab companies, such as TSMC, which is discussing building new fabs in the US and Germany. Naturally, though, these companies require government grants from the funds being created to boost the semiconductor manufacturing industries. It’s worth comparing how much each region is allocating for this: South Korea’s figure is $450bn, the US’s is $233bn, and China is investing $200bn. With these sizeable sums already formally approved by the relevant authorities, fab construction in these nations is already starting.
The EU, on the other hand, is only planning to invest a comparatively tiny $43bn.
This figure is nowhere near enough to quadruple its current semiconductor manufacturing capabilities. In fact, Kurt Sievers, CEO of NXP, estimated that a more realistic figure to achieve a 20% market share would likely be over $500bn. Moreover, the EU’s funding has not yet been passed in parliament, so the EU is already behind on the timeline to achieve its target compared with the other market players. As for the UK, the figure has not been announced but is rumoured to be around $1bn – not enough to fund even one new fab at an advanced node.
It’s important that SEMI is driving this discussion around the EU Chips Act, as government funding is a critical driver for the region's growth within the global semiconductor market. But it’s not enough. As an industry, we need to take stronger action and challenge the decisions being made by the EU and the UK. They need the expertise of industry leaders to understand the full importance of microelectronics for the economy; without it, I believe the money they invest will be fruitless.

As regular readers know, our software can make existing and new-build fabs smarter and substantially more productive, but in order to hit the EU’s extraordinarily ambitious targets, more funding and strategic partnerships must be considered. I suspect that one solution will entail a close relationship between the EU and the US to create a US/EU-based supply chain model, with both regions working together and sharing their centres of excellence to create a complete, self-contained system. Even if the ambitious targets are not met, working to de-integrate the supply chain through onshoring will provide security for the electronics that underpin today’s successful economies.
Author: Jamie Potter, CEO and Co-founder of Flexciton
Photo Credit: SEMI

Fab-Wide Scheduling of Semiconductor Plants: A Large-Scale Industrial Deployment Case Study

Fab-Wide Scheduling of Semiconductor Plants: A Large-Scale Industrial Deployment Case Study
This article draws from the contents of a paper presented at Winter Simulation Conference 2022, titled: “Fab-Wide Scheduling of Semiconductor Plants: A Large-Scale Industrial Deployment Case Study”.
An Introduction to Fab-Wide Scheduling
The semiconductor industry is one of the largest and most complex industries in the world. The critical factors in semiconductor manufacturing are the ability to rapidly develop and test novel technologies, to improve manufacturing processes to reduce rework and waste, and to meet production targets in terms of prescribed volumes and due dates. In this context, high-quality scheduling is of paramount importance.
Due to the long cycle times, where a wafer is processed over a span of months, decision-making in semiconductor fabrication plants (fabs) is typically framed as a two-level problem. On one hand, global (or fab-wide) scheduling is tasked with the strategic management of factory assets while considering all work-in-progress, incoming and outgoing flows across the fab, expected resource availability and other constraints. On the other hand, local (or toolset-level) scheduling focuses on the operation of individual work centres. It is typically tasked with identifying the best immediate dispatch decisions, i.e. which jobs waiting for dispatch should be assigned to which available machine.
Most development efforts to date have focused on the shorter time frame dispatch decisions, i.e. local scheduling. This is a more manageable problem, since there is little look-ahead and the scope is limited to a single toolset or a few. Despite numerous research efforts, to date there has not been a published case study of a fab-wide scheduler successfully deployed in a large semiconductor manufacturing facility. Nevertheless, the potential for improvement at the fab-wide level is tremendous; there are numerous opportunities to improve throughput and achieve a step change in performance. For example:
- Bottlenecks occur due to repetition of process loops, high-cost machines with low capacity, and other physical or operational constraints. To manage them, a strategic approach is needed that looks at the bigger picture and avoids early dispatch of wafers that will end up in a bottleneck area.
- WIP flow control mechanisms (kanbans) are important for quality control but can block high-priority wafers. Fab-wide scheduling can greatly improve this aspect of operation.
- Timelinks (also known as timeloop, time lag, or qtime constraints) are challenging because they define the minimum or maximum amount of time between two or more consecutive process steps, creating a conundrum of whether to keep downstream machines idle in anticipation. Fab-wide scheduling can greatly assist by accurately predicting arrival times and deciding when to trigger timelinked lots.
Methodology
The scheduling framework proposed in this blog is hierarchical and consists of two main components which run independently and at different frequencies — the Toolset Scheduler (TS) and Fab-Wide Scheduler (FWS).
The Toolset Scheduler considers the currently in-process and/or upcoming process step of all wafers in the cluster.
FWS takes a view of the entire fab at once and considers multiple future steps for each wafer. It focuses on improving schedule quality by considering the flow of wafers through the fab, something the toolset scheduler cannot do due to its single-step, toolset-level nature. The main purpose is to redirect flow through the fab and thereby improve flow linearity, reduce bottlenecks, improve WIP flow control management, and reduce timelink violations. Our FWS approach achieves this by predicting wait/cycle times for multiple future steps, analysing those predictions with respect to the different areas of potential improvement, and re-prioritising wafer steps in a way that guarantees improved (weighted) cycle times. In brief, FWS combines two main elements: (i) an operational module that captures all relevant constraints in full detail, e.g. detailed process time modelling, machine maintenance, shift changes, dynamic batching constraints and kanbans; and (ii) a search module that identifies beneficial priority changes given the evolving fab conditions and state features.

FWS communicates with the toolset schedulers via priority weights (and some other predicted timing information) for individual steps of a wafer, as shown in Figure 2. An advantage of our approach is that, while FWS always schedules all tools in the fab, users can specify which toolsets are subject to guidance; FWS adjusts its search accordingly. This is particularly useful for gradually rolling out FWS in a fab and evaluating its impact. In addition, the guidance strength is controllable - although full guidance is the optimal choice, tuning down guidance allows for a more gradual deployment.
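The snippet below is an illustrative sketch only (the class names, weights and blending rule are hypothetical, not Flexciton's actual interface): it shows the general shape of how fab-wide priority weights and predicted timings could be consumed by a toolset-level scheduler, with a guidance-strength parameter controlling how strongly the local decision follows the fab-wide view.

```python
# Hypothetical hand-off of fab-wide guidance to a toolset-level scheduler.
from dataclasses import dataclass

@dataclass
class StepGuidance:
    lot_id: str
    step: int
    priority_weight: float      # higher = schedule sooner, as decided fab-wide
    predicted_arrival: float    # hours from now, as predicted fab-wide

# Guidance published by the fab-wide scheduler for one toolset (toy data)
fws_guidance = [
    StepGuidance("LOT_A", 120, priority_weight=3.0, predicted_arrival=0.0),
    StepGuidance("LOT_B", 87, priority_weight=1.0, predicted_arrival=0.5),
    StepGuidance("LOT_C", 45, priority_weight=2.0, predicted_arrival=0.2),
]

GUIDANCE_STRENGTH = 0.7   # 0 = ignore the fab-wide view, 1 = follow it fully

def combined_score(g):
    """Blend a stand-in local urgency measure with the fab-wide priority weight."""
    local_urgency = 1.0 / (1.0 + g.predicted_arrival)
    return (1 - GUIDANCE_STRENGTH) * local_urgency + GUIDANCE_STRENGTH * g.priority_weight

dispatch_order = sorted(fws_guidance, key=combined_score, reverse=True)
print([g.lot_id for g in dispatch_order])   # highest combined score first
```

Setting the guidance strength to zero would reproduce purely local behaviour, while a value of one would follow the fab-wide priorities exactly; the intermediate settings are what make a gradual, toolset-by-toolset rollout possible.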

Seagate Deployment
Seagate is a world leader in data storage technology, with more than a 40% share of the global hard disk drive (HDD) market. The Springtown facility in Northern Ireland produces around 25% of total global demand for recording heads, the critical component in an HDD. Flexciton’s FWS/TS scheduling system was trialled at Seagate Springtown between March and May 2022. After successful testing, the system has been operational 24/7 since June 2022; a timeline is shown in Figure 3.

It is important to note that deploying and testing a novel piece of technology in a large factory that runs around the clock presents many practical challenges to be overcome:
- Controllability (scope): it is important to ensure that the new development is deployed in a controlled manner. The FWS-TS guidance scheme allows for localised trials, where focus can be placed on problematic areas and scope can be increased gradually.
- Controllability (magnitude): it is useful to focus only on cases with obvious merit first. This is achieved by controlling guidance strength.
- Explainability: it is important to be able to detect and reason about the changes. This is achieved by a combination of UI features and support tools which have been designed to give operators and managers situational awareness.

Results and Learnings
Quantifying the benefit of an alternative scheduling approach remains a challenging task. When deployed in a real plant, traditional A/B testing between pre- and post-deployment periods suffers from (i) dynamic fab conditions, (ii) an ever-changing product mix, and (iii) the evolving capabilities of the fab, e.g. increased or decreased labour capacity and new tool commissioning/decommissioning.
As such, it was decided to look at the impact from different angles - a statistically significant impact would be expected to result in a substantial shift in numerous business processes and metrics. In particular, three different aspects were examined.
- Deep dives on specific toolsets and metrics.
- Comparison against internal simulation and planning tools.
- Observing the impact on manual interventions.
Notably, all three approaches indicated a change in fab performance between pre- and post-deployment; more details will be shared in future articles. In the Winter Sim Conference paper presented in December 2022, we focused on the last point: a proxy we can use for this benefit is the volume of ad hoc control flow rules activated or deactivated in the fab. Every day, specialists have to define numerous – in some cases even hundreds of – ad hoc control flow rules to better manage operations given the prevalent conditions. For example, setting a “hard down” rule, where lots are manually placed on hold so as not to continue to a downstream bottleneck. In Figure 5, we show the number of ad hoc operational rules implemented in the Seagate Springtown fab between weeks 2 and 26 of 2022 (i.e. from early January until late June). As can be seen, in the final weeks the number of ad hoc rule transactions averaged less than 150 per week, more than a three-fold decrease compared to the pre-deployment period. This is strong evidence that the FWS deployment massively reduced the manual interventions required to effectively control flows within the fab.

Conclusions
The main takeaway of the Winter Sim paper is that the increased look-ahead horizon and global nature of FWS present numerous opportunities for a step change in factory KPIs. The Flexciton FWS was successfully trialled at Seagate Springtown over three months in 2022 and has been fully enabled across the fab since June 2022. It resulted in a radical decrease in the interventions previously used to manually control wafer flows. Further analysis suggests that Flexciton’s TS and FWS schedulers have achieved substantial improvements in throughput and cycle times.
Author: Ioannis Konstantelos, Principal Engineer

The Flex Factor with... Yichen

The Flex Factor with... Yichen
In this month’s edition of The Flex Factor, we introduce one of our QA Engineers: Yichen Tian. Have a read to find out what this serial multitasker does during her day-to-day.

Tell us what you do at Flexciton?
I am a QA engineer in the Chrysalis team, which involves checking whether the results of our development make sense with automated and manual tests. I’ve also recently joined our Platform Engineering Team, whose mission is to make other developers’ lives easier by building faster CI/CD pipelines and laying the groundwork for our architecture.
What does a typical day look like for you at Flexciton?
My day starts with a coffee and then a short gathering with all the team members to share updates and prepare for the day. I discuss the appropriate outcomes of different user scenarios with developers, and spend the rest of my day meddling with services like GitLab and Kubernetes.
What do you enjoy most about your role?
I feel it’s the excitement. As a QA I constantly switch contexts and have more than three tasks simultaneously on a busy day. I also troubleshoot pipelines and any breakage in our app and that constant change excites me.
What career advice would you give to your younger self?
To quote a member of the team, just keep swimming.
If you could do it all over again, would you pursue the same career?
I know it sounds like a cliche but I don’t regret the decisions I have made along the way. However I would love to work in an animal shelter for some time if the opportunity arises.
If you could summarise working at Flexciton in 3 words, what would they be?
Inspiring, supportive, fun.
Tell us about your best memory at Flexciton?
There are so many amazing memories I have at Flexciton! Most of them are from team trips and day to day banter. One of the best has to be swimming in the sea together and watching my colleagues play beach football in Portugal.

We Need To Embrace Complexity, Not Run Away From It

We Need To Embrace Complexity, Not Run Away From It
As next-gen designs become increasingly sophisticated, a more holistic and streamlined approach to the manufacturing process is vital.
Why Has Sophistication Become An Issue?
As I’ve talked about in previous blogs, the semiconductor industry faces serious challenges on a number of fronts.
The supply chain issues caused by Covid are still a headache. While some industries (automotive in particular) are putting pressure on chip companies to ramp up production, others, such as data storage, suffer from demand downturns. Another key factor impacting chip making is an ongoing shortage of skilled labour within the industry. Then there’s the problem of manufacturing equipment, with companies either unable to source second-hand tools or finding new tools too expensive due to inflation. And as the world’s energy crisis continues, power itself – and skyrocketing electricity bills – is also a major concern.
As I discussed in my presentation at last year’s Fab Management Forum, the big issue that underlies all of these challenges is complexity. In many ways, fabs and the way they operate haven’t changed much in the past decade – yet the products they make have become increasingly sophisticated and as a result, more difficult to manufacture at scale. It’s not unusual now to see chip designs going into production with over 1,600 unique steps required to produce them, in cycle times that can stretch up to nine months. And as an example of just how complex chips are becoming, Micron recently began volume production of the world’s first 232-layer NAND.
This level of sophistication is only going to increase in the coming years, and the complexity challenge will soon reach breaking point if fabs continue with current practices. Unless fabs introduce new methods to streamline and simplify the management of the production process, their performance and output will continue to suffer, hindered by the sophistication of their own products.
What’s the problem with how fabs attempt to deal with complexity? Currently, they follow the classic model of addressing a big problem by breaking it down into a series of smaller, more manageable problems, with different teams assigned specific challenges to tackle. However, this approach has created problems of its own – different teams within the fab have different priorities and KPIs, which they often work towards in isolation. And as individual teams try to maximise their KPIs, conflicts can arise that negatively impact production itself.
Production Scheduling Is Compromised By Simplification
Let’s drill down into the complexity issue and look at how it affects production scheduling in particular.
There are a number of different areas within chip production – metrology, photolithography, diffusion furnace, epitaxy etc – which each have their own set of tools and rules as to how they operate. Each area also has its own team with their own KPIs. So while the overarching objective of a fab is to produce a required number of saleable wafers, each team also has more granular objectives against which they’re being measured.
Typically, teams schedule production in their areas according to a series of rules that dictate the sequence in which wafers are processed - for example, this particular recipe should always run on this particular tool. That sounds simple enough, except there can be thousands of these rules for each area – in fact, it’s so difficult for industrial engineers to properly manage and control each area’s parameters that the rules tend to be full of simplifications and shortcuts.
To maintain the fab’s performance, these rules also require regular maintenance to respond to different events happening in the fab on a daily basis. Yet given their sheer volume, and the growing complexity of the products being made, it’s impossible for teams to adapt every rule to address the real-time situation. An additional issue is that each area has its own software to administer these rules and monitor its KPIs, but it generally doesn’t interoperate with the software in other areas.
All of which means that the teams aren’t able to see the status of each other’s areas – they can only operate based on their own data. Not only have the rules they use been simplified in an attempt to deal with complexity, but they’re designed to meet each area’s objectives, not the overarching goal of production. So while individual teams may be hitting their own KPIs, the overall performance of the fab is inconsistent.
There is no ‘big picture’ of the production process that individual teams are able to consult to guide their decision-making – and as it is, they are not being judged on overall performance, just how well their own area is doing. But this is simply not a viable way for fabs to work going forward.
Embracing Complexity
So what is the solution for handling production complexity on its own terms rather than constantly diluting it? It’s counter-productive to try and simplify data when it’s that very complexity that makes it so powerful – and by genuinely engaging with every aspect of it, it’s possible to gain a more accurate and comprehensive picture of what’s happening in the fab. Rather than simplifying the data, we should instead be simplifying the process.
The first step to managing complexity is employing an intelligent scheduling system that operates based on a holistic overview of what’s actually happening in the fab at any one time, identifying and responding to bottlenecks in the WIP as they happen. It also needs to make these adjustments and deliver schedules autonomously, because as we’ve seen, the complexity and unpredictability of modern fab operations make it infeasible for conventional rules-based schedulers to deliver consistent results. The constant requirement for manual retuning is a drain on IE resources, and the intelligence in the software itself is not advanced enough to effectively tackle the hardest problems found in a wafer fab.
Is such an autonomous approach to scheduling possible? The short answer is yes, but it requires a willingness on the part of the semiconductor industry to a) fully embrace smart manufacturing practices, and b) switch from their conventional scheduler to a best-in-class technology that leverages both the power of the cloud and the computational speed of AI.
The complexity of modern chip design demands a new approach to production that is equal to this complexity – otherwise, the industry will be forever on the back foot, constantly struggling to keep up with the future while failing to capitalise on the richness of the data available to it in the here and now.
Author: Jamie Potter, CEO and Cofounder of Flexciton

The Flex Factor with... James

The Flex Factor with... James
Meet James Adamson, one of our senior optimization engineers here at Flexciton. Many, many moons ago he was an aspiring farmer; now he’s designing and improving our scheduling algorithms.

Tell us what you do at Flexciton?
I’m an Optimization Engineer, which essentially means I focus on designing and improving our scheduling algorithms, while also implementing and maintaining them in production code. I also have a technical lead role for one of our customers, so I spend some time understanding their requirements in detail and thinking about how to expand the product or customise it to meet their individual needs.
What does a typical day look like for you at Flexciton?
In my engineering team we kick things off with a stand-up to agree on priorities for the day and discuss any issues that need attention. My day would then typically be a mix of drinking coffee, getting stuck into writing code for some new functionality, and having design discussions with other members of the team to keep us aligned technically.
What do you enjoy most about your role?
I would say the opportunity to combine two things: working on one of the most challenging optimisation problems out there; and the ability to actually have an impact, for example through getting my code into production or making and influencing key design decisions.
If you could give one piece of advice to someone, what would it be?
I would maybe suggest they seek advice from better places… but no, I think it’s important to always be thinking about what it is you want, and to think several steps ahead. It’s all too easy to get stuck doing something you don’t enjoy.
If you could summarise working at Flexciton in 3 words, what would they be?
Interesting, challenging, impactful.
If you could swap jobs with anyone for a day, who would it be and why?
I used to want to be a farmer… so provided I could pick a day with decent weather then sure, why not give that a go for a day. I reckon it’s much harder work than the idea I used to have of chilling on a combine harvester though…
Tell us about your best memory at Flexciton?
There’s a whole bunch of memories from our team trips, most recently to Albufeira in Portugal where some people really shone with their dance moves. I will avoid naming names.

Scheduling Innovations: Academic Research and its Adoption in the Semiconductor Industry

Scheduling Innovations: Academic Research and its Adoption in the Semiconductor Industry
Introduction
The first integrated circuits were invented by Texas Instruments and Fairchild Semiconductor in 1959. Today, semiconductor manufacturing is a $600 billion industry and microchips are ubiquitous, impacting our lives in ever-increasing ways. To achieve such astonishing growth, academia and industry have had to constantly innovate, researching new production technologies. While much has been said about Moore's law and the push towards higher and higher transistor densities, the innovations made in how the billion-dollar factories producing these chips are run have received less attention. This article focuses on innovations in scheduling: algorithms which assign lots to machines, decide in which order they should run, and ensure any required secondary resources (e.g. reticles) are available. These decisions can significantly impact the throughput and efficiency of wafer fabs.
Many innovative scheduling technologies were first proposed by researchers and have, over time, been adopted in manufacturing. They include:
- Dispatching: rule-based systems for deciding which lot to run next on a tool
- Optimization-based scheduling: mathematical techniques like mixed integer programming and constraint programming which can generate optimal machine assignments, sequencing, and more for entire toolsets or areas of the fab, improving fab-wide objectives like cycle-time or cost
- Simulation: computer models of the manufacturing process which are often used to run what-if analysis, evaluate performance, and aid decision making
From dispatching to mathematical programming
Early academic research on dispatching rules dates back to the 1980s. Authors at the time already highlighted the significant impact scheduling can have on semiconductor manufacturing. They experimented with different types of dispatching rules, ranging from simple first-in-first-out (FIFO) rules to more bespoke rules focused on particular bottleneck tools. Over time, dispatching rules have evolved from fairly simple to increasingly complex. Rule-based dispatching systems quickly became the state of the art in the industry and continue to be popular for several reasons: they are intuitive and easy to implement, yet flexible enough to cover varying requirements. There are, however, also many situations in which dispatching rules perform poorly: they have no foresight and generally look only at a single tool, and therefore often struggle with load balancing between tools. They also struggle with more advanced constraints such as time constraints or auxiliary resources, e.g. reticles in photolithography. More generally, dispatching systems are a mature technology that has been pushed to its limits and is unlikely to lead to significant further increases in productivity and yields.
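For readers unfamiliar with rule-based dispatching, here is a minimal sketch (with invented lots and numbers) of two classic rules, FIFO and critical ratio, choosing the next lot for a free tool. Real dispatch systems layer hundreds of such rules and exceptions on top of this basic idea.

```python
# Toy rule-based dispatching: pick the next lot for a free tool.
from dataclasses import dataclass

@dataclass
class Lot:
    lot_id: str
    waited: float            # hours the lot has already queued at this tool
    time_to_due: float       # hours until its due date
    remaining_work: float    # hours of processing still needed across all steps

queue = [
    Lot("L1", waited=4.0, time_to_due=10.0, remaining_work=8.0),
    Lot("L2", waited=1.0, time_to_due=3.0, remaining_work=4.0),
    Lot("L3", waited=2.5, time_to_due=20.0, remaining_work=5.0),
]

def fifo(lots):
    """First-in-first-out: the lot that has waited longest goes next."""
    return max(lots, key=lambda l: l.waited)

def critical_ratio(lots):
    """Critical ratio: time left to due date over work remaining; smallest is most urgent."""
    return min(lots, key=lambda l: l.time_to_due / l.remaining_work)

print("FIFO picks:", fifo(queue).lot_id)                       # L1 (waited longest)
print("Critical ratio picks:", critical_ratio(queue).lot_id)   # L2 (ratio 0.75)
```

Neither rule looks ahead to what is arriving or to what downstream toolsets need, which is precisely the limitation that optimization-based scheduling addresses.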
Because of these limitations, focus has shifted over time to alternative technologies, especially deterministic scheduling based on mixed-integer programming or constraint programming. In the academic literature, these approaches started to appear with increasing frequency in the 1990s. Early contributions focused on analysing the complexity of the wafer fab scheduling problem and solved the resulting optimization problem using heuristic techniques, but slowly moved towards rigorously scheduling single machines, tackling one particular aspect of the problem at a time. Due to the limited scope that deterministic techniques could initially tackle, their adoption in industry lagged behind the academic discussion.
From single machines to fab-wide scheduling
The last twenty years have seen deterministic scheduling techniques mature and schedule larger and more complex fab areas. In the academic literature, authors moved from focusing on single (batching) tools, to entire toolsets or larger areas of the fab including re-entrant flows. They also started including more and more operational constraints such as sequence-dependent setup and processing times, time constraints, or secondary resources such as reticles. In order to achieve this increase in scale and complexity, researchers have applied a large number of optimization techniques, and often combined rigorous mathematical programming methods with heuristic approaches. Some have used general purpose meta-heuristics, such as genetic algorithms or simulated annealing, while others have developed bespoke heuristics for fab scheduling, such as the shifting bottleneck heuristic.
As the size of problems that optimization-based scheduling techniques could solve grew, the industry started to explore how to adopt these methods in practice. For example, in 2006 IBM announced that it had successfully used a combination of mixed-integer programming and constraint programming to schedule an area of a fab with up to 500 lot-steps, and that this had led to a significant reduction in cycle time. Our own technology at Flexciton leverages mathematical optimization and smart decomposition, combined with modern cloud computing, to efficiently schedule entire fabs. One key advantage of using cloud technology is the ability to access huge amounts of computational power. It allows us to break complicated problems down and deliver accurate schedules every few minutes, and to adapt the solution strategy to the complexity at hand. Additionally, it enables responsive adjustments as events unfold in real time, allowing for a truly dynamic approach to scheduling.
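As a loose illustration of the decomposition idea (the data, the per-toolset "solver" and the use of a local process pool are all stand-ins; a real system solves much richer subproblems on cloud infrastructure and coordinates them with fab-wide information), the sketch below splits a fab-level problem into per-toolset subproblems and solves them in parallel.

```python
# Toy decomposition: split the fab problem by toolset and solve subproblems in parallel.
from concurrent.futures import ProcessPoolExecutor

toolset_queues = {
    "litho":     [("L1", 2.0), ("L2", 0.5), ("L3", 1.0)],   # (lot, urgency score)
    "diffusion": [("L4", 1.5), ("L5", 3.0)],
    "etch":      [("L6", 0.2), ("L7", 2.5), ("L8", 1.1)],
}

def solve_toolset(item):
    """Stand-in subproblem solver: order the toolset's queue by urgency, highest first."""
    toolset, queue = item
    return toolset, [lot for lot, _ in sorted(queue, key=lambda x: -x[1])]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        schedules = dict(pool.map(solve_toolset, toolset_queues.items()))
    print(schedules)
```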
Optimization-based scheduling’s trajectory from an academic niche to a high-impact technology has partially been accelerated by two major trends:
- The increasing automation of wafer fabs and availability of process data. Especially automated material handling systems in modern fabs have made scheduling techniques almost a necessity.
- The success of the semiconductor industry itself. Faster microprocessors have made it possible to solve bigger scheduling problems in less time.
The process has been accompanied by considerable improvements in productivity, as scheduling is able to overcome many of the downsides of dispatching: it can look ahead in time, balance WIP across tools, and improve fab-wide objectives such as cost or cycle-time. A major advantage of scheduling is that it can both increase yields when demand is high and reduce cost when demand is low.
When in doubt, simulate.
A discussion of scheduling in wafer fabs would not be complete without a word on simulation models. Simulation models are technically not scheduling algorithms - they require dispatching rules or deterministic scheduling inside them to decide machine assignment and sequencing. But they have been used to evaluate and compare different scheduling approaches from the very beginning. They were also quickly adopted by industry and have, for example, been used by STMicroelectronics to re-prioritise lots and by Infineon to help identify better dispatching rules. The development of highly reliable simulation models could greatly increase their use for performance evaluation and scheduling.
The future
More reliable simulation models are also important in light of recent trends in the academic literature, which may provide a glimpse into the future of wafer fab scheduling. Rigid dispatching rules that need to be (re)tuned frequently may soon be replaced by deep reinforcement learning agents, which learn dispatching rules that improve overall fab objectives. In some studies, such systems have been shown to perform as well as dispatching systems based on expert knowledge. Whether and when the industry adopts such techniques on a large scale remains to be seen. Since they require accurate simulation models as training environments, they can be extremely computationally intensive, and their adoption will largely depend on the development of faster training methods and simulation models. The combination of self-learning dispatching systems and comprehensive, scalable scheduling models may well hold the key to unlocking unprecedented improvements in fab productivity.
Flexciton aspires to be the key enabler in this transition, bringing state-of-the-art scheduling technology to the shop floor in a modern, sophisticated, and user-friendly platform unlike anything else on the market. Despite the enormous challenges that come with the scale of this endeavour, the initial results are very encouraging; cloud-based optimization solutions can indeed bring a step change to streamlining wafer fab scheduling while delivering consistent efficiency gains.

The Flex Factor with... Charlotte

The Flex Factor with... Charlotte
This month on The Flex Factor, we get to know our Senior People & Talent Partner, Charlotte Conway! Find out a little more about her and how she creates a supportive environment that helps our whole team to thrive.

Tell us what you do at Flexciton?
I work across both the People and Talent function as a Senior People & Talent Partner. I help Flexciton to find, attract and recruit top talent, and am responsible for engaging, supporting and developing our employees.
What does a typical day look like for you at Flexciton?
There is no such thing as a typical day in a startup! However, my day is often split 80% on the people side and 20% on talent. I like to start my day with any admin tasks or by replying to any Slack messages that might have come through. I then create a to-do list for what I plan to do that day. This can be dealing with employee queries, or business partnering with managers to check in on any people-related matters. During busier periods I will often be taking a hands-on approach to hiring, sourcing and speaking to candidates, as well as setting up our talent processes and looking at our employer branding strategy to help us attract the best talent. As a startup there are also lots of projects to get involved in across all of HR (e.g. performance management, L&D), so a lot of my day may involve working on improving our people and talent processes... or implementing new ones!
What do you enjoy most about your role?
What I enjoy most about my role is getting to work closely with our people (I guess it’s in the name, ‘people partner', right?). For me, the important part of being a ‘people’ partner is creating an environment where people feel heard, supported, and empowered to bring their best selves to work. Being able to have a small part in ensuring employees have all of the above is incredibly rewarding and fulfilling.
What's a quote that you live by?
“I've learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.”
― Maya Angelou
If you could summarise working at Flexciton in 3 words, what would they be?
Exciting, dynamic and FUN.
If you could give one piece of career advice to someone, what would it be?
Never doubt yourself or let fear of failure hold you back. It’s ok to make mistakes and take risks! It’s better to look back and never have that feeling of ‘what if’ because you were too scared to take the next step.
Tell us about your best memory at Flexciton?
There are lots! However, it’s one of the many fun Flexciton socials that comes to mind - Dabbers Bingo. What better way to celebrate with your colleagues than with some good, old-fashioned competition. There was dancing, music and of course bingo. This was then followed by a late-night showing of Shrek in the office, and a very patient colleague (thanks Jannik) failing miserably to teach me how to ride a bike… I blame the one too many glasses of prosecco!
Interested in working at Flexciton? Head over to our careers page to check what vacancies we currently have available and learn a little more about us whilst you're there.

Managing The Human Side Of Smart Manufacturing

Managing The Human Side Of Smart Manufacturing
Change management is just as important as new technology in a successful implementation.
People have the power
The core belief that drives the work we do here at Flexciton is that, for the semiconductor industry to advance to the next level of efficiency, it has to comprehensively embrace smart manufacturing practices.
As I’ve written previously, the rollout of smart manufacturing will require fabs to adopt ‘disruptive’ cloud-based, AI-driven technologies. As such, the move to smart manufacturing will be an absolute step change for most companies and will result in some fundamental adjustments to the way that the fab works. Yet ensuring that these new technologies integrate seamlessly with the existing systems is only part of the challenge.
Ultimately, the success of a smart manufacturing implementation will be decided by the people who work in the fab. For these new technologies to deliver the efficiencies they promise, there has to be total buy-in from the staff who are expected to work with them, particularly in legacy fabs where final decisions are still often made by humans. This is what we call the human side of smart manufacturing, and getting it right is just as important as deploying the technology itself.
What exactly is the human side? It’s the recognition that, for all the advanced tools and machinery, and the software that runs them, most fabs still depend on skilled workers to not only move the WIP around the factory floor, but also make decisions that are integral to the manufacturing process itself. As such, these workers are deeply invested in how the fab runs and take pride in the job they do.
Sensitive change management is critical
For implementation of smart technology to succeed, the human side of the transition has to be managed with skill, sensitivity and awareness. It’s not easy to shift existing work practices in any industry, and this is particularly the case within the semiconductor sector, which is used to doing things its own way.
Cutting-edge disruptive technologies are more often than not born in the minds of academics who, despite offering an innovative solution, may have a limited understanding of the inner workings of a fab. As a result, they can fail to take into account the complex implications of their technology and how the adoption phase can impact the people working with it. This is one of the reasons why fabs tend to stick with conventional or in-house solutions. Despite being far less sophisticated, these technologies are built with an appreciation of the realities of a factory and the people who work there. Consequently, the adoption process is smoother and, although it may not provide radical performance improvements, the impact can be more positive for the fab and its workers.
The key point here then is that, in order for a disruptive smart technology to be implemented successfully, it is critical to have a comprehensive understanding of a fab and a sensitive approach to human change management. Resistance to change is only natural and is to be expected, which is why from the word go, we work side-by-side with our clients to ensure that everybody is on board with changes to the way the fab works, because new technology on its own is not a silver bullet. Fabs don’t conform to theoretical models, but are subject to all manner of real world influences, with the human factor being especially strong.
The Flexciton approach
There are specific steps that we take to make the change management process as successful as possible:
Understand their challenges
To support the change management process, we build close relationships with our clients – not only to create trust, but also to develop a deep understanding of how their current processes work and how production has been executed so far. We have to identify both the objectives and ambitions of the management and the specific challenges and pain points that operators are facing on the shop floor. We do this by shadowing the roles of everyone at the fab who will be affected by the new implementation.
Educate and explain
Change management must follow a systematic methodology, but every fab is different, and so there’s no such thing as a standardised rollout. We understand that our technology will change the way in which both operators and engineers work, which is why the onus is on us to educate and explain why these changes are necessary – as already noted, unless something has gone disastrously wrong, people tend to be resistant to change, particularly if they think they are already doing a perfectly good job. To aid this transition, we always strive to give as much context to the decision-making process as possible.
Establish champions
It is also vital that we establish client champions of the new solution – people who are already trusted by the fab’s staff and can help navigate the acceptance process. This is particularly important if, for instance, the decisions that our advanced technology suggests initially seem counter-intuitive to those who are familiar and comfortable with the old procedures.
Act on feedback
Just as important as demonstrating the results our technology is delivering in a way that’s easy to understand is regularly capturing user feedback during the rollout period, so we can see where our product and the user experience can be improved. Implementation is a constant process of testing and tweaking to produce the best possible results, and that requires an honest, two-way relationship. We regularly put new features into our product based on feedback from the shop floor, and it’s always satisfying to hear how we’ve improved operators’ ability to do their jobs as a result.
To undertake a successful smart manufacturing implementation, particularly as a third party vendor, it’s not enough to just have an innovative technology solution. To be a genuine change maker, you need to understand that in the real world, it’s the people that matter as much as the technology. This is why we always set out to build a strong partnership with the fabs that we work with, becoming much more than just an external vendor. Our team is committed to delivering on the KPIs that are targeted by our clients, which doesn’t stop at providing the best possible solution. We also have to understand the people who use it and ensure their adoption of our technology is a smooth and positive process.

The Flex Factor with... Sudesh

The Flex Factor with... Sudesh
Say hello to Sudesh Lutchman; senior back end engineer, delivery manager for the Taiichi team here at Flexciton and aspiring jet pilot.

Tell us what you do at Flexciton?
I’m a delivery manager / backend engineer for the Taiichi Team - I help plan the team’s work and coordinate its delivery.
What does a typical day look like for you at Flexciton?
A typical day for me at Flexciton is split into two main responsibilities. The first bit is working with product and other delivery managers to ensure that we are building the right things and that we are building them at the right time. The second responsibility is ensuring that Taiichi runs smoothly. This means helping teammates with tickets and importantly trying to identify and remove any impediments that the team might be having.
What do you enjoy most about your role?
I enjoy the challenge. At Flexciton we are solving a very complex problem. Not only is the core problem difficult to solve but managing the software development of these solutions is equally complicated. As a delivery manager and an engineer I get the pleasure of trying to simplify both.
If you could give one piece of advice to someone, what would it be?
Start simple and do not be afraid to “fail.” This has definitely helped me have some amazing experiences and learn valuable lessons along the way.
If you could summarise working at Flexciton in 3 words, what would they be?
Talented people. Exciting.
If you could swap jobs with anyone for a day, who would it be and why?
Jet pilot - hopefully the skills transfer during this job swap and I get to zip over entire continents in a few hours. If the skills don’t transfer… I’ll do my best.
Tell us about your best memory at Flexciton?
Channy (Head of Operations) getting my start date wrong and the funny conversation that followed. Channy was expecting me to be in the office full time (before my start date) and I was regretting telling them that I wasn’t working at the moment. This led to some really confused looks being shared between us.
Interested in working at Flexciton? Head over to our careers page to check what vacancies we currently have available and learn a little more about us whilst you're there.

Flexciton Return to Present at FMF 2022 For This Year's SEMICON Europa

Flexciton Return to Present at FMF 2022 For This Year's SEMICON Europa
This year, Flexciton will be returning to Munich, Germany for SEMICON Europa and the 2022 Fab Management Forum and we're thrilled to announce that we'll be silver sponsors of the event!
Not only are we sponsoring the event, but we will also have our own booth located in B1864. Come by our booth to speak to one of our team about what Flexciton can do for your fab, participate in a product demo or just stop by for a warm drink!
Flexciton will be presenting twice across the week, once at the Fab Management Forum on Wednesday 16th November as a part of the 'State of AI in Microelectronics' segment and once at the Green and Smart Manufacturing Summit on Thursday 17th November.
The 2022 edition of SEMICON Europa is co-located with electronica at Messe München, creating the strongest single event for electronics manufacturing in Europe, and broadening the range of attendees across the electronics chain. We’re excited to be amongst business and technology leaders, researchers, industry analysts and other like-minded people from across electronics manufacturing.
Join us from 15–18 November 2022!

Goodhart’s Law and the Pitfalls of Targeting Load Port Utilisation on Photo Tools

Goodhart’s Law and the Pitfalls of Targeting Load Port Utilisation on Photo Tools
It has been described as the law that rules the modern world, and its effects can be observed in every organisation. I’m referring to Goodhart’s law, named after British economist Charles Goodhart, who wrote the maxim: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
A common flavour of this effect is described in the following cartoon, based on a possibly apocryphal story of how central planning failed in a nail factory in the Soviet Union.

We have seen (less dramatic) examples of this effect at work in semiconductor wafer fabs. For instance, teams of operators may be measured on the number of lot moves that occur during their shift. In general, more moves per shift correlates with more wafers delivered on time to customers. However, this relationship breaks down if operators ‘game the system’ by loading batch tools with small batches at the end of the shift, thus wringing out a few extra moves in their shift, but hobbling the next shift.
Memorable though such examples are, they give the impression that Goodhart’s Law relies on people being uninterested in the ultimate goal that their organisation is pursuing. However, apathy is not usually the driving factor in Goodhart’s law; whenever lack of information, limited computational power or even an inability to concisely express our true preferences leads us to substitute a proxy metric for our true goal, the law is bound to rear its head. Former Intel CEO Andy Grove described the effect of such surrogate indicators as like “riding a bicycle: you will probably steer where you are looking”; and if where you’re looking isn’t perfectly correlated with the road ahead, you can expect a wobbly ride!
The intricacies of tools with multiple load ports
For a more subtle example of where using an imperfect measure as a target can lead to suboptimalities when scheduling a wafer fab, we were inspired by a post on the excellent Factory Physics and Automation blog looking at the relationship between load port utilisation and cycle time. In our experience, we have seen load port utilisation of a tool used as a target when designing both operator workflows and dispatching rules.
First, some quick definitions. Many tools in a fab have multiple ‘load ports’ where lots can be inserted into the tool, but then a limited chamber capacity so that, for instance, only one wafer can be processed in the chamber at the same time.

Consider the machine in Fig. 1 with three chambers and two load ports. Lots can be loaded in either load port, but then each wafer in the lot has to move through Chambers A, B and C one at a time. This means wafers may have to queue inside the tool if the next chamber they need is still processing. Lots must be unloaded at the same load port in which they were inserted. Suppose it takes each chamber 10 minutes to process a wafer, and we want to process two lots each consisting of three wafers. If we were only allowed to use a single load port, we would have to wait for the first lot to move through all three chambers and exit at the same load port before we can start processing the second lot. Fig. 2 shows that for a simple model (that ignores transfer time between chambers), the second lot will have to wait 50 minutes before it can start processing.

If however, an operator loads both batches into the two load ports at the same time (Fig. 3), the machine will pick up the first wafer of the second lot as soon as the first lot has finished processing in chamber A. Thus the second lot will only need to wait 30 minutes.

Therefore, for a given level of WIP at a tool, we can expect higher load port utilisation to be correlated with reduced waiting and therefore improved cycle time.
Indeed, in cases where a wafer cannot be unloaded from a tool until all the wafers in the same lot are also ready to be unloaded (a common workflow), it can actually make sense to split lots before a chamber tool. For instance, if we have a lot of 6 wafers before the tool (see Fig. 1) – loading all the wafers as a single lot in a load port – it will take 80 minutes for all 6 wafers to move through the three chambers until we can unload the lot. If however, we split the original lot into two lots of three and load them into both load ports (as in Fig. 3), then the first lot can be unloaded after just 50 minutes, and potentially continue to its next step earlier.
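For readers who want to check these figures, here is a minimal sketch in Python of the simplified model described above (equal 10-minute chamber times, no transfer time; the function and variable names are ours, not part of any product). It reproduces the waiting and unload times quoted in the example:

```python
def chamber_pipeline(release_times, step_minutes=10, n_chambers=3):
    """Push wafers through chambers A, B, C in series; each chamber holds one
    wafer at a time. Because every chamber takes the same time, a wafer is
    never blocked by the wafer ahead of it, so it is enough to track when each
    chamber next becomes free. Returns each wafer's completion time."""
    chamber_free = [0] * n_chambers
    finish_times = []
    for release in release_times:            # wafers in the order they are loaded
        t = release
        for c in range(n_chambers):
            start = max(t, chamber_free[c])
            t = start + step_minutes
            chamber_free[c] = t
        finish_times.append(t)
    return finish_times

# Lot 1 alone: its three wafers finish at minutes 30, 40 and 50, so with a
# single load port lot 2 cannot even be loaded until minute 50.
print(chamber_pipeline([0, 0, 0]))            # [30, 40, 50]

# Two load ports: all six wafers are available up front; wafer 4 (the first
# wafer of lot 2) enters Chamber A at minute 30, i.e. lot 2 waits only 30 minutes.
print(chamber_pipeline([0, 0, 0, 0, 0, 0]))   # [30, 40, 50, 60, 70, 80]

# The same run also covers the lot-splitting example: a single 6-wafer lot can
# only be unloaded once every wafer is done (minute 80), whereas splitting it
# into two 3-wafer lots lets the first half leave at minute 50.
```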
How directly targeting load port utilisation can harm cycle time
As predicted by Goodhart’s Law, the correlation between load port utilisation and fab cycle time breaks down once we try to optimize directly for load port utilisation. This breakdown is particularly stark on photolithography tools, where process steps rely on a critical secondary resource: reticles. Reticles (also called photomasks) act like stencils in the expose step of a photolithography process, patterning the wafer with the desired features. In most photo tools, reticles must be loaded onto the tool in containers, called pods, before the lots that require them can be loaded onto the machine. Therefore, if a lot is inserted into a load port early, the wafers could just be waiting inside the machine. Moreover, this also requires loading a reticle into the machine when it could have a more productive use elsewhere.
For a simple example, consider a toolset consisting of two of the tools from Fig. 1 (we can imagine chambers A, B and C are performing coat, expose and develop operations respectively).
Suppose we have just loaded a 3 wafer lot onto tool 1. The other load port of tool 1 remains free. Meanwhile on tool 2, both load ports are utilised, but there are only two wafers yet to be processed in Chamber A.
A lot (lot X) that requires a special reticle (of which only one exists) arrives. Due to a lot-level restriction, lot X can only run on tool 1. This sort of restriction is particularly common in photolithography, where running consecutive photo layers through the same tool (even if there are multiple tools qualified for the operation) can reduce product variability caused by the idiosyncrasies of a particular tool’s lens (this is sometimes known as a ‘lot-to-lens’ dedication). The operators on this toolset abide by the following rule for dispatching lots:
Rule 1: If a load port and the required reticle are available, load the reticle and the lot onto the tool.
Since tool 1 has a load port available, the operator immediately loads the reticle onto the machine, and puts lot X into the load port.
Ten minutes later, lot Y arrives at the toolset, also requiring the same reticle, and with a lot-level restriction forcing it to run on tool 2. Since the reticle is already loaded on tool 1, lot Y cannot be dispatched until lot X has finished processing and the reticle has been moved from tool 1 to tool 2. Assuming, for simplicity, that the reticle moves instantaneously, both lots will have finished processing in 130 minutes’ time (see Fig. 4).

Imagine, however, the operators adopted the following workflow:
Rule 2: If a load port and the required reticle are available and the tool can begin processing immediately (i.e. Chamber A is free), load the reticle and the lot onto the tool.
In this case, lot X will not be immediately loaded onto tool 1, since Chamber A is initially occupied. After only 20 minutes though, lot Y can be loaded onto tool 2, to finish processing 50 minutes later, at which point the reticle can be moved and lot X can start on tool 1. Thus, after just 120 minutes (as opposed to the 130 minutes under Rule 1), both lot X and lot Y will have finished processing. We can therefore see that by adopting Rule 2, the cycle time, and hence the throughput, of the toolset can be improved.
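The timings in this example can be reproduced with a short back-of-the-envelope script. The sketch below uses the same simplifying assumptions as the example (10-minute chamber steps, so a 3-wafer lot completes 50 minutes after its first wafer enters Chamber A, and an instantaneous reticle move); the constant names are ours:

```python
LOT_MINUTES = 50     # a 3-wafer lot finishes 50 min after entering Chamber A
TOOL1_A_FREE = 30    # tool 1's Chamber A is busy with the just-loaded lot until t=30
TOOL2_A_FREE = 20    # tool 2 still has two wafers to push through Chamber A
X_ARRIVES, Y_ARRIVES = 0, 10

def finish(chamber_a_entry):
    return chamber_a_entry + LOT_MINUTES

# Rule 1: lot X grabs the reticle immediately, so lot Y must wait for it.
x_done_r1 = finish(max(X_ARRIVES, TOOL1_A_FREE))               # 80
y_done_r1 = finish(max(Y_ARRIVES, TOOL2_A_FREE, x_done_r1))    # 130
print("Rule 1: both lots done at", max(x_done_r1, y_done_r1))  # 130 minutes

# Rule 2: only load when Chamber A is free, so the reticle goes to lot Y first.
y_done_r2 = finish(max(Y_ARRIVES, TOOL2_A_FREE))               # 70
x_done_r2 = finish(max(X_ARRIVES, TOOL1_A_FREE, y_done_r2))    # 120
print("Rule 2: both lots done at", max(x_done_r2, y_done_r2))  # 120 minutes
```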

In our experience of wafer fabs, we often see workflows akin to Rule 1, wherein operators fill the load ports of photo tools as soon as they are free, thus forfeiting the opportunity to use reticles earlier on different tools. Adopting a workflow like Rule 2, however, is more difficult, since it requires operators to know when the tool will be ready to process a new lot and to react promptly to load the tool at precisely that time. In practice, particularly when operator availability is limited, there is a risk of leaving the tool under-utilised – and increasing wait time – if nobody is free to load a lot the moment the machine becomes available.
Using advanced optimization to handle Goodhart's Law
Flexciton’s scheduler can help to alleviate this problem by employing advanced optimization technology. It can predict when lots will arrive at the photo toolset and which reticles they will require, and then jointly schedule the reticles and lots on the toolset to obtain an optimized schedule. The knowledge of future arrivals crucially allows us to identify cases where loading a reticle onto a machine now is suboptimal, since a lot will soon arrive at another tool that can make use of the reticle sooner or that simply has a higher priority. Thus, following a Flexciton schedule, operators can dispatch to load ports when they become available, with minimal risk of harming cycle time due to locking in reticles prematurely.
However, we still are not immune to the curse of Goodhart’s Law. The cycle time of an optimized schedule is itself only a proxy for what we actually care about: producing more high-quality wafers at a low cost per wafer. Over-optimizing for cycle time may lead to a solution with so many loads and unloads that the labour cost of running the fab becomes prohibitive. Or, as described in one of our previous blog posts, the solution may require moving reticles so frequently between tools that we increase the chance of a costly breakage.
To solve this, we apply a technique suggested by Andy Grove himself: pairing indicators. Combining two indicators, where one has an effect counter to the other, avoids the trap of optimizing one at the expense of the other. This is why we typically pair cycle time with the number of batches (to account for limited operator availability) or the number of reticle moves (to keep the risk of reticle damage low), thus mitigating the perils of Goodhart’s Law.
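As a purely illustrative sketch (invented numbers, not output from our scheduler), pairing indicators can be as simple as ranking candidate schedules by a weighted combination of the two metrics rather than by cycle time alone:

```python
# Toy example of paired indicators: three candidate schedules, each summarised
# by (average cycle time in hours, number of reticle moves). All values are
# illustrative, not real fab data.
candidates = {
    "aggressive":   (42.0, 38),
    "balanced":     (43.5, 22),
    "conservative": (46.0, 12),
}

def paired_score(cycle_time_h, reticle_moves, w_ct=1.0, w_moves=0.15):
    """Lower is better: cycle time is penalised alongside reticle-move risk."""
    return w_ct * cycle_time_h + w_moves * reticle_moves

best = min(candidates, key=lambda name: paired_score(*candidates[name]))
print(best)  # "balanced": 46.8 beats "aggressive" (47.7) and "conservative" (47.8)
```

With these example weights, the 'balanced' schedule wins even though 'aggressive' has the best raw cycle time, which is exactly the kind of trade-off a paired metric is meant to surface.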
Is Fear Holding Back The Chip Industry’s Future In The Cloud?
Is Fear Holding Back The Chip Industry’s Future In The Cloud?
The semiconductor industry is at the cutting edge of technology – so why is it still so nervous about the cloud? Persisting with an outmoded security model means missing out on significant gains in manufacturing.
Only the paranoid survive?
Perhaps more than any other sector in the world, the semiconductor industry is incredibly protective of its intellectual property (IP). Given the centrality of the silicon chip to modern life, that’s not surprising – companies are in a constant arms race to design and develop ever more sophisticated chips to meet the never-ending demand for innovation from their customers. A design breakthrough could be worth billions of dollars, and so the security of the relevant data is paramount.
And that’s not the only threat that keeps semi co security teams awake at night – there’s the security of the actual chips themselves to consider. An ongoing fear within both the industry and among government security agencies is that rogue code may be inserted into a chip either during development or the manufacturing process, making any system it becomes part of vulnerable to attack.
In fact, security of manufacturing – with many companies now sub-contracting to facilities in Asia – has been explicitly cited as a key reason for building more fabs in the US. In March 2022, President Joe Biden said that semiconductors are “so critical to our national security… that we’re going to create rules to allow us to pay a little more for them if they’re made in America.” In other words, security fears are so intense that the industry is willing to put prices up just for the supposed reassurance of having chips that aren’t produced overseas.
Although Biden’s worries over the threats to national security are not cloud related, they feed into a culture of fear that has become embedded into the semiconductor industry, hindering its advancement towards next-gen technologies.
The cloud revolution
The cloud has revolutionised the way that business works in the 21st century in a number of ways. For a start, it’s decentralised the IT function – applications that would previously have resided in on-premise server rooms are now accessed as a service via the cloud. This has significantly simplified the set-up and running of satellite offices and local branches because there’s no need to house and manage IT hardware at every location – all that’s needed is a connection to the internet.
But for hi-tech companies, the real advantage of the cloud is the ability to access vast amounts of computing power on demand. Whether it’s for data crunching a massive set of figures, running an AI model through its paces, or simply trying to crack a really complex problem, the muscle provided by cloud computing can dramatically speed the process up.
On the face of it, this would make the semiconductor industry an obvious candidate for the widespread adoption of cloud technology. But that hasn’t been the case. Limited adoption has taken place – though usually relating to ‘non-critical’ business functions – but compared to the companies they serve, semi cos have been conspicuously slow to embrace the potential of the cloud.
Outmoded assumptions and intransigence
For an industry on the cutting edge of technological innovation, the reasoning behind this state of affairs seems to be based on outdated assumptions, an indication perhaps of just how embedded the fear culture is. The security philosophy at many chip makers is still predicated on each separate facility being a castle under siege that needs to be protected from external attack. The idea of willingly opening up these defences to the cloud is anathema.
Another factor holding back the full embrace of the cloud at chip companies and fabs is the fear of change. Many IT and security managers simply don’t recognise the new world of serverless functionality that the cloud can bring, and are quite happy to stick with the existing model. And there are IT teams that do understand the possibilities of cloud, but are frightened by what they imagine will be a massive upheaval of their working lives and environment, from having to create new security policies to potentially making themselves redundant. Without the pressure to change that has come from the top in other industries, IT itself is blocking cloud adoption.
Yet as both design and manufacturing processes become more complex, this reluctance to change isn’t tenable in the long-term. As chips become more and more sophisticated, the need to access computing power at scale will increase – and that means companies either building bigger server farms and private data centres, or properly embracing the cloud paradigm.
The fact is that cloud security has improved immeasurably over the past decade. According to a recent report from Accenture, “Today’s cloud solutions offer enhanced security and automation technologies that aren’t available for on-premise systems, making cloud a better option for preventing IP theft.” And refusing to move with the times because it threatens to disrupt the status quo is an increasingly questionable excuse from an industry built on pushing the technological envelope.
Ultimately, semiconductor companies have only fear and intransigence holding them back from total cloud adoption.
The end of on-premise production scheduling?
If the industry is to continue to innovate and keep up with the demands of its customers, it needs to produce highly sophisticated, next generation chips at scale. The only way to do that is by adopting smart manufacturing practices and technologies - and that means fully embracing the cloud. Why? Because current on-premise scheduling systems are no longer fit for purpose: they cannot handle the new levels of manufacturing complexity that next gen chips demand.
In an enclosed, siloed environment, such as exists in most current fabs, a typical on-premise scheduling system will only have access to so much computing power. Traditionally, these constraints have resulted in a reliance on heuristics to predict and control production workflow, as this is the best that can be achieved with the resources available. However, although these systems often use real-time data, the decisions they make are still based on rules that are created based on human experience from the past. The dynamic nature of a fab means that these rules are never going to stay pertinent, thus resulting in suboptimal production decisions.
By connecting the fab to the cloud, these power constraints disappear – and with them the restrictions that previously forced fabs to use heuristics-based scheduling. With access to a new magnitude of compute, companies can deploy more sophisticated systems able to schedule production based on real-time information, and thus optimize the manufacturing process.
Thanks to the power of the cloud, this next generation of scheduling systems is able to use complex mathematical algorithms to search through the billions of possible WIP permutations and make the best scheduling decision with present-time accuracy. This AI-based approach to scheduling requires a huge amount of computing power to rapidly work out the fab’s optimal position, but the cloud makes it possible to perform these calculations at unparalleled speed.
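To give a rough sense of scale (an illustrative calculation, not a measurement of any particular fab), even deciding the order of a modest queue of lots on a single tool involves an astronomical number of possibilities:

```python
import math

# Number of possible orderings of a queue of n lots on one tool.
for n in (10, 15, 20):
    print(n, "lots ->", f"{math.factorial(n):.2e}", "possible sequences")
# 10 lots -> 3.63e+06, 15 lots -> 1.31e+12, 20 lots -> 2.43e+18
```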
In theory, it is possible to get good computational power on-premise. The system would most likely be chosen based on what is cost-effective at the time and the power needed to solve the problem the fab has on that day. However, new computational power becomes more available and cost-effective all the time, and fab complexity can easily change – introducing a larger product mix, for example, could exponentially increase the complexity of the scheduling problem. With cloud, you can improve your hardware – and hence your KPIs – almost immediately; something that is extremely unlikely on-premise given the practical implications for the IT department.
And what could be a greater incentive to become cloud-friendly than fab capacity increases of up to 10%, which is what we’ve seen using these next gen systems? That’s the type of figure which should help even the most security-conscious chip company to change their mind about cloud technology.

The Flex Factor with... Seb

The Flex Factor with... Seb
Introducing Seb Steele; self-proclaimed 'colossal nerd', John Boyd super fan and all-round product person.

Tell us what you do at Flexciton?
Hi, I'm Seb and I work at Flexciton. In my mind my role is to "try to be helpful", but we honestly couldn't think of a job title for that so we stuck with Product Manager.
My main responsibility is to work out what constraints stand between our customers and whatever their desired future states are, and to help our engineering and R&D teams to find the right solutions to those constraints.
In the last year or so, I've been most heavily involved in developing our Fab-Wide Scheduler. Our team actually got laughed at when we explained how quickly we were planning to develop & roll it out across the entire fab - but we made it!
What does a typical day look like for you at Flexciton?
My day is typically very varied, which is ideal for me. It might be focussed on client requirements, in which case I could be meeting with them, or I could be testing a new feature on the fab floor, or maybe analysing some of their data to help with a design, or shaping some new tickets.
Sometimes I'll be getting in-deep on the logic for a feature, in which case I could be pair programming with an engineer if the code is tightly coupled to a customer's business logic, or I could be trying to keep up with people much smarter than me discussing the various consequences of different objective value formulations.
Then there's all the more internally-focussed things: providing context or knowledge sharing with teams, or doing onboarding, grabbing coffee with colleagues, or chatting about brain-computer interfaces in book club!
In short, I don't really have a typical day - and that's the joy of it.
What do you enjoy most about your role?
The variety and the fact that I get to work with such incredibly smart and talented people every day. There's a tonne of goodwill and a real culture of continuous improvement; it's down to earth and pretty flat, so there's no nonsense to deal with. I'm incredibly grateful for all of those things!
Also, as a product person, I'm grateful to be able to help solve challenges that actually make a positive impact in the world (did someone say "chip shortage"?), rather than optimising some clickthrough rates for a social media platform, or something.
If you could give one piece of advice to someone, what would it be?
Always try to have skin in the game.
If you could summarise working at Flexciton in 3 words, what would they be?
Speed, agility, humility.
What’s one thing you’re learning now or learned recently?
I've started - really just started - dipping my toes into quantum computers & the progress on creating quantum algorithms for use in optimisation. From applications of Grover's algorithm to more recent heuristic approaches e.g. a recent paper where the global optimum of a solution landscape is found in polynomial time, where a classical computer would take exponential time. It sounds like it's still not clear whether this could actually be applied to a real-world problem or not, and I'd like to dig into it more to understand whether this was calculated using purely theoretical, perfect qubits, or if it would still work with real-world noisy qubits - it sounded like the overhead of requiring error correction is pretty huge. In any case, it's obviously a mind-blowing field and one that I'm excited to learn a little more about.
Tell us about your best memory at Flexciton?
I've loved the time I've spent with various teammates when on client visits. We get to know each other better and have some great conversations. But colossal nerd as I am, I'm also going to mention how much I love the book club I'm part of. We talk about science and history and strategy and psychology and kung fu, and everything in between. I was always a big fan of when John Boyd said: "When there are no new ideas or I am unable to think, I'll be dead because that's my life's sustenance."
Interested in working at Flexciton? Head over to our careers page to check what vacancies we currently have available and learn a little more about us whilst you're there.

Machine Says No – Is There A Way Around The Legacy Equipment Shortage?

Machine Says No – Is There A Way Around The Legacy Equipment Shortage?
Manufacturing equipment makers are under pressure to meet new fabs’ demands, with a serious knock-on effect for legacy chip makers. But can they increase capacity without increasing their number of tools?
Machines are the new bottleneck
The story of the semiconductor industry right now is dominated by shortages. There’s the chip shortage itself, as global supply chains continue to struggle to meet demand post-COVID. There’s the labour and talent shortage that we looked at in a previous blog. And now hitting the headlines is a manufacturing equipment shortage, with a lead time of up to 18 months on new lithography machines and other chip making tools.
Speaking to Reuters in April, ASML CEO Peter Wennink noted that, not only are the company’s customers having to wait over a year for its products, but that utilisation rates of ASML's machines are also at an all-time high, as semi cos try to keep up with demand. This is borne out by another industry executive, quoted recently by Nikkei Asia, who said, “Chipmakers like TSMC and UMC have told their senior executives to jump on a plane and visit all their key equipment suppliers in the US, Europe and Japan to avoid any of their rivals getting the machines ahead of them, and to personally make sure their equipment vendors are not lying to them about the lead times.”
Yet at least companies like TSMC know that manufacturers are working round the clock to fulfil their orders and provide the machines they need for the new, leading edge fabs they’re building. For many legacy chip makers – whose output is still vital to numerous industries, but is regarded as being on the ‘trailing edge’ of innovation – the problem is more acute, because most equipment manufacturers have actually stopped making the machines they need to increase capacity at their fabs.
The reason why is a simple case of economics. Chip companies around the world are making massive investments in new facilities focused on producing next generation semiconductors, which of course is why there is such a huge demand for machines to service this process. Not only has this created a new and thriving technology ecosystem, but manufacturers can also charge a premium for these machines. In contrast, supplying equipment to legacy chip makers is a lot less lucrative, and in order to meet demand from the new fabs, many manufacturers have simply stopped making the old machines.
The double bind of sourcing equipment
This presents legacy fabs with a major problem. While the focus in the industry is on increasing capacity to meet demand for next-gen chips, semiconductor shortages are occurring across all sectors, with markets such as traditional, non-electric automotive still reliant on older, legacy chipsets. As such, there is increased demand at legacy fabs as well, with companies dependent on machines that should have already been retired because there aren’t new replacements for them anymore. This also means that if a legacy fab wants to ramp up production, it has to source and recondition second-hand equipment, which is both increasingly difficult and far from ideal.
However, there’s another issue that affects companies with legacy fabs that want to boost capacity, and that’s one of increased costs. Because these facilities have been operating for a comparatively long period of time, they will almost certainly be fully depreciated, which is reflected in the price of the chips they manufacture - in other words, because their capex costs are now low, chips can be sold more cheaply than when the facility was still being paid for and capex had to be factored into the price. But this means that, even if a company manages to source additional machines, its capex will go up again, which will potentially make its chips more expensive and less competitive.
Legacy chip companies are thus caught between a rock and a hard place. On the one hand, they’re finding it increasingly difficult to source new tools to produce more chips. On the other hand, they risk becoming uncompetitive if, by increasing capacity with new machines, they are forced to increase their prices. And while using depreciated second-hand equipment is an option to get around this, finding it is another matter.
This is a huge headache for legacy fabs, but the impact in the wider world is even worse, with many industries continuing to suffer from chip shortage issues because their suppliers are unable to ramp up production.
Increase capacity, not machine count
There is another solution which bypasses the vicious circle described above, and that’s for companies to embrace smart manufacturing practices. Historically, the favoured way to significantly expand capacity was to increase the number of machines in the fab, and many companies are still wedded to this way of thinking. But advances in production scheduling software, in particular, are enabling forward-thinking companies to unlock capacity they didn’t know they had by optimising their WIP and the way their machines are used.
Most legacy fabs still use heuristics-based scheduling software to run their WIP, derived from SLIM methodology. However, this methodology is now over 20 years old, and was developed to work within constraints that no longer exist. Access to computational power - driven by cloud computing - has increased enormously, which means that much more sophisticated scheduling systems can now be used to make decisions about the WIP.
By using complex mathematical algorithms, this new generation of scheduling systems can make production decisions that are optimal for that exact point in time, enabling fabs to work at genuine capacity rather than the ‘false capacity’ that an over-reliance on rules-based software has created. The results that we have seen are truly game-changing, with capacity increases of up to 10% using the same number of machines and tools.
Rather than getting caught up in the equipment bottleneck, another solution exists today that can be quickly implemented with minimal upfront costs. By optimising production scheduling with AI-based precision, makers of legacy chips can increase capacity and meet new orders without having to expand their physical facilities. In a world of ongoing shortages, that’s a significant advantage.

Flexciton and Seagate Technology to Present at SEMI's Upcoming FutureFab Solutions Webinar

Flexciton and Seagate Technology to Present at SEMI's Upcoming FutureFab Solutions Webinar
What will the future of wafer fabrication look like? With innovative AI-driven technologies paving the way for significant improvements in efficiency, quality and on-time delivery whilst also driving down costs – chip manufacturers need to be paying close attention. In SEMI's upcoming FutureFab Solutions webinar, we explain why disruptive technologies, such as the hybrid optimization-driven scheduling that Flexciton provides, are pivotal in making progress towards Industry 4.0. We will discuss the technology behind Flexciton’s solution as well as how it performed when it was deployed live into the photolithography area of a Seagate Technology wafer fab.
This webinar will be taking place on June 21 from 17:00–18:00 CEST (16:00–17:00 BST). Can't make it? The full session will also be available on-demand for those who register.
Here’s an overview of what we’ll be covering:
- Flexciton’s optimization-driven solution and the scheduling strategy they used when challenged with increasing capacity at Seagate’s Springtown facility.
- Real case studies presented by both Seagate and Flexciton on the results gathered from deployment into the photolithography area.
- How Flexciton’s advanced scheduling managed to enable an increase in throughput and a reduction in the number of reticle moves.
This webinar is hosted by SEMI and will feature presentations from Flexciton, Seagate Technology and Lynceus AI, another disruptive AI-driven solution provider that will talk about their case study from deployment into a Tier 1 automotive supplying wafer fab.
You can register your space today by following this link: https://www.semi.org/eu/events/Future-Fab-Webinar

Position Vacant: Are Chip Companies Really Running Out Of People?

Position Vacant: Are Chip Companies Really Running Out Of People?
The semiconductor industry worries that it won’t have enough workers to run its new fabs. But there’s a labour problem right now at legacy facilities. Could disruptive technologies help to solve this problem?
A worldwide labour crisis
It’s not just supply chain issues that are afflicting the semiconductor industry. Another major problem is a shortage of labour. There’s a significant fab building programme underway, but already companies fear there won’t be enough manpower to run them properly.
This is a worldwide issue. A recent white paper by talent management company Eightfold shows that, to meet the capacity demands from new fabs, the US chip industry needs to increase its workforce by at least 50%. And according to Deloitte, China is also facing a labour crisis, with 400,000 more semiconductor employees required to meet its stated targets. Even Taiwan is feeling the pinch, with a huge gap opening up between rocketing demand and the ability to meet it due to a lack of skilled engineers.
Unsurprisingly, these countries are doing everything they can to boost the number of STEM graduates, with billions of dollars going into universities to support this goal. In addition, the SEMI Foundation – the non-profit arm of global industry association SEMI – has a number of programmes in place to develop a larger and more diverse workforce, with the ultimate aim to dramatically expand the pipeline of skilled workers ready to fill labour deficits.
However, while these initiatives are laudable and entirely necessary, they don’t address the labour issues that many legacy fabs are facing right now. These issues – such as experienced operators retiring and skilled engineers being poached by newer, bigger facilities – mean that it’s growing ever harder for legacy fabs to meet capacity pressures. With their workforces dwindling or under threat, what can be done to ensure that legacy fabs are still able to operate efficiently?
The problem of running fabs on gut feeling
One solution is to change the way in which fabs operate. Many legacy facilities are still reliant on workers on the floor to move WIP from one machine to the next, since upgrading legacy fabs to facilitate advanced material handling systems (AMHS) is often too costly or too complicated. Instead, operators tend to take instructions from a rules-based scheduling software. However, in some instances the operators will make their own scheduling decisions based on their ‘gut feeling’ if, for example, the system doesn’t take into account certain constraints and makes an implausible suggestion or the operator thinks they can make a better decision themselves.
Because of the lack of intelligence in rules-based scheduling systems, many chip companies have to rely on experienced, highly-skilled operators to oversee the manufacturing process. Therefore, when companies look to expand facilities or replace employees, they understandably think they need to find operators experienced enough that they require minimal training to adapt to their fab – and worry that new candidates don’t exist in sufficient quantity.
However, if decision-making around scheduling in the fab can be improved, with less dependence on operators’ own judgements, then it’s possible not only to onboard new staff much more quickly but also to optimize the total number of people needed to run the facility.
The simple rules-based scheduling software that many fabs rely on to guide operators’ decisions runs on predefined rules. These rules take into consideration only a limited number of possible cases, so the software makes suboptimal decisions when faced with unknown scenarios, which contributes to inconsistent results. Little wonder then that experienced operators often believe their own scheduling decisions are just as good, if not better. But by applying smart manufacturing practices – as SEMI is encouraging chip companies to do – it’s possible to automate and optimize production scheduling and easily add constraints that allow for far better decision making.
An AI-driven production scheduling system can analyse the state of the fab at any given moment and make scheduling decisions that are optimal for that precise point in time. The ramifications of these systems are profound. If the WIP flow is truly optimized, there is no need for operators to make their own dispatching decisions – they just need to follow the instructions coming from a smart scheduler, reducing the pressure on fabs to find highly skilled workers and enabling a corresponding reduction in the manpower required on the fab floor. Optimized scheduling also allows for significant improvements in production KPIs such as throughput and cycle time – helping a fab achieve overall performance gains.
In addition, optimized scheduling removes the need for skilled engineers to spend time analysing production data in an effort to continually tweak the rules and maintain the scheduling system. Instead, their time can more usefully be spent on other tasks. Essentially, optimization enables more to be done with less.
Why competition is sometimes counter-productive
Another area in which advanced technologies can have a positive impact on the labour issue - not just in legacy facilities but across the industry - is in the optimization of manufacturing. At every semiconductor company, a significant proportion of their engineering talent is focused on developing ways to improve the chip production process, and thus gain an advantage over their rivals. While this type of competition undoubtedly drives progress within the industry, it can also be counter-productive, with teams at each company tied up trying to solve the same problem.
It's only natural that a cutting edge industry structured around research and science should assume the best solutions to every problem can be developed internally. Yet this is not always the case, particularly in emerging fields such as advanced AI. Rather than tie up talent and resources trying to solve issues such as optimizing scheduling in fabs, companies should be prepared to investigate ‘disruptive’ technologies from beyond their own walls that may already have cracked this problem.
By adopting a ‘best-in-class technology’ approach to the manufacturing process – rather than perpetuate a culture of trying to develop proprietary solutions for everything – companies can instead refocus their engineering talent on core competencies. For instance, by embracing external innovation, chip companies can redeploy internal teams to overcome efficiency obstacles elsewhere in the fab that they previously did not have the capacity to work on.
It goes without saying that continuing to promote STEM-based graduates towards careers in microelectronics is vital if the semiconductor industry is to meet its ambitious targets over the next decade. But while disruptive technologies can’t make the current labour shortage problem go away on their own, they can make a serious contribution to lessening its impact and changing chip companies’ attitudes towards recruitment.

A Hot Topic: What Makes Scheduling the Diffusion Area so Challenging? [Tech Paper Review]

A Hot Topic: What Makes Scheduling the Diffusion Area so Challenging? [Tech Paper Review]
The diffusion area is particularly important to the smooth operation of a wafer fab. Not only does it receive raw wafers at the very beginning of the fabrication process but it also interacts with many other areas of the fab.
The challenge in scheduling the diffusion area lies in the particularities of its operation:
- Re-entrant flows: Furnaces are loaded with wafers that may have already been processed by other furnaces or wet benches.
- Batching machine: Several lots of 1-25 wafers each can be processed together in one batch if they run the same recipe.
- Time constraints: The diffusion area has timelinks that determine the maximum time a wafer has to move to a subsequent tool to avoid rework or scrappage.
- Dummy wafers: Used in furnaces, for example, to fill out a lot when a full lot is required or to protect the most exposed ends of a lot to ensure uniformity.
Balancing very long fixed processing times on batching tools with the features mentioned above makes it exceptionally tricky to get solid production KPIs on diffusion furnaces. Currently, fabs often resort to using simplistic "minimum batch size" dispatch rules that try to balance building full batches (to maximise the utilisation of the tool) with queue time and the risk of violating a timelink constraint.
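As an illustration of that tension, a rule of this kind might look something like the sketch below (a generic example with hypothetical thresholds and field names, not any particular fab's logic): run the furnace once enough lots have accumulated, or earlier if the oldest waiting lot is approaching its timelink limit.

```python
from dataclasses import dataclass

@dataclass
class WaitingLot:
    lot_id: str
    queue_minutes: float     # time already spent waiting for this furnace
    timelink_limit: float    # max minutes allowed since the previous step

def should_start_batch(queue, min_batch_size=4, safety_margin_minutes=60):
    """Simplistic 'minimum batch size' dispatch rule: run the furnace if the
    batch is big enough, or if waiting any longer risks a timelink violation."""
    if not queue:
        return False
    if len(queue) >= min_batch_size:
        return True                                   # batch is full enough
    tightest = min(lot.timelink_limit - lot.queue_minutes for lot in queue)
    return tightest <= safety_margin_minutes          # start early to protect the timelink

queue = [WaitingLot("L01", 90, 240), WaitingLot("L02", 30, 180)]
print(should_start_batch(queue))  # False: only 2 lots and 150 min of margin left
```

Rules like this are easy to run in real time, but the thresholds are static; they cannot see upcoming WIP or weigh one lot's priority against another, which is precisely where they start to break down.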
As a result of these characteristics, it's very common for diffusion areas to become a bottleneck if not managed correctly – negatively impacting the production KPIs of the rest of the fab.
This is what prompts the exploration of more novel scheduling methods, such as the one we'll be discussing in this article.
Case Study: Job Scheduling of Diffusion Furnaces
To explore the various ways to schedule diffusion areas, we review the paper “Job scheduling of diffusion furnaces in semiconductor fabrication facilities” by Wu et al. (2021) that describes a new scheduling system that was deployed live in a 200mm GlobalFoundries wafer fab.
Fab Characteristics and the Need for Change
The fab that the system was implemented in had the following attributes: approximately 300 products, 500 recipes, and 4,500 lots daily at the diffusion area, which is host to more than 90 furnaces.
The approach was designed to build schedules that maximise the weighted number of moves, with the weights based on the wafer’s product and the stage of production for which moves were being calculated.
Schedules were planned by 6 operators several times a day, taking up to 6 hours a day per operator on average. The quality of the schedules was also affected by the judgement and experience of the operators, which led to suboptimal decisions and lower efficiency.
The Approach to Scheduling
The heuristic model used in the system took about nine months to build, whilst the system implementation took a year and a half, with the majority of time spent on clarifying user requirements and collecting data.
The problem was addressed with two techniques, Dynamic Programming and a Genetic Algorithm:
Dynamic Programming consists of breaking down a large problem that contains many possible solutions into several sequential sub-problems that are easier to solve. Each of these subproblems is solved one at a time, such that each solution feeds into the next problem.
Each one of these sub-problems is then solved using a modified version of the Genetic Algorithm, a meta-heuristic procedure commonly used for large optimization problems.
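For readers unfamiliar with the technique, the sketch below shows the general shape of a Genetic Algorithm on a deliberately small toy problem: ordering lots so that the highest-weighted ones complete within a fixed furnace horizon. It is a generic illustration with invented data and parameters, not the authors' model:

```python
import random

LOTS = [(3, 40), (1, 25), (2, 30), (5, 60), (1, 20), (4, 45)]  # (weight, minutes), invented
HORIZON = 120                                                  # minutes available this shift

def score(order):
    """Total weight of lots that finish within the horizon when run in this order."""
    elapsed, total = 0, 0
    for idx in order:
        weight, minutes = LOTS[idx]
        elapsed += minutes
        if elapsed <= HORIZON:
            total += weight
    return total

def random_solution():
    order = list(range(len(LOTS)))
    random.shuffle(order)
    return order

def crossover(a, b):
    """Keep a random prefix of parent a, fill the remainder in parent b's order."""
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [lot for lot in b if lot not in head]

def mutate(order, rate=0.3):
    order = order[:]
    if random.random() < rate:                  # occasionally swap two positions
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def genetic_algorithm(pop_size=30, generations=100):
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 2]   # keep the fitter half
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=score)

best = genetic_algorithm()
print(best, score(best))  # an ordering whose first ~120 minutes carry the most weight
```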
Results
After going live in the fab, average daily weighted moves per tool improved by 4.1% in the first two months of trials compared to the two months before deployment. When tested offline against historical data, the approach increased the number of moves by 23.4% and the average batch size by 4.1%, while reducing tool idling by 62.8%. The authors attribute the smaller live improvement to the fab being short of staff, subject to varying demand and product mix over time, and to operators not yet fully adhering to the new schedules.
It is also expected that, by exploiting the full potential of the system, cycle time can be reduced by 1.8 days and an additional eleven thousand moves can be achieved, leading to an estimated saving of $2M per year.
Flexciton’s View
Much of the academic literature on scheduling furnaces suffers from critical shortcomings: models that omit important constraints, approaches tested only on small datasets, or methods that are prohibitively slow in live environments.
The reviewed approach stands out by addressing these issues and successfully implementing a complex scheduling system in a fab that brings measurable improvements to the number of moves, batch size and tool idleness. The model accounts for many relevant details such as preventive maintenance, lots with tool dedications at certain steps and different lot priorities.
Nevertheless, as specialists in scheduling, we have spotted weaknesses in the approach where we believe there are opportunities to make it even more robust and versatile, whilst delivering even better results:
1. Schedule updates every 40 minutes: unexpected events (e.g. machine downtime) can occur while a schedule is still being generated, so reacting to them can take longer than a full update cycle. Suppose a furnace goes offline 10 minutes after the generation of a new schedule has started. Two things will happen:
a. The schedule being built (unaware of the machine outage) may dispatch lots to the offline tool.
b. The outage will only be handled in the next schedule, up to 70 minutes after the machine went down.
2. Diffusion furnaces scheduled in isolation: optimizing diffusion furnaces in isolation may cause other machines and areas to be neglected – resulting in suboptimal decisions. For example, the clean tools that feed the furnaces (and other parts of the fab) are not taken into account, so there is no guarantee that the necessary WIP will arrive at the furnaces in time to realise the optimized schedule.
3. Assumption that wafer transportation time is negligible compared to processing time: despite the long processing times in furnaces, it would be interesting to include transportation times in the model to confirm whether they are indeed irrelevant for scheduling or whether they change the decisions in the final schedule.
4. Loading and unloading time not addressed in the approach: Unlike processing times that are fixed, the loading and unloading times can still vary with the number of wafers.
Flexciton’s Way
Flexciton’s solution has been built to schedule any area of a fab through multi-objective optimization, handling multiple fab KPIs and their trade-offs and sending an optimized schedule to the fab every 5 minutes. Below, we outline how we tackle the main challenges of furnace scheduling:
1. A fab-wide approach: our optimization engine schedules furnaces not in isolation but together with other machines across the fab. We take a holistic approach, looking ahead for bottlenecks across the entire factory and accounting for bottleneck tools when making scheduling decisions. For instance, a lower priority wafer may be dispatched before a high priority one if the former is going to a low-utilisation machine while the latter is going to a bottleneck in its next step.
2. Criticality of time constraints: whilst eliminating violations of timelinks, we account for the different criticalities they may present, be it because of the machines and recipes used or due to wafer priorities. This means that under a situation where one of two timelinks must be violated for reasons beyond our control, the less critical timelink will be violated.
3. Multi-objective optimization: We balance multiple KPIs simultaneously and handle their trade-offs through user-defined weights. For example, objectives such as “minimise timelink violations” and “minimise cycle time” can receive different weights depending on the desired behaviour in the fab. This directly impacts decisions such as “how long should a high priority wafer wait for a full batch?” (see the sketch after this list).
4. New schedules every 5 minutes: Our technology is based on a hybrid approach that combines Mixed Integer Linear Programming (MILP) with heuristic and decomposition techniques, enabling the delivery of high-quality schedules to the fab every 5 minutes.
5. Change management: Adherence by operators and managers to a new scheduling system and its decisions is among the main post-implementation challenges. Because of that, our deployments follow a rigorous plan that helps foster higher adoption of the technology. We also use detailed Gantt charts to aid the visualisation of schedules, giving operators a solid understanding of the decisions made and, in turn, driving higher adherence.
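As a small, hypothetical illustration of point 3 above (invented numbers and weights, not our production objective), changing the cycle-time weight alone can flip the decision of whether a partial batch of high priority lots should start now or wait for more WIP:

```python
BATCH_CAPACITY = 4
WAITING_PRIORITY_WEIGHTS = [3, 3]   # two high-priority lots already queued (invented)
EXTRA_WAIT_HOURS = 0.75             # two more lots forecast to arrive in 45 minutes

def cost(start_now, w_cycle_time, w_utilisation=1.0):
    """Lower is better: trade wasted furnace slots against priority-weighted waiting."""
    if start_now:
        wasted_slots = BATCH_CAPACITY - len(WAITING_PRIORITY_WEIGHTS)
        return w_utilisation * wasted_slots
    return w_cycle_time * sum(WAITING_PRIORITY_WEIGHTS) * EXTRA_WAIT_HOURS

for w_ct in (0.3, 2.0):
    decision = ("start partial batch now" if cost(True, w_ct) < cost(False, w_ct)
                else "wait for a full batch")
    print(f"cycle-time weight {w_ct}: {decision}")
# weight 0.3 -> wait for a full batch; weight 2.0 -> start partial batch now
```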
As explored in this article, scheduling diffusion furnaces can be an extremely complex task. This is true even from a computational standpoint, leading many semiconductor fabs to rely on the judgement and experience of their operators at the cost of obtaining suboptimal and inconsistent schedules that take hours to generate. On the other hand, the usage of some fast-scheduling systems may mean leaving some constraints behind, ignoring different KPIs or not observing the fab in its entirety.
At Flexciton, we combine the best of both worlds and bring fast optimal decisions while fostering technology adoption at all hierarchies of the fab.

EU Chips Act Proposes €43 Billion Of Support – But How Will It Be Spent?

EU Chips Act Proposes €43 Billion Of Support – But How Will It Be Spent?
The European Commission has set out an ambitious plan to double the EU’s share of the semiconductor market to 20% by 2030. But is increasing production capacity the way forward? In this blog, we look at where they should and shouldn’t be spending their money to achieve this aim.
A plan to boost competitiveness
The global chip shortage has kickstarted initiatives right at the top of Western politics, first in the US, and now in Europe. In February, the European Commission (EC) proposed a “Chips Act”, a series of measures designed to boost the EU’s competitiveness in the semiconductor supply chain and marketplace.
Currently, the EU’s share of the global semiconductor market is around 10%, which the EC proposes to double to 20% by 2030 by “mobilising more than €43bn of public and private investments.” Its ultimate aim is to “bring about a thriving semiconductor sector from research to production and a resilient supply chain.”
It’s a highly ambitious vision, and one that may be difficult to achieve given the position that Europe is starting from. Its share of global production capacity was 24% in 2000, but this share has been falling ever since, with European fabs focused on mature chip technologies rather than the more advanced chip architectures that will increasingly be in demand as industrial and consumer electronics devices become ever more sophisticated. In fact, according to ASML, the Dutch chip manufacturing equipment maker, Europe’s share of production could drop to as low as 4% if the industry doubles in size to $1 trillion by the end of the decade as predicted.
While it’s heartening to see the EC being prepared to give this level of support to the semiconductor industry in Europe, it’s important that the money is invested in the right areas if they’re to get anywhere near that 20% target. For instance, trying to compete in production capacity alone is a high-risk strategy – not only is building chip fabs from the ground-up an incredibly expensive undertaking, but the current supply of mature chips comes mainly from fully depreciated fabs in other parts of the world which have already absorbed the massive costs involved. This makes it very difficult for Europe to compete directly in the production market just by building new manufacturing facilities.
Strength in R&D and advanced tools
However, Europe retains a strong position in the semiconductor market in the areas of R&D and manufacturing equipment. In recognition of this, €11bn of the EU Chips Act investment has already been allotted to the Chips For Europe Initiative, which the EC says, “will be made available to strengthen existing research, development and innovation, (and) ensure the deployment of advanced semiconductor tools, pilot lines for prototyping, testing and experimentation of new devices for innovative real-life applications.” This is significant, as it acknowledges that the equipment and infrastructure underpinning the chip supply chain is just as important as square footage of production facilities.
Another important element of the EC’s proposal is its commitment to create a €2bn Chips Fund to “facilitate access to finance for start-ups to help them mature their innovations and attract investors.” This recognises that, while the industry in Europe already has a robust R&D tradition, it’s vital that innovation continues to be encouraged and supported, and ideally, that should include new approaches and applications from outside of the traditional semiconductor ecosystem. The industry shouldn’t assume that the best solutions will always come from within, and the Chips Fund will hopefully acknowledge this.
Given the expense of both setting up and running fabs, and the general issues that the semiconductor industry currently faces in terms of production and capacity, anything that can be done to make Europe’s existing chip facilities more efficient and cost-effective should be vigorously supported. Investing in smart technologies that help Europe's facilities optimize capacity and production cost should be treated as just as important as funding new fabs.
Early movers offer proof of concept
Many facilities around the world still rely on people to make production decisions, yet there are far too many variables involved in the manufacturing process for the human mind alone to always arrive at the best possible outcome. However, early movers in the European chip industry are already embracing new methods of scheduling in fabs, using AI-based tools to radically improve and automate real-time decision making. These companies are seeing efficiency improvements of up to 10% in their operations, which represents a significant expansion of their current capacity.
These early movers and the results they’re seeing represent an important proof of concept for new types of manufacturing technologies, and potentially position Europe as a global leader in this field. Rather than try to compete in terms of production itself, European-developed software and AI-based applications can increase the region’s market share and profile. The advanced scheduling technologies created here could ultimately accelerate fab operations across Asia and the US, helping them to realise new efficiencies and get the global supply chain working properly again.
Flexciton welcomes the EU Chips Act proposal as a much needed intervention at a time when the European semiconductor industry is at a crossroads. Europe can increase its production capacity competitively, and building new fabs remains a key stepping stone towards achieving this goal. However, getting the most out of facilities – both new and existing – by optimizing their operations with advanced manufacturing technologies should also be treated as a critical initiative. But the greater prize still is for Europe to fully commercialise these technologies and sell them to the rest of the world – this is the best path for the region to once again grow its global footprint and semiconductor market share.

Looking Into The Future: How Advanced Optimization Can Manage Timelink Constraints (Part 2)

Looking Into The Future: How Advanced Optimization Can Manage Timelink Constraints (Part 2)
In part one of this article, our case study illustrated how the use of advanced optimization can help balance the line by reducing queue times at bottleneck tools. By employing optimization, we were able to look into the future at the state of tools further down the line and prevent problematic congestion.
In our second case study, we consider a more complex problem where a trade-off must be made between the cycle time of high priority lots and violating certain timelinks. The problem from the first case study was adjusted by increasing the priority of two of the lots (outlined in red in Figure 4).
Case Study 2: Balancing Cycle Time and Timelinks
In some cases, a fab may wish to favour the reduction of a high priority lot’s cycle time and violate a lower priority lot’s timelink. For example, violating certain timelinks may result in minor rework, such as re-cleaning, rather than resulting in scrappage. This rework takes time, but it could be a sacrifice a fab is willing to make with different priority lots.
Scenario 1: Optimizing to Meet Cycle Time Objectives (Benchmark)
When purely minimising cycle time without timelinks, the priority lots are brought forward as early as possible. If timelinks were present, then the final two batches of Figure 4 would violate their timelinks.
Scenario 2: Optimizing with Critical Timelinks
In this scenario, timelinks are introduced and are all considered to be critical. We eliminate their violations, but this comes with an 8% increase in cycle time. The priority lots highlighted in light blue in Figure 4 are already within a timelink at the start of the schedule. In order to meet the critical time constraints, the lots get delayed.
Figure 4 shows the difference between Scenarios 1 and 2.

Scenario 3: Considering Timelink Criticality and Cycle Time of High Priority Wafers
In this scenario, cycle time is a vital KPI for high priority wafers. Additionally, we consider two different levels of timelink criticality: critical (should not be violated) and non-critical (can be violated if necessary). Only the timelinks feeding the final step at the furnace tool are considered critical. The critical timelink violations are eliminated first, before optimizing non-critical timelinks and cycle time simultaneously. We configure the optimizer objective so that an hour of non-critical violations is treated as equivalent to an hour of cycle time for the lower priority lots.
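Purely as an illustration of how such a weighted trade-off might be expressed, the sketch below scalarises the two terms with equal per-hour weights; the function and variable names are assumptions for the sketch, not Flexciton's actual formulation.

```python
# Illustrative sketch only: a scalarised objective in which an hour of
# non-critical timelink violation costs the same as an hour of cycle time
# for lower-priority lots. Critical timelinks are assumed to be handled
# as hard constraints (or eliminated in a prior pass), so they do not appear here.
W_CYCLE_TIME = 1.0       # cost per hour of lower-priority cycle time
W_NON_CRITICAL = 1.0     # cost per hour of non-critical timelink violation

def scalarised_objective(low_priority_cycle_time_h, non_critical_violation_h):
    """Hypothetical second-pass objective combining the two KPIs."""
    return (W_CYCLE_TIME * low_priority_cycle_time_h
            + W_NON_CRITICAL * non_critical_violation_h)
```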
Figure 5 shows the calculated schedule when these parameters are considered; all the critical timelinks between steps 5 and 6 are satisfied, however some non-critical timelinks are broken to reduce the cycle time of the high priority lots.

Figure 6 visualises the trade-offs between timelink violations and cycle time for the three scenarios discussed. It demonstrates a benefit of advanced optimization, where different schedules can be optimized depending on the goals of the fab – whether it's purely for cycle time or adhering to timelinks.

Conclusion
Handling timelinks is crucial to managing the cost of running a fab, but it presents real challenges. Our case studies demonstrate how Flexciton’s advanced scheduler can be used to plan several steps into the future when there are timelinks on bottleneck tools. Doing so helps spread WIP more evenly across tools and helps manage dynamic bottlenecks. By configuring priorities and the impact of missing timelinks, it can also flexibly trade off violations of differing importance against each other and against other KPIs such as cycle time. This can be achieved without relying on highly bespoke heuristics that are difficult to configure, require frequent maintenance and don’t necessarily guarantee the desired outcome.

It’s Time For The Semiconductor Industry To Embrace Smart Manufacturing

It’s Time For The Semiconductor Industry To Embrace Smart Manufacturing
With industries around the world still being hit by semiconductor shortages, chip companies need to embrace smart manufacturing practices to boost production. In this blog, we talk about what those practices are and how to accelerate their adoption.
Long-term capacity issues
The global chip shortage has highlighted that, despite the centrality of silicon to our technology-driven world, the semiconductor industry is far from robust. The Covid-19 pandemic may have created the perfect storm of increased demand versus restricted production. However, this is just the culmination of a long-term struggle to make semiconductor production truly agile which, alongside various geo-political factors, has now exposed the fragility of the semiconductor supply chain for all to see.
The obvious answer to the capacity issue is to build more fabs, but that’s easier said than done. Building a new chip manufacturing facility can take between three and five years, and cost anything from $4bn to $10bn. Undoubtedly more fabs do need to be built, but that isn’t exactly a quick-fix solution when the ongoing chip shortage remains a clear and present danger to industries requiring a constant, uninterrupted supply of semiconductors, from consumer electronics firms to auto manufacturers.
But that’s just to keep us where we are now – what about the R&D and production capacity needed to make the ever more sophisticated, next generation chips required to transition us to a world of ‘smart’, intelligent technologies designed to automate the environment around us and create more sustainable homes and cities? How will we get to this future if we can’t even solve the current bottlenecks that afflict the semiconductor supply chain?
Unsurprisingly, the industry itself has been attempting to address these issues, and look at ways to improve how it works. In the short-term at least, one solution to the capacity problem is to make existing facilities as efficient and productive as possible, and effectively increase capacity without having to build additional fabs.
The smart manufacturing vision
SEMI, the global industry association representing the electronics manufacturing and design supply chain, is attempting to drive this evolution via its Smart Manufacturing initiative. In a nutshell, smart manufacturing is defined as “the use of production and sensor data within manufacturing technologies to enable process adaptability”, ultimately speeding implementation and maximising efficiency. In other words, using the information generated by the factory itself to improve decision making about how best to manufacture products.
SEMI’s vision is to apply smart manufacturing practices across the entire electronic supply chain based on three essential pillars. These are:
- Sensing – establishing the state at any one time of work-in-progress (WIP) and all machines/tools within the fab, including process monitoring and location tracking
- Connecting – ensuring that all this information can be securely transferred to both internal databases and the cloud, for the purposes of creating a ‘digital twin’ of the fab
- Predicting – using the digital twin’s data, via artificial intelligence (AI) and/or machine learning applications, to optimize both WIP scheduling and available resources
SEMI is committed to promoting awareness and interest in smart manufacturing practices across the industry, but acknowledges that not every semiconductor company is in the same place. While far from perfect, the industry is reasonably good at capturing data – however, the problem is that this data is often poorly integrated and can effectively become trapped in siloed systems. Without establishing a ‘single version of the truth’ by combining all data sources, it is impossible to create a functioning digital twin with which to improve efficiency.
Acceleration through disruption
To accelerate the rollout of smart manufacturing, we believe that semiconductor companies have to embrace ‘disruptive’ technologies from outside of the industry’s traditional supply chain. It should by now be standard practice for chip companies to actively seek out and adopt best-in-class technology to improve efficiency and boost output. Instead, the decision is often made to develop an in-house alternative. Operating in one of the most technologically cutting-edge industries – with a multitude of engineers and scientists at their disposal – is perhaps one reason why chip manufacturers often look inwards to find solutions to their problems. Searching internally for these complex answers can mean years spent on research and millions spent on development, while progress already made by academics and disruptive tech companies gets overlooked. The outcome is often the same kind of siloed systems that have prevented the movement towards SEMI’s third pillar and stopped fabs from properly optimizing their processes and production.
However, in Flexciton’s experience, this reluctance is easing. Early movers are already beginning to enjoy the benefits that come from a fully integrated data environment where AI-based tools can be deployed to improve and automate decision making, helping fabs to work at genuine capacity rather than the ‘false capacity’ created by an over-reliance on manual processes and siloed data.
For companies to fully embrace smart manufacturing and truly move onto that third pillar, the next step on their journey is to transition away from conventional, heuristic scheduling software that mimics the human decision-making process based on historical data. Instead, manufacturers need to adopt advanced technology that makes optimized decisions in real-time with the ability to seamlessly adapt to unpredictable fab environments. By making decisions based on the actual state of the fab and its processes in the present moment, companies can realise even more capacity and see up to 10% extra efficiency in their operations.
The chip shortage may seem like a temporary problem that will soon be resolved – but that’s a dangerous assumption. Instead, it’s a wake-up call that the semiconductor supply chain needs to be re-engineered to become fit for purpose. Fab building may be part of the solution, but more important is the need for existing facilities to be fully optimized in order to realise their true capacity – and that needs the industry to be willing to innovate and pilot cutting edge technologies.

Looking Into The Future: How Advanced Optimization Can Manage Timelink Constraints (Part 1)

Looking Into The Future: How Advanced Optimization Can Manage Timelink Constraints (Part 1)
Timelinks (also known as time constraints, time lag constraints, time loops or close coupling constraints) are one of the most challenging aspects of a wafer fab to navigate and significantly increase the complexity of scheduling one. We take a dive into a case study that shows how optimization can be used to manage timelinks and alleviate pressure on bottleneck tools.
What are timelinks?
A timelink is a maximum amount of time that can elapse between two or more consecutive manufacturing process steps of a lot (a group of silicon wafers). Defining timelinks is necessary to mitigate the risk of oxidation and contamination of wafers while waiting between process steps. Violating these timelinks can lead to wafers being scrapped or undergoing a costly rework due to exposure to impurities – ramping up the production costs for a fab.
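To make this concrete, here is a minimal sketch of how a single timelink could be written as a constraint in a small MILP, using the open-source PuLP library; the two-step lot, the processing times and the two-hour limit are illustrative assumptions rather than data from the case study below.

```python
from pulp import LpProblem, LpMinimize, LpVariable

PROC_TIME = {"step_1": 1.0, "step_2": 3.0}  # hours (assumed values)
TIMELINK_MAX = 2.0                          # max wait allowed between the two steps

prob = LpProblem("timelink_sketch", LpMinimize)
start_1 = LpVariable("start_step_1", lowBound=0)
start_2 = LpVariable("start_step_2", lowBound=0)

# Precedence: step 2 cannot start before step 1 has finished.
prob += start_2 >= start_1 + PROC_TIME["step_1"]

# Timelink: the queue time between the end of step 1 and the start of step 2
# must not exceed the maximum allowed elapsed time.
prob += start_2 - (start_1 + PROC_TIME["step_1"]) <= TIMELINK_MAX

# Objective: minimise the lot's completion time (a stand-in for cycle time).
prob += start_2 + PROC_TIME["step_2"]

prob.solve()
print(start_1.value(), start_2.value())
```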
Managing timelink violations is therefore critical, as their impact on cost and chip quality must be balanced with delivery speed (measured by cycle time) and other cost considerations such as leaving tools idle. When it comes to managing timelinks, most fabs look towards heuristics. However, time constraints are too complex for a rules-based approach to navigate and require a more advanced scheduling solution. For example, deciding whether or not to dispatch a lot when the subsequent tool has a timelink requires the ability to look at the future state of tools further along the schedule. In contrast, if you use optimization to generate a schedule, looking into the future becomes far more straightforward.
Using optimization to eliminate timelink violations
To demonstrate how optimization can help tackle the scheduling of timelinks on bottleneck tools, we take a dive into a case study where scenarios are run using Flexciton’s advanced scheduler. The case study demonstrates how timelinks at a bottleneck tool can be managed by looking several steps into the future and delaying earlier steps to more evenly balance the line.
For illustrative purposes, we consider a small problem with only 33 lots. Each lot has up to 6 remaining steps to be scheduled across 52 tools. There are time constraints of varying duration between all consecutive steps.
The sequence of process steps is defined by the product routes: A, B, C, D and ‘Other’. Routes A, B, and C all end on tool Z which is a diffusion furnace tool that runs batches of five lots at a time. All the timelinks around this tool are relatively tight at around two hours.
Case Study: Managing Timelinks at a Bottleneck Tool
We begin by running a production schedule through Flexciton’s scheduler without considering timelinks, where we prioritise minimising cycle time alone (Fig. 1).

When timelinks are included in the schedule, we first eliminate their violations before minimising the cycle time (Fig. 2).

Figure 2 shows the light green lots (the last batch on tool Z) are shifted right on the W toolset to avoid violating timelinks on the next step, and therefore incurring the cost of scrapping wafers or performing rework. This creates a period of idle time on the W toolset and delays the other lots on W. If toolset W was considered in isolation, this solution would be suboptimal. However, when taking into account both toolsets, this provides a far better outcome.
With timelinks met, we also more evenly balance queue times across the line, as demonstrated in Figure 3, which shows total queue time at the final two toolsets used in Route A in order of process steps. Toolset Z still has the highest queue time due to it being the bottleneck, but the difference is substantially reduced. This balancing reduces the bottleneck effect on tool Z.

Conclusion
This case study illustrates how using advanced optimization can help balance the line and reduce the queue time at problematic bottleneck tools. By harnessing the power of optimization we are able to assess the state of tools further down the line – something that isn’t realistically possible with traditional heuristics. This ability to significantly reduce queueing time can go a long way to helping a fab manager to hit KPIs such as the reduction of cycle time whilst avoiding costs incurred from scrapping wafers. However, the problem of scheduling timelinks becomes even more complex when you begin to consider wafers of differing priorities.
Want to learn more? Take a dive into Part 2 of this article where we will be taking a look at how to solve this problem with the added complexity of priority wafers.

Could Reinforcement Learning Play a Part in the Future of Wafer Fab Scheduling? [Tech Paper Review]

Could Reinforcement Learning Play a Part in the Future of Wafer Fab Scheduling? [Tech Paper Review]
A discipline of Machine Learning called Reinforcement Learning has received much attention recently as a novel way to design system controllers or to solve optimization problems. Today, Jannik Post – one of our optimization engineers – takes a look at the background of the methodology, before reviewing two recent publications which apply Reinforcement Learning to scheduling problems.
The exciting prospect of Reinforcement Learning
Traditionally, semiconductor fabs have relied on real-time dispatching systems to provide their operators with dispatch decisions, reflecting the current state of the work in progress within seconds. These systems may follow rules based on heuristics or derive them from domain knowledge, which makes their design a lengthy process that requires deep knowledge of the fab processes. Maintenance of the contained logic also requires continuous attention from subject matter experts. In addition, these systems have very limited awareness of the global effects of decisions made at toolset level – making them prone to suboptimal decisions.
More advanced approaches to wafer fab scheduling rely on optimization models, which can take many factors into account, e.g., the effect of dispatching decisions on bottleneck tools further downstream. These approaches generally require slightly longer computation times to reach high-quality solutions.
Reinforcement Learning (RL) promises to avoid the downsides of both common dispatching systems and optimization approaches. So, how does it work? At the heart of RL there is an agent* which performs a task by taking decisions or controlling a system. The goal is to teach this agent to make close to optimal decisions by allowing it to explore different options and providing feedback on the quality of its decisions. Good decisions are rewarded whilst suboptimal decisions are punished. Of course, this training is not performed in a live environment, but rather by simulating thousands of scenarios that might occur, preparing the agent for any possible situation.
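In rough pseudocode-style Python, the training loop described above could look something like this; the environment and agent interfaces are hypothetical placeholders rather than any specific RL framework.

```python
# Minimal sketch of the reinforcement learning loop described above.
# `env` is assumed to be a fab simulator and `agent` a learning policy;
# both are hypothetical stand-ins, not real libraries.
def train(agent, env, episodes=10_000):
    for _ in range(episodes):
        state = env.reset()                     # start a fresh simulated scenario
        done = False
        while not done:
            action = agent.choose(state)        # e.g. which lot to dispatch next
            state, reward, done = env.step(action)
            agent.learn(state, action, reward)  # good decisions rewarded, poor ones penalised
```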
Self-driving cars are a well-known application of Reinforcement Learning, but it is easy to see how it could be productive in other environments, such as dispatching in a wafer fab. In theory, it could be used to dispatch wafers to tools in a way that optimizes certain KPIs – such as throughput.
Reinforcement Learning for Job Shop Scheduling** problems
Numerous recent publications have explored the use of RL for production control. However, the approaches are still in their early stages and applied to problems much less complex than semiconductor scheduling. Nevertheless, they demonstrate the potential to play a part in future solution strategies. Two approaches stood out to us when reviewing the literature:
“Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning” (2020)
This paper by Zhang et al. describes an approach to designing an agent that generalises its knowledge beyond what it has been trained on, enabling it to handle unseen problem instances. This is achieved by initially training on a large number of diverse scenarios. The model can flexibly handle instances of different sizes, e.g., with varying numbers of tools.
The agent is first trained on large numbers of scenarios and will thereby learn to exploit common patterns and perform well in instances not encountered before. After the training, the agent can be deployed to solve new instances. As training is conducted separately from solving an instance, the latter can be performed in less than a minute. The performance on benchmarking problems is compared against optimization models and simple dispatching heuristics. The Reinforcement Learning approach yields a makespan – the total duration of the schedule from start to finish – between 10-30% longer than when computed through optimization, but around 30% shorter than what simple heuristics achieve.
“A Reinforcement Learning Environment for Job Shop Scheduling” (2021)
This paper published by Tassel et al. sets out to design a reinforcement learning environment to optimize job shop scheduling (JSS) problems as an alternative to optimization models. The objective in this approach is to reduce periods in the schedule where tools are not in use, which is shown to correlate with a minimisation of makespan. The agent is designed as a dispatcher and is trained on a single scenario at a time by running a real-world simulation over and over. As the goal is to generate an optimized solution for the instance, the best solution achieved during training is saved. Training time and solution time are thus the same in this approach and are limited to 10 minutes to reflect production requirements. In this approach, there is no intention to generalise the behaviour of the agent to other instances.
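As a rough illustration of this "training is solving" pattern, the sketch below repeatedly re-simulates the same instance within a fixed time budget and keeps the best schedule found; all names are hypothetical and not taken from the paper.

```python
import time

def solve_instance(agent, env, time_budget_s=600):    # 10-minute limit, as in the paper
    best_makespan, best_schedule = float("inf"), None
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        schedule, makespan = run_episode(agent, env)   # one full simulated pass (hypothetical helper)
        agent.update(schedule, makespan)               # learn from this attempt
        if makespan < best_makespan:                   # keep the best solution seen so far
            best_makespan, best_schedule = makespan, schedule
    return best_schedule
```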
The authors report a makespan just 10-15% worse than the best known benchmarks for job shop scheduling, and just 6-7% longer than time-constrained optimization approaches.
Flexciton’s view
At Flexciton, we are excited about bringing cutting-edge optimized scheduling to wafer fabs worldwide. We are always exploring new ways that could help us improve the service we provide our customers so it’s exciting to see new emerging technologies which may help solve scheduling challenges in the semiconductor industry. The two publications reviewed in this article both present promising new approaches that yield measurable improvements over simple dispatching heuristics, but still fall short of optimization.
Both approaches can cope with disruption and stochasticity of the environment, such as machine downtimes. Another commonality is that both can readily be applied to problems of different sizes. In both cases the authors respected the requirement for frequent schedule updates (Tassel et al.) and quick decision support (Zhang et al.) and still achieved optimized solutions. It is conceivable that reinforcement learning has the capability to teach an agent to make smart decisions in the present that will improve the future fab state and reduce bottlenecks.
However, as the use of RL for JSS problems is still a novelty, it is not yet at the level of sophistication that the semiconductor industry would require. So far, the approaches can handle standard small problem scenarios but cannot handle flexible problems or batching decisions. Many constraints need to be obeyed in wafer fabs (e.g., timelinks and reticle availability) and it is not easily guaranteed that the agent will adhere to them. The objective set for the agent must be defined ahead of training, which means that any change made afterwards will require a repeat of training before new decisions can be obtained. This is less problematic for solving the instance proposed by Tassel et al., although their approach relies on a specifically modelled reward function which would not easily adapt to changing objectives.
Lastly, machine learning approaches can lead to situations where the decisions taken by the agent are hidden in a black box. When insight into the rationale behind decisions is limited, troubleshooting becomes difficult and trust in the solution is hard to establish.
Flexciton’s way
Using wafer fab scheduling to meet KPIs such as increased throughput and reduced cycle time is a challenge that requires a flexible, quick, and robust solution. We have developed advanced mathematical hybrid optimization technology that combines the capabilities of optimization models with the speed of simple dispatching systems. When needed, the objective parameters and constraints can be adjusted without the need to rewrite or redesign extensive parts of the solution. It can therefore easily be adapted to optimize bottleneck toolsets, a whole fab or even multiple fabs.
Flexciton’s scheduling software produces an optimized schedule every five minutes and easily integrates with existing dispatching systems. The intuitive interface enables users to investigate decisions in a wider context, which helps during troubleshooting and increases trust in the dispatching decisions.
References
[1] Zhang, Song, Cao, Zhang, Tan, Xu (2020). “Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning.”
[2] Tassel, Gebser, Schekotihin (2021). “A Reinforcement Learning Environment for Job-Shop Scheduling.”
[3] Five reasons why your wafer fab should be using hybrid optimization scheduling (Flexciton Blog)
Notes
* – We use the term ‘agent’ to describe a piece of software that will make decisions and/or take actions in its environment to achieve a given goal
** – The job shop is a common scheduling problem in which multiple jobs are processed on several machines. Each job consists of a sequence of tasks, which must be performed in a given order, and each task must be processed on a specific machine.

The Reticle Allocation Problem and How to Approach it [Tech Paper Review]

The Reticle Allocation Problem and How to Approach it [Tech Paper Review]
The photolithography process is considered the most critical step in semiconductor wafer fabrication, where geometric shapes and patterns are reproduced onto a silicon wafer, ultimately creating the integrated circuits.
What makes this process unique is the use of additional resources, called reticles. A reticle is a photomask through which ultraviolet light is projected to print a specific pattern onto the wafer. When not in use, reticles are stored in dedicated storage with a fixed capacity, called a stocker. Although the problem of allocating reticles in a stocker is not a “core” one in wafer production scheduling, if left unoptimized it can significantly impact overall production efficiency by causing bottlenecks.
This week, Daniel Cifuentes Daza, one of the Optimization Engineers here at Flexciton, explores this problem by reviewing a technical paper by Benzoni, A. et al. – “Allocating reticles in an automated stocker for semiconductor manufacturing facility” – and contrasting their approach with the one we use when scheduling at Flexciton.
The reticle and stocker bottleneck:
A fab working with a wide variety of products may need several thousand reticles at any given time to fulfil its production requirements [2]. Not only must reticles be stored in a stocker, as explained above, but they often also need to be transported in a container known as a pod to prevent contamination. Therefore, the capacity and availability of stockers and pods within the fab makes deciding where each reticle should be stored at each step of the production schedule extremely complex – frequently causing bottlenecks [3].
To manage this process, fabs need to decide the best way to allocate reticles into pods, and then try to find an optimal assignment between pods and tools. However, there is another optimization problem at hand that complicates the process further: the position of reticles within the limited-capacity compartments of the stocker itself.
The time to retrieve a reticle from storage can be drastically different depending on its own location inside the stocker, thus leading to large inconsistencies in the so-called processing time of the stocker. As a result, the stocker can become a bottleneck by not dispensing reticles fast enough to meet wafer demand. Therefore, the reticle allocation problem also consists of choosing which reticles are to be stored in the low-capacity fast-retrieval compartment (“retpod”) vs the high-capacity slow-retrieval compartment.
In order to explore what might be the best way to address this problem, we have reviewed a tech paper published by IEEE for the WSC conference in 2020. The authors of the paper address the allocation issue using the famous knapsack problem. The next section evaluates this approach – weighing up its pros and cons – before comparing the proposed solution with how we model photolithography tools here at Flexciton.
The reviewed paper approach:
In “Allocating reticles in an automated stocker for semiconductor manufacturing facility” by Benzoni, A. et al. (2020) [5], the stockers examined by the authors have two compartments: one where reticles are stored in pods (the retpod compartment) and another where pods are not used. The main objective is to allocate reticles into the retpod compartment, as this has faster retrieval times.
Additionally, the authors consider:
(1) the reticles, as the main resource of the problem,
(2) the steps where wafers will need to use the reticles in the near future, and
(3) the capacitated storage for reticles: the compartments of the stockers. Additionally, they see each reticle as having a profit value: the number of wafers processed in the batch. With this initial information, the problem can be modelled as the well-known knapsack problem.
Sheveleva, A. et al. (2021) [4] define the knapsack problem as follows:
“There are k items with weight n_k and value c_k, and a knapsack with a capacity N. The problem is to fill the knapsack with items with the maximum total value, respecting the knapsack’s capacity limit”
In this case, each item k is a reticle, its weight is always 1, and its value c_k refers to the profit value. The knapsack is the retpod compartment of the stocker.
The knapsack problem is an NP-hard combinatorial problem that has been studied for many years within computer science, operations research, and other sciences. Therefore, due to its complexity, the authors decided to use a well-known heuristic. Here, the approach is to rank each reticle according to a specified objective value ratio and then fill the knapsack with the first N elements fulfilling its capacity.
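A minimal sketch of this kind of greedy fill is shown below; each reticle has unit weight, so the capacity is simply the number of retpod slots, and the score function stands in for the paper's ranking strategies (the field names are illustrative assumptions, not taken from the paper).

```python
# Greedy knapsack-style fill: rank reticles by a score and keep the top N.
def allocate_to_retpod(reticles, capacity, score):
    ranked = sorted(reticles, key=score, reverse=True)
    return ranked[:capacity]

# Example: rank by how many wafers need the reticle and how soon it is needed.
chosen = allocate_to_retpod(
    reticles=[{"id": "R1", "wafers": 24, "hours_until_needed": 1},
              {"id": "R2", "wafers": 6,  "hours_until_needed": 8}],
    capacity=1,
    score=lambda r: r["wafers"] / (1 + r["hours_until_needed"]),
)
print(chosen)  # the single reticle with the highest score
```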
The authors benchmarked three different objective functions for this heuristic as follows:
- Reticles are ranked according to their priority and their probability of being used soon. Reticles required by wafers in the short term have higher priority than those needed for later steps.
- Similar to (1), but the ranking also considers the total number of wafers that each reticle is being used by.
- Ranks reticles by the priority and probability of the reticle to be used, but only considers the most immediate reticles being used by at least one wafer.
The three approaches reported an increase in the utilisation of the reticles from around 8% to 20%. This implementation also led to a reduction of around one hour in the stockers' processing times. Strategies 1 and 2 showed the lowest error percentage, which is expected as Strategy 3 does not consider future steps where reticles are used.
Flexciton’s view:
Using the knapsack approach to solve this problem certainly has some positive points. Firstly, using a heuristic method is easy to implement and does not require much computational time, which also makes it scalable to industrial-sized problems. Secondly, it is trivial to work out why certain allocation decisions are taken, making it highly understandable. Lastly, the approach is flexible because the user can modify the objective function of the heuristic depending on the fab’s goals.
However, the issue of reticle allocation is just a small piece of the complex wafer manufacturing process. Since this approach is modelled as a standalone problem, it creates feasible solutions for the reticle stocker alone, without considering the state of the rest of the fab. This will likely lead to inconsistencies, as the wafer schedule is intrinsically linked to the reticle allocation.
In addition, the approach described in the paper models a simplification of the photolithography area. Relevant information is missing, such as the availability of pods in the fab and their possible allocation to machines, transfer times, and load and unload times. Using this information would make the approach more robust.
Flexciton’s way:
At Flexciton, we consider that the best way to tackle the reticle allocation problem is to proactively generate not only feasible solutions, but optimized production schedules. In order to do this, we take into account all the scheduling constraints for reticles available within our optimization engine – using information such as:
- Availability and capacity of pods for reticles
- The scheduling of pod usage – taking into account the number of pod ports in the machine
- Storage of pods in stockers, taking stocker capacity into account
- Transition of pods between stocker and machine
- Load and unload times for reticles into pods and pods into machines
- Variable stocker unload time on account of where the reticles are located
- The reticles and pods already loaded into machines, and where they are in the fab at the time of scheduling
The benefit of considering a multitude of information like this in one optimization model is that we can provide a consistent and robust production schedule that takes into account all the constraints of reticles, pods and stockers. Additionally, our scheduler allows the user to configure their specific business objectives into the optimization process in order to meet their fab’s KPIs – the optimization is then run and an optimized schedule is returned in a matter of minutes. All of this means that our technology is able to return a reliable, scalable and flexible solution that is tailored to our clients’ needs – whilst optimizing the photolithography area in its entirety.
References:
[1] Y. T. Lin, C. C. Hsu and S. Tseng, "A Semiconductor Photolithography Overlay Analysis System Using Image Processing Approach," Ninth IEEE International Symposium on Multimedia Workshops (ISMW 2007), 2007, pp. 63-69, doi: 10.1109/ISM.Workshops.2007.16.
[2] S. L. M. de Diaz, J. W. Fowler, M. E. Pfund, G. T. Mackulak and M. Hickie, "Evaluating the impacts of reticle requirements in semiconductor wafer fabrication," in IEEE Transactions on Semiconductor Manufacturing, vol. 18, no. 4, pp. 622-632, Nov. 2005, doi: 10.1109/TSM.2005.858502.
[3] You-Jin Park and Ha-Ran Hwang, "A rule-based simulation approach to scheduling problem in semiconductor photolithography process," 2013 8th International Conference on Intelligent Systems: Theories and Applications (SITA), 2013, pp. 1-4, doi: 10.1109/SITA.2013.6560788.
[4] A. M. Sheveleva and S. A. Belyaev, "Development of the Software for Solving the Knapsack Problem by Solving the Traveling Salesman Problem," 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), 2021, pp. 652-656, doi: 10.1109/ElConRus51938.2021.9396448.
[5] A. Benzoni, C. Yugma, P. Bect and A. Planchais, "Allocating Reticles in an Automated Stocker for Semiconductor Manufacturing Facility," 2020 Winter Simulation Conference (WSC), 2020, pp. 1711-1717, doi: 10.1109/WSC48552.2020.9383933.

Flexciton Announces £15M Series A to Boost the Capability of the Global Semiconductor Industry

Flexciton Announces £15M Series A to Boost the Capability of the Global Semiconductor Industry
- Flexciton’s technology intelligently and efficiently deals with the increasing complexity of modern chip production, enabling next-generation semiconductors to be manufactured at scale.
- The global semiconductor market will grow to over $800 billion by 2028*, but the supply chain has recently been falling short due to the chip shortage - impacting over a hundred other industries, from automotive to health services.
- Flexciton has been proven to drive efficiency gains of 10 per cent by using advanced AI technology in real-time to optimise production - a 10 per cent efficiency gain per 1,000 machines can save a factory tens of millions of dollars annually.
- The Series A was led by Nadav Rosenberg of Saras Capital. Other investors include BACKED VC, Chalfen Ventures, and Entrepreneur First.
London, UK. 21 October 2021: Flexciton, an optimisation technology company that has developed a unique solution to radically improve the efficiency and productivity of fab manufacturing processes, has announced a £15M Series A investment, led by Nadav Rosenberg (Saras Capital). Flexciton’s solution analyses the real-time data that each fab generates and applies cutting edge technology to decide which actions need to be taken to optimise production.
Modern semiconductor fabs are the most complex manufacturing environments in the world, with the production process generating more scheduling options than there are atoms in the universe. A next-generation chip wafer might go through between 500 and 2,000 machine steps in a dynamic process. The end-to-end process of making a single chip could take from six months to a year to complete.
Jamie Potter, co-founder and CEO of Flexciton, said, “Automation is already used by many semiconductor manufacturers. However, even in advanced fabs where scheduling itself is automated, the software used to make these decisions tends to be based on predefined rules programmed by humans and determined by historical data. Yes, it can calculate different options far quicker than a human operator could, but the options are still 'best guesses' rather than optimal outcomes.”
Potter continued, “Flexciton is able to create an overview of how the entire fab is operating and rapidly sift through the trillions of options available, to come up with the optimal decision at that precise point in time. Using AI-powered mathematical algorithms and Mixed Integer Linear Programming, we can analyse real-time data – not historical – and make the best choices possible based on what is happening in a fab at a given moment. Our vision is to become the best in the world at running semiconductor fabs, before turning our attention to support other manufacturers. This investment plays a key part in achieving this, as we expand our team.”
The current chip shortage continues to make headlines worldwide, highlighting a semiconductor supply chain that is far from robust. Covid-19 may have impacted production, but it has shown how difficult it is for the industry to quickly adapt to surges in demand. 169 industries** were affected by the shortage, from automotive and consumer electronics to steel producers and concrete manufacturers. But agility is not the only issue - demand continues to grow and the industry needs more capacity. Globally, the industry is expected to be worth $803 billion by 2028.
Flexciton has been proven to achieve efficiency gains of 10 per cent. For a fab using 1,000 machines, this can mean savings of tens of millions of dollars per year. There are currently over 1,000,000 machines worldwide waiting to be optimised - a number that continues to grow due to the worldwide demand for semiconductors.
Nadav Rosenberg, an investor who has supported the team since their early days and led the current Series A round, added, “Semiconductors are the fundamental building blocks of modern life. More than any other development of the scientific age, they have completely revolutionised the way that we work, play, communicate and learn. Demand will keep growing, and the industry must consider how to better utilise its current assets, before expanding and using further resources to meet demand. Efficiency is key. Flexciton is the first company to successfully apply this level of machine intelligence to real-world manufacturing, with hugely impressive results.”
Mike Chalfen, Chalfen Ventures, commented, “In a globally strategic industry, with billions in capex and opex spent on the world’s most complex manufacturing operations, Flexciton measurably saves enormous costs, fast. Its insight can change the very capacity, agility and economics of the semiconductor industry. Flexciton has the team, technology and ambition to be an enduring and important company.”
Flexciton was founded in 2016 by Jamie Potter and Dennis Xenos who have worked in the optimised manufacturing field for more than ten years, with a focus on how advanced mathematics can solve manufacturing scheduling issues. They quickly realised that it was impossible for humans to understand the trillions of options that manufacturing processes generated and that it was only through the application of mathematics that the unpredictability of such complex systems could be understood.

The team has since grown to 41, made up of world-class experts combining the disciplines of mathematics, semiconductor scheduling, AI, optimisation, data science and software development. Flexciton team members have published over five hundred academic papers, many of which are focused on optimisation technology. These papers and 10 years of academic research have become the foundation upon which Flexciton technology has been built.
Since its inception, Flexciton has received over £21m in funding, with its recent Series A round raising £15m. The Series A investment will be used for hiring across the team.
References
* – Semiconductor Market Size https://www.fortunebusinessinsights.com/semiconductor-market-102365
** – Impact of Semiconductor Shortage https://finance.yahoo.com/news/these-industries-are-hit-hardest-by-the-global-chip-shortage-122854251.html

Five Reasons Why Your Wafer Fab Should Be Using Hybrid Optimization Scheduling

Five Reasons Why Your Wafer Fab Should Be Using Hybrid Optimization Scheduling
In order to maintain high margins, the cost of manufacturing semiconductors needs to continually fall. Previously, this was achieved by increasing the wafer size and shrinking the size of the chips whilst increasing the density of transistors. However, as these tactics approach their limits, the future of maximising a wafer fab’s capacity lies in optimizing operational processes.
The process of manufacturing a semiconductor chip is exceedingly complex, often requiring thousands of unique steps. Reducing cycle times and increasing throughput in such an intricate production process calls for very high production efficiency. Fabs usually approach this in one of two ways: the heuristic approach, which is fast but not optimal, and the mathematical approach, which is optimal but time-consuming. In order to attain optimal results that can keep up with changes on the factory floor, however, fabs need to switch to advanced production scheduling.
The hybrid approach
At Flexciton, we are pioneering a new model that combines the two different methods with a hybrid technique. With this ground-breaking new model, which we call advanced mathematical hybrid optimization technology, optimal results are delivered in a matter of minutes.
Here are 5 reasons your fab will benefit from switching to an advanced scheduling solution:
It's quick
The most complex of scheduling problems can be solved in less than 5 minutes, delivering near-optimal results. This makes the hybrid technique perfect for the dynamic fab environment, since schedule updates can keep pace with changes on the factory floor.
Accommodates all constraints
Hybrid optimization is able to realise fully accurate schedules by accommodating all constraints. This ensures a true representation of all activity in the fab, as well as its limitations.
Optimal schedules
By employing mixed-integer linear programming (MILP), Flexciton’s hybrid method guarantees high-quality solutions. Thanks to performance-enhancing decomposition, final solutions are very close to the global optimum.
Low maintenance
With MILP being the core of the solution, high performance can more easily be maintained with very little upkeep. With changes in objectives and recipes constantly taking place, not every consecutive shift in a fab is alike. Despite this, Flexciton’s solution can take into account all aims and constraints and consistently calculate an optimized schedule.
It's adaptable and easy to alter
When needed, constraints and parameters can be altered without the need to rewrite or redesign extensive amounts of new code. Not only this, but hybrid-optimization scheduling can also be rolled out into the entirety of a fab (global scheduling) as well as multiple fabs, even when they have differing production characteristics.
Hybrid optimization could be the answer to your fab’s scheduling problems. Download our white paper to find out more.

Why Optimized Scheduling is the Answer to Balancing Reticle Moves and Cycle Time

Why Optimized Scheduling is the Answer to Balancing Reticle Moves and Cycle Time
Introduction
Photolithography processes are central to producing computer chips and semiconductor devices. However, they are typically considered to be bottlenecks due to their reliance on a critical secondary asset: reticles. Reticles are limited in number and yet are a critical piece of the coat-expose-develop loop. What is more, reticles are delicate in nature; they are enclosed in purpose-built cases for transport in order to keep the risk of damage or distortion to a minimum.
As such, a fundamental tradeoff arises when operating photolithography toolsets: moving a reticle to the machine where it is needed most (to carry out high-priority tasks) clashes with the requirement to be conservative with its transport. In theory, there are several compromises that the operator can make to reduce reticle movements - waiting a bit longer so that more wafers arrive at a machine and a larger batch can be processed with a single move is one example. However, in practice, identifying these strategic actions and balancing the competing goals is highly complex. Flexciton can provide a solution to this issue by leveraging the power and flexibility of optimisation.
In this article, we show how Flexciton’s scheduling engine can balance minimising cycle times against minimising reticle moves. Through a series of example case studies, we delve into the scheduling trade-offs that arise in the day-to-day operation of a semiconductor fab and how Flexciton’s solution can assist in uncovering schedules that optimally balance competing goals.
Trade-off frontier with the Flexciton scheduler
The Flexciton scheduler can accommodate a range of user-defined objectives. The fab operator is typically interested in minimising KPIs such as cycle times but may also want to include other considerations, such as penalisation of labour-intensive decisions, e.g. the number of batches built. In this vein, we have recently introduced a new component: the number of reticle moves carried out. As shown in Figure 1, the user is able to define a penalty factor for reticle moves; the higher the value, the harder the engine will try to avoid moving reticles.
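As an illustration of how this penalty factor might enter the objective, the sketch below adds a per-move cost to the total priority-weighted cycle time; the attribute names are assumptions for the sketch, not the engine's actual formulation.

```python
# Illustrative cost of a candidate schedule: priority-weighted cycle time
# plus a user-defined penalty for every reticle move the schedule requires.
def schedule_cost(schedule, reticle_move_penalty):
    twct = sum(w.priority_weight * w.cycle_time_hours for w in schedule.wafers)
    return twct + reticle_move_penalty * schedule.num_reticle_moves
```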

In the example case study we have 6 machines and 48 wafers to be scheduled, with a total of 4 reticles.
Reticle 10001 is required for all lots schedulable on machines 01, 02 and 03. Deciding how to move this in-demand reticle across the 3 machines will impact both the number of moves and cycle times, particularly as there are some high-priority wafers waiting to be dispatched. Reticle 10001 is originally loaded in machine 01.
The other three reticles, 20001, 20002 and 20003, are initially loaded on machines 04, 05 and 06 and can be used by all three toolsets interchangeably. However, different machines are better suited to different reticles; for example, in our case study, the same process completes faster when reticle 20001 is used on machine 06. Note that all machines have a maximum batch size of 4 wafers.
Case Study 1: Focusing solely on cycle times
We start off by not penalising the number of reticle moves and solely minimising the total priority-weighted cycle times (TWCT) across all wafers. The optimal schedule produced by Flexciton’s engine is shown in Figure 2.
There are a total of 7 reticle moves, noted with red arrows in the figure below. 4 moves pertain to reticle 10001 which is moved from its initial location 01 to 03 and then 02 to carry out some high-priority wafers (as evidenced by the circled 1/2/3 next to the job names). The reticle is then moved out again to machine 01 and finally to machine 03 to carry out some lower priority jobs. Looking at machines 04, 05 and 06, the engine decides to immediately swap the reticles between the machines, to ensure that each lot is fed to its most suitable (in terms of processing times and capability) machine. The TWCT of all 48 wafer steps (where priority weights are user-defined and in this case range from 1 for highest-priority to 0.1 for lowest-priority wafers) is 16.79 hours.

Case Study 2: Moderate penalisation of reticle moves
In this second study, we have penalised reticle moves; the ratio for balancing TWCT and reticle moves has been set to 100:75 i.e. we will choose to avoid a reticle move only if its avoidance translates to an increase of TWCT of 0.75 hours or less. This is quite relaxed, but is aimed at avoiding reticle moves with little benefit, since the risk of potential damage is deemed higher. The optimal schedule obtained is shown in Figure 3.
In this study, there are a total of 5 reticle moves, noted with red arrows in the figure below. 3 moves pertain to reticle 10001 and its journey across machines 01, 02 and 03. The main difference to the previous scheduling pattern is that now we do not move the reticle back to machine 03 to carry out the very last batch of low priority wafers. Instead, we choose to wait for their arrival and process them right after the high-priority batch finishes a bit after 19:00. This way we avoid that final reticle move, while also incurring a delay in the high-priority wafers scheduled on machine 02, which now have to be moved from 19:15 (in study 1) to 19:30.
Looking at machines 04, 05 and 06, the engine decides to immediately swap only two of the three reticles this time, and leave reticle 20001 on its initial machine. Although that initial setup is not ideal in terms of processing times, it avoids the reticle move deemed to be of lower value. The TWCT of all 48 wafer steps is 18.73 hours.

Case Study 3: Only necessary reticle moves
In this study we look at the extreme case of using a very high penalty on reticle moves, hence allowing only absolutely necessary reticle moves. In particular, we have opted to use a TWCT to reticle move cost ratio of 1:10. In such cases, the operator is willing to accept sub-optimal job-machine allocation decisions, as well as delayed scheduling of high-priority wafers, for the purpose of keeping reticle movement to the absolute minimum. The optimal schedule obtained is shown in Figure 4.
In this study, the total number of reticle moves has come down to just 2 moves, noted with red arrows in the figure below. Both moves pertain to reticle 10001 and its journey across machines 01, 02 and 03 to ensure all wafers are completed. In the case of machines 04, 05 and 06, we are still able to carry out all tasks, albeit with longer processing times, as evidenced in the much later finishing times of the machines. The TWCT of all 48 wafer steps is 23.20 hours.

Exploring the trade-off frontier
Plotting the aforementioned runs (and also some more data points), we obtain Figure 5, which clearly illustrates the trade-off at play. As we increase the penalty factor from a low to a high value, the number of reticle moves drops and the cycle times increase. As expected, these relationships are monotonic but not smooth, since they depend on discrete events. Note also that both curves are bounded from above and below, corresponding to the absolute minimum number of reticle moves required (in this case 2) and the maximum number of reticle moves that is optimal (in this case 7).
By running a few scenarios with different parameters, the Flexciton engine opens up the possibility to explore the tradeoff frontier in detail, enabling operators to quantify how KPIs would change with a more relaxed or constrained attitude towards reticle movements.
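A simple sweep over penalty factors is enough to trace this frontier; in the sketch below, run_scheduler is a hypothetical call into a scheduling engine that returns the number of reticle moves and the TWCT in hours for a given penalty.

```python
# Sweep a set of illustrative penalty factors and record the resulting
# (reticle moves, TWCT) pairs to plot a trade-off frontier.
frontier = []
for penalty in [0, 0.25, 0.75, 2, 10]:
    moves, twct = run_scheduler(reticle_move_penalty=penalty)  # hypothetical engine call
    frontier.append((penalty, moves, twct))

for penalty, moves, twct in frontier:
    print(f"penalty={penalty}: {moves} reticle moves, TWCT={twct:.2f} h")
```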

Performance in real-world applications
In practice, enabling the Flexciton scheduling engine to consider reticle moves is a computationally challenging task, involving novel development in the model’s MILP formulations and heuristics. Nevertheless, this feature has been accommodated with no deterioration to performance and schedule quality. The Flexciton engine is capable of scheduling thousands of wafers across hundreds of machines in a few minutes while also controlling for the operator’s tolerance to reticle movement.
Indicatively, we showcase results obtained from scheduling a real-world fab plant. At the time of the study, the plant had a total of 3,478 wafers to be scheduled on 209 toolsets (with a total of 358 load ports). We computed two schedules: one with low and one with high penalisation of reticle moves. These scheduling runs were computed in roughly the same time, confirming that despite the added complexity, this feature can scale well and provide a schedule in a few minutes.
Focusing on the reticle machines, the results suggested that reticle movements could be reduced by around 26% while leading to an increase in total cycle times of around 2%. Note that these results are priority-weighted, with further analysis revealing that high-priority wafers are not substantially impacted; the optimiser is able to identify “low-value” reticle movements, relating to e.g. early processing of a low-priority wafer, and either avoid that movement by using an alternative recipe or defer it until the low-priority wafer can be combined with a high-priority wafer in a batch.
Conclusions
Reticle scheduling is a very important consideration in the scheduling of advanced semiconductor fabrication plants. This resource, already highly constrained, comes with a critical consideration in practice: frequent movements and manual handling of the delicate reticles increase the risk of damage or distortion during transport. As such, the number of times a reticle is moved to a new machine must be managed conservatively. This inevitably clashes with the operator’s fundamental objective of reducing cycle times.
Flexciton has extended the capabilities of our Mixed Integer Linear Programming (MILP) scheduling engine to natively accommodate the modelling and penalisation of reticle movements. This allows the user to define their own risk profile, so as to limit reticle movements solely to cases deemed of high value. In addition, the engine opens up the possibility to explore this tradeoff frontier in detail, enabling operators to quantify how their plant’s performance may change with a more relaxed or constrained attitude towards reticle movements.
Authors
Ioannis Konstantelos is a Principal Optimisation Engineer at Flexciton. He holds a PhD from Imperial College London and has published over 50 conference and journal papers on optimization and artificial intelligence methods. Ioannis joined Flexciton over 3 years ago and is involved in the development of Flexciton’s scheduling engine.
Charles Thomas is a Test Analyst with a background in Mechanical Engineering and a Masters degree from the University of Southampton. He has been at Flexciton for 2 years and leads the benchmarking and testing of the application with a particular focus on scheduling engine performance.

Webinar: Flexciton and Seagate Case Study

Webinar: Flexciton and Seagate Case Study
Wafer fabrication is not only a highly complex manufacturing process but also capital intensive. With the cost of a single new 300mm fab now exceeding $1bn and with some tools costing in excess of $40m, fixed costs are significant and demand high fab utilization.
In an environment where capacity and production efficiency are key to continued cost reductions, high-quality scheduling delivers a huge benefit to factory efficiency. It enables higher utilization of expensive toolsets (e.g., photolithography and etch), reduces cycle times and ensures on-time delivery.
We were thrilled to present with Seagate Technology once more. Jamie Potter, CEO & Co-founder of Flexciton and Tina O'Donnell, Systems Engineering Manager from Seagate discussed advanced scheduling technology and its impact on wafer fab production performance.
The webinar was hosted by SEMI Europe and moderated by TechWorks NMI.
June 15th, 5pm CEST / 4pm BST.
Watch this case study webinar to learn how Seagate is successfully using smart scheduling technology to optimize fab efficiency.
This webinar has now been removed from our website. To get exclusive access to the webinar, please get in touch by clicking here.
A Review of the Two-phase Approach to Photolithography Production Scheduling
A Review of the Two-phase Approach to Photolithography Production Scheduling
Reviewing technology literature is a common practice when developing a new approach to solving an existing problem. James Adamson, a Senior Optimization Engineer at Flexciton, has recently reviewed several technical papers on photolithography scheduling, one of which he found particularly interesting.
The reticle challenge
Photolithography is the cutting edge of semiconductor manufacturing and, as a result, requires the most complex and expensive equipment to run and maintain. Reticles (also known as photomasks) must be prepared and loaded into litho tools before the wafers can be processed. These fragile masks are extremely expensive (in the region of $100k or more [Weber 2006]), making them a scarce resource.
Wafers require specific reticles for their individual process steps. Therefore, fab operators need to ensure that the correct reticles are at the correct tools on time in order to keep production KPIs such as wafer throughput and cycle time optimal. While reticles can be moved between tools, this takes time and, given how fragile these masks are, movement needs to be minimised as much as possible. If wafer scheduling wasn’t already difficult enough, we now have to wrestle with reticle scheduling too.
The approach review
The paper “A Practical Two-Phase Approach to Scheduling of Photolithography Production” by Andy Ham and Myesonig Cho was published in 2015. The authors present an approach based on the observation that most semiconductor manufacturing companies are still using real-time dispatching (RTD) systems to make last-second dispatching decisions in the fab. RTD has the advantage of being familiar and relatively understandable whilst also being computationally fast. In contrast, some optimization-based approaches, particularly for photolithography, can struggle to scale up to industrial-scale problems. The authors’ approach exploits the idea that an exact schedule for the next several hours is not strictly needed and that RTD will ultimately be responsible for the final dispatch decision.
They propose a two-stage approach that integrates a simple heuristic (designed to mimic a fab’s RTD system) with mixed-integer programming (MIP):
- Stage 1 - Upper-stage (Assignment): MIP is used to guide the high-level lot-to-machine and reticle-to-machine assignments. Explicit timing of lots and reticles is ignored.
- Stage 2 - Lower-stage (Sequencing & Timing): The finer-detailed sequencing (and therefore timing) of lots according to those assignments is provided by the heuristic.
The two stages are then tied together in an iterative fashion. A set number of lots are scheduled in each iteration of the two stages. The algorithm then repeats from Stage 1 with new additional lots and keeps iterating until all lots are scheduled.
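A rough sketch of that rolling loop is shown below. The functions `solve_assignment_mip` and `sequence_with_rtd` are hypothetical placeholders for the paper's Stage 1 MIP and a fab's RTD heuristic, and the window size is an arbitrary illustrative value.

```python
# Sketch of the rolling two-stage loop: repeatedly assign a window of lots
# with a MIP, then hand the fixed assignments to an RTD-style heuristic for
# sequencing. Function names and window size are illustrative placeholders.

def solve_assignment_mip(lots, machines, reticles):
    """Stage 1 placeholder: assigns each lot and its reticle to a machine
    without deciding timing (a MIP in the paper)."""
    raise NotImplementedError

def sequence_with_rtd(assignments):
    """Stage 2 placeholder: the fab's RTD rules sequence lots within the
    fixed assignments."""
    raise NotImplementedError

def two_phase_schedule(all_lots, machines, reticles, window_size=50):
    """Rolling loop: schedule a window of lots per iteration until none remain."""
    schedule, remaining = [], list(all_lots)
    while remaining:
        window, remaining = remaining[:window_size], remaining[window_size:]
        assignments = solve_assignment_mip(window, machines, reticles)  # Stage 1
        schedule.extend(sequence_with_rtd(assignments))                 # Stage 2
    return schedule
```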
Stage 1
A MIP approach is proposed to solve the assignment problem in the first stage. Two primary decision variables are used:
- A variable that allocates a reticle to a machine
- A variable that allocates a reticle to a lot on a machine
The model does not account for the explicit timing of lots on their allocated machines. Therefore, it cannot prescribe a sequence of lots or reticles on the machines; it only indicates that they will be scheduled on a given machine at some point. The model requires that all lots are assigned to a machine and a reticle. Finally, the model measures the completion time of each machine as a function of the processing time of all lots allocated to it, rather than explicitly deciding the order of each lot on the machine.
Multiple objectives are used to achieve the trade-off between reticle movements, cycle times, and machine load balancing (a simplified code sketch of this assignment model follows the list below):
- Minimise completion time, i.e. the end of the last lot’s processing
- Minimise the difference between the earliest and latest machine completion times. This aims to achieve load balancing across machines
- Minimise total machine underachievement, where underachievement is the gap between the total processing time on a machine and the average total processing time over all machines.
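To make the structure of Stage 1 concrete, here is a heavily simplified sketch in Python using the PuLP library. It keeps only lot-to-machine assignment and a makespan-style completion-time objective, and omits the reticle variables, load-balancing terms and eligibility constraints described in the paper; all data values are invented for illustration.

```python
# Simplified sketch of a Stage-1 style assignment MIP in PuLP (pip install pulp).
# It decides lot-to-machine assignment only and bounds the makespan by each
# machine's total assigned processing time; reticle variables and the paper's
# load-balancing objectives are omitted. Data is illustrative.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

lots = {"L1": 30, "L2": 45, "L3": 25, "L4": 40}   # lot -> processing time (min)
machines = ["M1", "M2"]

prob = LpProblem("stage1_assignment", LpMinimize)
x = LpVariable.dicts("assign", [(l, m) for l in lots for m in machines], cat=LpBinary)
cmax = LpVariable("max_completion_time", lowBound=0)

for l in lots:                                     # every lot goes to exactly one machine
    prob += lpSum(x[(l, m)] for m in machines) == 1
for m in machines:                                 # machine load bounds the makespan
    prob += lpSum(lots[l] * x[(l, m)] for l in lots) <= cmax

prob += cmax                                       # minimise the latest completion time
prob.solve()
print({(l, m): int(x[(l, m)].value()) for l in lots for m in machines}, cmax.value())
```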
Stage 2
Sequencing decisions are handled by an RTD system, where the manufacturer’s custom business rules can be applied; however, the lot-reticle-machine assignment decisions are fixed. This reduces the scope of the decision-making that RTD must make. The authors highlight the benefit of explainability with this two-stage approach. When questioning assignment decisions, the assignment model should be explored, whereas when questioning sequencing decisions, RTD should be investigated.
This practical approach was shown to solve reasonable problem sizes (500 lots, up to 800 reticles, 30 machines) in 2-4 minutes. The authors reduced cycle times by 3% on average and, particularly interestingly, reticle movements by up to 40% when compared to standalone RTD.
Flexciton’s View
Although the model does have some shortcomings, as outlined above, the practicality of the approach makes it a strong candidate for production-size scheduling, as very few studies have been able to effectively handle industrial problem sizes for photolithography tools. The reduction in reticle movements achieved, in particular, cannot be ignored.
However, with the notion of time largely ignored in the assignment model, the approach outlined is certainly simplistic.
There are a number of factors not considered in the model, including:
- The effect of batching decisions on processing times of the lots. For example, if 5 lots could be assigned to machine A as a single batch, and 5 different lots were assigned to machine B in five unique batches, then the assignment model would consider these cases as having equal total processing times (which is unlikely to be true).
- The sequencing (and therefore timing) of reticles on machines.
- The batching of reticles into “pods”: containers that allow reticles to be transported easily and safely around the fab and within lithography machines, but which mean individual reticles cannot always be separated
- How high priority wafers are handled in relation to lot-reticle-machine assignments
- Accounting for periods of downtime or machine unavailability, which would conflict with the aim of achieving machine load balancing.
Flexciton’s Way
At Flexciton, we schedule a variety of photolithography tools as part of our optimization engine. Our hybrid optimization-based solution strategy is capable of handling all the intricacies of a wafer fab simultaneously, including the issues described in the previous section:
- modelling photolithography tools that require pods,
- priority lots,
- batching tools with variable processing times based on the batch, and
- machine unavailability.
Not only do we model these complexities, but we also succeed in achieving high-quality schedules in little computation time.
The user is given the option of controlling various relative priorities of the lots, in addition to deciding the relative importance of KPIs such as reticle movements vs lots’ cycle time. The flexibility of an optimization approach that considers all of the advanced photolithography constraints combined with a self-tuning model that has limited tuning parameters is what makes our engine highly attractive as a semiconductor scheduler.
1) Weber, C.M.; Berglund, C.N.; Gabella, P. (2006). “Mask Cost and Profitability in Photomask Manufacturing: An Empirical Analysis”. IEEE Transactions on Semiconductor Manufacturing, 19(4). doi:10.1109/TSM.2006.883577

Scheduling Time Constraints in Wafer Fabrication

Scheduling Time Constraints in Wafer Fabrication
Time constraints (also known as timelinks) between consecutive process steps are designed to eliminate queueing time at subsequent steps. In a highly complex wafer fabrication environment, even the most advanced fabs struggle with scheduling time constraints. While our engineering team works on applying Flexciton technology to solve the timelinks problem, Begun Efeoglu Sanli, one of our Optimization Engineers, reviews a recently published technical paper on this particular subject.
A fab manager’s perspective: time constraints and fab performance
A silicon wafer undergoes a fabrication process by passing through multiple production steps, where each step is performed by different, highly sophisticated tools. Optimizing the transition and waiting time of the lots has a huge impact not only on a fab’s production performance but also on its profitability. As an example, by introducing time constraints at the wet etch and furnace process steps, we reduce the likelihood of oxidation and contamination. Failing to do so risks contact failures and low, unstable yields, the consequence of which is either rework or scrapped wafers. Such problems are difficult to discover during wafer processing, and running special monitoring lots would require considerable effort.
Yield optimization has long been considered to be one of the key goals, yet difficult to achieve in semiconductor wafer fab operations. As the semiconductor manufacturing industry becomes more competitive, effective yield management is a determining factor to deal with increasing cost pressures. Time links between consecutive process steps are one of the most difficult constraints to schedule, with a significant impact on yield management.
Some factories avoid the problem by dedicating tools to each process group that requires a previous cleaning or etch step. This strategy’s obvious disadvantage is the higher demand for wet tools, which leads to higher investment, more cleanroom space, and ultimately lower capital efficiency. The tradeoff between increasing throughput and a higher likelihood of violating lots’ time constraints is an everyday battle for fab managers trying to meet yield targets.
Technology perspective: scheduling time constraints
An example of time constraints for a single lot is illustrated in Figure 1 below. It shows a time link system between four consecutive process steps. In this example, we can see that the lot has time links constraining Step 2 to Step 4 as well as from Step 3 to Step 4, with overlapping time lag phases (also known as a nested time link constraint). This means that after completing process Step 3, the lot begins a new time lag phase (Time Link 3) whilst already transitioning through an existing time lag (Time Link 2) started upon completion of Step 2. As you might expect, the need to simultaneously look ahead and consider future decisions whilst also being constrained by past decisions is not trivial to model well in a heuristic or as real-time dispatch rules.
Time constraints are already difficult to navigate, but nesting them adds yet another layer of complexity for heuristics to wrestle with. In the example below, if the final step cannot be brought forward, scheduling Step 3 too close to Step 2 may make it impossible to meet Time Link 3. This is because the time between Steps 3 and 4 becomes greater than the maximum allowed. This would not be a problem if the time constraints were not nested and we only had to schedule according to Time Link 2.
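To illustrate the mechanics, here is a minimal Python sketch that checks whether a lot's proposed step completion times respect a set of (possibly nested) time links; the data structures and numbers are illustrative only, not how any particular scheduler represents them.

```python
# Sketch: verify that proposed step completion times for a single lot respect
# its (possibly nested) time links. Each link is (from_step, to_step,
# max_gap_minutes); completion times are in minutes. Illustrative only.

def violates_time_links(step_completion, time_links):
    """Return the list of violated links for one lot."""
    violated = []
    for from_step, to_step, max_gap in time_links:
        gap = step_completion[to_step] - step_completion[from_step]
        if gap > max_gap:
            violated.append((from_step, to_step, gap, max_gap))
    return violated

# Nested example from the article: Step 2 -> Step 4 and Step 3 -> Step 4 overlap.
completion = {2: 0, 3: 50, 4: 130}
links = [(2, 4, 180), (3, 4, 60)]        # link 3 -> 4 is violated (gap 80 > 60)
print(violates_time_links(completion, links))
```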

The technical paper review
Surprisingly, although time constraints are an important topic, they have not been widely discussed in the technical literature so far. That said, an interesting paper on this topic, “Scheduling Jobs with Time Constraints between Consecutive Process Steps in Semiconductor Manufacturing”, was presented at the Winter Simulation Conference in December 2012 by A. Klemmt (Infineon Technologies) and L. Mönch (University of Hagen).
The authors propose a mixed-integer programming (MIP) model formulation and share some preliminary experimentation. Unfortunately, even state-of-the-art MIP solvers can only solve problem instances of up to 15 jobs and 15 machines to optimality in a reasonable amount of time.
Consequently, the authors develop two alternative approaches:
- A heuristic that focuses on creating a feasible schedule where all time constraints are respected
- A mathematical approach based on their MIP model that extends the heuristic in (1). The idea is to break the overall problem down into many smaller optimization problems that are easily solvable individually. These problems aim to ensure all lots are delivered on time and no time constraints are violated.
This novel decomposition approach allows sizeable problem instances to be solved, including more than 100 machines, more than 20 steps, nested time constraints, and a large number of jobs.
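One plausible reading of that decomposition, sketched below in Python, is a rolling loop that sorts lots by due date, cuts them into small subproblems and solves each with a MIP while keeping earlier decisions fixed. The function names and structure are illustrative rather than the authors' exact algorithm.

```python
# Rough sketch of the decomposition idea (not the authors' exact algorithm):
# sort lots by due date, cut them into small subproblems, and solve each
# subproblem with a MIP while keeping earlier decisions fixed.

def decomposed_schedule(lots, subproblem_size, solve_subproblem_mip):
    """`solve_subproblem_mip(batch, fixed)` is a placeholder for a small MIP solve."""
    ordered = sorted(lots, key=lambda lot: lot["due_date"])
    fixed_decisions = []
    for i in range(0, len(ordered), subproblem_size):
        batch = ordered[i:i + subproblem_size]
        fixed_decisions += solve_subproblem_mip(batch, fixed_decisions)
    return fixed_decisions
```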
Flexciton’s View
The paper referenced above highlights that both approaches can provide good feasible schedules quickly and as expected, the MIP-based heuristic outperforms the simple heuristic. Nevertheless, as with most heuristic approaches, there are some important tuning parameters that might affect the schedule quality. In the paper, the lots are sorted with respect to their due dates to build subproblems. This approach could perhaps be reevaluated if cycle time is the most important KPI for a fab or if some jobs are of higher priority. Similarly, if time constraint violations are allowed to some degree, then one could relax the importance of this in the heuristic.
The most important consequence, which is also mentioned in the paper, is that the cycle time of time-constrained processes correlates strongly with the utilization of the upstream (start of the time link) processes. Keeping upstream utilization under control eliminates waiting time in front of the upstream tool and enables higher utilization overall. However, if downstream tools are bottlenecks, then WIP may have to be withheld so that time constraints are not violated by lots stagnating in front of busy tools.
Another tradeoff to be considered is how low-priority time-constrained steps are scheduled among high-priority non-time-constrained steps. For example, is it worth risking a time constraint violation for the sake of rushing an urgent lot through the toolset? This needs to be quantified and considered by the fab manager. Therefore, all these tradeoffs should be taken into account in order to provide the best schedule.
The Flexciton Way
At Flexciton, we include time constraints as part of our optimization engine, which treats eliminating all violations of time links as the highest priority. Only as a last resort are time constraints relaxed, if it is not possible to provide an otherwise “feasible” schedule. This could occur if the time windows provided are unrealistically short considering all operational constraints, especially tool capacities.
As mentioned in the previous publications, the Flexciton optimization engine is a multi-objective solution that can balance various KPIs according to user-chosen weights, one of which controls the degree to which violations of time constraints are penalized. The main advantage of this approach is that, with all the other competing objectives, our solution can balance throughput, cycle time and priority-weighted time constraint violations simultaneously.

Understanding the Trade-offs in Preventative Maintenance for an Optimized Fab Performance

Understanding the Trade-offs in Preventative Maintenance for an Optimized Fab Performance
In the constant pursuit of improved efficiency in semiconductor wafer fabs, the reliability of equipment is essential. The tools used in a fabrication process are extremely sophisticated, requiring an extensive preventive maintenance regime to ensure reliable production. A big challenge faced by fab managers is putting in place an optimal preventative maintenance schedule whilst still meeting their production KPIs, simply because such scheduling is extremely complex, involves many trade-offs, and is time-consuming. What’s more, the exact impact on productive output is difficult to quantify.
Typically, to handle this complex problem, a fab may develop statistical models that try to predict unexpected tool downs. Such preventive maintenance – based on a pre-determined frequency – can help to minimise unexpected disruptions.
However, determining optimal maintenance frequencies is not an easy task, requiring answers to numerous questions and trade-offs that impact the eventual ability of a fab to meet its KPIs. Such questions include:
- How do different frequencies in maintenance impact the fab’s ability to meet on-time-delivery?
- How does one decide which tools to take offline at a single time, to minimise the disruption to the fab overall?
- Is it possible to forecast that taking one arrangement of tools offline will lead to a 4% drop in throughput for the day, whereas an alternative arrangement may only yield a 2% drop?
- What is the impact of taking those same tools down at 7 am tomorrow as opposed to 5 pm two days from now?
- What if, due to personnel requirements, you need all maintenance to take place at the same time? Alternatively, can tasks be staggered at 10-minute intervals for the same reason?
Fulfilling your KPIs
But what if it were possible to use your KPIs as the basis for optimizing maintenance scheduling? Instead of using a simple rule-based predictive model, such scheduling weighs constraints and finds the optimal schedule that will enable you to meet your KPIs.
Flexciton's scheduling technology addresses all such questions by finding the optimal schedule for your fab in any variety of forecasted conditions.
A ‘what-if’ scenario capability allows fab managers to effortlessly trial new preventative maintenance plans based on a variety of trade-offs or constraints. In addition, rather than dictate the time that tools must be taken offline, our optimizer will ensure all KPIs are achieved as best as possible, given the constraints.
By doing so, it prescribes the optimal maintenance schedule for the factory. All the fab manager has to do is to decide on suitable windows of time for each of the tools to be taken down.
Smart scheduling technology in practice
Let’s see what happens in three scenarios where we apply Flexciton’s maintenance scheduling capabilities with varying degrees of scheduling complexity. The scenarios are structured as follows:
- The status quo — In the first case, we optimize a production schedule, with fixed maintenance timings prescribed to all start at the same time for a given toolset. Production is scheduled using a heuristic-based dispatch system emulating that found in many fabs.
- Optimizing production around fixed maintenance — We use Flexciton’s advanced optimizer to perform the production scheduling.
- Simultaneously optimizing production and maintenance — Finally, we allow the Flexciton advanced optimizer to schedule both production and maintenance timings. The window to flex the maintenance timings is chosen to be a 90-minute addition to the original timing provided in Case 1.
The Gantt chart below (Figure 1) shows a snapshot of 300 lots scheduled in small toolsets over the course of twelve hours. Each lot can only go to a certain number of tools within that toolset, where the toolset is identifiable by the tool’s prefix. Each lot is assigned a priority. We optimize for the total cycle time of the lots, weighted by their priority. The maintenance periods (shown in striped orange) are of varying duration and are randomly assigned to tools to take place at a specific fixed time somewhere in the twelve-hour schedule.
In Case 1, we compare the logic of scheduling, given these fixed maintenance timings, with a heuristic dispatcher against Flexciton.

Here we can see that on the ‘XZMW/097’ tool, the dispatch system struggled to ‘look ahead’ and dispatch effectively when given obstacles such as the upcoming downtime just after 02:00. It would be better to dispatch a short-processing lot in the meantime. An even more ideal schedule can flexibly move downtime around to maintain consistent, predictable throughput across the schedule.
So, what if the scheduler is allowed to prescribe the timings that it finds optimal? The following Gantt chart is from Case 3, where the optimizer is free to plan the maintenance at any time within a 90-minute window.

The benefits of a flexible maintenance approach
To get a quick understanding of the results achieved in the three scenarios, we use queue time as our KPI. In the table below, you can see that the flexible maintenance approach greatly outperforms a simple dispatch heuristic. Of course, queue time is only one dimension; there are many more constraints in a fab process that need to be considered. It is here that our maintenance optimization solution offers fabs unique capabilities: weighing all possible constraints to ensure KPIs are met to the fullest.

So instead of letting predictive maintenance schedules drive production, why not let the driver of maintenance planning be your fab's top-line production KPIs? The Flexciton optimizer allows easy scenario testing and exploration in order to effectively quantify the impact that maintenance has on the production schedule.
Flexciton’s solution enables fab managers to consider multidimensional trade-offs simultaneously. The alternative to such informed decision-making is that fabs schedule their maintenance ‘blind’. They will ultimately pay the price through unpredictable cycle time, unsatisfactory throughput and unnecessary tool downtime. By switching over to smart scheduling, it is much easier to get an accurate prediction of the impact that modifying a downtime schedule will have in terms of meeting top-level KPIs. Learn more about smart scheduling by downloading our white paper, "Superior Scheduling: hybrid approach boosts margin"

A Fab Manager's Dilemma: Maintenance Scheduling vs Productivity KPIs

A Fab Manager's Dilemma: Maintenance Scheduling vs Productivity KPIs
Preventive maintenance is a common practice in semiconductor wafer fabrication and essential for overall equipment availability and reliability. A typical approach is to plan maintenance activities ahead of time using simple rules-based models, where the maintenance is run on a particular day, at a particular time. The consequence of such an approach, however, is optimising maintenance timing at the expense of production KPIs such as cycle time and throughput. What if we considered it the other way around and treated these KPIs as the priority in the objective?
The maintenance scheduling problem
The complexity of semiconductor wafer fabrication entails a huge number of decisions and trade-offs a fab manager has to deal with each day. Preventive maintenance is one of them. The equipment used in the fabrication process is extremely capital intensive; therefore, it is critical that tools are utilised effectively and maintained on a regular basis to avoid failures. Any servicing requires stopping a tool and suspending it from the manufacturing process for a given period of time. With the recent shortage of chips, particularly for the automotive industry, a fab manager faces a significant challenge: how to schedule preventive maintenance operations whilst ensuring maximum OTD and high throughput?
Maintenance scheduling is an established topic of research, with many authors showcasing various ways of solving this scheduling problem using simulation and optimisation techniques. An interesting technical paper on this topic, “Maintenance with production planning constraints in semiconductor manufacturing” by A. Moritz et al. of Mines Saint-Étienne and STMicroelectronics, was presented at the Winter Simulation Conference in December 2020.
In this article, Ioannis Konstantelos, our Optimization Technology Lead, reviews the paper and explains Flexciton's approach to this complex topic.
Technology perspective
The authors focus on identifying the best possible period of time (e.g. a day), across a large time range, in which to carry out maintenance tasks, while “respecting production deadlines and the capacity constraints on tools”. Two mathematical models are presented; in model 1, the maintenance is seen as a task that must be performed in a single period of time, e.g. one day (24h), while model 2 allows maintenance to be distributed across two consecutive periods, e.g. two days (48h).
Both models treat the production schedule as fixed, i.e. the lot-to-tool assignments and timings for production purposes have been decided a priori. As such, the proposed formulation is a discrete-time model*, allowing maintenance to be performed only at defined points in time. The model uses the following decision variables:
- A variable that indicates whether a maintenance task should be performed or not.
- A variable that assigns a maintenance task to the period in which it will be carried out.
In each period, there is a limit on the total time allocated to production and to maintenance tasks.
The model's objective function is a combination of maximising the number of maintenance tasks that can be performed within the time horizon and the earliness of these tasks. A user defines parameters to tune the importance of each of these aspects.
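For illustration, here is a heavily simplified sketch of such a discrete-time maintenance model in Python using PuLP. The durations, capacity figures and weights are invented, and the formulation is only a cut-down analogue of the paper's model 1, in which each maintenance task must fit within a single period.

```python
# Simplified sketch of a discrete-time maintenance model in PuLP.
# y[t] = 1 if task t is performed; z[t, p] = 1 if task t is done in period p.
# Durations, period capacities and the weighting of "number of tasks" vs
# "earliness" are illustrative numbers, not the paper's data.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

tasks = {"PM1": 4, "PM2": 6}            # task -> duration (hours)
periods = list(range(3))                 # e.g. three 24h periods
free_capacity = {0: 5, 1: 8, 2: 10}      # hours left over by the fixed production plan
w_count, w_early = 10.0, 1.0             # user-tuned weights

prob = LpProblem("maintenance_planning", LpMaximize)
y = LpVariable.dicts("perform", tasks, cat=LpBinary)
z = LpVariable.dicts("in_period", [(t, p) for t in tasks for p in periods], cat=LpBinary)

for t in tasks:                          # a performed task occupies exactly one period
    prob += lpSum(z[(t, p)] for p in periods) == y[t]
for p in periods:                        # respect the capacity left by production
    prob += lpSum(tasks[t] * z[(t, p)] for t in tasks) <= free_capacity[p]

# Maximise the number of tasks performed, then favour earlier periods.
prob += w_count * lpSum(y[t] for t in tasks) \
      - w_early * lpSum(p * z[(t, p)] for t in tasks for p in periods)
prob.solve()
print({t: next((p for p in periods if z[(t, p)].value() == 1), None) for t in tasks})
```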
The trade-off between production and maintenance remains unanswered
The two-step approach showcased in the paper leaves a core question unanswered: the trade-off between production and maintenance. Typically, production scheduling aims to optimise a particular KPI, such as cycle time or throughput. By treating the production schedule as fixed and optimising the number and earliness of maintenance operations around it, we ignore the trade-off against the KPIs that matter, and there may well be foregone synergy opportunities.

The paper rightly highlights the need to consider maintenance using a formal mathematical framework. Nevertheless, there are some assumptions that limit the applicability and benefit of the proposed approach.
- One limitation is the discrete representation of time; continuous-time modelling* would instead allow for a more precise indication of when the various tasks should be carried out. This is especially relevant for modelling more complex tools, e.g. photolithography, where maintenance lasts only a fraction of a discrete time period.
- The model makes use of the concept of “tool families”, to capture the fact that many tools are identical, which allows for substantial model simplification. However, in practice, most tools will have individual characteristics (such as the set of recipes they can carry out, or the secondary resources they may be using), which renders them non-interchangeable.
- Another consequence, mentioned also by the authors, is that all maintenance is treated as optional, with no possibility to mark a particular task as “must-run” since that may require amendments to the original production schedule for feasibility.
Want to know more about the different wafer fab scheduling approaches, including heuristics, mathematical and hybrid approaches? Read our whitepaper, where we cover everything about wafer fab scheduling approaches.
Business perspective
Results are presented for 12 real-world case studies, involving around 100 maintenance tasks to be scheduled for 14 tool families over a span of 60 one-hour periods. The more relaxed model (model 2) is shown to perform better, both in terms of the number of maintenance tasks planned and in total earliness. One strength of the proposed approach is the speed of computation. As the authors state, the proposed model can be the basis for iterative discussions between production and maintenance planners.
Flexciton’s view on solving wafer fab challenges
Ideally, production and maintenance scheduling should be tackled in a single model, where an objective function based on cycle time and throughput applies. This can be achieved by treating maintenance as tasks that need to be scheduled within a specific window. In this way, fab managers can explicitly consider the impact that maintenance tasks have on the schedule, so that the impact on production KPIs is minimised. Of course, such integrated approaches result in a substantial increase in problem size and complexity, necessitating the development of solution strategies capable of handling the ensuing complexity. Especially in cases of a large number of maintenance tasks or lengthy maintenance, such constraints can quickly render a problem intractable.
Flexciton offers wafer fabs a smart scheduling solution
At Flexciton, we have developed a smart scheduling solution that uses decomposition techniques to manage the added complexity introduced by maintenance constraints. Users can describe their maintenance tasks as “optional” or “must-run”, with either a fixed start time or a flexible time window within which they can be carried out.

The Flexciton engine then optimises the target production KPIs while respecting maintenance constraints. The resulting production schedule prescribes the best time to carry out maintenance while capturing all individual tool characteristics and respecting all operational constraints, so as to achieve the best use of available assets. Learn more about the technology behind Flexciton's smart scheduling solution.
* Discrete time and continuous time are two alternative frameworks within which to model variables that evolve over time. Discrete time views values of variables as occurring at distinct, separate "points in time". In contrast, continuous time views variables as having a particular value for potentially only an infinitesimally short amount of time. Between any two points in time there are an infinite number of other points in time

Wrestling with Recipes

Wrestling with Recipes
Insightful experiments expose the weakness of limiting the number of recipes enabled on a tool. The key finding is that this limitation can increase fab cycle times by more than 40 percent.
It’s not easy managing a fab. While the goals are simple – maximising the yield and throughput – execution is really hard. That’s partly because fabs churn out a range of products, produced within many tools and processes; and partly because the goalposts constantly shift, due to the dynamic nature of the environment.
Sometimes the need for change reflects success in the business. After winning a new order, those running a fab may need to develop and run new recipes before they can manufacture these latest products. Unfortunately, this requires manpower to implement, it could dictate the need for more regular maintenance of a tool, and evaluating KPIs could prove tricky.
At other times those that run a fab will have no warning of the need for change, and will be forced to make tough decisions at breakneck speed. If a tool suddenly fails to process material within spec it has to be taken off-line and assessed. Meanwhile compromised wafers are etched back and processed via a different route through the fab, potentially involving alternative tools running new recipes.
To simplify operations within a fab, many managers restrict some tools to processing only particular products. This is accomplished by limiting the number of recipes on selected tools. It’s a tempting option that might reduce tool maintenance, but it is not without risk. While the hope is that this course of action has negligible impact on the throughput of the fab, there is a danger that it could make a massive dent in the bottom line.
Up until now, fab managers have taken an educated guess at what the implications might be. But they would clearly prefer a more rigorous approach - and thankfully that is now within their grasp, due to the recent launch of our Optimization Scheduler.
To illustrate some of the powerful insights that can be garnered with our software, we have considered the consequences of restricting the use of tools within a hypothetical fab. Our key findings are that this can lead to a massive hike in waiting times at particular tools, and ultimately increase fab cycle times by more than 40 percent.
Complementary case studies
We reached these conclusions after performing a pair of complementary case studies. The first, considering a single randomly generated dataset, allowed us to take a deep dive into the frequency of use of particular tools and their corresponding wait times. The second, involving twenty randomly-generated datasets, allowed us to evaluate the impact of restricting the use of tools on the cycle times for the fab.
In the fab that we modelled there were six toolsets, each with four tools. All ran until the fab had carried out what we describe as 1000 work units – that is, the fab operated until it clocked up 1,000 steps across all lots through the modelled tools (one lot has one work unit for every step completed along its route).
For our first case study, which considered a single randomly-generated dataset, we distributed the work units between the toolsets in the following manner:

The objective of this case study was to examine how, if we were to vary the number of recipes enabled on tools, the corresponding wait times at tools would change. We simulated 4 different scenarios. In the most restrictive scenario, a work unit only had the option to be assigned to 1 tool (because only that tool had the recipe enabled). In the opposite, most generous scenario, a work unit could be assigned to any of the 4 tools within each toolset (because every tool has the corresponding recipe enabled).

Clearly, restricting the number of tools has unwanted consequences on waiting times. For all six types of tool that we considered, a decline in recipe availability increased the total wait time. This is a non-linear relationship, with by far the greatest difference in wait time found when availability shifted from two tools to just one. The impact of having just one tool available is also tool dependent. For the six types of tool considered, the increase in wait time over baseline varied from less than 250 percent to just over 700 percent.
Of course, wait time is only a part of overall cycle time. To investigate the impact of tool availability on total fab cycle times, we considered twenty datasets, each with a slightly different distribution of work units allocated to toolsets. For this investigation we maintained our requirement for 1000 work units, undertaken by six toolsets, each with four tools.
For this simulation, we flexed the recipe availability and calculated the change in cycle time. We found that when all tools were available and capable of running all recipes, flexibility was as high as it could be, allowing the fab to run at its full throughput. Reducing tool availability by limiting recipes led to a significant increase in cycle time.
Plotted in the graph below are increases in cycle time resulting from a reduction in tool availability. These values are relative to the theoretical minimum cycle time, realised when all four tools are available and flexibility maximised. While there is a variation in impact across the 20 datasets, the trend is clear: when tool availability reduces, cycle time takes a significant hit. Averaging the results across all datasets (depicted by the bold line) shows that when the proportion of tools available falls to 25%, cycle time lengthens by more than 40%.

Conclusion
This pair of case studies, carried out with our smart scheduling software, has uncovered valuable lessons. While simplifying production by limiting the recipes run on toolsets may be tempting, it can cost a fab a significant increase in cycle times. By utilising the "what-if" capabilities of our Optimization Scheduler, fab managers can run different scenarios for data-driven decision making and ultimately become more informed about the impact of their choices on the shop floor.

Multi-objective Fab Scheduling: Exploring Scenarios and Tradeoffs for Better Decision Making

Multi-objective Fab Scheduling: Exploring Scenarios and Tradeoffs for Better Decision Making
Building and maintaining any form of scheduling solution to be flexible yet robust is not an easy undertaking. Commonly, fab managers have resorted to rule-based dispatch systems or other discrete-event simulation software to estimate how their fab will play out in the near future. Often this requires deciding up-front on a specific KPI that is important to the fab: do I care more about getting wafers out the door, or about reducing the cycle time of those wafers?
Competing objectives challenge
As a fab manager, there are a number of competing objectives to balance on the shop floor that all impact the profitability of the fab. Whether that be reliably delivering to customers their contractual quantities on time, or ensuring that fab research and development iteration time is kept low, fabs need a flexible, configurable scheduling solution that can produce a variety of schedules which account for these tradeoffs. At Flexciton, we call this “multi-objective” scheduling; optimizing the factory plan whilst considering several independent KPIs that, in this case, are fundamentally at odds with one another. This article explores Flexciton’s approach to multi-objective scheduling and how we expose simple configurations to the fab manager, whilst allowing our scheduling engine to ultimately decide on how that configuration plays out in the fab.

If there is no automated real-time dispatch system in the fab, determining the "best" schedule is a very complex procedure that cannot even be accomplished with advanced spreadsheet models. Assuming that the fab is advanced enough such that a dispatch system is in place, it will likely only consider "local" decisions pertaining to the lots that are immediately available to the dispatch system at the time the decision is made.
Dispatch systems typically do not have the configurability to adjust the user's incremental utility with respect to throughput and cycle time; they typically adhere to a series or hierarchy of rules that are tuned to consider exactly one KPI. Therefore, changing the objective of the dispatch system would require rewriting these rules, an often time-consuming exercise that requires advanced technical knowledge of the dispatch system. This makes it almost impossible, or otherwise very time consuming, to trial various configurations of the fab manager’s preferences.
Balancing various objectives for best results
The Flexciton optimization engine is a multi-objective solution that can linearly balance various KPIs according to user-chosen weights. Because these weights are exposed to the end user, running many different scenarios with varying preferences becomes trivial. Fab managers can have access to the specific weight values themselves or work with our expert optimization engineers to select from a handful of high-level configurations, in which case the solution will select appropriate weights itself.
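Conceptually, a linearly weighted multi-objective score can be thought of as a single scalar that the optimizer minimises. The sketch below illustrates the idea in Python; the KPI names, weights and lot data are illustrative and far simpler than what the actual engine evaluates.

```python
# Sketch of a linearly weighted multi-objective score, as a scalar an optimizer
# would minimise. KPI names, weights and lot data are illustrative only.

def weighted_objective(lots, w_cycle_time=1.0, w_lateness=1.0):
    """Each lot dict carries its cycle_time, late_time and priority weight."""
    cycle_term = sum(lot["priority_weight"] * lot["cycle_time"] for lot in lots)
    otd_term = sum(lot["priority_weight"] * lot["late_time"] for lot in lots)
    return w_cycle_time * cycle_term + w_lateness * otd_term

lots = [
    {"priority_weight": 3.0, "cycle_time": 120, "late_time": 0},
    {"priority_weight": 1.0, "cycle_time": 90,  "late_time": 15},
]
print(weighted_objective(lots, w_cycle_time=1.0, w_lateness=5.0))
```

Changing the weights shifts the balance between the two KPIs without any change to the rules or the model itself, which is what makes scenario comparison straightforward.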
To properly understand the flexibility of the engine, we will now step through four case studies. The goal is to compare how, given the same dataset, slightly different objective configurations impact the solution that is returned by accounting for the change in preferences.
We present a schedule of nine tools across five toolsets with seventy lots, of which 65% are Priority 1. Each lot can go to a random subset of tools within a single toolset.
The same dataset is then scheduled in four runs:
- Produced by a dispatch system with heuristic rules
- Optimized for cycle time
- Optimized for the on-time delivery of wafers
- Balanced optimization considering both cycle time and OTD
For each of these scenarios, we will present two Gantt charts: one labelled with the “Queueing Time” of each lot (aka “rack time”) and another labelled with the “Late Time” of each lot. Late time refers to the duration by which the lot completed processing after its due date. If a lot was not late, the label reads “0s”, since we do not consider earlier delivery as being more favourable. Lots that are considered high priority (Priority 1 to 3) are given a circle badge indicating this. Low-priority lots are Priority 4 through 10. Each lot is coloured according to this priority class.

Case study #1: base case - greedy dispatch
To begin, we’ll present how a schedule could look when produced by a dispatch heuristic that does not consider the future arrivals of wafers, but simply what is currently available in front of a tool. The greedy rule here is to just dispatch the highest priority wafer on the rack at the point the tool is idle.
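For illustration, the greedy rule can be sketched in a few lines of Python; the lot attributes and numbers below are invented, and real RTD systems are of course far richer than this.

```python
# Sketch of the greedy rule described above: whenever a tool goes idle,
# pick the highest-priority lot currently waiting on the rack (priority 1 is
# highest), ignoring lots that have not yet arrived. Illustrative only.

def greedy_dispatch(rack, now):
    """Return the lot to dispatch, or None if nothing is waiting."""
    waiting = [lot for lot in rack if lot["arrival"] <= now]
    if not waiting:
        return None                       # the tool sits idle; no look-ahead
    return min(waiting, key=lambda lot: lot["priority"])

rack = [
    {"id": "A", "priority": 4, "arrival": 0},
    {"id": "B", "priority": 1, "arrival": 20},  # high priority, but not here yet
]
print(greedy_dispatch(rack, now=10)["id"])      # dispatches "A" despite "B" coming
```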



In the above example, the high-priority wafers have to wait due to the system only considering what’s on the rack and therefore dispatching the low-priority wafers that are ready to go.
It should be noted that such a strategy is great for improving overall throughput and cycle time, since machine idle time is reduced by constantly dispatching wafers. This has the side effect of delivering all but one of the wafers on time. In reality though, not all lots are equal, and fab managers care a great deal more about certain high-priority lots, making the scheduling problem quite a bit trickier.
Unfortunately, reconfiguring the system to place greater importance upon the high-priority wafers and dispatch them first would require a complex rewriting of the dispatch rules to “look ahead” at the wafers that are not yet on the rack but arriving shortly. The dispatcher would then elect to keep the machine idle in order to reduce the high-priority wafer cycle time.
Case study #2: Optimize for high-priority-lot cycle time
Instead of modifying the RTD rules, we can emulate what that would look like by running our optimization engine whilst optimizing for the cycle time of high-priority lots:



The low priority lots at the front of the schedule are replaced with high-priority lots so that they can be dispatched as soon as they arrive. These low priority lots have been pushed to the back of the schedule with non-zero rack time (since the cycle time of high priority lots matters so much more). Naturally this is at the cost of overall average cycle time which has suffered by 23% in order to improve Priority1 cycle time by 11%. Also note that on tool “SBXF/115”, our scheduling solution has pushed the Priority2 (orange) and the Priority10 (green) lots later so that the Priority1 (red) lots are rushed through with zero rack time.
Case study #3: Optimize for on-time delivery
With optimisation, there are no additional changes required to increase the flexibility of the system. We simply describe what a good schedule looks like using the multi-objective function and the optimizer does the rest. Subtle tweaks to this function will inevitably produce very different schedules. Now let’s take a look at how the schedule alters when we want to maximise solely on-time delivery.



As expected, cycle time is quite a bit worse than previously; however, there are now no lots delivered late. This is very similar to the original schedule produced by simple dispatch rules. The low-priority lots have been brought forward so that they are delivered on time, and the cycle time of the high-priority lots suffers as a result.
Case study #4: Optimize for both
Finally, the main purpose of this article is to illustrate the ease of considering both KPIs with some relative weight simultaneously.



Note that cycle time and on-time delivery are each slightly worse than when that KPI was the sole one being optimised. The key is that both are better than when the other KPI was being optimised. This balance is entirely in the hands of the fab manager. We maintain roughly the same cycle time of high-priority lots as when optimising for cycle time, and fewer lots are late than when optimising only cycle time.
Summary and Conclusions
This article has provided a number of examples that illustrate how optimization can be both more flexible and more robust than heuristics, which cannot effectively search the global solution space.

The engine is simple to tune due to the exposed weights and/or configurations presented to the fab manager which allow a high degree of customisation both with respect to the objective function and wafer priorities. This flexibility allows us to easily consider complex hierarchical objectives found in semiconductor manufacturing such as “optimise high-priority cycle time as long as no P1-8 lots are late” or “optimise batching efficiency (perhaps due to operator constraints) and then high-priority cycle time”. Ultimately, our solution is a market-leading scheduler that will realise true KPI improvements on your live wafer fabrication data.
Flexciton is currently offering the Fab Scheduling Audit free of charge. To enquire, please click here.

Flexciton Cofounders Reflect on Their Five Year Journey

Flexciton Cofounders Reflect on Their Five Year Journey
Flexciton celebrated its fifth anniversary on May 17th. The past 12 months have been intensely positive, bringing new exciting projects and allowing the company to accelerate its growth. We took this opportunity and asked Flexciton's cofounders to reflect on their journey by answering the following five questions, independently.
Where did you expect Flexciton to be in 5 years when you started it 5 years ago?
Jamie Potter, CEO & Cofounder: Not where it is today. I think when we set out 5 years ago, we knew we had a cutting-edge technology but we weren't sure where the best application was for it. The semiconductor industry is the most complex manufacturing process in existence and it needs sophisticated technology to solve its problems. The average consumer would have never guessed how incredibly difficult it is to manufacture the semiconductors which make up all of the electronics they use today. We were lucky that 12 months after founding the company we found such a great partner, Seagate Technologies, who we have partnered with to bring this technology to Semiconductors.
Dennis Xenos, CTO & Cofounder: I'd be lying if I said we had a crystal-clear vision of where we want the company to go in the first five years. However, we were well aware that we were developing a highly complex technology to address a particularly difficult manufacturing problem. We knew that it would take some time to see the results. Today, I am both grateful and excited that we now have a fully automated closed loop scheduling solution that runs the scheduling of the world's most complex manufacturing processes 24 hours a day, seven days a week, after five years! This would not have been possible without an incredible team of exceptional engineers and researchers who believed in and remained committed to the Flexciton vision to transform manufacturing from the start!
What would you do differently if you could go back in time?
Jamie: I would have built the solution from day one, the way it is now. It has taken us many years to develop our technology into the fully functional product that it is today. Along that journey, we attempted numerous things that didn't work. At the same time, this is to be expected; after all, that is the nature of R&D! I'm just incredibly proud that our team was able to bring this disruptive technology to market after more than ten years of academic research and five years of commercialisation.
Dennis: I wouldn't change a thing; it's been an incredible journey so far. I believe that where we are now is the result of all of the decisions, actions, and lessons we have learned along the way.
What’s been most surprising about the journey so far?
Jamie: The problem we solve is so much more complex than we first imagined. The reality is that we set out to bring technology that had only been written about in academic papers through to industry. We realised quickly how there was still such a big gap between academia and the real world. This is why it has taken years of work by an incredibly talented team to bring this through to market.
Dennis: That Jamie and I had not lost any of the excitement and determination we had when we first started the company. The difficult times have toughened us, and the good times have strengthened our confidence that what we do is not just what we believe in but it's something the industry needs.
Flexciton smart technology is versatile and can be used to plan production in any manufacturing industry. Why did you decide to specialise solely in semiconductor wafer fabrication?
Jamie: The complexity of wafer fabrication is just staggering. In most factories you put raw materials through a few machines and the end product is created. In semiconductor wafer fabrication, you may need to put the raw material through 1000 machines. The complexity of the manufacturing process is exponentially higher than you find in almost any other industry. What we do here at Flexciton is state-of-the-art, and the most complex manufacturing industry in existence is the best application for such technology.
Dennis: Semiconductor manufacturing is one of the most exciting industries. Making the end product is hard; it involves thousands of steps and equally many resources. The wafer fab is a fast-paced environment, and over time, the manufacturing process becomes more complex, introducing more complicated operational constraints. Compared to other manufacturing types with even hundreds of production lines, our optimization-based scheduling technology can add significantly more value to the semiconductor industry. Furthermore, the advancement of manufacturing automation in wafer fabs makes the implementation and execution of our technology a much better fit.
What is your vision for the next 5 years?
Jamie: Our vision for the next 5 years is to bring Flexciton technology to semiconductor wafer fabs across the world to enable them to manufacture the next generation of products which make up the basis of all electronics. The complexity of semiconductor manufacturing is increasing all the time and many manufacturers struggle to manufacture the latest products at scale. We exist to solve that problem.
Dennis: In the next five years, I want us to bring Flexciton technology to as many wafer fabs as possible. Our technology provides a solution to the industry's existing problem, allowing fabs to increase throughput from existing equipment and efficiently manage production cycle time to ensure orders are delivered on time. I believe Flexciton will be the ultimate scheduling technology to enable semiconductor fabs to run their production at a new level of efficiency.
If you would like to ask Jamie or Dennis a question of your own, just use our contact form.

To Batch or Not to Batch?

To Batch or Not to Batch?
Batch tools are purposefully built to process two or more lots in parallel. However, due to the complexity and volatility of the wafer fabrication environment, wafer fabs are challenged each day to make complicated batching decisions. How do you determine when to batch lots together and when not to? This is what we shall call the ‘to batch or not to batch’ dilemma.
Why batch lots together in a wafer fab?
One of the most precious commodities is time, and batch tools are designed to make the best use of it. Often, these tools have a long processing time; an example would be a diffusion furnace. Instead of waiting 6 or 8 hours for a single lot to go through the whole process before loading the next one, batching allows the processing of multiple lots simultaneously. This sounds like an easy way to solve a complex problem. However, with batch tools, on top of a whole host of constraints to respect, there are exponentially more scheduling options available compared to a non-batch tool.
In addition, the processing times of batch tools introduce a very complex dynamic. Batch tool processing times consist of a fixed time (incurred regardless of the number of lots in a batch) and a variable time (which increases with each additional lot). Because of the fixed time component (plus tool setup time) in creating a batch, there is the perception that larger batch sizes are more efficient.
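A quick sketch makes the fixed-plus-variable structure, and the resulting per-lot efficiency of larger batches, easy to see; the durations below are illustrative only.

```python
# Sketch of the batch processing-time structure described above: a fixed
# component plus a per-lot variable component. Durations are illustrative.

def batch_processing_time(n_lots, fixed_minutes=240, variable_minutes_per_lot=30):
    return fixed_minutes + variable_minutes_per_lot * n_lots

for n in (1, 2, 4):
    total = batch_processing_time(n)
    print(f"{n} lot(s): {total} min total, {total / n:.0f} min per lot")
```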
A typical approach to batching decisions
At a batch tool, there are a number of decisions to be made, such as whether to process the lots that are already in front of the tool or wait for more to arrive. Additionally, if the number of lots waiting exceeds the tool capacity, which lots should you batch together and process first?
Typically, each fab decides on a batch-size policy, which guides the batching decisions. One of the most commonly used policies is a Minimum Batch Size (MBS) policy, setting a minimum number of lots required to start processing. The minimum could be determined by running a large number of simulations to find the batch size that provides the best performance for that specific use case. An MBS heuristic rule would then be created, setting a fixed direction to follow for all future lots.
On the other hand, a ‘near-full’ policy requires you to wait until the batch size is as close as possible to the maximum capacity of a tool. In this case, it is possible to achieve high throughput, but it can also cause the tool to stay idle for a long time while waiting for more lots to arrive in order to satisfy the policy, which negatively impacts overall cycle time.
Processing a batch whenever the tool is available and ready to process is another approach (this is called the ‘greedy policy’). This may reduce cycle time during times of low WIP, but will likely cause an increase in cycle time and lower throughput during times of high WIP.
How to determine the right batch size
The fact of the matter is that there isn’t one perfect batch size. In reality, it depends on the context of the whole system at that specific point in time. The batching decision relies on a number of dynamic factors:
- Max batch size constraints
- What will arrive and when
- What is currently in the queue
- Possible recipe combinations that can be batched together
- What priorities the wafers have
- What objective you are currently optimizing for
In many fabs, daily batching decisions are guided by a dispatch system which uses rules-based heuristic algorithms. This approach can work very well in some cases but can bring very poor results in others.
Let’s take a look at an example, where we use a simplistic approach to illustrate the problem.
Suppose we have a batch tool with a maximum batch size of 4 lots. In order to get better efficiency, a typical dispatch rule is to set a minimum batch size, e.g. a minimum of 2 lots. However, in a situation where one lot is already present, waiting too long would be inefficient. Therefore, a maximum wait time, let’s say 30 minutes, would apply on top of the minimum batch size rule. If we have waited 30 minutes and another lot has not arrived, the dispatch system would send the single lot for processing.
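The rule described above can be sketched as follows; the thresholds match the example, and the code is purely illustrative of the logic rather than any particular dispatch system.

```python
# Sketch of the dispatch rule in the example above: start the batch once the
# minimum batch size is reached, or once the oldest waiting lot has waited
# longer than the maximum wait time. Illustrative only.

MAX_BATCH_SIZE = 4
MIN_BATCH_SIZE = 2
MAX_WAIT_MINUTES = 30

def should_start_batch(waiting_lots, now):
    """waiting_lots: arrival times (minutes) of lots queued at the batch tool."""
    if not waiting_lots:
        return False
    if len(waiting_lots) >= MIN_BATCH_SIZE:
        return True
    oldest_wait = now - min(waiting_lots)
    return oldest_wait >= MAX_WAIT_MINUTES   # give up waiting for a second lot

print(should_start_batch([0], now=10))    # False: keep waiting
print(should_start_batch([0], now=30))    # True: waited 30 min, run a batch of 1
print(should_start_batch([0, 5], now=6))  # True: minimum batch size reached
```

Note that the rule only looks at what is already waiting; it has no knowledge of lots that are about to arrive, which is exactly the weakness discussed below.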

Sometimes this works well - here another lot arrived within 5 minutes, allowing 2 lots to be processed simultaneously.

In this example, the rule was effective and achieved a lower average cycle time than if we hadn’t waited for the second lot to arrive. However, sometimes a rules-based dispatch system can make poor decisions. Typically, the dispatch system will only make local decisions and won’t look ahead to anticipate which lots are coming and when, as illustrated in the second scenario below. The second lot arrives at the tool in 60 minutes, but due to the 30-minute waiting rule, the first lot has already been dispatched.

In order to make more optimal decisions around batch sizes, we need to be able to anticipate what WIP will arrive, when it will arrive, and where this WIP goes next after the current batching step. This can be achieved by applying a smart scheduling approach which understands the broader context required to make optimized batching decisions. The below example illustrates a rules-based decision vs an optimization approach.


This is one very simple example of the trade-offs that must be considered when making batching decisions. In the above case, it would be possible to write an extension to the dispatch rules to account for the scenario presented. In reality, however, there are several other factors that bring additional complications. For example, lots will often have different priorities. When a high-priority lot is batched together with a lower-priority lot, the average cycle time for both lots may be reduced compared to running them in two sequential batches. That said, the cycle time of the high-priority lot is likely to increase, which may be undesirable. The toy example below puts some illustrative numbers on this trade-off.
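In this toy example (all numbers assumed), a high-priority lot arrives now, a low-priority lot arrives 20 minutes later, and one batch takes 60 minutes to process:

PROCESS_TIME = 60             # minutes per batch (assumed)
HI_ARRIVAL, LO_ARRIVAL = 0, 20

# Option A: wait for the low-priority lot and batch both together.
hi_ct_batched = (LO_ARRIVAL + PROCESS_TIME) - HI_ARRIVAL    # 80 min
lo_ct_batched = (LO_ARRIVAL + PROCESS_TIME) - LO_ARRIVAL    # 60 min

# Option B: run two sequential batches, high priority first.
hi_ct_sequential = PROCESS_TIME - HI_ARRIVAL                # 60 min
lo_ct_sequential = (2 * PROCESS_TIME) - LO_ARRIVAL          # 100 min

print((hi_ct_batched + lo_ct_batched) / 2)        # 70.0 - lower average cycle time
print((hi_ct_sequential + lo_ct_sequential) / 2)  # 80.0
# Batching wins on average cycle time (70 vs 80 minutes), but the high-priority
# lot waits longer (80 vs 60 minutes) - exactly the trade-off a scheduler must weigh.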
Given how dynamic a fab is, writing dispatch rules to deal efficiently with the full range of scenarios is possible, but the rules would be extremely expensive to build and time-consuming to maintain.
Conclusion
Batch tools are extremely complex machines to schedule: there is a huge number of scheduling options, and each option has a different efficiency. Commonly used dispatch rules can cause poor performance in a dynamic fab environment. Often, the batching methodology follows a fixed rule, such as a maximum batch size. These fixed rules can produce good outcomes occasionally, but they cannot consistently provide good solutions. As a result, KPIs across the batch toolset might show undesirable increases in cycle time, or reductions in throughput, if the WIP mix or objectives change. Additionally, creating highly efficient rules would require a lot of time and extensive maintenance.
Smart scheduling, on the other hand, introduces the ability to make optimized batching decisions in any situation to achieve the objective of increased throughput or lower cycle times. By applying hybrid mathematical optimization techniques, we are able to find a solution that is near-globally optimal, delivering a consistently high-quality outcome.
Get your Wafer Fab Scheduling Performance Analysis fully remotely and free of charge. Click here to get in touch.

The Pareto Principle of Wait Time

The Pareto Principle of Wait Time
In our previous blog, we talked about the Theory of Constraints. One of its principles is that a system is only as good as its weakest link - in a fab, a bottlenecked tool or toolset.
Because of bottlenecked toolsets, wafers spend a great proportion of their cycle time queuing (non-value-adding) rather than processing. The longer or more uncertain the wait time, the higher the risk of variability in cycle time. This ultimately impacts the overall productivity of a fab.
The wait time challenge
The Pareto Principle (aka the 80/20 rule) implies that most things in life are not distributed evenly. The very same rule applies to wait time versus processing time in fabs. We have recently carried out performance analyses of numerous fabs, and we have seen the Pareto Principle confirmed repeatedly.
It is obvious that in order to maintain high performance in a fab, wafers need to be processed within a predictable time frame, spending as little time as possible queuing. However, it is very common to observe a significant imbalance within cycle time, where wait time is significantly longer than processing time. In some cases, we have seen wait-to-process time ratios of 70% to 30%.
If we can reduce queue time, we can reduce cycle time. By drilling down into the wait time data, we repeatedly observed the Pareto Principle: roughly 20% of toolsets contribute roughly 80% of all wait time. The kind of analysis involved is sketched below.
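The snippet below sketches this kind of analysis, assuming historical MES wait records are available as a pandas DataFrame with 'toolset' and 'wait_minutes' columns (the column names are assumptions for this example):

import pandas as pd

def wait_time_share_of_top_toolsets(history: pd.DataFrame, top_fraction: float = 0.2) -> float:
    # Total wait time per toolset, largest first.
    per_toolset = (history.groupby("toolset")["wait_minutes"]
                          .sum()
                          .sort_values(ascending=False))
    top_n = max(1, int(len(per_toolset) * top_fraction))
    # Fraction of all wait time carried by the top 20% (by default) of toolsets.
    return per_toolset.iloc[:top_n].sum() / per_toolset.sum()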
The bottleneck tool is not always the root cause
In order to solve the wait time problem, we first need to identify where in the fab the wafers spend the most time waiting to be processed. Typically, the culprit toolsets are those which deal with wafer re-entry and possess challenging constraints that heighten the scheduling difficulty, such as photo, EPI and Implant.
Improving the sequencing or mix of WIP directly at the bottlenecked toolsets seems to be the obvious starting point. However, whilst the bottleneck toolsets incur the most wait time, they are not always the root cause of the problem - that may lie elsewhere.
For example, if a toolset is fed WIP whose associated recipe is unavailable for the next few hours, then that WIP has no option other than to wait. The toolset becomes bottlenecked, but the root cause was ineffective WIP management at the toolsets upstream.
Optimizing WIP to reduce wait times
In order to reduce wait times at the bottleneck areas, we must control and optimize the WIP flow. This can be achieved by applying smart scheduling, which advises the dispatch system on the best real-time decisions to take.
Smart scheduling can first be applied to the bottleneck toolset in question to drive local efficiencies that relate to the specific constraints and cost functions of that toolset. For example, you can use optimization to (a simple load-balancing sketch follows this list):
- Optimize batching decisions (the to-wait-or-not-to-wait dilemma)
- Optimize changeover sequencing
- Balance the toolset loading
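As a toy illustration of the third item, the sketch below exhaustively assigns a handful of lots to the tools in a toolset so that the most heavily loaded tool finishes as early as possible. It is tractable only because the example is tiny; real toolsets need proper optimization techniques.

from itertools import product

def balance_load(process_times: list, n_tools: int):
    # Try every assignment of lots to tools and keep the one with the lowest makespan.
    best_makespan, best_assignment = float("inf"), None
    for assignment in product(range(n_tools), repeat=len(process_times)):
        loads = [0.0] * n_tools
        for duration, tool in zip(process_times, assignment):
            loads[tool] += duration
        if max(loads) < best_makespan:
            best_makespan, best_assignment = max(loads), assignment
    return best_makespan, best_assignment

print(balance_load([30, 45, 20, 60, 25], n_tools=2))  # (90.0, ...) - a perfectly even split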
By optimizing at a local level, you can achieve increases in efficiency, which leads to cycle time reduction. However, the scheduling is still at the mercy of which WIP is sent from upstream toolsets. The next step is to schedule toolsets globally (scheduling multiple toolsets together) so that the WIP mix arriving from upstream toolsets is optimally sequenced to facilitate the heavily loaded bottleneck.
Referring to the example in the previous section, by applying advanced global scheduling, the upstream toolset can be scheduled to prioritize WIP so that it matches the available recipe of the downstream toolset. As a result, the wait time at the bottleneck toolset can be reduced even further than if you had only optimized that toolset locally. Additionally, with a better forecast of where WIP will be, you can better manage load port conflicts (IPP conflicts) and auxiliary resources (such as reticles at photo tools).
At Flexciton, our optimization strategy is to:
- Optimize local toolsets which incur most cycle time
- Expand to global scheduling to balance the entire system
Identifying the problem
By taking a global view, we can analyse cycle time across the whole fab and determine where to focus optimization efforts in order to improve cycle time KPIs.
At Flexciton, we have developed an analytical tool (Wafer Fab Performance Analysis), which uses historical MES data to provide a comprehensive view of fab performance. Using the tool, we are able to locate bottleneck toolsets across the fab and quantify the impact each toolset has on cycle time. Through this global analysis, we are able to recommend an optimization strategy outlining where cycle time improvements can be made.
Although fab managers are acutely aware of overall cycle time efficiency at their fab, the Wafer Fab Performance Analysis allows us to accurately quantify the problem and pinpoint exactly where efficiency is lost.
Click here to get in touch and learn more about complimentary Wafer Fab Performance Analysis for your fab.

The Theory of Constraints

The Theory of Constraints
Any manageable system is limited by at least one constraint. But what if that system is the most complex manufacturing process in existence? Providing optimized production scheduling for wafer fabs, we deal with a great number of constraints.
In the early 1980s, Eli Goldratt, a physicist turned management guru, published his book "The Goal". Why is it important? In it, he laid the foundations for what was later described as the "Theory of Constraints".
Although it was written nearly 40 years ago, and manufacturing industries have since evolved, many of the principles stated in the book remain relevant today. They also mirror what we do at Flexciton, helping wafer fabs to improve performance by optimizing the way the fabrication process is scheduled.
Let us give you a snippet of what it is about by sharing three key messages of the Theory of Constraints:
- "Throughput is the money coming in." Balancing the flow of product through the plant with demand from the market is critical for the business performance. Throughput is usually a priority KPI we follow to calibrate our scheduling solution for a given fab.
- "Every hour lost at a bottleneck is an hour lost in the entire system." Based on the Wafer Fab Performance Analysis which we have carried out for various fabs, we see that a small number of tools are bottlenecks, yet this seemingly little number are responsible for substantial wait time in the fab.
- "We shouldn't be looking at each local area and trying to trim it. We should be trying to optimize the whole system. A system of local optimums is not an optimum system at all." At Flexciton, we first take a global look across the whole fab, then we identify areas where optimization would bring the most significant improvement for the entire facility.
The very first step towards improving fab productivity and KPIs such as throughput or cycle time is to identify the system's constraints and where efficiency is lost. Flexciton's Wafer Fab Performance Analysis takes a global view of a fab and pinpoints the areas or specific toolsets responsible for lower productivity. This analysis often brings unexpected and eye-opening results.
We are currently running the Wafer Fab Performance Analysis fully remotely and free of charge. If you wish to discover more, please click here to get in touch with one of our consultants.
Two Factors That Can Make Or Break Wafer Fab Throughput
Two Factors That Can Make Or Break Wafer Fab Throughput
One metric that wafer fab and foundry managers consistently pursue is high throughput. Being able to control and maximise throughput is critically important to the health and profitability of a semiconductor business. If the factory in question is capacity constrained, then any percentage increase in total fab throughput can be converted into further revenue for the business. Higher throughput can also avoid the need to invest in additional expensive CAPEX.
Achieving operational excellence in semiconductor manufacturing is a very difficult task and requires sophisticated industrial engineering for the best results. Fabs are highly dynamic and can be unpredictable at the best of times (tool downtimes and changing priorities), thus sustaining consistently high throughput is a major challenge for any fab manager or industrial engineer.
In this article, we outline two factors that are key culprits in limiting semiconductor fab throughput, and how to control them by applying a smart scheduling strategy.
Factor #1 - WIP mix at bottleneck toolsets
Wafer fabs have to manage bottleneck toolsets, since tools are expensive and some must handle re-entrant flow. Bottlenecks have a negative effect on cycle time and overall throughput; however, it is how we manage the sequencing and flow of WIP through these bottlenecks that leads to a productive and well-balanced factory.
At Flexciton, we repeatedly see Pareto's principle reveal that 20% of toolsets are responsible for 80% of wait time in the fab. Tackling the queue time behind these key toolsets first can significantly help with maximising global throughput across the fab, and should be on the priority list for industrial engineers.
Production progresses as quickly as the worst bottleneck allows. An hour lost at a bottleneck is an hour lost for the entire system (following the theory of constraints).
When a toolset is a fixed bottleneck (always in high demand), increasing throughput at this step should be the first port of call; the rest of the fab then benefits from a higher overall throughput. This can be achieved by improving load balancing across the toolset and by better scheduling of upstream WIP to ensure optimum sequencing. For more complicated toolsets, you may need to consider optimizing changeovers or scheduling auxiliary resources (reticles for photo) in order to provide relief to the bottleneck.
It is also common to witness dynamic bottlenecking, whereby the bottleneck toolset changes over time as the factory dynamics change. For example, WIP might suddenly build up at implantation toolsets from time to time, without a consistently high level of WIP behind them. Here, managing the global flow of WIP between toolsets (upstream and downstream) helps to alleviate these dynamic cases of WIP build-up, which leads us to the second factor below.
Factor #2 - Local optimization of toolsets
It is common to measure fab performance using local KPIs and targets, broken down by area or toolset for the operators. Thus, there is pressure to maintain constant high throughput across all toolsets (with little or no consideration of the impact this has on upstream or downstream tools).
Unfortunately, when toolsets are optimized purely locally, the result can be a significant imbalance across the overall fab system, leading to poor throughput. Put simply, if we push a load of WIP through 'toolset A' really quickly, it may end up sitting at 'toolset B' for hours because the required recipes are not available. Going back to factor #1, this is highly inefficient and is likely to inflate bottlenecks and erode your throughput potential.
A wafer fab is a complex, intricate system, and we must optimize the flow across all toolsets to ensure high overall throughput. A fab in which all tools are working at max capacity at all times is actually very inefficient, and a system of local optimums is not an optimum system at all.
We want to ensure that bottleneck toolsets are fed with the optimum WIP mixture from the toolsets upstream. Doing this can deliver significant increases in a fab's overall system efficiency and, ultimately, its throughput.
Smart scheduling, and its impact on throughput
The key to managing WIP flow across the entire fab system is to introduce high-quality scheduling into the process. By being able to comprehend upstream and downstream WIP flow, we begin to schedule globally across toolsets (rather than locally), and this is the key to minimising the severity of dynamic bottlenecks.
Smart scheduling is critical, as it allows the fab manager to balance priorities and make smart decisions in advance that aid greater throughput. By feeding directly to the dispatch system or providing operators with clear, specific direction on the next actions, it improves predictability and performance throughout the fab.

Heuristics or Mathematical Optimization: Which is the Best Method for Wafer Fab Scheduling?

Heuristics or Mathematical Optimization: Which is the Best Method for Wafer Fab Scheduling?
Scheduling is difficult. Even scheduling your workday ahead in the morning is difficult. However, this blog is about wafer fabs, so let's start by saying that scheduling a wafer fab to run optimally is one of the most challenging mathematical problems in modern-day manufacturing. Why?
Wafer fabrication is extremely complex. Lots of wafers may re-enter tools numerous times and compete for capacity against hundreds of other lots - not to mention a whole host of unique scheduling constraints as well as unexpected events, such as equipment suddenly going offline.
Scheduling problems are widely researched, and there are various methods used to solve them. However, in the semiconductor industry, there are two commonly debated techniques: heuristics (rules-based) and mathematical optimization.
Both methods can provide effective scheduling solutions; however, each has limitations, which are the topic of the discussion below.
Heuristics
A heuristic algorithm is essentially a ‘best-guess’ approach to a decision problem.
By taking approximations and algorithmic shortcuts, heuristics can arrive at a final decision very quickly. Think of a heuristic as a long decision tree of if-then logic. Heuristics are fantastic at finding realistic solutions extremely quickly, and because wafer fabs are extremely dynamic, this method is widely adopted. A caricature of such a rule chain is sketched below.
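The sketch below is illustrative only, not any real dispatch system: it returns an answer instantly, but only ever follows the single path it was written to follow.

def pick_next_lot(queue, current_recipe):
    # queue: list of dicts with 'priority', 'recipe' and 'arrival_time' keys (assumed shape).
    if not queue:
        return None
    hot_lots = [lot for lot in queue if lot.get("priority") == "hot"]
    if hot_lots:
        return hot_lots[0]                       # rule 1: hot lots always jump the queue
    no_changeover = [lot for lot in queue if lot["recipe"] == current_recipe]
    if no_changeover:
        return no_changeover[0]                  # rule 2: avoid a recipe changeover if possible
    return min(queue, key=lambda lot: lot["arrival_time"])  # rule 3: otherwise, first in, first out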
However, the speed advantage of heuristics comes at the cost of their biggest flaw - solution quality. Being rules-based, a heuristic can only search through a restricted number of scenarios and follow a 'familiar' path. Hence, you never really know how good the final solution is (there could be a far better solution available).
A heuristic simply cannot optimize a decision problem - leaving wasted productivity on the table.
Mathematical optimization
Mathematical Optimization is used when you want to find the optimal (best possible) solution.
A classic optimization problem is the ‘travelling salesman’ problem, where the objective is to find the shortest route for a salesman to travel, given a number of nodes that must be visited. Here there is a provably best solution, and mathematical optimization is a method of finding it. A tiny brute-force illustration is shown below.
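As a tiny illustration with made-up distances, this brute-force sketch checks every possible tour of four cities and keeps the shortest one. Real instances need far cleverer optimization methods, since the number of tours explodes with the number of nodes.

from itertools import permutations

# Symmetric distances between four cities (assumed numbers for illustration).
DIST = {
    frozenset({"A", "B"}): 10, frozenset({"A", "C"}): 15, frozenset({"A", "D"}): 20,
    frozenset({"B", "C"}): 35, frozenset({"B", "D"}): 25, frozenset({"C", "D"}): 30,
}

def shortest_tour(start="A", others=("B", "C", "D")):
    best = None
    for order in permutations(others):
        tour = (start, *order, start)
        length = sum(DIST[frozenset(pair)] for pair in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

print(shortest_tour())  # (80, ('A', 'B', 'D', 'C', 'A'))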
In wafer fab scheduling, optimization can be used to similar effect: for example, the objective may be to find a schedule that minimises cycle time for both production and R&D wafers. Here, all possible scheduling scenarios can be evaluated and the best one chosen - a major benefit over heuristics, where effectively only one schedule is constructed and accepted. A miniature wafer-lot example of the same idea follows.
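The miniature example below (assumed durations and weights, a single tool, no batching) enumerates every possible processing sequence and keeps the one with the lowest total weighted cycle time, weighting the R&D lot more heavily. A rules-based dispatcher would typically construct just one of these six sequences; the enumeration guarantees the best is found.

from itertools import permutations

# lot name: (processing minutes, weight) - assumed numbers for illustration
LOTS = {"prod_1": (50, 1.0), "prod_2": (30, 1.0), "rnd_1": (40, 2.0)}

def best_sequence():
    best = None
    for seq in permutations(LOTS):
        finish, cost = 0.0, 0.0
        for lot in seq:
            duration, weight = LOTS[lot]
            finish += duration               # completion time of this lot
            cost += weight * finish          # accumulate weighted cycle time
        if best is None or cost < best[0]:
            best = (cost, seq)
    return best

print(best_sequence())  # (270.0, ('rnd_1', 'prod_2', 'prod_1'))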
In a fab, the factory state changes frequently, making it an extremely dynamic environment to schedule. As a result, a scheduler must be fast enough to re-run schedules in order to cope with the factory dynamics. However, optimization algorithms need time to find the best possible solution - in practice, potentially hours - which makes pure optimization scheduling impractical for wafer fabs.
Which method for high-quality wafer fab scheduling?
This is the dilemma. Heuristics are widely adopted because they are fast and realistic, but the quality of their solutions is inferior to what an optimization technique could achieve. Pure optimization, on the other hand, is not feasible in a fast-paced live fab environment.
What is needed is a ‘best of both worlds’ approach. A scheduler that is quick, reliable and optimal. This is exactly what a hybrid-approach seeks to achieve.
In our recently published white paper, we examine a new hybrid optimization approach that enables fast and near-optimal scheduling. In this, we compare the hybrid approach with the two widely used methods described above. The white paper can be viewed and downloaded by clicking here.
EU-Startups – London-based Flexciton secures about €17.8 million to boost the capability of the global semiconductor industry
EU-Startups – London-based Flexciton secures about €17.8 million to boost the capability of the global semiconductor industry
0xzx – London-based Flexciton receives approximately €17.8 million to enhance global semiconductor industry capabilities
0xzx – London-based Flexciton receives approximately €17.8 million to enhance global semiconductor industry capabilities
Speak to one of our experts
Book a demo session or simply reach out to one of our experts to learn more about what Autonomous Technology could do for your fab.
