Anker’s Yang Meng Discusses GPU, Transformers, and the Future of Robotics

How does Anker, which focuses on robotics, view the future of large models and general-purpose robots?

Anker is often seen by the Chinese public as a power bank company, but in fact, power bank revenue accounts for less than 10% of their income. In 2022, Anker generated $2 billion in revenue, achieving top-tier status in several subcategories within charging and energy storage, audio-visual equipment, and smart home sectors.

As a former Google employee with a background in machine learning, Anker’s founder & CEO Yang Meng developed search algorithms at Google. He has a distinctive view of the NVIDIA GPU + Transformer stack that underpins today’s large models: “The future of large models should be an integrated storage-and-computation architecture, and the paradigm best suited to that architecture may not be the Transformer, but a newer paradigm designed for it.”
With a robotics business of its own, Anker also has plans for the future of robotics + large models. According to Yang Meng, “General-purpose robots will definitely become a super category, whether humanoid or not; this is indisputable.”
Regarding Anker’s future, Yang Meng has his own plans, stating, “Our goal is to become the Procter & Gamble of consumer electronics, or Texas Instruments.” This requires many creators who seek extreme excellence and long-termism.
On March 26, during a live dialogue between Zhang Peng, founder & president of Geek Park, and Yang Meng, founder & CEO of Anker, various topics were discussed, including the Transformer architecture of large models, NVIDIA’s GTC conference, edge models, and the future of robotics + large models, presenting some non-consensus viewpoints.
This article is based on the live stream and has been slightly edited.

01

Divide and conquer has a ceiling, end-to-end is the future

Zhang Peng: NVIDIA just held its tech gala, GTC 2024, and its stock price has been soaring since last year with seemingly no ceiling in sight. When I spoke with Yang Meng earlier, I found he had some non-consensus viewpoints. How do you understand the ceiling on NVIDIA’s future value, and what exciting prospects lie in the combination of hardware and large models?
Yang Meng: Our work habit is to abstract problems, looking through phenomena to see the essence, because often, the closer we get to the essence, the more we can withstand long-term tests. So I want to explain NVIDIA’s situation from an abstract perspective first.
First, let me state my viewpoint: the product that made NVIDIA famous is primarily the GPU, I believe the success of the GPU is an intermediate state in the process of algorithm paradigm alternation. The GPU helped usher in a new era of algorithm paradigms, but when this new era truly arrives, it is likely to be consumed by the new era as an intermediate product.
Zhang Peng: Is this era referring to computation?
Yang Meng: Yes. Starting from Turing and von Neumann, the past few decades of computer science have essentially been the science of divide and conquer.
Divide and conquer means breaking a complex problem into two or more similar subproblems, recursing until the subproblems can be solved directly, and then deriving the solution to the original problem from the subproblem solutions. This method forms the basis of most algorithms and systems, such as sorting (merge sort), natural language processing (split into tokenization, word statistics, translation, search, etc.), and autonomous driving (split into perception, decision-making, control, etc.).
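The merge sort mentioned above is a compact illustration of the pattern; a minimal Python sketch:

```python
# A minimal example of divide and conquer: merge sort.
# The problem (sorting a list) is split into subproblems (sorting each
# half) until they are trivial, then the sub-solutions are combined.

def merge_sort(items):
    if len(items) <= 1:               # base case: trivially sorted
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide: solve each half
    right = merge_sort(items[mid:])
    return merge(left, right)         # conquer: combine the solutions

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])           # append whichever half remains
    merged.extend(right[j:])
    return merged
```

The structure (split, recurse, combine) is the divide-and-conquer pattern itself, and the same shape recurs in the larger systems the text names.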
Divide and conquer has many benefits; it aligns well with human logic, is easy to operate, easy to understand, and can be implemented through team division of labor.
The core of von Neumann architecture distinguishes between CPU and memory, storing the code generated by divide and conquer in memory. When the problem is solved to a certain point, the relevant code is loaded from memory into the CPU for execution, making it the most suitable computer system architecture for divide and conquer.
Zhang Peng: To summarize, after the introduction of divide and conquer, we can break down complex problems into solvable steps, allowing people to invest greater engineering effort into smaller problems, driving progress. If we do not decompose a grand problem, even the strongest individual cannot solve it. Through divide and conquer, we enable many people to tackle the problem through engineering methods and then connect them to solve it.
Yang Meng: That summary is excellent. While divide and conquer can quickly reach a score of 70 or 80, its ceiling is relatively low, and it is unlikely to surpass 85.
Zhang Peng: Why is it unlikely to surpass 85?
Yang Meng: Let me share a fact: when I was a search engineer at Google in 2008, under our internal evaluation standards, Google Search could already achieve a score of 62 or 63, but over the past decade, this score has not improved significantly.
This is backed by some underlying logic. First, divide and conquer must avoid cumulative errors, so each step must be done as well as possible. In practice, however, the finer the steps, the more likely we are to get bogged down in meaningless minutiae. For example, robots are engineered for high repeat-positioning accuracy, while humans have poor positioning repeatability yet can still perform complex operations. I believe that future humanoid robots may not require such high repeat-positioning accuracy.
Secondly, divide and conquer consumes a lot of manpower as it continually subdivides problems. Search requires a thousand people at its base, and autonomous driving also requires a thousand people. The worse situation is that the more finely divided it becomes, the more complex the code, leading to exponentially increasing understanding and iteration costs. Eventually, its ceiling is reached; at that point, adding more people or further subdividing does more harm than good.
Zhang Peng: So what should we do?
Yang Meng: In 2003, when I was pursuing my PhD in the U.S. and studying Machine Learning, I quickly felt that in the AI field, a new paradigm emerges every 5-10 years. A paradigm refers to a popular algorithm framework, for example, in the 1980s, decision trees were popular, in the 1990s, Support Vector Machines, in the 2000s, Random Forests, in the 2010s, deep learning, and starting from 2017, the Transformer. You can see that basically every 5-10 years, a new paradigm emerges.
At that time, my intuition was that decision trees and SVMs had no future, while neural networks, as biomimetic paradigms, had a promising future. However, I was not good at math, so I shifted to engineering and business.
As the paradigm iterated to the fourth and fifth times, end-to-end neural networks emerged, breaking the previous divide and conquer approach. In other words, since deep learning began, divide and conquer has been disrupted by end-to-end algorithms. We no longer solve problems through subdivision but rather use biomimetic algorithms similar to the human brain to solve problems end-to-end, which also breaks the previous performance ceiling of divide and conquer.
Using end-to-end algorithms, OpenAI’s 30-person GPT team (at the end of 2022) can produce NLP algorithms that outperform teams of hundreds or thousands of people, and Tesla’s algorithm team of fewer than 100 can achieve Level 4 autonomous driving.
For instance, in natural language processing, the era of divide and conquer trained at least 30,000 PhDs, produced countless papers, and consumed the best years of many careers. In today’s era of end-to-end large models, all of that is rendered meaningless. In other words, once you have a large biomimetic model, enough data, and enough computational power, even the most complex problems can be solved by a few dozen people. The niche knowledge each member of a thousand-person team held, such as how to tokenize Japanese katakana, is no longer needed.
I also experienced something at Google. In 2008, there were already people using machine learning to rank search results, and the results were better. However, this algorithm took five or six years to launch because the head of the search team believed it could not be explained or debugged. Today, many who follow the traditional divide and conquer approach argue that end-to-end algorithms are inexplicable and undebuggable, and we must be cautious. However, I believe this is an excuse; in the face of absolute capability, the forces that obstruct the arrival of the new era will surely be crushed.

02

NVIDIA GPUs Spawned the Transformer Paradigm, But It May Not Be the Long-Term Optimal Solution

Zhang Peng: If we cannot understand how results are produced, we might think they are wrong, but they could actually be correct; we just cannot understand why they are right. You mentioned that the von Neumann architecture and divide and conquer seem to be a perfect match, and even neural networks and end-to-end large models are born under the von Neumann architecture. How do we understand this?
Yang Meng: Many things require analogy or comparing similar entities to discover discrepancies and further contemplate the underlying reasons.
Let me first state a fact: the human brain has 86 billion neurons, but its computational power consumption is only about 15W. Why does an 800 billion parameter Transformer model running on NVIDIA’s architecture require several kilowatts of power?
The human brain is essentially integrated in storage and computation; neurons both store information and perform calculations. However, NVIDIA still follows the von Neumann architecture, where all model parameters are stored in high-bandwidth HBM3 memory. Each time a calculation is performed, they are moved to the matrix multiplier for computation, and after completion, they are moved back, causing most of the power consumption to be wasted during the transportation process. If our brain were built on the von Neumann architecture, with storage on one side and computation on the other, for example, storing in the left brain and computing in the right brain, every time we think, we would have to move all knowledge and memories from the left brain to the right brain. This repetitive back-and-forth would surely burn out the brain.
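A rough back-of-envelope sketch makes the imbalance concrete. The per-operation energy figures below are illustrative assumptions (order-of-magnitude values often quoted for older process nodes), not measurements of any NVIDIA part, and the 70B model size is hypothetical:

```python
# Back-of-envelope sketch of why data movement dominates power in a
# von Neumann accelerator. Energy figures are illustrative
# order-of-magnitude assumptions per 32-bit operation, NOT measurements.

PJ = 1e-12  # picojoule in joules

E_DRAM_READ = 640 * PJ   # assumed: fetch one 32-bit word from off-chip DRAM
E_FMA       = 4 * PJ     # assumed: one 32-bit multiply-accumulate

# Worst case: every weight is fetched from memory once per use.
ratio = E_DRAM_READ / E_FMA
print(f"moving a word costs ~{ratio:.0f}x the arithmetic on it")

# For a hypothetical model with N parameters, each read once per token:
N = 70e9
energy_per_token_moving = N * E_DRAM_READ
energy_per_token_math = N * E_FMA
print(f"~{energy_per_token_moving:.0f} J moving vs "
      f"~{energy_per_token_math:.1f} J computing per token")
```

Under these assumptions, transporting a weight costs two orders of magnitude more energy than computing with it, which is the waste the in-memory approach aims to eliminate.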
Thus, the ideal architecture for biomimetic neural-network algorithms must itself be biomimetic, with storage and computation happening in the same unit. The Chinese term translates as “integrated storage and computation”; in English it is known as in-memory computing (IMC). Computing directly in memory eliminates data transportation.
I believe that integrated storage and computation architecture will eventually replace the von Neumann architecture.
Previously, I spoke with Mr. Zhang from IDEA, and we discussed how the Transformer is essentially a neural-network algorithm built on large-scale parallel matrix computation. Why did the Transformer emerge? Because of NVIDIA’s GPUs, which are hardware particularly well suited to large-scale matrix computation.
This may sound a bit circular, but the Transformer is a biomimetic algorithm paradigm built on the hardware currently best suited to biomimetic algorithms: parallel matrix multiplication. In other words, without NVIDIA GPUs and matrix multiplication there would be no Transformer today, so NVIDIA did indeed spawn the Transformer.
Zhang Peng: At the GTC conference, Huang made it clear that chip computing power is still expanding, and the seemingly endless scaling law of large models is supported by NVIDIA’s computing power. His statement suggests that hardware improvements can still keep pace with demand today and carry things forward. However, you believe this will not last and will eventually be disrupted?
Yang Meng: Using GPUs with biomimetic algorithms is not adaptation; it’s making do.
According to the logic we just derived, large models must eventually move away from the von Neumann architecture to an integrated storage and computation architecture. The paradigm that is more suitable for integrated storage and computation may not be the Transformer paradigm but rather a newer, more suitable paradigm for integrated storage and computation.
In other words, the sixth-generation paradigm running on NVIDIA GPUs may give rise to a brand-new paradigm in the seventh generation on the integrated storage and computation architecture.
Zhang Peng: So, although NVIDIA, which operates under the von Neumann architecture, has given birth to the Transformer and the current neural network algorithms, the evolution of biomimetic neural networks will ultimately settle on a new computational architecture. In other words, only superhumans can give birth to superhumans. This logic suggests that only on a superhuman computational architecture can we reach ultimate AGI and achieve the most advanced biomimetic algorithms, correct?
Yang Meng: Yes, the current architecture is like storing all knowledge and memory in the left brain and moving it all to the right brain for reasoning every time. I believe that in the long run, this will definitely change.
Zhang Peng: Is this mainly about energy consumption and efficiency?
Yang Meng: It’s not just about energy consumption; it’s also about CPU cycle efficiency. Currently, more than 90 out of every 100 cycles are wasted on transportation. Additionally, the human brain is not only integrated in storage and computation but also in training.

03

Integrated Storage and Training as a New Paradigm May Be the Future, but It’s Also Dangerous

Zhang Peng: Regarding integrated storage and computation, what companies worldwide are making significant advancements in this area? Or what can we see today that is relatively certain?
Yang Meng: Domestically, there are already dozens of companies working on integrated storage and computation, and there are also quite a few overseas.
However, integrated storage and computation is much harder to achieve than CPUs. CPUs sound complex, but they enjoy a healthy ecosystem and industry division of labor: a company can license Arm or RISC-V cores and use TSMC for fabrication, so even a small company of a few dozen people can build a CPU.
Storage, however, is completely different. Fewer than ten companies globally produce memory, all oligopolies with very thick patent protections, and they manufacture in their own IDM (integrated device manufacturer) fabs. Under such high barriers, it is quite challenging for startups to pursue integrated storage and computation.
Zhang Peng: So, we need to discuss computational issues based on storage, but there is little industry ecology in the storage area, which is very unfavorable for innovation. Is that the point?
Yang Meng: Yes, there is little industry ecology in storage, making it very difficult to truly break in; innovation is quite challenging.
Zhang Peng: I can’t help but ask, if you can see this point, can’t NVIDIA see it? NVIDIA has been in computing for so many years and has such a high market value; I think in front of NVIDIA, storage barriers are not barriers. They could just buy a storage company. Is there such a possibility?
Yang Meng: It’s possible. I think when NVIDIA buys a company that does storage, like Micron or Western Digital, they may really be moving toward integrated storage and computation. If they haven’t bought it yet, this matter is likely just an internal trial; it’s hard to say they are truly engaged.
Zhang Peng: If we follow this logic, NVIDIA must see it too, but it is still operating under the von Neumann architecture and still using engineering methods to accelerate further. It can still match market demand, and the priority now is increasing production capacity rather than architectural innovation.
So today, NVIDIA does not face significant competitive pressure from peers and has no urgent need to switch to so-called superhuman architecture.
Yang Meng: I want to talk about integrated storage and training because I think ‘training’ is a fascinating topic. Humans evolve as they grow; every experience they go through involves both computation and training.

04

Long Context + Edge Models May Be the Optimal Solution for Individuals

Zhang Peng: I remember you previously worked on some adaptations in AI; how did that go? In today’s technological conditions, can that approach solve the problem?
Yang Meng: We use large models extensively. For example, our customer service system uses GPT with a knowledge base and an agent to automatically reply to over 40% of customer emails daily, out of tens of thousands. This is a collaboration with Shulex, which also provides the service externally. We have nearly 1000 software engineers: 100 algorithm engineers, 200 app and server engineers, 300 embedded engineers, 100 IT specialists, and 200 software testers. With internal development tools based on GPT, 20% of the code submitted today is already AI-generated, and we hope to raise this proportion to 50% in the second half of this year.
We are using these tools, but they are not perfect yet because the models do not understand the relevant prior knowledge of the enterprise, and thus, the results they provide may not be the best. In other words, today’s large models possess general knowledge but have no understanding of your knowledge and memory, which severely limits their effectiveness in solving problems. The same applies to enterprise issues; when large models lack internal information, the results always miss the mark.
So how do we combine enterprise knowledge with large models? Our habit is to list all the options and arrange them from one extreme to the other. At the far left is retraining; center-left is fine-tuning; at the far right is prompt engineering, i.e. writing better prompts; and center-right is embedding, also known as retrieval-augmented generation (RAG): break enterprise knowledge into small segments, vectorize them, find the vectors relevant to a query, map them back to text segments, and send those segments to the large model for answer generation.
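The RAG option described above can be sketched end to end in a few lines. Here a trivial bag-of-words count stands in for a real embedding model, the retrieved segment is simply prepended to the prompt, and the knowledge snippets are invented for illustration:

```python
# Toy sketch of a RAG pipeline: split enterprise knowledge into
# segments, "vectorize" them (a bag-of-words Counter stands in for a
# learned embedding), retrieve the most relevant segment for a query,
# and prepend it to the prompt sent to the model. Snippets are made up.

import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())   # stand-in for an embedding model

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)      # missing keys count as 0
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

knowledge = [
    "warranty claims are handled within 30 days of purchase",
    "the charger supports 65W USB-C power delivery",
]
index = [(embed(seg), seg) for seg in knowledge]   # vectorize once

def retrieve(query):
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[0]))[1]

def build_prompt(query):
    context = retrieve(query)              # most relevant segment
    return f"Context: {context}\nQuestion: {query}"

print(build_prompt("how many watts does the charger output?"))
```

A production system would swap in a real embedding model and a vector database, but the shape of the pipeline (segment, vectorize, retrieve, augment) is the same.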
Last time we discussed this with ByteDance, Zhang Yiming mentioned another intermediate solution called long context. When your context truly reaches 10 million tokens, that context may encompass everything about you.
Zhang Peng: Isn’t this integrated storage and computation? The context effectively becomes a memory.
Yang Meng: Yes, to some extent, it turns the context into memory, but achieving a very long context today still incurs significant algorithmic costs.
Another issue is, if there were truly a context that recorded all your experiences over the past ten or twenty years, would you feel comfortable having that context stored in the cloud?
Zhang Peng: Definitely not in the cloud; it must be on my edge, with at most an encrypted backup in the cloud.
Yang Meng: This is why I say that future reasoning must occur on the edge. When you want to store all knowledge and memory in such a long context, it can only be done on the edge; it cannot be run in the cloud; that is too dangerous and lacks privacy.
I believe that in the future, long context should be combined with edge models, and even when the context is long, edge models won’t need particularly large parameters. As training data becomes more refined, the model parameters can become smaller. Thus, the future may involve a 7B or 10B model running on the edge, combined with your super long context; in this case, we must question whether an NVIDIA chip will still be present on the edge.
In the long run, this may involve integrated storage and computation chips. In the short term, today, a 7B model paired with a particularly large context might be something Qualcomm can achieve, leaving NVIDIA out of the equation. Today, our era appears particularly unpredictable and complex. However, if you look at the underlying principles, they remain quite stable, consistently moving towards biomimicry; both algorithm paradigms and system architectures are heading in that direction.
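Some quick arithmetic, using a hypothetical Llama-style 7B configuration (the layer count and hidden size below are assumptions for illustration), shows why a quantized 7B model is plausible on the edge while a naive super-long context is still costly:

```python
# Rough memory-footprint arithmetic for the edge scenario above:
# a 7B-parameter model quantized to 4 bits, plus the KV cache a long
# context would need. The model shape is a hypothetical Llama-style
# configuration used only for illustration.

params = 7e9
weight_gb = params * 0.5 / 1e9          # 4-bit quantization = 0.5 bytes/param
print(f"weights: ~{weight_gb:.1f} GB")  # feasible on a phone-class SoC

# A naive KV cache grows linearly with context length:
layers, hidden, bytes_per_val = 32, 4096, 2   # assumed shape, fp16 K and V
ctx = 1_000_000                               # a 1M-token context
kv_gb = 2 * layers * hidden * bytes_per_val * ctx / 1e9
print(f"naive KV cache at {ctx:,} tokens: ~{kv_gb:.0f} GB")
```

Under these assumptions the weights fit in a few gigabytes, but a naive million-token KV cache would need hundreds, which is why long context still carries the significant algorithmic costs mentioned earlier.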

05

New Opportunities in Consumer Hardware, Anker Seeks Creators with Similar Values to Collaborate

Zhang Peng: What imagination space do you see for the future integration of large models with consumer hardware, including embodied intelligent robots?
Yang Meng: There are several layers to this. At the foundational level, we are looking at how to expedite the landing of integrated storage and computation chips. Regarding the algorithms for robotic large models, we are also considering whether to develop a robotic large model ourselves. In fact, the human brain has two layers: the cerebrum and the cerebellum. The cerebrum is responsible for knowledge storage and decision-making, while the cerebellum is responsible for movement. Therefore, we are contemplating which model we should develop ourselves.
At a higher level, creating algorithms can be done by a small team, but once hardware is involved, it cannot be managed by a small team of a dozen people, as the value chain for hardware is particularly long. A hardware company typically requires at least 1000 people to have a significant presence in the international market.
Zhang Peng: So in this regard, divide and conquer must continue; we must break it down to the corresponding problems and have enough force to engineer a better solution, correct?
Yang Meng: Yes, from our observations, almost all hardware companies with a scale of several billion or even over a hundred billion have business but lack capability. What do I mean? They rely on a group of people to develop a few products to sell globally, achieving billions in revenue, but they lack the ability to systematically implement this across many subcategories.
After the transformation of underlying technologies in recent years, I believe that first, many existing hardware forms can be reimagined, and second, large models will undoubtedly spawn entirely new hardware forms.
Zhang Peng: You usually keep a low profile and only communicate in small circles. Why did you suddenly come forward to speak out? Are you looking to recruit?
Yang Meng: You caught me…
We indeed feel that there are many opportunities available at this stage, and our company’s biggest conflict is that there are fewer people than opportunities. Relying on one-on-one communication will certainly not attract many people. So we want to use some hot topics and well-known media to share our thought processes and ways of working to attract like-minded individuals to create something new.
Zhang Peng: So what kind of company is Anker? What kind of people do you need?
Yang Meng: Anker is a consumer electronics company. Many people know Anker as a power bank company, but today power banks account for less than 10% of our revenue. We have over a dozen product lines in charging and energy storage, audio-visual equipment, and smart home sectors, with a research and production team of 2000 people, and in 2022 we generated over $2 billion in revenue. In some areas, we have already reached the top tier globally. For example, in home security, our Eufy brand has achieved a 30% market share in products priced over $400, becoming the market leader.
Anker is a company focused on niche categories; we do not make smartphones or computers, but instead concentrate on excelling in many subcategories. Our goal is to become the Procter & Gamble, or the Texas Instruments, of consumer electronics. We have a platform of mature process tools, foundational technologies, sales channels, and brand effects that lets teams focused on niche areas or markets efficiently create outstanding products and achieve commercial success, with a higher probability of success, and higher expected returns, than independent entrepreneurship.
When it comes to recruitment, we often talk about finding individuals who share our values, that is, creators. From the discussions above, you can sense that we are a company that enjoys deep thinking. I want to emphasize that Anker’s first value is first principles, which means setting aside what others tell you and starting from the most basic principles to derive the immutable framework of a matter and find the factors within that can change.
You will find that the direction derived from first principles often leads to a path that no one has taken before.
Zhang Peng: First principles lead to non-consensus.
Yang Meng: Yes, so having first principles is not enough; you also need the courage to deviate from the majority and walk down a path that no one has walked before. Therefore, our second value is to pursue excellence, which means daring to take risks, finding ways to pursue long-term optimal solutions behind non-consensus.
In fact, pursuing excellence is a concept that contrasts with seeking victory. For instance, in basketball, if my team strives to win by one point, that is called seeking victory, but pursuing excellence means that every shot, every move must be executed as well as possible. The outcome in this case may not just be winning by a few points.
Zhang Peng: It’s about dominating the competition.
Yang Meng: Defeating others is not the ultimate goal; you will find that companies fixated on competing with others often do not end well. Only by truly finding the right path from first principles and walking it quietly, step by step, can one build an exceptional company. We only came to realize this in the second half of last year.
Previously, our corresponding value was to pursue excellence, summarized into three surpasses. The first step is to surpass yesterday’s self, the second step is to surpass the best peers, and the third step is to surpass the most essential needs of consumers. While this sounds like a linear progression, after four to five years of practice, we found that this is not a sequential staircase; it is actually two choices of moving left or right.
Moving right is a smooth path to surpass oneself and peers, while moving left is a mountain with no path. If you want to surpass the most essential needs of consumers, you must forget about the previous two surpasses and not take the smooth path on the right. You must be willing to find that hidden upward path on the left and walk that path quietly.
For example, in audio front-end algorithms, noise reduction, 3A (echo cancellation, gain control, and noise suppression), and so on were previously handled by separate signal-processing algorithms or small deep learning models. Unifying them into one large model is a first-principles approach, but no one had done it. Our audio algorithm team identified this first-principles path, took the risk, found the way, and spent two years building a large speech model that achieved the world’s best speech front-end performance; it will be fully deployed in Anker’s conference phones, headphones, and other products starting this year. This kind of quiet breakthrough, rather than short-term competition with peers, is an example of the values we encourage internally.

06

Distributing Surplus Value to Those Who Create Value

Zhang Peng: What qualities do you think your company possesses that support continuous innovation and enable you to maintain excellence over the years, achieving today’s scale?
Yang Meng: First, we are a long-term oriented company.
Long-termism can be broken down into two qualities. The first is endgame thinking: working out the eventual end state and letting it guide current choices. For example, some ask why Anker, which initially sold products through e-commerce, decided to transform into a product company. It is because we found that any mature distribution platform will place branded goods at the forefront, meaning only branded products can thrive. Once we clearly saw that endgame, we had to choose to build a brand.
The second quality is delayed gratification, being willing to sacrifice short-term benefits for greater long-term interests. For instance, we are currently investing a lot of money into developing robotic large models, new hardware, and even storage-computation chips. In the short term, it is certainly difficult to achieve profitability, but a very firm element of our company’s values is delayed gratification; we believe these investments will lead to better long-term outcomes. As long as we don’t starve in the short term, we will strive for long-term success.
Secondly, it’s about distributing value correctly.
To quote Ren Zhengfei, he said that value distribution must align with creation; otherwise, value creation cannot be sustainable. In other words, whoever earns the money for the company should receive it; otherwise, the company will not last long.
We share the same view. Our company tracks an indicator we call surplus value. Open a company’s financial report and you will find two lines: one shows the compensation and benefits paid to workers, and the other shows the net profit attributable to the parent company’s shareholders. Together, these two lines represent the money available for distribution; their sum is what we call surplus value. We observe that top-tier companies generally have a surplus value of over 30 points; Apple, for example, has 29 points and Huawei 33.
However, how companies distribute surplus value varies significantly. Apple allocates most of it to shareholders, with workers receiving only a few points, whereas Huawei allocates the majority to workers, with shareholders receiving only a few points. We believe the value of a consumer electronics company is primarily created by its workers, so, like Huawei, we will allocate 75% of the surplus value to workers, with shareholders receiving the smaller portion.
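As a sketch of the metric (assuming “points” means percent of revenue, which the text implies but does not state, and using entirely made-up figures, not any company’s actual financials):

```python
# Illustrative sketch of the "surplus value" metric described above:
# (worker compensation + net profit to shareholders) as a share of
# revenue, then split between workers and shareholders.
# ALL figures are hypothetical.

def surplus_value_points(revenue, compensation, net_profit):
    return 100 * (compensation + net_profit) / revenue

revenue, compensation, net_profit = 2_000, 500, 160   # hypothetical, in $M
points = surplus_value_points(revenue, compensation, net_profit)
print(f"surplus value: {points:.0f} points")

# The split described in the text: what share goes to workers?
worker_share = compensation / (compensation + net_profit)
print(f"share going to workers: {worker_share:.0%}")
```

With these invented numbers the company would sit at 33 points, in the top tier by the text’s standard, with roughly three quarters of the surplus going to workers.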
Zhang Peng: How many points of surplus value do you currently have?
Yang Meng: We are just over twenty points, so Anker is still a second-tier company striving to become a top-tier one (laughs).
However, we say that this year’s results stem from last year’s efforts. Before the second half of last year, Anker was still a company focused on winning, but after upgrading our values to pursue excellence, our thinking and actions have begun to shift, and I believe everyone will see the changes in our results.

07

General-purpose Robots Will Become a Super Category

Zhang Peng: Lastly, I want to ask, what do you see as the truly structural opportunities that excite you and that you are willing to bet on in the next five years?
Yang Meng: First, we must integrate our existing hardware products with large model capabilities, and based on this, many new hardware categories will emerge. I believe there are definitely many opportunities in this area.
From a hardware product perspective, the biggest opportunity lies in robotics. Today, the robotics category includes specialized robots, such as robot vacuums, autonomous vehicles, and delivery robots, as well as general-purpose robots, such as humanoid robots.
Returning to first principles, we can abstract the pattern. In the smartphone era, the pattern was to first refine products in specialized subcategories and break through user-experience thresholds (feature phones, MP3 players, digital cameras, GPS navigators, etc.) before they converged into a general-purpose super category, the smartphone. If this pattern holds for robotics, I think there will be many opportunities for specialized robots in the next 5-10 years.
Zhang Peng: What about general-purpose robots?
Yang Meng: I believe that general-purpose robots will definitely become a super category, regardless of whether they are humanoid or not; this is indisputable.
Regarding whether specialized categories like vacuum robots will still exist after general-purpose robots emerge, I believe they will still exist. This is because a household cannot buy three or five general-purpose robots; if you buy one general-purpose robot, it will help you cook and do laundry, and it may not have much time left for vacuuming. In this case, if a vacuum robot is not too expensive, I believe it will still have a place.
For general-purpose robots, we like to align our understanding with three questions:
  1. When the one millionth general-purpose robot rolls off the production line, what will be its factory cost?

  2. In what scenarios will these one million robots be utilized?

  3. What year will that day come?

If anyone has their own thoughts on these questions, feel free to come to the company and find me for a secret handshake.

As a company focused on niche categories, if we do well, the specialized categories we create will have the opportunity to converge into the general-purpose robot super category, and we will turn them into independent companies to seize this opportunity. Life only gives us a few chances to fight.
