
Interesting reads about Artificial Intelligence/Machine Learning

(1) Harvard Business Review: Why Companies That Wait to Adopt AI May Never Catch Up
Source: https://hbr.org/2018/12/why-companies-that-wait-to-adopt-ai-may-never-catch-up
Summary: An HBR article on why companies should become early adopters of AI/ML technology: sourcing a substantial amount of meaningful training data, training AI models on incoming datasets that keep growing at an exponential rate, and re-engineering business processes after adopting an AI-based enterprise system all take time. But once the AI system starts to produce fruitful business outcomes, early-adopting companies gain a first-mover advantage that is difficult to assail.


(2) Harvard Business Review / Andrew Ng: What Artificial Intelligence Can and Can’t Do Right Now
Source: https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now
Excerpt: After understanding what AI can and can’t do, the next step for executives is incorporating it into their strategies. That means understanding where value is created and what’s hard to copy. The AI community is remarkably open, with most top researchers publishing and sharing ideas and even open-source code. In this world of open source, the scarce resources are therefore:

1) [Data]. Among leading AI teams, many can likely replicate others’ software in, at most, 1–2 years. But it is exceedingly difficult to get access to someone else’s data. Thus data, rather than software, is the defensible barrier for many businesses.
2) [Talent]. Simply downloading and “applying” open-source software to your data won’t work. AI needs to be customized to your business context and data. This is why there is currently a war for the scarce AI talent that can do this work.

Coursera Co-Founder Andrew Ng on Artificial Intelligence: Why AI Is the New Electricity
[i] https://medium.com/syncedreview/artificial-intelligence-is-the-new-electricity-andrew-ng-cc132ea6264
[ii] https://www.youtube.com/watch?v=21EiKfQYZXc [Youtube]


(3) MIT Technology Review: Is AI Riding a One-Trick Pony?
Source: https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/
Excerpt: AI today is deep learning, and deep learning is backprop—which is amazing, considering that backprop is more than 30 years old. Geoffrey Hinton with his colleagues David Rumelhart and Ronald Williams published a breakthrough paper in 1986 that elaborated on a technique called backpropagation, or backprop for short. Backprop, in the words of Jon Cohen, a computational psychologist at Princeton, is “what all of deep learning is based on—literally everything.”

The way backprop works is that you start with the last two neurons [the ones that produced the computation results] and figure out just how wrong they were: how much of a difference is there between what the excitement numbers should have been and what they actually were? When that’s done, you take a look at each of the connections leading into those neurons—the ones in the next lower layer—and figure out their contribution to the error. You keep doing this until you’ve gone all the way to the first set of connections, at the very bottom of the network. At that point you know how much each individual connection contributed to the overall error, and in a final step, you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.
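To make the walkthrough above concrete, here is a minimal sketch of backpropagation in Python with NumPy, on a tiny 2-4-1 sigmoid network trained on XOR. The network size, toy data, and learning rate are illustrative choices of mine, not taken from the article:

```python
# Minimal backpropagation sketch: a 2-4-1 sigmoid network learning XOR.
# Purely illustrative; network size, data, and learning rate are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR inputs and targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Weights and biases: input -> hidden (2x4), hidden -> output (4x1).
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: compute each layer's "excitement numbers".
    h = sigmoid(X @ W1 + b1)      # hidden layer
    out = sigmoid(h @ W2 + b2)    # output layer (the last neurons)

    # How wrong was the output? Gradient of the squared error at the output.
    d_out = (out - y) * out * (1 - out)
    # Propagate the error down: each hidden connection's share of the blame.
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Final step: nudge every weight in the direction that reduces the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically close to [0, 1, 1, 0] after training
```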

My interpretation of the article: the danger of the hype over deep learning (machine learning) is that most of the recent breakthroughs in various application domains were achieved through improvements in hardware processing capability combined with big data, not through a true leapfrog development of the science itself. The fundamentals of neural network science probably haven't changed much in the past 30 years; it is the powerful hardware that has drastically reduced the time needed to compute over big data. Besides, the result of the computation will probably only show you correlation, not causation.
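As a small illustration of that last caveat, the toy Python snippet below shows how two series with no causal link whatsoever can still exhibit a strong sample correlation - exactly the kind of pattern a trained model will happily pick up. The data are synthetic and the seed is arbitrary:

```python
# Correlation without causation: two independent random walks
# frequently show a large sample correlation. Illustrative only.
import numpy as np

rng = np.random.default_rng(7)

a = np.cumsum(rng.normal(size=500))  # random walk A
b = np.cumsum(rng.normal(size=500))  # random walk B, fully independent of A

# Often far from zero even though neither series influences the other.
print(np.corrcoef(a, b)[0, 1])
```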


(4) Companies are suddenly declaring themselves "AI first." Why it’s a problem for their current customers
Source: https://www.linkedin.com/pulse/companies-suddenly-declaring-themselves-ai-first-why-its-joshua-gans/
Summary: Technology companies today face a trade-off in their strategic commitments: do they want to be AI first, mobile first, or cloud first? As the author asks: if AI is first, what becomes second?

Additional reference on the context of mobile first, cloud first, and AI first: The Three Major Platform Shifts of Our Time - a16z Partner Frank Chen
Source: https://www.youtube.com/watch?v=OGIOAlSyHs4
Excerpt: We've seen basically three platform shifts since we started - so, in nine years. These things don't happen that often.

One platform shift was 'on premise' to 'cloud'. At one point it was counterintuitive to start a cloud-first company. In fact, I remember when Databricks started and they said, 'We're gonna be cloud only, we're not gonna be on premise,' and we were like, 'I'm not sure that's the right strategy.' And it's turned out great for them, so it was sort of the right time.

Another big platform shift was mobile first. At one point it was risky to start a company with mobile and not build the desktop web version first. It was counterintuitive, and then it clicked.

AI first is probably sort of the latest incarnation of that. At one point that was meaningful and you were differentiated. And then at some point it's like, what do you mean you don't have AI algorithms? What do you mean you're not doing deep learning or machine learning? Like, two companies being equal, doing the same thing - one with, one without machine learning - guess which one gets funded, guess which one will build the better software, guess which one will delight their customers, right? It's obviously the one with AI.


(5) Partner at Andreessen Horowitz Benedict Evans on Ways to think about Machine Learning
Source: https://www.ben-evans.com/benedictevans/2018/06/22/ways-to-think-about-machine-learning-8nefy
Summary:

1) Machine learning AI creates opportunities to free human talent for more meaningful and impactful work.
2) It enables and accelerates the automation of repetitive work that requires low to medium levels of human cognitive skill. However, machine learning AI still cannot match the in-depth intelligence of an individual human, whether in a single domain or across multiple domains.
3) AI is the next wave of enabling technology, resembling the contribution of relational databases to information systems in the 1980s, 1990s, and 2000s. The author envisions that in the future, machine learning could be an integral part of almost every information system.

More on machine learning from Benedict Evans: Does AI make strong tech companies stronger? *NEW*
Source: https://www.ben-evans.com/benedictevans/2018/12/19/does-ai-make-strong-tech-companies-stronger
Excerpt: In the past, if a software engineer wanted to create a system to recognise something, they would write logical steps (‘rules’). To recognise a cat in a picture, you would write rules to find edges, fur, legs, eyes, pointed ears and so on, and bolt them all together and hope it worked. The trouble was that though this works in theory, in practice it’s rather like trying to make a mechanical horse - it’s theoretically possible, but the degree of complexity required is impractical. We can’t actually describe all of the logical steps we use to walk, or to recognise a cat. With machine learning, instead of writing rules, you give examples (lots of examples) to a statistical engine, and that engine generates a model that can tell the difference. You give it 100,000 pictures labelled ‘cat’ and 100,000 labelled ‘no cat’ and the machine works out the difference. ML replaces hand-written logical steps with automatically determined patterns in data, and works much better for a very broad class of question - the easy demos are in computer vision, language and speech, but the use cases are much broader. Quite how much data you need is a moving target: there are research paths to allow ML to work with much smaller data sets, but for now, (much) more data is almost always better.

Though you need a lot of data for machine learning, the data you use is very specific to the problem that you’re trying to solve. GE has lots of telemetry data from gas turbines, Google has lots of search data, and Amex has lots of credit card fraud data. You can’t use the turbine data as examples to spot fraudulent transactions, and you can’t use web searches to spot gas turbines that are about to fail. That is, ML is a generalizable technology - you can use it for fraud detection or face recognition - but applications that you build with it are not generalized. Each thing you build can only do one thing. This is much the same as all previous waves of automation: just as a washing machine can only wash clothes and not wash dishes or cook a meal, and a chess program cannot do your taxes, a machine learning translation system cannot recognise cats. Both the applications you build and the data sets you need are very specific to the task that you’re trying to solve (though again, this is a moving target and there is research to try to make learning more transferable across different data sets).
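The rules-versus-examples contrast in the excerpt above can be sketched in a few lines of Python. The snippet below uses scikit-learn (my choice of "statistical engine"; any would do) on synthetic two-dimensional features standing in for real cat pictures; all the feature names and numbers are illustrative, not from the essay:

```python
# "Give examples instead of writing rules": a classifier learns the
# cat / no-cat boundary from labelled examples. Synthetic 2-D features
# (say, "furriness" and "ear pointiness") stand in for real pictures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Labelled examples: cats cluster high on both made-up features.
cats = rng.normal(loc=[2.0, 2.0], scale=0.7, size=(1000, 2))
not_cats = rng.normal(loc=[0.0, 0.0], scale=0.7, size=(1000, 2))

X = np.vstack([cats, not_cats])
y = np.array([1] * 1000 + [0] * 1000)  # 1 = 'cat', 0 = 'no cat'

# The "statistical engine": no hand-written edge/fur/leg rules anywhere.
model = LogisticRegression().fit(X, y)

print(model.predict([[2.1, 1.9], [-0.2, 0.1]]))  # -> [1 0]
```

Note that the fitted model is exactly as task-specific as the second paragraph describes: it separates these two clusters and can do nothing else.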


(6) Former Microsoft Executive, former Baidu COO Qi Lu on AI and other technology topics
Source: https://blog.ycombinator.com/baidus-coo-qi-lu-discusses-ai-with-daniel-gross/
Excerpt: In this wave of technology development, there’s one aspect that is fundamentally different from the previous generation of the big technology wave, which is that data plays an essential role. I’ll offer you this simple example. You can have 10,000 great engineers, or you can have a million great engineers - you will not be able to build a system that understands human conversations, and you will not be able to build a system that recognizes objects or scenes in images, because you need to have data. A simple analogy is humans. When you and I grew up, it’s not like our parents or God was writing code into our brains. Our built-in neural engines have the ability to learn, so our sensory systems - essentially our perceptive systems, whether visual or auditory - allow us to observe the world. Those observations from our sensors are data. This data carries knowledge, and we are able to learn from our interaction with the world. As we grew up, we acquired knowledge. The same thing happens for AI technology. It’s not about writing code this time. It’s about writing code that implements algorithms, with both software and hardware, that are able to learn - and learn knowledge from the data.

If you take that perspective, data, in my view, will become a primary means of production for the AI era. By definition, a means of production is a form of capital. Look at our human history: in the agricultural era, land was the primary means of production. You can see everything was organized around land; all the wars were fought over land. In the industrial era, the primary means of production were labor and equipment of different types - and certainly financial capital and human talent. But in the AI era, my view is that data will become a primary means of production, and harnessing data becomes key. And that comes back to China, because China has a different socio-economic policy around it. For certain segments - not everything - it’s much easier to acquire and harness data. That creates an environment for developing AI technologies, and then commercializing those technologies in market-oriented applications or social applications.

It is in that context that China has a structural advantage. In terms of approach, there will be cultural differences, even in the entrepreneurial world; startups in the China environment tend to work in their own ways. Silicon Valley and China share common attitudes and differ in some approaches, but that is not the bigger factor. In my view, it is the environment that is the more determinant factor making China, relative to other marketplaces or regions, a better place for AI development - because of data.


(7) Former Microsoft exec reveals why Amazon’s Alexa voice assistant beat Cortana
Source: https://www.theverge.com/2017/8/14/16142642/microsoft-cortana-amazon-alexa-qi-lu
Excerpt: Google and Microsoft, technologically, were ahead of Amazon by a wide margin. But look at the AI race today. The Amazon Alexa ecosystem is far ahead of anybody else in the United States. It’s because they got the scenario right. They got the device right. Essentially, Alexa is an AI-first device.

Qi Lu believes Microsoft and Google “made the same mistake” of focusing on the phone and PC for voice assistants, instead of a dedicated device. “The phone, in my view, is going to be, for the foreseeable future, a finger-first, mobile-first device,” explains Lu. “You need an AI-first device to solidify an emerging base of ecosystems.”


(8) Partner at Andreessen Horowitz Frank Chen on What's working and what's not for AI
Source:
[i] https://a16z.com/2017/12/07/summit-ai-update-frank-chen/
[ii] https://mixpanel.com/blog/2017/12/12/frank-chen-ai-andreessen-horowitz/
Excerpt: You can think about there being three kinds or stages of AI. There’s “narrow AI,” which can solve very specific kinds of problems. It can play a board game, or it can predict whether or not a customer is going to churn, it can figure out what’s in a picture.

There’s “general AI,” which is basically what we think of as human intelligence. Can it learn new things? Can it pass the Turing test? That is, if a human is interacting with it, will the human know that they are dealing with a machine rather than a human?

Then there’s “Super AI,” which Elon Musk is warning us about, in which the machines move far past humans.

All of the success you read about in AI has been in “narrow AI”, and the results have been spectacular.

We don’t have a unified approach towards general AI yet—we just don’t know how to get there. And until we have general AI, it’s too soon to worry about Super AI.

The way Andrew Ng puts it, worrying about Super AI is a bit like worrying about overpopulation on Mars. Someday it might be a problem, I guess, but first, let’s put the first person on Mars.


(9) Jeffrey Towson: Former President of Google China Kai-Fu Lee shares his insights into AI lessons in China
Source:
[i] https://www.linkedin.com/pulse/10-things-i-learned-artificial-intelligence-from-kai-fu-towson-%E9%99%B6%E8%BF%85/
[ii] https://www.linkedin.com/pulse/10-big-lessons-chinese-ai-from-kai-fu-lees-book-pt-2-towson-%E9%99%B6%E8%BF%85/

Excerpt:
Lesson 1: China is well positioned for AI’s age of implementation
Lesson 2: China has an advantage in data – and that is what will matter most in AI
Lesson 3: Scrappy entrepreneurs are China’s secret weapon in AI
Lesson 4: AI-driven automation will impact economies based on cheap labor and manufacturing
Lesson 5: Meituan’s “war of a thousand Groupons” is a good example of digital China
Lesson 6: AI is a competition between batteries and grids
Lesson 7: Optimizations and data network effects are going to get a lot more complicated
Lesson 8: Online-merge-offline (OMO) is next. And it’s awesome
Lesson 9: Government support could really accelerate AI in China
Lesson 10: Comparison between the US model and the China model

US Model       | China Model
---------------|----------------
Breakthrough   | Fusion + Speed
Technologies   | Applications
Vision Driven  | Result Driven
Light          | Heavy

McKinsey: Kai-Fu Lee’s perspectives on two global leaders in artificial intelligence: China and the United States
https://www.mckinsey.com/featured-insights/artificial-intelligence/kai-fu-lees-perspectives-on-two-global-leaders-in-artificial-intelligence-china-and-the-united-states

Kai-Fu Lee: The Four Waves of A.I.
https://www.linkedin.com/pulse/four-waves-ai-/


(10) Chris Thomas, McKinsey & Co: Artificial Intelligence Has 3 Big Technology Drivers
Source:
[i] https://www.youtube.com/watch?v=g7xMvcPO43w [Youtube]
[ii] The Rise of the Machines: How Chinese Executives Think about Developments in Artificial Intelligence

Transcript: https://erhc79.blogspot.com/2018/12/reshare-artificial-intelligence-has-3.html
Excerpt: Recent progress in core computing technology, algorithms, datasets, and applications is driving AI toward its tipping point.

1. AI is a central target of the leading semiconductor vendors, and all the top players in CPUs and GPUs are investing heavily in the high-capacity processing necessary for AI and machine learning.

2. The number and size of open-source AI platforms are growing dramatically, providing developers free access to programming interface and tools, algorithms, and training datasets for AI functions.

3. A massive increase in the amount and variety of data sources means that machines can be trained to make better decisions more quickly.

4. Tech giants and venture capitalists are eagerly pursuing start-ups that innovate the uses of AI across industries. Venture investments in AI startups have grown more than 20-fold from 2010 to 2014.

We have seen such pivotal transitions before, when technological innovations have coincided with market forces to create products that transform entire industries. The introduction of the iPhone in 2007 was one such moment, when the maturation of the touchscreen intersected with the growing popularity of mobile phones, resulting in a category-changing product.

Though the exact timing is impossible to predict, AI appears to be on the brink of a similar breakthrough. Significant technological advances in AI are creating opportunities for game-changing products and services. One key application is voice recognition. Success rates for natural language processing are approaching 99% (the technology tipping point), and major global and Chinese tech players are working hard to bring to market home network devices, like routers, that use voice input technologies.

In autonomous driving, key technologies are also approaching the tipping point: the object tracking algorithm, the algorithm used to identify objects near vehicles, has reached a 90% accuracy rate. Solid-state LiDAR (similar to radar but based on light from lasers) was introduced for high-frequency data collection of vehicle surroundings. Because these technologies have quickly become viable, major technology companies like Google, Nvidia, Intel, and BMW are accelerating efforts to develop self-driving vehicles.
Source: McKinsey & Co

(11) Chris Thomas, McKinsey & Co: How A.I. Is Different in China
Source: https://www.youtube.com/watch?v=puPSTkQhfjc [Youtube]
Excerpt:
Question 1: What is the most important thing happening in AI in China today?
Answer: I would say I'm gonna give you two answers on that one. One is a massive amount of experimentation in the consumer space, driven by the growth of these messaging and connection platforms, and lots of venture capital and big-company money flowing into it. So, experimenting with new ways to bring value to consumers - everywhere from financial services to media and other things.

The second thing I see is a huge amount of innovation around the technical solutions for artificial intelligence especially around developing new neural processing semiconductors.

Question 2: What's the biggest difference between AI in China and the rest of the world?
Answer: I think AI in China has two unique characteristics. One, because it has a massive but already digitally connected populace, you can scale new AI applications much faster. So that means the competition is much more intense to be the first mover and the big winner. The second thing is that in China because of these platforms and because of the requirement to move quickly, there's less fundamental innovation at the technology level, and much more innovation at the business model or application level than what you see out of Silicon Valley.

Question 3: What is everyone getting wrong about AI in China? What's the biggest misconception?
Answer: That it is just a consumer game. There's actually a lot more economic value to be created by leveraging advanced analytics and AI in traditional industry, manufacturing, and service industries than in helping people buy stuff online.

Question 4: What's next? What's coming in the next couple of years that we should all be keeping an eye out for?
Answer: Well, if I knew the exact answer to that question, I'd be a hedge fund trader and I'd be sitting on my yacht today. But in all seriousness, I think what you're going to see is big winners and standard platforms for artificial intelligence, similar to the way you have Android or iOS for phones, or Wintel for PCs. A standard platform that other people can innovate on top of has to emerge inside the AI world for the technology to take off. And from this massive competing set of companies, some winners will emerge - maybe not just one, maybe two or three with more specialized applications - so, winning platforms.

Last question: Outside of hiring McKinsey, what simple step could the CEO of a China or Asia company take in regards to AI? What would be an easy next step?
Answer: The way I would look at it is: take a look at a compendium of artificial intelligence use cases - 150 different ways it's being used today. Brainstorm three or four of them that you could apply to your own business, and then just go out there and do it. Put together some tiger teams, put up some sensors in the factory, run some advanced analytics, and see what you see. Apply three or four, see if you make some money. Do it in a piloting way: see which one scales, see which one works, see what you learn.

More on AI from Chris Thomas, McKinsey & Co:
What AI Can and Can't Do
https://www.youtube.com/watch?v=wTKSmJIEIMc [Youtube]


(12) Bridgewater Associates founder Ray Dalio: The Great & Terrible of A.I. in Markets
Source:
[i] https://www.linkedin.com/feed/update/activity:6327241955071840256/
[ii] https://twitter.com/raydalio/status/1062383097025626112
Transcript:
I think the biggest issue we are dealing with, particularly in the markets regarding (AI) algorithms, is that algorithms are going to blow up if these two considerations do not make sense:
i) Do you understand the algorithm?
ii) Does the cause-effect relationship make sense to you?
A lot of (AI) algorithms and machine learning mean that the person cannot explain the logic of the cause-effect relationship, and that is the first sign of danger.
The second sign of danger, or risk, is that the future (in markets) is (always) different from the past.
If you have both of those things in the markets, the future is more likely to be different from the past, and most importantly, whatever is discovered gets put into the price (by investors), right?
In other words, if you discover something right, and the algorithm discovers it, and other people discover the same algorithm, then what's going to happen is the worst - the reverse.
Because everybody using that algorithm will bid up the price, let's say, without understanding why, it's therefore more logical to go the opposite way of the algorithm than to follow it - you've got to short it, right? And history has shown us that's the case.


(13) Other interesting articles/videos about Artificial Intelligence/Machine Learning:

Google Chief Decision Intelligence Engineer Cassie Kozyrkov: 30 Data Science Punchlines - A holiday reading list condensed into 30 quotes *NEW*
https://towardsdatascience.com/data-science-conversation-starters-84affd2347f6

MIT Sloan Management Review: The Machine Learning Race Is Really a Data Race
https://sloanreview.mit.edu/article/the-machine-learning-race-is-really-a-data-race/

Harvard Business Review: New Supply Chain Jobs Are Emerging as AI Takes Hold
https://hbr.org/2018/08/new-supply-chain-jobs-are-emerging-as-ai-takes-hold

MIT Technology Review: The Dark Secret at the Heart of AI
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

Wall Street Journal: AI Can’t Reason Why
https://www.wsj.com/articles/ai-cant-reason-why-1526657442

The Atlantic: How a Pioneer of Machine Learning Became One of Its Sharpest Critics
https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/

MIT Technology Review: One of the fathers of AI is worried about its future
https://www.technologyreview.com/s/612434/one-of-the-fathers-of-ai-is-worried-about-its-future/

Bloomberg: Artificial Intelligence Has Some Explaining to Do
https://www.bloomberg.com/news/articles/2018-12-12/artificial-intelligence-has-some-explaining-to-do

MIT Technology Review: What is machine learning? We drew you another flowchart
https://www.technologyreview.com/s/612437/what-is-machine-learning-we-drew-you-another-flowchart/

A Friendly Introduction to Machine Learning
https://www.youtube.com/watch?v=IpGxLWOIZy4 [Youtube]
