My learning/reading this week (W50 2018)

1) [The Wall Street Journal] Peter Landers: The Old U.S. Trade War With Japan Looms Over Today’s Dispute With China

Source: https://www.wsj.com/articles/the-old-u-s-trade-war-with-japan-looms-over-todays-dispute-with-china-11544717163

[Excerpt] A fast-rising Asian power has built up a huge trade surplus with the U.S. and threatens American economic supremacy. Washington is outraged at how this new giant has acquired U.S. technology—often, U.S. officials warn, by theft—and how it has used the heavy hand of government to thrust its companies into a dominant global position. Now a Republican president who won election with surprising support from blue-collar men in the Midwest says that the economic rival had better make a deal, or else.

That, of course, is a description of the mid-1980s, when the rise of Japan was a top challenge for President Ronald Reagan and the U.S. was constantly setting deadlines and threatening tariffs. Now the U.S. has a similar challenge on its hands—only this time, it is China, which has replaced Japan as the world’s No. 2 economy and America’s No. 1 challenger.

Under similar pressure three decades ago, Tokyo made a fateful choice: major concessions to the U.S., including in the 1985 Plaza Accord that let Japan’s currency rise against the dollar. What happened next offers important lessons for the U.S. about how trade conflicts can end in unanticipated ways.

Faster than anyone thought possible, Japan ceased to be a threat to U.S. economic primacy. Under U.S. pressure, it cut interest rates to stimulate demand for imports. That spurred a historic bubble, which collapsed in the early 1990s, sending Japan into a tailspin. Soon after, all the hand-wringing about a world economy controlled by Tokyo ceased.

China has closely studied Japan’s experience and is likely to put up greater resistance to the concessions that the U.S. wants. But whether it gives in to Washington’s demands or resists and suffers a big tariff hit, the Chinese economy is in for a destabilizing jolt. Like Japan, China may show vulnerabilities America didn’t expect and isn’t prepared for. [/Excerpt]


2) [LinkedIn Pulse] Ben Hunt: You Don't Have to Dance Every Dance

Source: https://www.linkedin.com/pulse/you-dont-have-dance-every-ben-hunt/

[Excerpt] Humans are biologically designed to pay attention to one big thing at a time, and our media stimulus machine knows it, so every week the “story” of what’s driving markets up or down will be good news or bad news on one of these concerns. Two weeks ago the story was Jay Powell finding that old-time dovish religion, which markets loved. This week the story was Trump starting the clock on a trade war game of Chicken with China, which markets hated. In truth, of course, there is “news” about all four of these concerns happening all the time, but until further notice … if you want to explain the WHY of your portfolio to your clients or whoever it is that you have to explain yourself to, you’ll need to explain it in terms of The Story of the Week. Because even if you’re not utterly focused on that Story of the Week, THEY are.

And for the next 90 days, that Story of the Week is most likely to be the China trade war story.

Why? Because it’s got a ticking clock. Again, we are biologically hardwired to respond with full attention to a countdown to disaster (the core brilliance of the TV show “24”), and again, our media stimulus machine knows it.

Let me be really clear about this. Anytime that anyone talks about “chances” or “odds” or “likelihood” in the context of a game of Chicken, like this trade negotiation between the US and China, they are talking nonsense.

There are no odds in a game of Chicken. It's one of the hardest concepts in game theory to wrap your head around, but maybe the most important. And it has enormous consequences for how to invest over the next three months.

Chicken is a game with two equilibria – two potential outcomes for the game. You can see this mathematically in the stylized 2×2 game matrix, with the equilibrium outcomes circled in gold, or you can just think about every game of Chicken you’ve ever seen in the movies.

I get so frustrated with some of the “analysis” that you hear about this trade war game of Chicken, especially this gem that you’ve already heard umpteen times and will hear umpteen-squared more times over the next two months:

“The US will win because China has more to lose in a trade war.”

This is nonsense. It’s nonsense because it’s talking about the pay-offs of the game, not about the behavioral driver of player strategy – willpower. Pay-offs cut both ways. Sure, I’ve known people where ‘having more to lose’ makes them jump from their tractor first. But I’ve also known people who have MORE willpower in a game where they ‘have more to lose’.

Bottom line: it’s a fool’s errand to impute resolve from starting game conditions and pay-offs.

The only analysis that has any predictive usefulness in determining the winner of a game of Chicken is an analysis of exogenous player resolve. That’s a ten-dollar phrase that means you peer into the psyche of the players and figure out who wants it more.

So who wants it more, Trump or Xi?

Anyone who claims to have an answer to that question is either lying to you and/or lying to themselves. Not even Trump and Xi can know the answer to this question! Again, this is the delicious tension and suspense of a game of Chicken – not even the players know who wants it more. The outcome of the game only and always emerges from the playing of the game.

It’s not that the odds of the game are unknown. It’s that the odds of the game are unknowable. [/Excerpt]
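Hunt's "two equilibria" claim is easy to verify mechanically. The sketch below (my own illustration with made-up payoff numbers, not from the article) enumerates the pure-strategy Nash equilibria of a stylized 2×2 Chicken matrix by checking, for each cell, whether either player could gain by unilaterally deviating:

```python
# Stylized Chicken payoffs (illustrative numbers only): each player
# chooses Swerve or Dare; tuples are (row payoff, column payoff).
payoffs = {
    ("Swerve", "Swerve"): (0, 0),
    ("Swerve", "Dare"):   (-1, 1),
    ("Dare",   "Swerve"): (1, -1),
    ("Dare",   "Dare"):   (-10, -10),  # the crash
}

def pure_nash_equilibria(payoffs):
    """Return cells where neither player gains by unilaterally deviating."""
    strategies = ["Swerve", "Dare"]
    equilibria = []
    for r in strategies:
        for c in strategies:
            row_u, col_u = payoffs[(r, c)]
            # Row player: no alternative row strategy does strictly better.
            row_best = all(payoffs[(r2, c)][0] <= row_u for r2 in strategies)
            # Column player: no alternative column strategy does strictly better.
            col_best = all(payoffs[(r, c2)][1] <= col_u for c2 in strategies)
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [('Swerve', 'Dare'), ('Dare', 'Swerve')]
```

Note what the enumeration shows: both asymmetric outcomes (one player swerves, the other dares) are stable, and nothing in the matrix itself tells you which one will occur — which is exactly Hunt's point that the outcome turns on player resolve, not on the payoffs.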


3) [LinkedIn Learning] Ken Blanchard on Servant Leadership: Habits of a servant leader

Source: https://www.linkedin.com/learning/ken-blanchard-on-servant-leadership/habits-of-a-servant-leader

[Transcript] People often ask me, well, how do I behave in my good intentions? I get in my heart that I want to be a servant leader, and I think I understand the concept in my head, but how do I make sure that I behave in my good intentions? It really has to do with your habits. What do you do on a daily basis to recalibrate who you want to be? I think it has to do with how you enter your day. I'm convinced we have two selves: an external, task-oriented self that's used to getting jobs done, and a thoughtful, reflective self.

Now which of those two selves do you think wakes up quicker in the morning? It's the task-oriented self. What happens? The alarm goes off, and my friend John Ortberg says what an awful term for that, why isn't it the opportunity clock? Or the it's-going-to-be-a-great-day clock? No, the alarm. And you jump out of bed and you're trying to eat while you're washing, you jump in the car and you're on your car phone, you get to the office, you go to this meeting, that meeting, and you get home at night absolutely exhausted, seven or eight o'clock at night. You fall into bed, and you don't even have any energy to say goodnight to anybody who might be lying next to you.

And boom, you're into the next day the same way. You're caught in a rat race, and I love the great Hollywood philosopher Lily Tomlin, who always said the problem with a rat race is that even if you win it, you're still a rat. So what you really have to do, I think, is enter your day slowly and start by opening your thoughtful, reflective self. I like to put my hands on my knees, sit on the side of the bed, and think about what am I going to do today, what are my concerns. I just quietly kind of lay those down, and then I turn my hands upward and quiet myself and think about who do I want to be today.

Or how do I want to behave, what do I want to do, and I always end up reading my own mission statement. And then, I've written my own obituary. A lot of people say, you're a little sick, Blanchard, but you may know the story about Alfred Nobel: his brother died in Sweden, and when he went to read his brother's obituary, the paper had gotten him and his brother mixed up. So he got to read his own obituary, and he was involved in making dynamite and all.

It talked about destruction and all those kinds of things, and he thought, oh my God, that was awful. And he decided to rewrite his obituary so he would be remembered differently. He asked the people around him, what's the opposite of destruction? They said peace, so he redesigned his life so he'd be remembered for peace. And boy, that's what your obituary is: how do you want to be remembered? And then, what are your values? My values are spiritual peace, integrity, love, and joy. And I read my values, and I've defined those. [/Transcript]


4) [Wired] Tom Simonite: Google's AI Guru Wants Computers to Think More Like Brains

Source: https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/

[Excerpt]
WIRED: Artificial intelligence can raise ethical questions in everyday situations, too. For example, when software is used to make decisions in social services, or health care. What should we look out for?

Geoff Hinton: I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster.

People can’t explain how they work, for most of the things they do. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story.

Neural nets have a similar problem. When you train a neural net, it will learn a billion numbers that represent the knowledge it has extracted from the training data. If you put in an image, out comes the right decision, say, whether this was a pedestrian or not. But if you ask “Why did it think that?” well if there were any simple rules for deciding whether an image contains a pedestrian or not, it would have been a solved problem ages ago.

[...]

WIRED: So how can we know when to trust one of these systems?

Geoff Hinton: You should regulate them based on how they perform. You run the experiments to see if the thing’s biased, or if it is likely to kill fewer people than a person. With self-driving cars, I think people kind of accept that now. That even if you don’t quite know how a self-driving car does it all, if it has a lot fewer accidents than a person-driven car then it’s a good thing. I think we’re going to have to do it like you would for people: You just see how they perform, and if they repeatedly run into difficulties then you say they’re not so good.

[...]

WIRED: The recent boom of interest and investment in AI and machine learning means there’s more funding for research than ever. Does the rapid growth of the field also bring new challenges?

Geoff Hinton: One big challenge the community faces is that if you want to get a paper published in machine learning now it's got to have a table in it, with all these different data sets across the top, and all these different methods along the side, and your method has to look like the best one. If it doesn’t look like that, it’s hard to get published. I don't think that's encouraging people to think about radically new ideas.

Now if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted, because it's going to get some junior reviewer who doesn't understand it. Or it’s going to get a senior reviewer who's trying to review too many papers and doesn't understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that's really bad.

What we should be going for, particularly in the basic science conferences, is radically new ideas. Because we know a radically new idea in the long run is going to be much more influential than a tiny improvement. That's I think the main downside of the fact that we've got this inversion now, where you've got a few senior guys and a gazillion young guys.

[...]

WIRED: Some scholars have warned that the current hype could tip into an “AI winter,” like in the 1980s, when interest and funding dried up because progress didn’t meet expectations.

Geoff Hinton: No, there's not going to be an AI winter, because it drives your cellphone. In the old AI winters, AI wasn't actually part of your everyday life. Now it is.
[/Excerpt]


5) [LinkedIn Learning] Ash Coleman: Agile Testing

Source: https://www.linkedin.com/learning/agile-testing-2/story-kickoff

[Transcript] Technology is complex and constantly evolving. It's no wonder technology is always under review and reform, a practice known as versioning. Building tools that users engage with on a daily basis pretty much guarantees ongoing revision. So long as a tool is in the hands of individuals, there will be a need over time to upgrade it, fix it, or resolve its issues in order to stay relevant to users. The funny part is that these cycles of review and reform begin well before a developer writes a single line of code.

The feeling of walking out of a planning meeting with all the details needed to begin working on a project is typically short-lived. From the moment planning ends to when the ticket is pulled and ready for development, more information becomes available. This new scope may contradict assumptions the team made at the time of planning. This natural evolution comes from a wide array of events, such as a new version of a technology or tool being released; updated clients, browsers, or devices; or a common approach that is no longer reliable and needs an alternative.

All of these slight variations in the plan suggest a new approach, and for that to happen there needs to be a discussion. This is called the story kickoff. At the story kickoff, the team discusses how the technology or deliverables have evolved since the ticket was planned and establishes clear expectations. At the start of every user story, this discussion with your three amigos (stakeholder/product owner, developer, tester) should happen before development commences in order to realign expectations.

This includes finalizing acceptance criteria, discussing changes in the original plan of development, key testing approaches and any other information that makes the execution of this story crystal clear. [/Transcript]


6) Other learnings in week 50:

[LinkedIn Learning] Todd Dewett: Giving Your Elevator Pitch https://www.linkedin.com/learning/giving-your-elevator-pitch/

[LinkedIn Learning] Sara Canaday: Managing High Potentials https://www.linkedin.com/learning/managing-high-potentials/
