Lars Erik Jordet has been programming since he was 12 years old. When he was a teenager, he began dabbling in artificial intelligence (AI) as a hobby. These days, he’s a Senior Data Scientist at StormGeo. We spoke to him about where AI fits into the workplace, how AI has changed working with weather data, and the reason he never tires of the industry.
Here in Oslo, we mostly analyze the electricity market. This means telling our customers what the power price will be tomorrow and further ahead. Tomorrow is important because there is an auction every day that sets the price for the next day. If our prediction for the next day is good, our customers either save or make a lot of money. Right now I’m working on power consumption in the Nordic countries, figuring out how much electricity people and industries use in this region. A lot of consumption in the Nordics is weather-driven, so we analyze a lot of weather data.
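As a toy illustration of what "weather-driven consumption" means in practice (the numbers and the linear model here are invented for the sketch, not StormGeo’s actual models), you can fit electricity consumption against temperature and use a temperature forecast to predict tomorrow’s demand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data: colder days mean more electric heating,
# so consumption rises as temperature falls.
temperature_c = rng.uniform(-20, 25, size=365)           # daily mean temperature
consumption_gwh = 400 - 6.0 * temperature_c + rng.normal(0, 15, size=365)

# Fit consumption = a * temperature + b by least squares.
a, b = np.polyfit(temperature_c, consumption_gwh, deg=1)
print(f"slope: {a:.1f} GWh per degree C, intercept: {b:.0f} GWh")

# Predict tomorrow's consumption from tomorrow's temperature forecast.
forecast_temp_c = -5.0
predicted_gwh = a * forecast_temp_c + b
```

Real consumption models add many more drivers (wind, daylight, holidays, industrial schedules), but the core idea is the same: the weather forecast is an input to the demand forecast.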
We can’t manually capture all the variables within the weather over such a wide area. You can check them for a single location and you can probably check them for 100 locations, but you can’t run all these checks for 10,000 locations in multiple countries. So we need a computer to do all these tiny checks, but we can’t tell the computer what to check for thousands of locations. That’s why we let it learn from data for each location and figure it out for us.
This isn’t a case of the “big data” that was talked about a few years ago. Big data simply meant that the data was bigger than could be stored on a single server, more than one computer could process. In this case, there is more data than is feasible for a group of humans to work with. It’s too costly in terms of time and we can’t do it fast enough to make it worthwhile.
Many forecasts work on a grid with an 11 km resolution. If we want to know the forecast for a smaller area, we have to consider certain circumstances, such as whether there are tall buildings in the area, which would mean it’s not as cold as the forecast says. Or, there might be a wind corridor, making it even colder than the forecast says.
There’s a joke among physicists that every cow is spherical, meaning when you work with theoretical physics, everything is highly simplified. A scientific model of a cow is just a sphere, not a complex cow with legs and a tail. That’s the issue with these grid forecasts. With an 11 km resolution, you can’t capture every little hill and valley.
So there will always be a difference between the observed temperature and the forecast temperature. We use AI to correlate the observed weather with what the forecast says.
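The idea of correlating observed weather with the forecast can be sketched as a simple per-location bias correction (in the literature this is often called model output statistics). The data and the linear form below are assumptions made for illustration, not StormGeo’s pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic history for one location: the raw grid forecast runs
# systematically 2 C too cold and slightly damped in amplitude,
# the kind of local bias an 11 km grid cannot resolve.
forecast_c = rng.uniform(-15, 20, size=500)
observed_c = 0.9 * forecast_c + 2.0 + rng.normal(0, 1.0, size=500)

# Learn the forecast -> observed mapping by least squares.
slope, intercept = np.polyfit(forecast_c, observed_c, deg=1)

def correct(raw_forecast_c):
    """Apply the learned local correction to a new raw forecast."""
    return slope * raw_forecast_c + intercept

print(f"raw: -10.0 C, corrected: {correct(-10.0):.1f} C")
```

One such correction would be learned per location, which is exactly the kind of repetitive, data-driven fitting that is feasible for a machine across 10,000 locations but not for a person.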
I worked at Nena for 15 years before we became a part of StormGeo. It was my first “real” job after finishing my bachelor’s degree in computer science in 2003. I mainly spent that time creating traditional models—considering every physical property of a system—all by hand. AI wasn’t as proven a technique as it is now, and it was more common to have models that could be explained from end to end. Modeling this way is really time-consuming, which is why we’re now working more with AI.
One of the best things about being a part of StormGeo is having access to so much data. Before the acquisition, we paid for every little scrap of weather data, so I learnt to make do with very little. Now I’m in a whole new world where if I need all the weather forecasts for the entire globe, that’s no problem. We can do a lot of amazing things with all that data.
If people saw how silly computers can be sometimes, they would be more relieved than scared. I don’t think we can ‘AI-away’ every person—there’s a lot that we will always need humans for, specifically things that happen outside of a pattern, such as extreme weather events. The benefit of AI is that it can do things that would be too costly or time-consuming for a person to do.
I love learning new things and AI makes me feel like I’m learning along with the machine. There’s always something new to be discovered in this field. You can apply certain techniques to old problems and suddenly they’re solvable. When new techniques pop up, which has happened a lot over the past 15 years, we have to learn them and see how they apply to our data. If you read up on what others have done, you often can’t see what they’ve tried that hasn’t worked because they don’t publish that. So you fall into the same traps that they did because the obvious solution isn’t the right one. That’s part of the game. But in failing, you still learn something and that’s always good.
When I was working on my first model, I thought, ‘This looks really good, I’ve gotten really good results.’ But it turned out I’d been calculating the wrong results. When you have really good results and the person for whom you’re doing the calculations says, ‘This isn’t good enough,’ how do you improve? I had to point the camera in a different direction and basically trust the machine less. Every AI researcher quickly finds out that you can’t just throw data at a machine and expect it to figure it out. You have to think about what you’re doing—ask why. Why is this working? Not, ‘Oh it’s working.’ You have to feed it bad data to see if you still get good results with bad data. Then you know something is wrong. I’ve had a lot of fun with that.
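The “feed it bad data” check he describes can be sketched as a shuffled-target test: if a model scores just as well after its targets are randomly permuted, the pipeline is leaking information or the evaluation is broken. The simple least-squares setup below is illustrative, not his actual experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data with a genuine signal: y depends linearly on x.
x = rng.normal(size=300)
y = 3.0 * x + rng.normal(0, 0.5, size=300)

def r_squared(x, y):
    """Fit y = a*x + b by least squares and return in-sample R^2."""
    a, b = np.polyfit(x, y, deg=1)
    pred = a * x + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

score_real = r_squared(x, y)                    # high: the signal exists
score_shuffled = r_squared(x, rng.permutation(y))  # near 0: signal destroyed

print(f"real targets:     R^2 = {score_real:.2f}")
print(f"shuffled targets: R^2 = {score_shuffled:.2f}")

# If the shuffled score were also high, "really good results" would
# mean something is wrong, not that the model is good.
```

The point is the asymmetry: good results on bad data are a warning sign, which is why you have to ask why a model is working rather than just accepting that it does.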