SPONSORED EDITION

AWS in Orbit: Generative AI and Space Resiliency.

ULA’s Delta IV Heavy launches for the last time. NASA shares its space sustainability strategy. Astrobotic plans to bring 3D printing to the Moon. And more.

Summary

The United Launch Alliance’s Delta IV Heavy carries an NRO satellite to orbit on its final flight. NASA unveils its Space Sustainability Strategy. Astrobotic announces that it is working on a project to bring 3D printing to the Moon. And more.

Remember to leave us a 5-star rating and review in your favorite podcast app.

Miss an episode? Sign up for our weekly intelligence roundup, Signals and Space. And be sure to follow T-Minus on Instagram and LinkedIn.

T-Minus Guest

Our guests are Kathy O'Donnell, Senior Manager, AWS Space Specialist Solutions Architecture, and Derek McCoy, Head of Channel, Enterprise, & Public Sector at Rescale.

You can learn more about AWS Aerospace and Satellite on their website.

Selected Reading

Ending an era, final Delta 4 Heavy boosts classified spy satellite into orbit - CBS News 

NASA’s Space Sustainability Strategy

Next Step Toward the Moon: LZH and TU Berlin partner with Astrobotic

Vast’s Haven-1 to be World’s First Commercial Space Station Connected by SpaceX Starlink

Firefly Aerospace Announces Agreement with Klepsydra Technologies to Demonstrate Edge Computing in Space

Aegis Aerospace Closes Strategic Acquisition

Russia aborts planned test launch of new heavy-lift space rocket

T-Minus Crew Survey

We want to hear from you! Please complete our 4-question survey. It’ll help us get better and deliver you the most mission-critical space intel every day.

Want to hear your company in the show?

You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at space@n2k.com to request more info.

Want to join us for an interview?

Please send your pitch to space-editor@n2k.com and include your name, affiliation, and topic proposal.

T-Minus is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.

We're continuing our special edition of our Daily Show this week as we are recording from the 39th Space Symposium in Colorado Springs, Colorado.

We are now on the third day of this massive space conference, and as this is a special edition after all, stay tuned after our Daily News Roundup for our conversation recorded today for the AWS In Orbit series.

Today is April 10th, 2024.

I'm Maria Varmazis, and this is T-Minus at the 39th Space Symposium.

We say goodbye to ULA's Delta IV Heavy.

NASA unveils their space sustainability strategy.

Astrobotic shares details on a project to bring 3D printing to the Moon.

And we're bringing you the second installment of the AWS In Orbit podcast series at the 39th Space Symposium.

We're going to be talking to Rescale and AWS Aerospace and Satellite about the use of artificial intelligence in managing space data in data lakes.

So stay with us for the second part of the show.

First to today's intelligence briefing, and we really couldn't start the show without acknowledging the end of an era for the Delta IV Heavy.

The final flight of the United Launch Alliance rocket lifted off from Florida yesterday.

The heavy rocket carried a classified spy satellite provided by the National Reconnaissance Office.

No details about the NROL-70 payload were released, no big surprise there.

The National Reconnaissance Office declared the launch a success, indicating the satellite reached its planned orbit.

We have to admit that it was actually really super cool to hear the cheers on the exhibit floor at the Space Symposium as the Delta IV lifted off for the last time.

And speaking of the 39th Space Symposium, NASA used the event to unveil their space sustainability strategy.

Deputy Administrator Pam Melroy delivered the strategy in a keynote address saying that, quote, "Space is busy and only getting busier.

If we want to make sure that critical parts of space are preserved so that our children and grandchildren can continue to use them for the benefit of humanity, the time to act is now."

NASA is making sure that we are aligning our resources to support sustainable activity for us and for all.

And you can find more details by following the link in our show notes.

Astrobotic has announced that it's working on a project to bring 3D printing to the moon.

Laser Zentrum Hannover e.V. has contracted with Astrobotic for a flight to the Moon for the MOONRISE mission, set to take place in late 2026.

VAST has announced that its commercial space station, the Haven-1, will be equipped with a SpaceX Starlink laser terminal.

The company says that the partnership will provide connectivity to its crew users, internal payload racks, external cameras, and instruments.

The companies have reached an agreement for SpaceX to provide Starlink connectivity to future VAST platforms, including connectivity for VAST's next space station, which the company plans to bid for in NASA's upcoming commercial low-Earth orbit destinations competition.

Firefly Aerospace has announced a new agreement to host the Klepsydra AI application on Firefly's Elytra vehicle, which will launch aboard Alpha later this year.

According to the press release, Elytra will be equipped to support a variety of hosted software applications with real-time data processing on orbit.

The company says that the Elytra Edge Compute Platform interfaces directly with the vehicle's core avionics with the ability to receive payload, sensor, and camera telemetry data.

In another announcement we want to cover from Colorado Springs, Space ISAC, a US-based information sharing and analysis center focused on space industry threats, has signed a memorandum of understanding with the Israeli Space Agency.

Very cool.

And that concludes our briefing for today.

You'll find links to further information on the stories that we've mentioned in our show notes.

We've also included an acquisition announcement from Aegis Aerospace and details about Russia's aborted mission for its new heavy-lift vehicle.

N2K Space is working with Amazon Web Services Aerospace and Satellite to bring the AWS in Orbit podcast series to the 39th Space Symposium in Colorado Springs from April 8th to 11th.

And we are broadcasting from the AWS booth, number 1036, in the North Hall, Tuesday through Thursday from 9 to 11 a.m.

So come on by our booth to catch us in action, see us and say hi in the fishbowl, and share your story, or you can email us at space@n2k.com to set up a meeting with our team.

Hey T-minus crew, if you find this podcast useful, please do us a favor and share a five-star rating and short review in your favorite podcast app.

It'll help other space professionals like you to find the show and join the T-minus crew.

Thank you so much for all your support everybody.

We really appreciate it.

[Music] Hi, I'm Maria Varmazis, host of the T-Minus Space Daily podcast, and this is AWS in Orbit: Generative AI and Resiliency.

We're bringing you the second installment of the AWS in Orbit podcast series at the 39th Space Symposium.

And in this episode, I'll be speaking to representatives from Rescale and AWS Aerospace and Satellite about improving space resilience and scaling customer success using Generative AI.

[Music] My name is Derek McCoy.

I lead our enterprise public sector and channel businesses here at Rescale.

I've been with the company for about six years, and I've also recently been helping lead our tiger team around AI, and generative AI specifically, and how we're helping a number of companies out there in these industries enhance the way that they're doing their physics simulations and get to market and do rapid prototyping a lot faster.

Very cool.

Thank you, Derek.

And Kathy, over to you for your intro.

Hi, yeah, I'm Kathy O'Donnell.

I lead the Space Specialist Solutions architecture team in Aerospace and Satellite.

I also, in the past year, started leading our Generative AI and Space Initiative at AWS.

Fantastic.

Thank you both for joining me today and welcome.

So glad to be speaking with you.

So, Derek, let's start with you.

Tell me a bit about Rescale.

Yeah, absolutely.

So Rescale is a company that has been around for about 12 years.

And where we fit in the space is that we've been supporting companies in their HPC orchestration with partners such as AWS.

So the different industries that we support today are across Aerospace and Defense, Space Exploration, Manufacturing, Automotive, Life Science, and others.

And the users that we're supporting are the engineers that are doing modeling simulation, the research and development folks, as well as the scientific researchers out there.

Specific to some of the use cases that we're looking to deliver for our customers: we're giving them the ability to explore wider design spaces, take the physical testing aspect down, and be able to do more prototyping in a digital world in order to get to market a lot faster.

And to be able to deliver on deadlines in a secure fashion.

You mentioned security and I wanted to ask about that because I imagine given what Rescale does, you have a lot of government customers.

So can you tell me about what your government customers are looking for in security?

Yeah, absolutely.

So this is something that's really evolved over the last few years.

We've seen a major increase in the way that the government and their partners are adopting cloud.

And with that has come a lot of regulation and constraints around the security of their data, of the way that they're doing compute.

And so there are a number of different accreditations out there that customers of ours, as well as ourselves, have been achieving along the way with great partners like AWS.

Those include but are not limited to FedRAMP, ITAR, IL5, IL6, and beyond.

This ultimately comes down to the impact level of data security, as well as the ability to make sure that customers are able to scale out as they look at these mission-ready types of initiatives that they're winning and delivering for the government.

So it's an area that most customers are struggling with in a number of areas because there's a lot of nuances.

And on top of that, there's also a lot of different components in the technology stack that need to be taken into consideration.

There's data layers, there's compute layers.

There are third parties that you're using for metadata and logging and so forth.

We're fortunate to have a tightly aligned relationship with AWS, where we take advantage of their full catalog and expertise in order to make sure we can create a full turnkey solution for the market.

Fantastic, fantastic.

Kathy, do you want to add anything to that?

Oh, no, I'm just really excited to have partners that care so much about security because, you know, a lot of our customers, that is so key and we don't want people to think that the cloud is less secure when in fact, you know, that is one of our primary jobs at AWS is ensuring that security.

So it's just really great to hear our partners also carrying that like a tenet with them.

Fantastic.

So we are here to talk about AI and generative AI these days.

So can you tell me a bit more about how generative AI and AI factor into your customers' missions and their security concerns?

Yeah, absolutely.

So there's a number of ways that we could take this.

And I think the first thing I would say is that in the world that Kathy and I, you know, are working within, a lot of the simulations that people are doing are around physics, and physics introduces a different component to generative AI. We've heard a lot in the market about large language models.

Yes.

Where I think that we typically see more is around large physics models.

And what comes with that is still the same requirements around compute and infrastructure and orchestration, but it also brings in a lot of nuances around the stability of the physics.

How do we actually look at these problems?

How do we orchestrate a foundation to make sure that we can build upon the data over a number of historical years, future years, bringing in real data as well as synthetic data to ultimately get a verification or a validation that makes us comfortable with running against our traditional solvers?

So where our customers are leaning in a lot is looking at how do we create these neural networks and so forth to get a lot further in our simulations a lot faster?

How do we take the traditional workloads that are scaling up to hundreds of thousands of cores and taking multiple days to run down to just hours or minutes?

Yeah.

I don't know if you have any thoughts, Kathy.

Yeah.

One of the things that I love about working with the cloud is that ability to scale when you need to scale.

Yeah.

So traditionally, like back in the day, back when I was a youngster, you know, we had to provision all of that.

And it was very expensive.

It took a long time to do.

And then when you were done with it, what would you do with all that compute?

Yeah.

So one of the things that I love about AWS and just the cloud structure in general, it's that ability to scale up quickly and to bring it down when you've completed your work.

Yeah.

And I would also add to that, you know, one of the great things about the cloud is having the ability to have a fragmented architecture catalog.

So, you know, obviously there's new GPUs and CPUs coming out all the time.

All the time.

Yep.

And in this modeling and simulation world, there are a lot of dependencies based on the workflow you're doing with the different infrastructure that's optimized for it.

Yeah.

And having the accessibility in a product like AWS to be able to span across all of those.

So you have the heterogeneity to be able to go out there and do small test cases.

But then you have a homogeneous cluster to scale when needed.

Once you get to that validation stage and you say, okay, I'm starting to get more comfortable with this.

Now I want to run something at scale.

Yeah.

You have the flexibility to really put those things together.

Yeah.

Can you tell me a bit more about like that customer experience doing what you just mentioned?

I'd love to hear more about that.

Yeah.

Absolutely.

So the way that we've approached the problem is that we truly want to mimic the way that AWS builds their business model, where they do have flexibility for on-demand infrastructure as well as other components of the technology stack.

And where we really leverage our platform is being able to allow customers to go in there and choose what they want immediately.

So the traditional method that customers go through, when they go into a partnership with us and AWS, is we set them up in two to three weeks' time, where traditionally that can take months on end due to supply chain issues.

That is, if they were looking to develop their own infrastructure and set that up, do the different OS layers, and so forth.

Well, we have that all pre-configured and installed and we work with our AWS counterparts to make sure that we optimize based on all of the different software vendors available out there with different solvers.

We also support a number of government codes as we've talked about.

Yep.

Such as NASA's FUN3D and Cart3D, and the DoD CREATE codes.

And what we do is we actually use AI within our own platform to recommend based on their job attributes what they should be using for AWS infrastructure in their workloads.
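
Derek doesn't spell out how that recommender works, so here is only a minimal, hypothetical sketch of the general idea, not Rescale's actual system: map a job's attributes to the instance family that worked best on similar historical runs. Every feature name, label, and training row below is made up for illustration.

```python
# Toy sketch only (not Rescale's actual recommender): suggest an instance family
# from job attributes using a classifier fit on historical runs.
# All feature names, labels, and training rows are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each historical job: [cores_requested, memory_gb_per_core, solver_has_gpu_path (0/1)]
X = [
    [64,   4, 0],
    [512,  8, 0],
    [128,  2, 1],
    [2048, 4, 1],
]
# Instance family that performed best for each of those jobs (hypothetical labels).
y = ["cpu-general", "cpu-hpc", "gpu-small", "gpu-large"]

recommender = DecisionTreeClassifier().fit(X, y)

# New job: 1024 cores, 4 GB per core, solver with a GPU-accelerated path.
print(recommender.predict([[1024, 4, 1]])[0])  # prints one of the families above
```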

I imagine that also, I'm sorry, Kathy, I was going to say imagine it scales well, but go ahead, Kathy, go ahead.

I was going to say, that's super cool because I mean, it can be really difficult.

We have a lot of different options at AWS because we serve a lot of different industries and customers.

And sometimes it can be very difficult to know what do you want to use, like which one of these things is going to work best for your workload.

And so having a partner to help you and especially using ML to help with that decision making, I just think that's awesome.

I just think that's neat.

Yeah, that's a very valid comment, honestly.

Yeah, and I was thinking as you're describing it, that must scale really well for the customer as a repeatable process.

Absolutely, it does.

And we try to build in functionality together as we find customer requirements to make that very repeatable.

So scale is one thing on the infrastructure side, but another area within this domain is the ability to walk in and have those different inputs you have, but have the job orchestration there for you.

So building out templates and computational workflows and so forth.

And those can become dynamic where there's multiple different infrastructures, multiple dependencies where they need results from the previous step of the job.

And so we look as a partnership to say what is the next coming thing as people look to kind of elevate the way that they approach the government requirements and things that are down the line for us.

Fantastic.

So we're talking about reducing those complexities, scaling results.

Let's talk about results a little bit.

So can you talk a little bit about how introducing AI into customer workflows has created great customer success?

Yeah, absolutely.

So we've had a few different ways that customers have gone about using AI.

Primarily, the ultimate goal is that we get to a repeatable process where generative AI is the go to.

And what that looks like in the world that we're living in is being able to put meshes and geometries into a generative AI system and be able to get an accurate result out where we understand, you know, whether it be the drag coefficient on something as it leaves orbit because, you know, the heat and so forth has a different exchange there.

Yep.

We also see customers that are building neural nets quite frequently on more regular workloads like computational fluid dynamics, where what they do is take all of the data that they're running and build the synthetic data if they need it.

And we try to unify that data so that way you can build a neural net around it and apply that neural net to run inference against the job.

And with that, you're usually able to see a 1,000x to 10,000x speedup with somewhere between 95 and 98% accuracy right now.
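
As a rough illustration of that surrogate pattern, and not Rescale's actual pipeline, the sketch below trains a small neural network on made-up design points standing in for archived solver results, then runs near-instant inference in place of the full solver.

```python
# Minimal surrogate-model sketch with made-up data (not Rescale's pipeline):
# learn a cheap mapping from design parameters to a quantity of interest,
# then query it instead of rerunning the expensive solver.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend each row is a design point: [angle_of_attack_deg, mach_number].
X = rng.uniform([0.0, 0.3], [10.0, 0.9], size=(500, 2))
# Stand-in for the solver output (e.g. a drag coefficient); in practice this
# comes from archived simulation runs plus any synthetic data you generate.
y = 0.02 + 0.001 * X[:, 0] ** 2 + 0.05 * X[:, 1]

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Inference takes microseconds per design point versus hours for a full solve.
print(surrogate.predict([[5.0, 0.8]]))
```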

Wow, yeah.

And then more traditionally, you know, we have customers that are looking for ML optimization.

We have Benchmark Space Systems, who we've worked together with, where their typical workload has 20 studies running on a number of different CPUs.

We want to reduce their time from eight to ten hours per study on dozens, if not hundreds or thousands, of CPUs by accelerating that through GPUs, or by, you know, running inference against some of these neural nets.

And so we look for opportunities like that where we can reduce their time down 85% or more.

That's huge.

So yeah, it gives engineers the results they're looking for.

Yeah.

And we're able to actually make the right decisions on the next evolution of their project.

And actually do those tests and get that data.

That's so important, that time to results.

It's massive.

Yeah.

Go ahead, Kathy.

Yeah, I think it's really interesting, when people are using AWS, when it clicks for them that by using GPUs, which, to be honest, are a little more expensive per hour than a CPU, but run so much faster.

Yeah.

That you not only have a time saving, you can also realize a cost saving as well.
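
The arithmetic behind that point is worth spelling out. With purely illustrative rates (not actual AWS pricing), a pricier GPU instance that finishes the job much sooner can still come out cheaper per job.

```python
# Illustrative arithmetic only; the hourly rates are made up, not AWS pricing.
cpu_rate, cpu_hours = 1.00, 10.0   # $/hour and wall-clock hours on a CPU instance
gpu_rate, gpu_hours = 4.00, 1.0    # $/hour and wall-clock hours on a GPU instance

print(f"CPU job cost: ${cpu_rate * cpu_hours:.2f}")  # $10.00
print(f"GPU job cost: ${gpu_rate * gpu_hours:.2f}")  # $4.00, faster and cheaper
```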

Yeah.

And it's really neat when someone's like, oh, like, hold on a moment.

Yep.

Yeah.

And I think that, you know, I'd be interested in your opinion, too.

There's always this debate when it comes to high performance computing and AI between TCO and ROI.

And I do think that is like an interesting thing that AI and generative AI specifically, it's kind of flipping the tables on this where I think that, you know, we're going to get to a point where it's worth the investment for the results you're getting.

Because not only are you getting the faster time, but you're looking at a wider design space.

You're going to create better products.

Yeah.

I mean, sometimes it is difficult to measure because I see generative AI as being an augmentive technology.

Yes.

It helps you do your job faster.

Yeah.

But how do you measure that?

Right.

I mean, you know, you have measures of FTE hours, but when you do knowledge work, when you do innovative technology, like, you're still working your entire week.

Yeah.

And it does do much more cool stuff.

Yeah.

And so, yeah, it is a big question.

How do we measure the actual, like, augmentation and how much better you're doing?

Right.

Sort of like a gut feeling at a certain point.

Yeah.

I mean, honestly, yeah.

Yeah.

I could actually see that.

Now, Kathy, I wanted to ask you about customers validating AI results and ensuring customer trust.

Can you talk a little bit about that?

Yeah.

We don't have, like, one singular way to evaluate results coming out of generative AI because if you've ever used, like, a chatbot, you know that it changes up its answers.

You can do a lot with how you set the parameters for that large language model.

But we do have customers doing some really interesting things around testing, like coming up with a, you know, two- or three-hundred-long list of questions with the answers that they want.

And then using another large language model to see if the one that they're using to answer, like, matches the answer they expected.

Hmm.

You know, and when you're measuring large language models, there is a suite of tests that we use to compare different models in performance.
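
The test-set-plus-judge pattern Kathy describes can be sketched in a few lines. The ask_candidate and ask_judge callables below are hypothetical stand-ins for whatever model-invocation calls you actually use, and trivial mocks are wired in at the bottom so the sketch runs on its own.

```python
# Sketch of the evaluation loop described above: run a fixed question set through
# the model under test, then ask a second "judge" model whether each answer
# matches the expected one. The two callables are hypothetical; replace the mocks
# at the bottom with real model calls (for example via your provider's SDK).
from typing import Callable

QUESTION_SET = [
    {"question": "What year did Apollo 11 land on the Moon?", "expected": "1969"},
    # ...in practice, a two- or three-hundred-item list of question/answer pairs
]

def evaluate(ask_candidate: Callable[[str], str],
             ask_judge: Callable[[str], str]) -> float:
    """Return the fraction of questions the candidate model answers acceptably."""
    passed = 0
    for item in QUESTION_SET:
        answer = ask_candidate(item["question"])
        verdict = ask_judge(
            f"Question: {item['question']}\n"
            f"Expected answer: {item['expected']}\n"
            f"Candidate answer: {answer}\n"
            "Does the candidate answer match the expected answer? Reply YES or NO."
        )
        passed += verdict.strip().upper().startswith("YES")
    return passed / len(QUESTION_SET)

# Trivial mocks so the sketch runs end to end.
score = evaluate(lambda q: "1969", lambda p: "YES")
print(f"pass rate: {score:.0%}")
```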

Makes sense.

Yep.

Yeah.

So we were really excited because we recently introduced Claude 3 onto the Bedrock platform.

So that's from Anthropic.

And it is right now top of the leaderboard, like, Claude 3 Opus, in that set of tests.

So pretty cool stuff, honestly.

So, Kathy, I want to ask you a follow-up question.

Unless Derek, you have something you want to add to that?

No, I think it's creative and out of the box.

That's simple.

I mean, I love hearing these use cases.

Yeah.

I was going to ask about use cases.

So AWS customers using generative AI, other use cases, anything you want to mention there?

Kathy, yep.

Yeah.

Well, so what we see a lot of, and I tend to split up use cases into a few different buckets, there are the use cases that you have just by virtue of being a company.

Yeah.

So, you know, interacting with your customers, like having a chat bot to do that, having, you know, interactive websites powered by a foundation model.

Then you have your business processes.

How can you speed up, like, search across all of your internal documentation?

So, we see a lot of customers doing that.

I could so see the value in that.

Oh, my goodness.

Yeah.

Well, especially if you've got a company stretching back 20, 30, 40 years.

Yep.

And, you know, they've done their thing.

They've digitized all of their run documentation.

Oh, my goodness.

But now it's like, well, how do I, how do I index that?

How do I ask questions?

And so you have to look through a huge index, or tables of contents.

Yeah.

Instead, what you can do is you load it up into a database that you can then use retrieval augmented generation and a foundation model together.

Fantastic.

Yep.

And so you just ask human natural language questions and it will pull together all the different pieces over those two, three, four decades of work and give you an answer, which is so cool.
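
The retrieval-augmented-generation flow Kathy outlines, embed the archive, retrieve the most relevant passages for a question, and hand them to a foundation model as context, might look roughly like the sketch below. The embed and generate helpers are hypothetical stand-ins for your embedding and foundation-model calls, not a specific AWS API.

```python
# Rough sketch of the RAG flow described above; `embed` and `generate` are
# hypothetical stand-ins for your embedding-model and foundation-model calls,
# so wire them up before calling `answer`.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for an embedding-model call; returns a vector for the text."""
    raise NotImplementedError("connect to your embedding model")

def generate(prompt: str) -> str:
    """Stand-in for a foundation-model call."""
    raise NotImplementedError("connect to your foundation model")

def build_index(passages: list[str]) -> np.ndarray:
    """Embed every archived passage once, up front."""
    return np.stack([embed(p) for p in passages])

def answer(question: str, passages: list[str], index: np.ndarray, k: int = 3) -> str:
    """Retrieve the k most similar passages and hand them to the model as context."""
    q = embed(question)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(passages[i] for i in np.argsort(scores)[-k:])
    return generate(
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```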

Instead of asking your most senior person who's probably super busy.

Oh, yeah.

And you're like, "I'm going to do my own national knowledge."

Dave.

Yeah.

So, if you're bugging Dave about it, you've got, well, actually Dave, yeah.

Yeah.

And like, there's just a huge value out there for workforce development and upskilling.

Yeah.

You know, I think all of us are employees of a company and we all desire and, you know, yearn for that.

And I think like, you know, the ability to have that at your fingertips is in and of itself a huge value to the companies to retain employees.

Yeah, absolutely.

There's nothing more frustrating.

You're looking for those answers and there's that one person who knows and they're busy or they're gone.

I mean, it's just, yeah, I can see huge value there.

Yeah.

And one of the big things we have coming up now is Amazon Q.

So we have Amazon Q Business and Amazon Q Developer.

And I got to tell you, so there's a lot of SAs, solutions architects, here from AWS.

You should go talk to them.

In the past week, we've all really started using Amazon Q Developer.

And it is, it changes the way that you program because it's not just code completion.

You can actually like highlight sections of your code and say, OK, what exactly does this function do?

Or you can say, I have a bug here.

I'm not getting the right result.

Can you see what might be wrong?

That saves so much time.

And it's still the beginning days.

Yeah.

I've had at least three people so far tell me they've stopped using Google when they're coding and have just started using Amazon Q.

Stack Overflow's hits are going to start dropping.

Yeah.

I was telling someone, you know, back in the day, which is like two weeks ago, you know, if you had a problem, you had to Google.

Yep.

And then you'd get all the Stack Overflow hits.

And of course, it would be a guy with your same exact question.

You'd answer the question.

Right.

But from 14 years ago, with the top answer just being "solved."

Yeah.

And no result.

Yeah.

You have no idea what the answer is.

You're like, OK, what happened?

Yeah.

I've been there.

I've personally been there.

I'm feeling that frustration.

I remember.

All right.

So we are coming up on time before we go down that whole rabbit hole.

I want to make sure we talk about wrap up lessons learned.

So Derek, why don't we start with you?

Yeah.

So, you know, I think overall what I've been seeing in the market is that building a foundational platform, in order to make sure that you're set up for the long term and can go down the exploration stage but also see the immediate results, is really important.

And so I think, you know, making sure that you're a good steward of your own technology stack and your partner ecosystem is really important here.

Obviously, we want to get to the point where, you know, large physics models and generative AI are an out-of-the-box solution for a lot of these companies doing physics work.

So I do think, you know, my thoughts on it at least from a personal standpoint is that people need to do the fundamentals first before they move up the stack, but also open your mind to be able to explore these type of initiatives, make the investments where you see fit, work with your partners and advisors and so forth in the industry, and make sure you're understanding what that ties back to from your day to day business.

So, you know, I'll pass to you, Kathy.

Great point, Derek.

Thank you.

That was great.

Yeah, I think what I'd like to end with is innovation is key.

We should always be innovating, but you want to make sure that you can do that in a secure manner, in a safe manner, because I know as, you know, a programmer, as a technologist, I want to do the crazy stuff, but I don't want to take down the company.

I don't want to ruin our code base.

So, so making sure that we can do that securely.

And one of the things that I really like about partners like Rescale and about the AWS platform is that that really is one of our key goals: making sure that we're secure.

Wonderful.

Derek and Kathy, it was a joy speaking with you both today.

Thank you so much for joining me.

Yeah, thank you.

Thank you.

[Music] That's it for T-minus for April 10th, 2024.

For additional resources from today's report, check out our show notes at space.n2k.com.

We're privileged that N2K and podcasts like T-minus are part of the daily routine of many of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies.

This episode was produced by Alice Carruth and Laura Barber for AWS Aerospace and Satellite.

Mixing by Elliott Peltzman and Tré Hester, with original music and sound design by Elliott Peltzman.

Our associate producer is Liz Stokes.

Our executive producer is Jennifer Eiben.

Our VP is Brandon Karpf.

And I'm Maria Varmazis.

Thanks for listening.

We'll see you next time.

[Music] T-minus.

