How AWS has been transforming ideas into reality

Shefali Sharma
8 min read · Sep 22, 2020

Cloud Computing

To implement any software-defined technology today, we have to program it. A typical software program needs a computing unit (the memory and processing power to run applications), an operating system, and a storage unit. For large-scale projects, therefore, organizations need to maintain a data center that houses their computing, storage, and networking resources. But not every organization can afford to build and maintain such infrastructure.

Instead of buying and maintaining physical data centers or other hardware, we can access IT resources on an as-needed basis from a cloud provider, with no upfront cost to build any sort of system. Cloud computing is defined as the delivery of computing services (servers, storage, databases, networking, software, analytics, and more) over the internet. Users can access the resources remotely, at their own convenience. The companies that provide cloud computing services are called cloud providers, e.g., AWS, GCP, Alibaba Cloud, and Microsoft Azure.

In simple terms, cloud computing is all about using somebody else's resources over a network.

[Figure: Remotely accessing cloud resources]

Cloud providers mainly provide three types of services:

  • Infrastructure as a Service (IaaS) is an online service that provides high-level APIs to abstract away low-level details of the underlying infrastructure, such as physical computing resources, location, data partitioning, scaling, security, and backup (see the sketch after this list).
  • Platform as a Service (PaaS) provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app.
  • Software as a service (SaaS) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted on the cloud.
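To make the IaaS model concrete, here is a minimal sketch of renting a virtual machine through an API call, assuming the boto3 library and configured AWS credentials; the AMI ID is a placeholder, not a real image.

```python
# Minimal IaaS sketch: rent a virtual machine with one API call.
# Assumes boto3 is installed and AWS credentials are configured;
# the AMI ID below is a placeholder, not a real image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The provider handles the physical hardware, power, and networking;
# we only ask for the compute we need, when we need it.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```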

[Figure: Comparison between on-premises and cloud computing costs]

Amazon Web Services

AWS is the most preferred cloud service today, holding around a 52% share of the market. It offers a cornucopia of cloud platforms and APIs that can help us build highly flexible, scalable, secure, and reliable applications at a very reasonable cost.

In 2006, AWS began offering IT resources to businesses in the form of web services, now popularly known as cloud computing. This let businesses scale their computing and storage up or down as requirements changed. Rather than purchasing and installing expensive upgrades themselves, businesses could let AWS do it for them, saving a lot of time and effort. The model grew more popular over time, and AWS developed more cloud-based services and APIs for different purposes. In this way it has empowered many startups.

As of this writing, AWS has 77 Availability Zones within 24 geographic regions across the world. An Availability Zone is one or more discrete data centers in a region available for use by AWS customers. Each zone in a region has redundant and separate power, networking, and connectivity, which reduces the likelihood of two zones failing simultaneously.
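As a quick illustration, assuming boto3 and configured credentials, you can list the Availability Zones your account can see in a given region:

```python
# List the Availability Zones visible to your account in one region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
zones = ec2.describe_availability_zones()["AvailabilityZones"]
for zone in zones:
    print(zone["ZoneName"], zone["State"])
```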

You can explore the full catalog of services that AWS has to offer on its website.

Case Study: How various research centers are using AWS

Today, many research agencies and universities across the world use AWS for their research projects. With AWS, these organizations avoid upfront hardware costs and offload routine infrastructure management tasks. Researchers can quickly analyze massive data pipelines, store petabytes of data, and advance their research using transformative technologies like ML and AI.

High-performance computing

AWS provides elastic, scalable cloud infrastructure for running HPC applications. With virtually unlimited capacity, researchers can innovate beyond the limitations of on-premises HPC infrastructure. AWS delivers an integrated suite of services that provides everything needed to quickly build and manage HPC clusters in the cloud and run the most compute-intensive workloads across industry verticals. These clusters help researchers gain deeper insights into genomics, computational chemistry, financial risk modeling, computer-aided engineering, weather prediction, seismic imaging, machine learning, deep learning, big data analytics, and autonomous driving.
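The article doesn't name a specific HPC service, so the following is only a hedged sketch of one common pattern: launching compute-optimized EC2 instances inside a cluster placement group, which packs the nodes onto low-latency networking suitable for MPI-style workloads. The AMI ID is a placeholder, and boto3 with configured credentials is assumed.

```python
# Hedged sketch of one common HPC pattern on AWS: compute-optimized
# instances in a "cluster" placement group for low-latency networking.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group keeps instances physically close together,
# which lowers node-to-node latency for tightly coupled workloads.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",       # compute- and network-optimized
    MinCount=4,                        # a small four-node cluster
    MaxCount=4,
    Placement={"GroupName": "hpc-demo"},
)
```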

Our collaborators are asking us for the data to be processed as quickly as possible, so they can analyze cancer samples against other cancer samples in the database. Using AWS, we can get the results to them in days instead of months, which could contribute to faster disease diagnoses. — Benedict Paten, Director, Computational Genomics Lab, UC Santa Cruz Genomics Institute

Agricultural Research

Bayer Crop Science, the agricultural division of the German multinational pharmaceutical and life sciences company Bayer, helps farmers adopt better farming practices. For the past several years, the company has used Internet of Things (IoT) devices to gain new business insights from agricultural data. Harvesting machines retrofitted with sensors record traits such as yield, shell weight, and moisture, which the company manually transmitted to its data centers for eventual delivery to business analysts. This manual data-analysis process was time-consuming.

Bayer Crop Science therefore built a new IoT pipeline, based on AWS IoT Core, that manages the collection, processing, and analysis of seed-growing data, including temperature, humidity levels, and current soil conditions. The solution can handle multiple terabytes of data from seed transportation, planting, and growing in the company's research fields across the globe. Faster access to field data is enabling the company to make better business decisions about seeds and growing conditions.
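To illustrate the ingestion side of such a pipeline, here is a hedged sketch, assuming boto3 and configured credentials, of a field sensor publishing one reading to an AWS IoT Core MQTT topic. The topic name and payload fields are invented for this example; they are not Bayer's.

```python
# Hedged sketch: publish one sensor reading to an AWS IoT Core topic.
# Topic name and payload fields are illustrative, not Bayer's.
import json
import boto3

iot = boto3.client("iot-data", region_name="us-east-1")

reading = {
    "field_id": "plot-42",       # hypothetical field identifier
    "temperature_c": 21.4,
    "humidity_pct": 63.0,
    "soil_moisture": 0.31,
}

iot.publish(
    topic="fields/plot-42/telemetry",  # illustrative topic
    qos=1,                             # at-least-once delivery
    payload=json.dumps(reading),
)
```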

The company is also planning to use AWS IoT Analytics to capture and analyze drone imagery and data from environmental IoT sensors in greenhouses for monitoring and optimizing growing conditions.

We are getting real-time data ingestion of temperature, soil, and humidity measurements, so we can more easily understand the traits of seeds and crops. — Subrahmanya, IoT product manager for Bayer Crop Science

Space exploration

Technological advances and open data initiatives provide new paths to explore space, impacting our lives here on Earth. Cloud-enabled technologies help us develop solutions from satellite data. AWS services are widely used by organizations like NASA and ESA. AWS and the AWS Partner Network (APN) are also giving startups the support they need to improve the management of satellites and to enable autonomous operations and robotics in space.

Analyzing superstorms

NASA scientists are currently trying to understand what turns an average solar storm into a superstorm. The more we understand what causes such space weather, the better we can forecast and mitigate its effects.

NASA uses unsupervised learning and anomaly detection to explore the extreme conditions associated with superstorms, working with the Amazon Machine Learning Solutions Lab and several other AWS services for the analysis. With the power and speed of AWS, analyses to predict superstorms can sift through as many as 1,000 datasets at a time. So far, observational data from more than 50 satellite missions, containing images, time series, and miscellaneous telemetry, has been collected.
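The article doesn't describe NASA's actual models, so the following is only a generic sketch of the kind of unsupervised anomaly detection mentioned above, using scikit-learn's IsolationForest on synthetic telemetry:

```python
# Generic anomaly-detection sketch on synthetic "telemetry";
# illustrative only, not NASA's actual pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
quiet = rng.normal(0, 1, size=(1000, 3))   # typical readings
storm = rng.normal(6, 2, size=(10, 3))     # rare extreme events
telemetry = np.vstack([quiet, storm])

# Isolation forests flag points that are easy to isolate in feature
# space, a common first pass over unlabeled telemetry.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(telemetry)      # -1 marks anomalies

print("flagged points:", np.where(labels == -1)[0])
```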

NASA’s image gallery

In 2012, NASA's Jet Propulsion Laboratory (JPL) set out to land a 2,000-pound rover on Mars in a completely automated landing. As part of that Mars rover mission, JPL worked with AWS to process and share images from Mars. Raw images from the rover were immediately accessible to anyone with a smart device, powered by AWS. NASA JPL streamed 150 TB of data in just a few hours, handling 80,000 requests per second. This was beyond what it could manage with traditional infrastructure, but the AWS Cloud provided the scalability and elasticity needed to meet demand.
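JPL's exact architecture isn't spelled out here, but a common pattern for sharing imagery at this scale is to store it in Amazon S3 (often behind a CDN) and hand out links. A minimal sketch, with placeholder bucket and file names:

```python
# Minimal sketch of sharing an image via S3; bucket and file names
# are placeholders, and this is a common pattern, not JPL's design.
import boto3

s3 = boto3.client("s3")

# Upload a local image into the bucket.
s3.upload_file("rover_frame_0001.jpg", "my-mars-images",
               "raw/frame_0001.jpg")

# Mint a time-limited download link for the object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-mars-images", "Key": "raw/frame_0001.jpg"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```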

Sending data to space

Satellite data offers customers a way to build applications that help humans explore space and improve life here on Earth. Customers use AWS Ground Station's global footprint to downlink data when and where they need it and to build new applications quickly on readily available satellite data. All of this is possible without having to buy, lease, or maintain complex and expensive infrastructure.
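For a taste of the API, boto3 ships a groundstation client; the hedged sketch below only discovers the ground stations and satellites visible to an account. A real downlink would also require a mission profile and a reserved contact, which are omitted here.

```python
# Hedged sketch: discover the ground stations and satellites
# available to your account. Assumes boto3, credentials, and an
# account onboarded to AWS Ground Station.
import boto3

gs = boto3.client("groundstation", region_name="us-east-2")

for station in gs.list_ground_stations()["groundStationList"]:
    print("station:", station["groundStationName"])

for sat in gs.list_satellites()["satellites"]:
    print("satellite NORAD ID:", sat["noradSatelliteID"])
```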

One customer, Maxar Technologies, is preparing to launch its new WorldView Legion constellation next year. Maxar can plan collections and downlink imagery through AWS Ground Station, making insights about our changing Earth available almost immediately. Its engineers are continuously working on innovative space exploration missions. By using AWS cloud storage, artificial intelligence, and ML services, Maxar extracts actionable Earth intelligence from the imagery for its customers when and where it matters most.

These are just a few examples of how space organizations rely on the cloud to bring space closer to us than ever before.

Quantum Computing on AWS

Quantum computing is widely considered the future of computing. Quantum computers are hypothesized to be far more efficient and powerful than existing supercomputers for certain classes of problems. A qubit is the basic unit of quantum information; qubits exploit quantum phenomena such as superposition and entanglement to perform computation.

Amazon Braket is a service that enables us to design, test, and execute quantum algorithms, either on managed simulators or on real quantum processors from hardware providers. The algorithms are written in Python using the Amazon Braket SDK.
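Here is a minimal Braket example: a two-qubit Bell circuit run on the free local simulator that ships with the SDK. Running on a managed simulator or a real quantum processor would instead use AwsDevice with a device ARN.

```python
# Minimal Amazon Braket example: a two-qubit Bell circuit on the
# local simulator bundled with the Braket SDK.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Hadamard puts qubit 0 in superposition; CNOT entangles it with qubit 1.
bell = Circuit().h(0).cnot(0, 1)

device = LocalSimulator()
result = device.run(bell, shots=1000).result()

# Roughly half the shots should read '00' and half '11',
# the signature of an entangled pair.
print(result.measurement_counts)
```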

In reality, quantum processors still require very stringent operational conditions, for example, supercooled environments free of electrical, thermal, and magnetic noise. This makes the cloud the ideal means to provide access to them to a larger public. — AWS chief evangelist Jeff Barr

Thus we see that AWS has enormous potential: it provides almost everything required to implement smart solutions.
