Amazon Lookout for Metrics and CloudWatch Metric Streams

Samuel Arogbonlo · Published in CodeX · 3 min read · Apr 5, 2021


Observability Techniques. Photo Credit: Newswire Report

In using AWS, there are streams of data involved, metrics to keep an eye on and, most importantly, infrastructure to be analysed. The good thing about AWS is that updates ship almost every week, so customers get changelogs that steadily improve their experience with the product. There have been several updates lately, but the two that caught my interest are Amazon Lookout for Metrics and CloudWatch Metric Streams, so we will have a little chat about the most recent changes in that space.

Amazon Lookout for Metrics

Businesses and organisations quantify and analyse time-series data all the time, but for very large datasets (and even small ones) it pays to know when there are irregularities in the data, and this is where Amazon Lookout for Metrics comes in. By definition, it detects anomalies in business metrics; that data includes prices, revenue and dates, among many other things relevant to the business. It can be set up from the console, and users define the metrics that guide the service on how to report any anomalies in the data. AWS has also announced that it supports at least 19 different data sources, including CloudWatch and Salesforce, among many others.
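To make that a bit more concrete, here is a minimal sketch of creating a detector with boto3. This is an assumption-laden example rather than anything from the article: the detector name, description and frequency are placeholders, and it assumes your AWS credentials and region are already configured.

```python
import boto3

# Lookout for Metrics client; credentials and region come from your
# normal AWS configuration (environment, profile, etc.).
lookout = boto3.client("lookoutmetrics", region_name="us-east-1")

# Create a detector that scans its attached metric sets every hour.
# Name, description and frequency are placeholder values.
detector = lookout.create_anomaly_detector(
    AnomalyDetectorName="revenue-anomaly-detector",
    AnomalyDetectorDescription="Watches business metrics for irregularities",
    AnomalyDetectorConfig={"AnomalyDetectorFrequency": "PT1H"},
)
detector_arn = detector["AnomalyDetectorArn"]
print(detector_arn)

# After attaching a metric set (the data source plus the measures and
# dimensions to watch), the detector can be activated to start learning:
# lookout.activate_anomaly_detector(AnomalyDetectorArn=detector_arn)
```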

Amazon Lookout for Metrics is a very useful tool: you can set up the metrics to run anomaly detection on, get alerts for the parts of the service you care about, retrieve datasets and make many other interesting choices about your system data. The operational flow, as AWS describes it, runs from the data source or data bank, through the metrics used for detection, and on to the AWS services that deliver the alerts; a quick sketch of wiring up such an alert follows below.
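For example, here is a hedged boto3 sketch of attaching an SNS alert to an existing detector. Every ARN below is a placeholder, not a value from the article, and the sensitivity threshold is just an illustrative number.

```python
import boto3

lookout = boto3.client("lookoutmetrics", region_name="us-east-1")

# Attach an alert to an existing detector so that anomalies above the chosen
# sensitivity are published to an SNS topic. All ARNs are placeholders.
lookout.create_alert(
    AlertName="revenue-anomaly-alert",
    AlertSensitivityThreshold=70,  # 0-100; higher means only stronger anomalies fire
    AnomalyDetectorArn="arn:aws:lookoutmetrics:us-east-1:123456789012:AnomalyDetector:revenue-anomaly-detector",
    Action={
        "SNSConfiguration": {
            "RoleArn": "arn:aws:iam::123456789012:role/lookout-sns-role",
            "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:anomaly-alerts",
        }
    },
)
```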

After understanding the flow of the infrastructure from the data source/bank, to the metrics used for the various detections, and finally to the several AWS services that report to your webhooks, it is time to take the bull by the horns and try something out. I could have written a tutorial piece, but there is a repository that does the job better, so please dig in and come back with questions if you need to. Check it out here, and if you are interested in the general documentation, be my guest here.

CloudWatch Metric Streams

For AWS users, I am sure you understand the relevance of CloudWatch for monitoring your services, checking logs, following up on your systems and much more. Many times, it can be difficult to continuously collect the metrics, logs and data being delivered, and I believe this is where CloudWatch Metric Streams comes in: it streams the relevant data to a Kinesis Data Firehose delivery stream for storage.
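Here is a minimal sketch of creating such a stream with boto3. The Firehose and IAM role ARNs are placeholders and the namespace filter is only an example; it assumes the delivery stream and role already exist.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Continuously push EC2 metrics to an existing Kinesis Data Firehose
# delivery stream. Both ARNs below are placeholders.
cloudwatch.put_metric_stream(
    Name="ec2-metric-stream",
    IncludeFilters=[{"Namespace": "AWS/EC2"}],  # omit to stream all namespaces
    FirehoseArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/metric-stream-firehose",
    RoleArn="arn:aws:iam::123456789012:role/metric-stream-to-firehose",
    OutputFormat="json",  # "opentelemetry0.7" is the other supported format
)
```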

Aggregating the metrics held in the system is useful, but polling them out can introduce lag in when metrics become available, which affects the overall operation and monitoring of the system. That is why partners like Datadog, New Relic and others are building on CloudWatch Metric Streams to offer near-real-time solutions to customers. AWS has also made it possible for users to stream the data out to different storage destinations: Redshift, S3 buckets and several others.
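As an illustration, the Firehose delivery stream that the metric stream writes to can land the data in an S3 bucket. The sketch below assumes the bucket and IAM role already exist; both ARNs and the buffering values are placeholders.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# A direct-put delivery stream that lands the streamed metrics in S3.
# Bucket ARN, role ARN and buffering hints are placeholder values.
firehose.create_delivery_stream(
    DeliveryStreamName="metric-stream-firehose",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
        "BucketARN": "arn:aws:s3:::my-metric-archive",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
    },
)
```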

CloudWatch Metric Streams is available now, and you can use it to stream metrics to a Kinesis Data Firehose of your own or to an AWS Partner. The streams are available in all AWS Regions except the AWS China (Beijing) Region and the AWS China (Ningxia) Region. The pricing is also favourable for businesses; to get a deeper understanding, check here. Finally, for a step-by-step guide on how this is done, check the documentation here.

Now, for the record, this content is open to everyone in the community. If you have questions, comments or contributions, drop them here or reach out to me on Twitter and GitHub.

Thanks for reading ❤️

Please leave a comment if you have any thoughts about the topic — I am open to learning and knowledge explorations.

If this post has been helpful, do leave a clap 👏 below a few times to show your support for the author.

