
favetelinguis (18)
#1
In the metrics section you had a very nice explanation of how to extract metrics from your logs and send them to CloudWatch Metrics. What is the upside of going with this approach instead of just sending the metrics on to, for example, Kibana and using Kibana as your metrics tool?
Yan Cui (73)
#2
As I mentioned in that unit, the reason for doing this is to avoid the extra latency introduced by sending metrics data to whatever monitoring system you use - especially as this latency overhead can compound when a single user action requires a chain of API calls in your microservices architecture.
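To make that concrete, here's a rough Python sketch of the "write metrics to logs" half of the idea. It's only illustrative - the metric name, namespace and log format are made up, not the exact convention we use in the course:

```python
import json
import time


def do_business_logic(event):
    # stand-in for the real work the function does
    return {"statusCode": 200}


def handler(event, context):
    start = time.time()

    result = do_business_logic(event)

    # Instead of calling CloudWatch (or any monitoring system) during the
    # invocation, which adds latency, write the metric to stdout as JSON.
    # CloudWatch Logs captures everything written to stdout, so this costs
    # essentially nothing at invocation time.
    print(json.dumps({
        "metric_name": "request-latency-ms",   # assumed name, for illustration
        "namespace": "my-service",             # assumed namespace
        "value": (time.time() - start) * 1000,
        "unit": "Milliseconds",
    }))

    return result
```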

You can absolutely apply the same approach and send metrics to Splunk or Honeycomb or whatever external tool you want to use; the key point is to do it outside of the function's invocation time.
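The other half would be a separate function subscribed to the log group, which parses those metric lines and forwards them - to CloudWatch here, but it could just as easily be Splunk or Honeycomb. Again a rough sketch with assumed names, not the course's exact code:

```python
import base64
import gzip
import json

import boto3

cloudwatch = boto3.client("cloudwatch")


def handler(event, context):
    # CloudWatch Logs subscription payloads arrive gzipped and base64-encoded
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))

    metric_data = []
    for log_event in payload["logEvents"]:
        try:
            metric = json.loads(log_event["message"])
        except json.JSONDecodeError:
            continue  # not one of our structured metric lines
        if "metric_name" not in metric:
            continue
        metric_data.append({
            "MetricName": metric["metric_name"],
            "Value": metric["value"],
            "Unit": metric.get("unit", "None"),
        })

    if metric_data:
        # This call happens here, outside the original function's invocation,
        # so its latency never reaches the user.
        cloudwatch.put_metric_data(Namespace="my-service", MetricData=metric_data)
```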

As I mentioned in that unit, this is not necessary with async workflows - e.g. functions that process Kinesis events or SNS messages. But for APIs, where the latency is user-facing, it's something to consider if you want to minimise it.
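For those async workflows you can just call CloudWatch directly from the handler, since no user is waiting on the response. Something along these lines (again, names are made up):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")


def process(record):
    # stand-in for the real per-record processing logic
    pass


def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        process(record)

    # The extra API call adds latency, but nobody is waiting on this
    # invocation, so publishing the metric inline is fine here.
    cloudwatch.put_metric_data(
        Namespace="my-service",                 # assumed namespace
        MetricData=[{
            "MetricName": "records-processed",  # assumed metric name
            "Value": len(records),
            "Unit": "Count",
        }],
    )
```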

Another constraint we have for the course, and one of the main reasons why we mainly used AWS services, is to limit the number of tools we have to teach you. Logz.io was the main exception because it has a free tier, whereas you'd end up paying for Amazon Elasticsearch if we used that instead.