For capabilities similar to those of Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics.
Queries with time series functions

Topics

Sample dataset and queries
You can use Timestream for LiveAnalytics to understand and improve the performance and availability of your services and applications. Below is a sample table and example queries that run against it.

The table ec2_metrics stores telemetry data, such as CPU utilization and other metrics from EC2 instances, as shown below.
| Time | region | az | hostname | measure_name | measure_value::double | measure_value::bigint |
|---|---|---|---|---|---|---|
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | cpu_utilization | 35.1 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | memory_utilization | 55.3 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_in | null | 1500 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_out | null | 6700 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | cpu_utilization | 38.5 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | memory_utilization | 58.4 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_in | null | 23000 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_out | null | 12000 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | cpu_utilization | 45.0 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | memory_utilization | 65.8 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_in | null | 15000 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_out | null | 836000 |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | cpu_utilization | 55.2 | null |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | memory_utilization | 75.0 | null |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_in | null | 1245 |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_out | null | 68432 |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | cpu_utilization | 65.6 | null |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | memory_utilization | 85.3 | null |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_in | null | 1245 |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_out | null | 68432 |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | cpu_utilization | 12.1 | null |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | memory_utilization | 32.0 | null |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_in | null | 1400 |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_out | null | 345 |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | cpu_utilization | 15.3 | null |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | memory_utilization | 35.4 | null |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_in | null | 23 |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_out | null | 0 |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | cpu_utilization | 44.0 | null |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | memory_utilization | 64.2 | null |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_in | null | 1450 |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_out | null | 200 |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | cpu_utilization | 66.4 | null |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | memory_utilization | 86.3 | null |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_in | null | 300 |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_out | null | 423 |
Compute the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours:
```sql
SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp,
    ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.9), 2) AS p90_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.95), 2) AS p95_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.99), 2) AS p99_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY region, hostname, az, BIN(time, 15s)
ORDER BY binned_timestamp ASC
```
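To illustrate what the combination of `BIN` and percentile aggregation computes, here is a minimal Python sketch over hypothetical in-memory samples. Note that Timestream's `APPROX_PERCENTILE` is an approximation; the sketch below uses exact rank-based percentiles for simplicity.

```python
import statistics

def bin_time(ts, width=15):
    """Equivalent of BIN(time, 15s): floor the timestamp to the start of its bin."""
    return ts - ts % width

def binned_cpu_stats(samples, width=15):
    """Group (timestamp, value) samples into fixed-width bins and compute the
    average plus exact percentiles per bin. APPROX_PERCENTILE in Timestream is
    approximate; here we take the exact rank-based value instead."""
    bins = {}
    for ts, cpu in samples:
        bins.setdefault(bin_time(ts, width), []).append(cpu)
    out = {}
    for start, vals in bins.items():
        vals = sorted(vals)
        pct = lambda p: vals[min(len(vals) - 1, int(p * len(vals)))]
        out[start] = (round(statistics.mean(vals), 2), pct(0.9), pct(0.95), pct(0.99))
    return out

# Hypothetical CPU samples for one host: (seconds, cpu_utilization)
stats = binned_cpu_stats([(0, 35.1), (5, 55.2), (10, 15.3), (20, 40.0)])
```

With only a few samples per bin, p90/p95/p99 all collapse to the highest observation, which is expected for rank-based percentiles on small bins.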
Identify EC2 hosts whose CPU utilization is 10% or more above the fleet-wide average CPU utilization over the past 2 hours:
```sql
WITH avg_fleet_utilization AS (
    SELECT COUNT(DISTINCT hostname) AS total_host_count,
        AVG(measure_value::double) AS fleet_avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
), avg_per_host_cpu AS (
    SELECT region, az, hostname,
        AVG(measure_value::double) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
    GROUP BY region, az, hostname
)
SELECT region, az, hostname, avg_cpu_utilization, fleet_avg_cpu_utilization
FROM avg_fleet_utilization, avg_per_host_cpu
WHERE avg_cpu_utilization > 1.1 * fleet_avg_cpu_utilization
ORDER BY avg_cpu_utilization DESC
```
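The fleet comparison can be sketched locally. One caveat: the query above averages over all raw samples for the fleet, whereas this sketch (using hypothetical per-host averages) takes the mean of per-host averages for brevity, which differs when hosts report at different rates.

```python
def hosts_above_fleet_avg(per_host_cpu, threshold=1.1):
    """Return hosts whose average CPU exceeds threshold x the fleet average,
    sorted by utilization descending, mirroring the query's
    WHERE avg_cpu_utilization > 1.1 * fleet_avg_cpu_utilization clause."""
    fleet_avg = sum(per_host_cpu.values()) / len(per_host_cpu)
    return sorted(
        (h for h, v in per_host_cpu.items() if v > threshold * fleet_avg),
        key=lambda h: per_host_cpu[h], reverse=True)

# Hypothetical per-host averages over the past 2 hours
hot_hosts = hosts_above_fleet_avg(
    {"frontend01": 35.2, "frontend02": 49.4, "frontend03": 41.2})
```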
Compute the average CPU utilization for a specific EC2 host over the past 2 hours, binned at 30-second intervals:
```sql
SELECT BIN(time, 30s) AS binned_timestamp,
    ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY hostname, BIN(time, 30s)
ORDER BY binned_timestamp ASC
```
Compute the average CPU utilization for a specific EC2 host over the past 2 hours, binned at 30-second intervals, with missing values filled in using linear interpolation:
```sql
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp,
        ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
            SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```
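As a rough illustration of what `INTERPOLATE_LINEAR` over a `SEQUENCE(min, max, step)` grid produces, here is a small Python sketch with hypothetical points and step; it is not the Timestream implementation, just the underlying idea:

```python
def interpolate_linear(points, step):
    """Linearly interpolate known (timestamp, value) points onto a regular grid
    running from the first to the last observed timestamp in `step` increments."""
    points = sorted(points)
    result = []
    t = points[0][0]
    i = 0
    while t <= points[-1][0]:
        # Advance to the segment whose right endpoint is at or past t
        while i + 1 < len(points) and points[i + 1][0] < t:
            i += 1
        t0, v0 = points[i]
        if t == t0:
            v = v0  # grid point coincides with an observation
        else:
            t1, v1 = points[i + 1]
            v = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        result.append((t, round(v, 2)))
        t += step
    return result

# Hypothetical binned averages with a gap between 0s and 30s, sampled every 15s
series = interpolate_linear([(0, 10.0), (30, 40.0)], 15)
```

The missing 15-second grid point is filled with the value halfway between its neighbors, which is the defining behavior of linear interpolation.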
Compute the average CPU utilization for a specific EC2 host over the past 2 hours, binned at 30-second intervals, with missing values filled in using last observation carried forward (LOCF) interpolation:
```sql
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp,
        ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LOCF(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
            SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```
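For contrast with linear interpolation, here is a Python sketch of the last-observation-carried-forward idea behind `INTERPOLATE_LOCF` (hypothetical points and step, not the Timestream implementation): each grid timestamp takes the most recent observed value at or before it.

```python
def interpolate_locf(points, step):
    """Fill a regular time grid by carrying the last observed value forward:
    each grid timestamp gets the latest (timestamp, value) pair at or before it."""
    points = sorted(points)
    result = []
    t = points[0][0]
    i = 0
    while t <= points[-1][0]:
        # Advance to the latest observation at or before t
        while i + 1 < len(points) and points[i + 1][0] <= t:
            i += 1
        result.append((t, points[i][1]))
        t += step
    return result

# Same hypothetical gap as the linear example: LOCF holds 10.0 until 30s
series = interpolate_locf([(0, 10.0), (30, 40.0)], 15)
```

Unlike the linear example, the missing 15-second point keeps the previous value rather than splitting the difference, which is often preferable for step-like metrics such as counters or configuration states.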