| hash (string, 40 chars) | date (2018-12-11 14:31:19 – 2025-03-22 02:45:31) | author (280 classes) | commit_message (string, 14–176 chars) | is_merge (bool) | git_diff (string, 198 – 25.8M chars, nullable) | type (83 classes) | masked_commit_message (string, 8–170 chars) |
|---|---|---|---|---|---|---|---|
| b2f30ccd4a34e310074b73d9547f27a3bfc6bfc9 | 2023-09-26 16:47:13 | Ashwanth | doc: log retention page improvements (#10665) | false |
diff --git a/docs/sources/operations/storage/retention.md b/docs/sources/operations/storage/retention.md
index 14b4bc1c7b3d8..b6f35cee6e4b2 100644
--- a/docs/sources/operations/storage/retention.md
+++ b/docs/sources/operations/storage/retention.md
@@ -6,41 +6,52 @@ weight: 600
---
# Log retention
-Retention in Grafana Loki is achieved either through the [Table Manager](#table-manager) or the [Compactor](#compactor).
+Retention in Grafana Loki is achieved through the [Compactor](#compactor).
+By default, the `compactor.retention-enabled` flag is not set, so logs sent to Loki live forever.
-By default, when `table_manager.retention_deletes_enabled` or `compactor.retention_enabled` flags are not set, then logs sent to Loki live forever.
+{{% admonition type="note" %}}
+If you have a lifecycle policy configured on the object store, please ensure that it is longer than the retention period.
+{{% /admonition %}}
-Retention through the [Table Manager]({{< relref "./table-manager" >}}) is achieved by relying on the object store TTL feature, and will work for both [boltdb-shipper]({{< relref "./boltdb-shipper" >}}) store and chunk/index store. However retention through the [Compactor]({{< relref "./boltdb-shipper#compactor" >}}) is supported only with the [boltdb-shipper]({{< relref "./boltdb-shipper" >}}) and tsdb store.
+The Compactor also supports granular retention policies applied at a per-tenant or per-stream level.
-The Compactor retention will become the default and have long term support. It supports more granular retention policies on per tenant and per stream use cases.
+{{% admonition type="note" %}}
+The Compactor does not support retention on [legacy index types]({{< relref "../../storage#index-storage" >}}). Please use the [Table Manager]({{< relref "./table-manager" >}}) when using legacy index types.
+Both the Table manager and legacy index types are deprecated and may be removed in future major versions of Loki.
+{{% /admonition %}}
## Compactor
-The [Compactor]({{< relref "./boltdb-shipper#compactor" >}}) can deduplicate index entries. It can also apply granular retention. When applying retention with the Compactor, the [Table Manager]({{< relref "./table-manager" >}}) is unnecessary.
+The Compactor is responsible for compaction of index files and applying log retention.
-> Run the Compactor as a singleton (a single instance).
+{{% admonition type="note" %}}
+Run the Compactor as a singleton (a single instance).
+{{% /admonition %}}
-Compaction and retention are idempotent. If the Compactor restarts, it will continue from where it left off.
+The Compactor loops to apply compaction and retention at every `compactor.compaction-interval`, or as soon as possible if running behind.
+Both compaction and retention are idempotent. If the Compactor restarts, it will continue from where it left off.
-The Compactor loops to apply compaction and retention at every `compaction_interval`, or as soon as possible if running behind.
-
-The Compactor's algorithm to update the index:
-
-- For each table within each day:
- - Compact the table into a single index file.
- - Traverse the entire index. Use the tenant configuration to identify and mark chunks that need to be removed.
- - Remove marked chunks from the index and save their reference in a file on disk.
+The Compactor's algorithm to apply retention is as follows:
+- For each day or table (one table per day with a 24h index period):
+ - Compact multiple index files in the table into per-tenant index files. Result of compaction is a single index file per tenant per day.
+ - Traverse the per-tenant index. Use the tenant configuration to identify the chunks that need to be removed.
+ - Remove the references to the matching chunks from the index and add the chunk references to a marker file on disk.
- Upload the new modified index files.
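The mark-and-sweep flow described above can be sketched as follows. This is an illustrative sketch, not Loki source: the chunk fields (`ref`, `through`) and the seconds-based ages are assumptions made for the example.

```python
# Illustrative sketch of the Compactor's mark-and-sweep retention flow.
# Not Loki source: chunk fields ("ref", "through") and second-based ages
# are assumptions for the example.

def apply_retention(index, retention_seconds, now):
    """Mark phase: drop expired chunk references from the index.

    Returns the compacted index to upload and the list of chunk refs
    that would be recorded in a marker file for later deletion.
    """
    keep = [c for c in index if now - c["through"] <= retention_seconds]
    marked = [c["ref"] for c in index if now - c["through"] > retention_seconds]
    return keep, marked

def sweep(marker_created_at, marked_refs, now, delete_delay):
    """Sweep phase: delete marked chunks only after retention_delete_delay."""
    if now - marker_created_at >= delete_delay:
        return marked_refs  # safe to delete from object storage
    return []               # too early; queriers may still reference them
```

Because both phases only depend on the index and the marker file, re-running them after a restart is harmless, which is what makes the process idempotent.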
-The retention algorithm is applied to the index. Chunks are not deleted while applying the retention algorithm. The chunks will be deleted by the Compactor asynchronously when swept.
-
-Marked chunks will only be deleted after `retention_delete_delay` configured is expired because:
+Chunks are not deleted while applying the retention algorithm on the index. They are deleted asynchronously by a sweeper process
+and this delay can be configured by setting `-compactor.retention-delete-delay`. Marker files are used to keep track of the chunks pending for deletion.
-- boltdb-shipper indexes are refreshed from the shared store on components using it (querier and ruler) at a specific interval. This means deleting chunks instantly could lead to components still having reference to old chunks and so they could fails to execute queries. Having a delay allows for components to refresh their store and so remove gracefully their reference of those chunks.
+Chunks cannot be deleted immediately for the following reasons:
+- The Index Gateway downloads a copy of the index files to serve queries and refreshes them at a regular interval.
+  The delay gives the Index Gateways time to pull the modified index files, which no longer reference the chunks marked for deletion.
+  Without the delay, stale index files on the gateways could refer to already deleted chunks, leading to query failures.
- It provides a short window of time in which to cancel chunk deletion in the case of a configuration mistake.
-Marker files (containing chunks to delete) should be stored on a persistent disk, since the disk will be the sole reference to them.
+Marker files should be stored on a persistent disk to ensure that the chunks pending for deletion are processed even if the Compactor process restarts.
+{{% admonition type="note" %}}
+We recommend running Compactor as a stateful deployment (StatefulSet when using Kubernetes) with a persistent storage for storing marker files.
+{{% /admonition %}}
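A minimal sketch of such a deployment on Kubernetes follows. This is illustrative only; the image tag, storage size, and flag values are assumptions.

```yaml
# Illustrative StatefulSet for a singleton Compactor with a persistent
# volume for marker files. Image tag and sizes are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki-compactor
spec:
  replicas: 1                      # run as a singleton
  serviceName: loki-compactor
  selector:
    matchLabels: {app: loki-compactor}
  template:
    metadata:
      labels: {app: loki-compactor}
    spec:
      containers:
        - name: compactor
          image: grafana/loki:2.9.0
          args:
            - -target=compactor
            - -compactor.working-directory=/data/retention
            - -compactor.retention-enabled=true
          volumeMounts:
            - name: data
              mountPath: /data     # marker files survive restarts here
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests: {storage: 10Gi}
```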
### Retention Configuration
@@ -72,13 +83,11 @@ storage_config:
bucket_name: loki
```
-> Note that retention is only available if the index period is 24h.
-
-Set `retention_enabled` to true. Without this, the Compactor will only compact tables.
-
-Define `schema_config` and `storage_config` to access the storage.
+{{% admonition type="note" %}}
+Retention is only available if the index period is 24h. Both single store TSDB and single store BoltDB require a 24h index period.
+{{% /admonition %}}
-The index period must be 24h.
+`retention_enabled` should be set to true. Without this, the Compactor will only compact tables.
`working_directory` is the directory where marked chunks and temporary tables will be saved.
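Putting these settings together, a minimal Compactor retention block might look like this (an illustrative sketch; the path and interval values are assumptions, so consult the configuration reference for the full set of options):

```yaml
compactor:
  working_directory: /data/retention   # marker files and temporary tables live here
  compaction_interval: 10m             # how often the compact/retain loop runs
  retention_enabled: true              # without this, only compaction runs
  retention_delete_delay: 2h           # grace period before marked chunks are swept
```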
@@ -94,12 +103,14 @@ Retention period is configured within the [`limits_config`]({{< relref "../../co
There are two ways of setting retention policies:
-- `retention_period` which is applied globally.
-- `retention_stream` which is only applied to chunks matching the selector
+- `retention_period` which is applied globally for all log streams.
+- `retention_stream` which is only applied to log streams matching the selector.
-> The minimum retention period is 24h.
+{{% admonition type="note" %}}
+The minimum retention period is 24h.
+{{% /admonition %}}
-This example configures global retention:
+This example configures global retention that applies to all tenants (unless overridden by configuring per-tenant overrides):
```yaml
...
@@ -113,16 +124,18 @@ limits_config:
...
```
-**NOTE:** You can only use label matchers in the `selector` field of a `retention_stream` definition. Arbitrary LogQL expressions are not supported.
+{{% admonition type="note" %}}
+You can only use label matchers in the `selector` field of a `retention_stream` definition. Arbitrary LogQL expressions are not supported.
+{{% /admonition %}}
-Per tenant retention can be defined using the `/etc/overrides.yaml` files. For example:
+Per tenant retention can be defined by configuring [runtime overrides]({{< relref "../../configure#runtime-configuration-file" >}}). For example:
```yaml
overrides:
"29":
retention_period: 168h
retention_stream:
- - selector: '{namespace="prod", container=~"(nginx|loki)"}'
+ - selector: '{namespace="prod"}'
priority: 2
period: 336h
- selector: '{container="loki"}'
@@ -135,14 +148,17 @@ overrides:
period: 24h
```
-A rule to apply is selected by choosing the first in this list that matches:
-
-1. If a per-tenant `retention_stream` matches the current stream, the highest priority is picked.
-2. If a global `retention_stream` matches the current stream, the highest priority is picked.
+Retention period for a given stream is decided based on the first match in this list:
+1. If multiple per-tenant `retention_stream` selectors match the stream, the retention period with the highest priority is picked.
+2. If multiple global `retention_stream` selectors match the stream, the retention period with the highest priority is picked. This value is not considered if per-tenant `retention_stream` is set.
3. If a per-tenant `retention_period` is specified, it will be applied.
-4. The global `retention_period` will be selected if nothing else matched.
+4. The global `retention_period` will be applied if none of the above match.
5. If no global `retention_period` is specified, the default value of `744h` (30 days) retention is used.
+{{% admonition type="note" %}}
+The larger the priority value, the higher the priority.
+{{% /admonition %}}
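The decision list above can be sketched in Python. This is illustrative, not Loki source; for brevity the sketch uses exact-equality label matching only, whereas Loki supports the full Prometheus matcher syntax.

```python
# Illustrative sketch of the retention-period decision list above.
# Not Loki source; rules use exact-equality label matching only.

DEFAULT_RETENTION = "744h"

def resolve_retention(stream, tenant, global_cfg):
    def best(rules):
        hits = [r for r in rules
                if all(stream.get(k) == v for k, v in r["match"].items())]
        return max(hits, key=lambda r: r["priority"])["period"] if hits else None

    # 1. Per-tenant retention_stream: highest priority wins.
    period = best(tenant.get("retention_stream", []))
    if period:
        return period
    # 2. Global retention_stream, ignored entirely if per-tenant rules exist.
    if not tenant.get("retention_stream"):
        period = best(global_cfg.get("retention_stream", []))
        if period:
            return period
    # 3. Per-tenant retention_period.
    if "retention_period" in tenant:
        return tenant["retention_period"]
    # 4./5. Global retention_period, else the 744h (30 days) default.
    return global_cfg.get("retention_period", DEFAULT_RETENTION)
```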
+
Stream matching uses the same syntax as Prometheus label matching:
- `=`: Select labels that are exactly equal to the provided string.
@@ -150,19 +166,23 @@ Stream matching uses the same syntax as Prometheus label matching:
- `=~`: Select labels that regex-match the provided string.
- `!~`: Select labels that do not regex-match the provided string.
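As an illustration of the four matcher types in `retention_stream` selectors (the label names, priorities, and periods are made up for the example):

```yaml
retention_stream:
  - selector: '{namespace="prod"}'          # =  exact match
    priority: 2
    period: 336h
  - selector: '{namespace!="dev"}'          # != exact non-match
    priority: 1
    period: 168h
  - selector: '{container=~"(nginx|loki)"}' # =~ regex match
    priority: 1
    period: 72h
  - selector: '{level!~"debug|trace"}'      # !~ regex non-match
    priority: 0
    period: 24h
```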
-The example configurations will set these rules:
-
-- All tenants except `29` and `30` in the `dev` namespace will have a retention period of `24h` hours.
-- All tenants except `29` and `30` that are not in the `dev` namespace will have the retention period of `744h`.
+The example configurations defined above will result in the following retention periods:
- For tenant `29`:
- - All streams except those in the container `loki` or in the namespace `prod` will have retention period of `168h` (1 week).
- - All streams in the `prod` namespace will have a retention period of `336h` (2 weeks), even if the container label is `loki`, since the priority of the `prod` rule is higher.
+ - Streams that have the namespace label `prod` will have a retention period of `336h` (2 weeks), even if the container label is `loki`, since the priority of the `prod` rule is higher.
- Streams that have the container label `loki` but are not in the namespace `prod` will have a `72h` retention period.
+ - For the rest of the streams in this tenant, per-tenant override `retention_period` value of `168h` is applied.
- For tenant `30`:
- - All streams except those having the container label `nginx` will have the global retention period of `744h`, since there is no override specified.
- - Streams that have the label `nginx` will have a retention period of `24h`.
+ - Streams that have the label `nginx` and level `debug` will have a retention period of `24h`.
+ - For the rest of the streams in this tenant, the global retention period of `744h` applies, since there is no override specified.
+- All tenants except `29` and `30`:
+ - Streams that have the namespace label `dev` will have a retention period of `24h`.
+ - Streams except those with the namespace label `dev` will have the retention period of `744h`.
-## Table Manager
+## Table Manager (deprecated)
+
+Retention through the [Table Manager]({{< relref "./table-manager" >}}) is
+achieved by relying on the object store TTL feature, and will work for both
+[boltdb-shipper]({{< relref "./boltdb-shipper" >}}) store and chunk/index stores.
In order to enable the retention support, the Table Manager needs to be
configured to enable deletions and a retention period. Please refer to the
@@ -173,14 +193,18 @@ Alternatively, the `table-manager.retention-period` and
provided retention period needs to be a duration represented as a string that
can be parsed using the Prometheus common model [ParseDuration](https://pkg.go.dev/github.com/prometheus/common/model#ParseDuration). Examples: `7d`, `1w`, `168h`.
-> **WARNING**: The retention period must be a multiple of the index and chunks table
+{{% admonition type="warning" %}}
+The retention period must be a multiple of the index and chunks table
`period`, configured in the [`period_config`]({{< relref "../../configure#period_config" >}})
block. See the [Table Manager]({{< relref "./table-manager#retention" >}}) documentation for
more information.
+{{% /admonition %}}
-> **NOTE**: To avoid querying of data beyond the retention period,
-`max_look_back_period` config in [`chunk_store_config`]({{< relref "../../configure#chunk_store_config" >}}) must be set to a value less than or equal to
+{{% admonition type="note" %}}
+To avoid querying of data beyond the retention period,
+`max_query_lookback` config in [`limits_config`]({{< relref "../../configure#limits_config" >}}) must be set to a value less than or equal to
what is set in `table_manager.retention_period`.
+{{% /admonition %}}
When using S3 or GCS, the bucket storing the chunks needs to have the expiry
policy set correctly. For more details check
@@ -188,8 +212,9 @@ policy set correctly. For more details check
or
[GCS's documentation](https://cloud.google.com/storage/docs/managing-lifecycles).
-Currently, the retention policy can only be set globally. A per-tenant retention
-policy with an API to delete ingested logs is still under development.
+The Table Manager retention policy can only be set globally.
+Per-tenant and per-stream retention policies, along with support for deleting
+ingested logs using an API, are only supported by Compactor retention.
Since a design goal of Loki is to make storing logs cheap, a volume-based
deletion API is deprioritized. Until this feature is released, if you suddenly
@@ -224,8 +249,8 @@ storage_config:
gcs:
bucket_name: GCS_BUCKET_NAME
-chunk_store_config:
- max_look_back_period: 672h
+limits_config:
+ max_query_lookback: 672h
table_manager:
retention_deletes_enabled: true
diff --git a/pkg/validation/limits.go b/pkg/validation/limits.go
index ee394f3d82633..cbcd6c2ffb2a7 100644
--- a/pkg/validation/limits.go
+++ b/pkg/validation/limits.go
@@ -189,9 +189,9 @@ type Limits struct {
}
type StreamRetention struct {
- Period model.Duration `yaml:"period" json:"period"`
- Priority int `yaml:"priority" json:"priority"`
- Selector string `yaml:"selector" json:"selector"`
+ Period model.Duration `yaml:"period" json:"period" doc:"description:Retention period applied to the log lines matching the selector."`
+ Priority int `yaml:"priority" json:"priority" doc:"description:The larger the value, the higher the priority."`
+ Selector string `yaml:"selector" json:"selector" doc:"description:Stream selector expression."`
Matchers []*labels.Matcher `yaml:"-" json:"-"` // populated during validation.
}
| doc | log retention page improvements (#10665) |
| 24accf6a5b1ee1706e3fc6adf82b4940ba7db5e1 | 2021-10-06 03:51:47 | Karen Miller | docs: correctly represent product name (#4416) | false |
diff --git a/docs/sources/_index.md b/docs/sources/_index.md
index 36a7baa073279..d3d13f94fa15a 100644
--- a/docs/sources/_index.md
+++ b/docs/sources/_index.md
@@ -1,10 +1,10 @@
---
-title: Loki Documentation
+title: Grafana Loki
aliases:
- /docs/loki/
---
-# Loki Documentation
+# Grafana Loki Documentation
<p align="center"> <img src="logo_and_name.png" alt="Loki Logo"> <br>
<small>Like Prometheus, but for logs!</small> </p>
diff --git a/docs/sources/api/_index.md b/docs/sources/api/_index.md
index df2082fe7e34b..7d441949c408b 100644
--- a/docs/sources/api/_index.md
+++ b/docs/sources/api/_index.md
@@ -3,9 +3,9 @@ title: HTTP API
weight: 900
---
-# Loki HTTP API
+# Grafana Loki HTTP API
-Loki exposes an HTTP API for pushing, querying, and tailing log data.
+Grafana Loki exposes an HTTP API for pushing, querying, and tailing log data.
Note that [authenticating](../operations/authentication/) against the API is
out of scope for Loki.
diff --git a/docs/sources/best-practices/_index.md b/docs/sources/best-practices/_index.md
index 6632d636711b9..92dd7cafa0965 100644
--- a/docs/sources/best-practices/_index.md
+++ b/docs/sources/best-practices/_index.md
@@ -2,9 +2,9 @@
title: Best practices
weight: 400
---
-# Loki label best practices
+# Grafana Loki label best practices
-Loki is under active development, and we are constantly working to improve performance. But here are some of the most current best practices for labels that will give you the best experience with Loki.
+Grafana Loki is under active development, and we are constantly working to improve performance. But here are some of the most current best practices for labels that will give you the best experience with Loki.
## Static labels are good
diff --git a/docs/sources/clients/_index.md b/docs/sources/clients/_index.md
index 8b2c4cb367c8e..c824a97bcf462 100644
--- a/docs/sources/clients/_index.md
+++ b/docs/sources/clients/_index.md
@@ -2,9 +2,9 @@
title: Clients
weight: 600
---
-# Loki clients
+# Grafana Loki clients
-Loki supports the following official clients for sending logs:
+Grafana Loki supports the following official clients for sending logs:
- [Promtail](promtail/)
- [Docker Driver](docker-driver/)
diff --git a/docs/sources/clients/aws/_index.md b/docs/sources/clients/aws/_index.md
index 05196210355c7..1c6bae40e4fd5 100644
--- a/docs/sources/clients/aws/_index.md
+++ b/docs/sources/clients/aws/_index.md
@@ -1,8 +1,9 @@
---
title: AWS
+weight: 30
---
-Sending logs from AWS services to Loki is a little different depending on what AWS service you are using:
+Sending logs from AWS services to Grafana Loki is a little different depending on what AWS service you are using:
* [Elastic Compute Cloud (EC2)](ec2/)
* [Elastic Container Service (ECS)](ecs/)
diff --git a/docs/sources/clients/aws/ec2/_index.md b/docs/sources/clients/aws/ec2/_index.md
index faf4b89db6548..4a1fe76b87b8b 100644
--- a/docs/sources/clients/aws/ec2/_index.md
+++ b/docs/sources/clients/aws/ec2/_index.md
@@ -3,7 +3,7 @@ title: EC2
---
# Running Promtail on AWS EC2
-In this tutorial we're going to setup [Promtail](../../promtail/) on an AWS EC2 instance and configure it to sends all its logs to a Loki instance.
+In this tutorial we're going to set up [Promtail](../../promtail/) on an AWS EC2 instance and configure it to send all its logs to a Grafana Loki instance.
<!-- TOC -->
diff --git a/docs/sources/clients/aws/ecs/_index.md b/docs/sources/clients/aws/ecs/_index.md
index 4f38b4b608a5e..6ecef2b7726e8 100644
--- a/docs/sources/clients/aws/ecs/_index.md
+++ b/docs/sources/clients/aws/ecs/_index.md
@@ -3,7 +3,7 @@ title: ECS
---
# Sending Logs From AWS Elastic Container Service (ECS)
-[ECS][ECS] is the fully managed container orchestration service by Amazon. Combined with [Fargate][Fargate] you can run your container workload without the need to provision your own compute resources. In this tutorial we will see how you can leverage [Firelens][Firelens] an AWS log router to forward all your logs and your workload metadata to a Loki instance.
+[ECS][ECS] is the fully managed container orchestration service by Amazon. Combined with [Fargate][Fargate] you can run your container workload without the need to provision your own compute resources. In this tutorial we will see how you can leverage [Firelens][Firelens], an AWS log router, to forward all your logs and your workload metadata to a Grafana Loki instance.
After this tutorial you will be able to query all your logs in one place using Grafana.
diff --git a/docs/sources/clients/aws/eks/_index.md b/docs/sources/clients/aws/eks/_index.md
index ef75dfdc381c4..c192ab39cd522 100644
--- a/docs/sources/clients/aws/eks/_index.md
+++ b/docs/sources/clients/aws/eks/_index.md
@@ -25,7 +25,7 @@ Before we start you'll need:
- The [AWS CLI][aws cli] configured (run `aws configure`).
- [kubectl][kubectl] and [eksctl][eksctl] installed.
-- A Grafana instance with a Loki data source already configured, you can use [GrafanaCloud][GrafanaCloud] free trial.
+- A Grafana instance with a Grafana Loki data source already configured, you can use [GrafanaCloud][GrafanaCloud] free trial.
For the sake of simplicity we'll use [GrafanaCloud][GrafanaCloud] Loki and Grafana instances; you can get a free account for this tutorial on our [website][GrafanaCloud], but all the steps are the same if you're running your own open source Loki and Grafana instances.
diff --git a/docs/sources/clients/docker-driver/_index.md b/docs/sources/clients/docker-driver/_index.md
index 5bbd2c78ecfff..39f49188937a9 100644
--- a/docs/sources/clients/docker-driver/_index.md
+++ b/docs/sources/clients/docker-driver/_index.md
@@ -1,9 +1,10 @@
---
title: Docker driver
+weight: 40
---
# Docker Driver Client
-Loki officially supports a Docker plugin that will read logs from Docker
+Grafana Loki officially supports a Docker plugin that will read logs from Docker
containers and ship them to Loki. The plugin can be configured to send the logs
to a private Loki instance or [Grafana Cloud](https://grafana.com/oss/loki).
diff --git a/docs/sources/clients/docker-driver/configuration.md b/docs/sources/clients/docker-driver/configuration.md
index 00a0a4049a1f4..dc7a273761d25 100644
--- a/docs/sources/clients/docker-driver/configuration.md
+++ b/docs/sources/clients/docker-driver/configuration.md
@@ -8,7 +8,7 @@ each container will use the default driver unless configured otherwise.
## Installation
-Before configuring the plugin, [install or upgrade the Loki Docker Driver Client](../../docker-driver/)
+Before configuring the plugin, [install or upgrade the Grafana Loki Docker Driver Client](../../docker-driver/)
## Change the logging driver for a container
diff --git a/docs/sources/clients/fluentbit/_index.md b/docs/sources/clients/fluentbit/_index.md
index 0a871d233c192..a0ffbd4367dce 100644
--- a/docs/sources/clients/fluentbit/_index.md
+++ b/docs/sources/clients/fluentbit/_index.md
@@ -1,9 +1,10 @@
---
title: Fluentbit
+weight: 50
---
# Fluentbit Loki Output Plugin
-[Fluent Bit](https://fluentbit.io/) is a Fast and Lightweight Data Forwarder, it can be configured with the [Loki output plugin](https://fluentbit.io/documentation/0.12/output/) to ship logs to Loki. You can define which log files you want to collect using the [`Tail`](https://fluentbit.io/documentation/0.12/input/tail.html) or [`Stdin`](https://docs.fluentbit.io/manual/pipeline/inputs/standard-input) [input plugin](https://fluentbit.io/documentation/0.12/getting_started/input.html). Additionally Fluent Bit supports multiple `Filter` and `Parser` plugins (`Kubernetes`, `JSON`, etc..) to structure and alter log lines.
+[Fluent Bit](https://fluentbit.io/) is a fast and lightweight data forwarder; it can be configured with the [Grafana Loki output plugin](https://fluentbit.io/documentation/0.12/output/) to ship logs to Loki. You can define which log files you want to collect using the [`Tail`](https://fluentbit.io/documentation/0.12/input/tail.html) or [`Stdin`](https://docs.fluentbit.io/manual/pipeline/inputs/standard-input) [input plugin](https://fluentbit.io/documentation/0.12/getting_started/input.html). Additionally, Fluent Bit supports multiple `Filter` and `Parser` plugins (`Kubernetes`, `JSON`, etc.) to structure and alter log lines.
## Usage
diff --git a/docs/sources/clients/fluentd/_index.md b/docs/sources/clients/fluentd/_index.md
index ed3395c2eeb7f..7e3300cc86297 100644
--- a/docs/sources/clients/fluentd/_index.md
+++ b/docs/sources/clients/fluentd/_index.md
@@ -1,9 +1,10 @@
---
title: Fluentd
+weight: 60
---
# Fluentd Loki Output Plugin
-Loki has a [Fluentd](https://www.fluentd.org/) output plugin called
+Grafana Loki has a [Fluentd](https://www.fluentd.org/) output plugin called
`fluent-plugin-grafana-loki` that enables shipping logs to a private Loki
instance or [Grafana Cloud](https://grafana.com/products/cloud/).
diff --git a/docs/sources/clients/lambda-promtail/_index.md b/docs/sources/clients/lambda-promtail/_index.md
index 861b0b70041a6..578cc4928f377 100644
--- a/docs/sources/clients/lambda-promtail/_index.md
+++ b/docs/sources/clients/lambda-promtail/_index.md
@@ -1,9 +1,10 @@
---
title: Lambda Promtail
+weight: 20
---
# Lambda Promtail
-Loki includes an [AWS SAM](https://aws.amazon.com/serverless/sam/) package template for shipping Cloudwatch logs to Loki via a [set of Promtails](https://github.com/grafana/loki/tree/master/tools/lambda-promtail). This is done via an intermediary [lambda function](https://aws.amazon.com/lambda/) which processes cloudwatch events and propagates them to a Promtail instance (or set of instances behind a load balancer) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).
+Grafana Loki includes an [AWS SAM](https://aws.amazon.com/serverless/sam/) package template for shipping Cloudwatch logs to Loki via a [set of Promtails](https://github.com/grafana/loki/tree/master/tools/lambda-promtail). This is done via an intermediary [lambda function](https://aws.amazon.com/lambda/) which processes cloudwatch events and propagates them to a Promtail instance (or set of instances behind a load balancer) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).
## Uses
diff --git a/docs/sources/clients/logstash/_index.md b/docs/sources/clients/logstash/_index.md
index 1142c2485b1d7..b64ef2fc69aaa 100644
--- a/docs/sources/clients/logstash/_index.md
+++ b/docs/sources/clients/logstash/_index.md
@@ -1,9 +1,10 @@
---
title: Logstash
+weight: 70
---
# Logstash
-Loki has a [Logstash](https://www.elastic.co/logstash) output plugin called
+Grafana Loki has a [Logstash](https://www.elastic.co/logstash) output plugin called
`logstash-output-loki` that enables shipping logs to a Loki
instance or [Grafana Cloud](https://grafana.com/products/cloud/).
diff --git a/docs/sources/clients/promtail/_index.md b/docs/sources/clients/promtail/_index.md
index 6838699ed27a8..cb7db92438f1d 100644
--- a/docs/sources/clients/promtail/_index.md
+++ b/docs/sources/clients/promtail/_index.md
@@ -1,9 +1,10 @@
---
title: Promtail
+weight: 10
---
# Promtail
-Promtail is an agent which ships the contents of local logs to a private Loki
+Promtail is an agent which ships the contents of local logs to a private Grafana Loki
instance or [Grafana Cloud](https://grafana.com/oss/loki). It is usually
deployed to every machine that has applications needed to be monitored.
diff --git a/docs/sources/clients/promtail/configuration.md b/docs/sources/clients/promtail/configuration.md
index 6603535bcf8e0..4cb4bee7a9649 100644
--- a/docs/sources/clients/promtail/configuration.md
+++ b/docs/sources/clients/promtail/configuration.md
@@ -80,7 +80,7 @@ Where default_value is the value to use if the environment variable is undefined
[server: <server_config>]
# Describes how Promtail connects to multiple instances
-# of Loki, sending logs to each.
+# of Grafana Loki, sending logs to each.
# WARNING: If one of the remote Loki servers fails to respond or responds
# with any error which is retryable, this will impact sending logs to any
# other configured remote Loki servers. Sending is done on a single thread!
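For example, a `clients` section fanning out to two Loki instances might look like this (a sketch; the hostnames are placeholders):

```yaml
clients:
  - url: http://loki-a:3100/loki/api/v1/push
  - url: http://loki-b:3100/loki/api/v1/push
```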
diff --git a/docs/sources/clients/promtail/gcplog-cloud.md b/docs/sources/clients/promtail/gcplog-cloud.md
index 9476018929a47..72372e789e51b 100644
--- a/docs/sources/clients/promtail/gcplog-cloud.md
+++ b/docs/sources/clients/promtail/gcplog-cloud.md
@@ -3,7 +3,7 @@ title: Cloud setup GCP Logs
---
# Cloud setup GCP logs
-This document explain how one can setup Google Cloud Platform to forward its cloud resource logs from a particular GCP project into Google Pubsub topic so that is available for Loki Promtail to consume.
+This document explains how to set up Google Cloud Platform to forward its cloud resource logs from a particular GCP project into a Google Pubsub topic so that they are available for Promtail to consume.
This document assumes that the reader has `gcloud` installed and has the required permissions (as mentioned in the #[Roles and Permission] section)
@@ -46,7 +46,7 @@ For more information on adding `log-filter` refer this [document](https://cloud.
We cover more advanced `log-filter` [below](#Advanced-Log-filter)
-## Create Pubsub subscription for Loki
+## Create Pubsub subscription for Grafana Loki
We create a subscription for the Pubsub topic we created above, and Promtail uses this subscription to consume log messages.
@@ -94,7 +94,7 @@ gcloud pubsub subscriptions seek projects/my-project/subscriptions/cloud-logs --
## Advanced log filter
-So far we've covered admitting GCS bucket logs into Loki, but often one may need to add multiple cloud resource logs and may also need to exclude unnecessary logs. The following is a more complex example.
+So far we've covered admitting GCS bucket logs into Grafana Loki, but often one may need to add multiple cloud resource logs and may also need to exclude unnecessary logs. The following is a more complex example.
We use the `log-filter` option to include logs and the `exclusion` option to exclude them.
diff --git a/docs/sources/clients/promtail/pipelines.md b/docs/sources/clients/promtail/pipelines.md
index 765717ce897a6..d7d31e9e88aad 100644
--- a/docs/sources/clients/promtail/pipelines.md
+++ b/docs/sources/clients/promtail/pipelines.md
@@ -33,7 +33,7 @@ something with that extracted data. The most common action stage will be a
A common stage will also be the [match](../stages/match/) stage to selectively
apply stages or drop entries based on a [LogQL stream selector and filter expressions](../../../logql/).
-Note that pipelines can not currently be used to deduplicate logs; Loki will
+Note that pipelines cannot currently be used to deduplicate logs; Grafana Loki will
receive the same log line multiple times if, for example:
1. Two scrape configs read from the same file
diff --git a/docs/sources/clients/promtail/scraping.md b/docs/sources/clients/promtail/scraping.md
index 44c9d1516ddc4..ccca42445afd0 100644
--- a/docs/sources/clients/promtail/scraping.md
+++ b/docs/sources/clients/promtail/scraping.md
@@ -33,7 +33,7 @@ There are different types of labels present in Promtail:
- Labels starting with `__` (two underscores) are internal labels. They usually
come from dynamic sources like service discovery. Once relabeling is done,
they are removed from the label set. To persist internal labels so they're
- sent to Loki, rename them so they don't start with `__`. See
+ sent to Grafana Loki, rename them so they don't start with `__`. See
[Relabeling](#relabeling) for more information.
- Labels starting with `__meta_kubernetes_pod_label_*` are "meta labels" which
diff --git a/docs/sources/configuration/_index.md b/docs/sources/configuration/_index.md
index da897bc53335b..7f65e9b3cd554 100644
--- a/docs/sources/configuration/_index.md
+++ b/docs/sources/configuration/_index.md
@@ -2,9 +2,9 @@
title: Configuration
weight: 500
---
-# Configuring Loki
+# Configuring Grafana Loki
-Loki is configured in a YAML file (usually referred to as `loki.yaml` )
+Grafana Loki is configured in a YAML file (usually referred to as `loki.yaml` )
which contains information on the Loki server and its individual components,
depending on which mode Loki is launched in.
diff --git a/docs/sources/configuration/examples.md b/docs/sources/configuration/examples.md
index 539a7a3ee7f9c..939c1aa2a2e16 100644
--- a/docs/sources/configuration/examples.md
+++ b/docs/sources/configuration/examples.md
@@ -1,9 +1,9 @@
---
title: Examples
---
-# Loki Configuration Examples
+# Grafana Loki Configuration Examples
-## Complete Local config
+## Complete Local configuration
```yaml
auth_enabled: false
diff --git a/docs/sources/configuration/query-frontend.md b/docs/sources/configuration/query-frontend.md
index 7da8944df3666..57ed17ddf7454 100644
--- a/docs/sources/configuration/query-frontend.md
+++ b/docs/sources/configuration/query-frontend.md
@@ -9,7 +9,7 @@ This aims to be a general purpose example; there are a number of substitutions t
## Use case
-It's a common occurrence to start running Loki as a single binary while trying it out in order to simplify deployments and defer learning the (initially unnecessary) nitty gritty details. As we become more comfortable with its paradigms and begin migrating towards a more production ready deployment there are a number of things to be aware of. A common bottleneck is on the read path: queries that executed effortlessly on small data sets may churn to a halt on larger ones. Sometimes we can solve this with more queriers. However, that doesn't help when our queries are too large for a single querier to execute. Then we need the query frontend.
+It's a common occurrence to start running Grafana Loki as a single binary while trying it out in order to simplify deployments and defer learning the (initially unnecessary) nitty gritty details. As we become more comfortable with its paradigms and begin migrating towards a more production ready deployment there are a number of things to be aware of. A common bottleneck is on the read path: queries that executed effortlessly on small data sets may churn to a halt on larger ones. Sometimes we can solve this with more queriers. However, that doesn't help when our queries are too large for a single querier to execute. Then we need the query frontend.
### Parallelization
diff --git a/docs/sources/fundamentals/architecture/_index.md b/docs/sources/fundamentals/architecture/_index.md
index 76b40e9071e21..eab0b0a6cf695 100644
--- a/docs/sources/fundamentals/architecture/_index.md
+++ b/docs/sources/fundamentals/architecture/_index.md
@@ -4,12 +4,12 @@ weight: 200
aliases:
- /docs/loki/latest/architecture/
---
-# Loki's Architecture
+# Grafana Loki's Architecture
## Multi-tenancy
All data, both in memory and in long-term storage, may be partitioned by a
-tenant ID, pulled from the `X-Scope-OrgID` HTTP header in the request when Loki
+tenant ID, pulled from the `X-Scope-OrgID` HTTP header in the request when Grafana Loki
is running in multi-tenant mode. When Loki is **not** in multi-tenant mode, the
header is ignored and the tenant ID is set to "fake", which will appear in the
index and in stored chunks.
diff --git a/docs/sources/fundamentals/architecture/distributor.md b/docs/sources/fundamentals/architecture/distributor.md
index 2bcbda2bb8b65..adbca4e69685d 100644
--- a/docs/sources/fundamentals/architecture/distributor.md
+++ b/docs/sources/fundamentals/architecture/distributor.md
@@ -8,7 +8,7 @@ Distributors are stateless and communicate with ingesters via [gRPC](https://grp
## Where does it live?
-The distributor is the first component on Loki's write path downstream from any gateways providing auth or load balancing. It's responsible for validating, preprocessing, and applying a subset of rate limiting to incoming data before sending it to the ingester component. It is important that a load balancer sits in front of the distributor in order to properly balance traffic to them.
+The distributor is the first component on Grafana Loki's write path downstream from any gateways providing auth or load balancing. It's responsible for validating, preprocessing, and applying a subset of rate limiting to incoming data before sending it to the ingester component. It is important that a load balancer sits in front of the distributor in order to properly balance traffic to them.
## What does it do?
diff --git a/docs/sources/fundamentals/labels.md b/docs/sources/fundamentals/labels.md
index e11bb2b7c2c5e..c06bec2273509 100644
--- a/docs/sources/fundamentals/labels.md
+++ b/docs/sources/fundamentals/labels.md
@@ -8,7 +8,7 @@ aliases:
Labels are key value pairs and can be defined as anything! We like to refer to them as metadata to describe a log stream. If you are familiar with Prometheus, there are a few labels you are used to seeing like `job` and `instance`, and I will use those in the coming examples.
-The scrape configs we provide with Loki define these labels, too. If you are using Prometheus, having consistent labels between Loki and Prometheus is one of Loki's superpowers, making it incredibly [easy to correlate your application metrics with your log data](https://grafana.com/blog/2019/05/06/how-loki-correlates-metrics-and-logs-and-saves-you-money/).
+The scrape configs we provide with Grafana Loki define these labels, too. If you are using Prometheus, having consistent labels between Loki and Prometheus is one of Loki's superpowers, making it incredibly [easy to correlate your application metrics with your log data](https://grafana.com/blog/2019/05/06/how-loki-correlates-metrics-and-logs-and-saves-you-money/).
## How Loki uses labels
diff --git a/docs/sources/fundamentals/overview/comparisons.md b/docs/sources/fundamentals/overview/comparisons.md
index 11aab75c45a4f..d366cd88b5889 100644
--- a/docs/sources/fundamentals/overview/comparisons.md
+++ b/docs/sources/fundamentals/overview/comparisons.md
@@ -3,7 +3,7 @@ title: Comparisons
---
# Loki compared to other log systems
-## Loki / Promtail / Grafana vs EFK
+## Grafana Loki / Promtail / Grafana vs EFK
The EFK (Elasticsearch, Fluentd, Kibana) stack is used to ingest, visualize, and
query for logs from various sources.
@@ -13,7 +13,7 @@ keys for each object and the contents of each key are indexed. Data can then be
queried using a JSON object to define a query (called the Query DSL) or through
the Lucene query language.
-In comparison, Loki in single-binary mode can store data on-disk, but in
+In comparison, Grafana Loki in single-binary mode can store data on-disk, but in
horizontally-scalable mode data is stored in a cloud storage system such as S3,
GCS, or Cassandra. Logs are stored in plaintext form tagged with a set of label
names and values, where only the label pairs are indexed. This tradeoff makes it
diff --git a/docs/sources/getting-started/_index.md b/docs/sources/getting-started/_index.md
index 24b11b709a325..5caeb8b7594ba 100644
--- a/docs/sources/getting-started/_index.md
+++ b/docs/sources/getting-started/_index.md
@@ -2,7 +2,7 @@
title: Getting started
weight: 300
---
-# Getting started with Loki
+# Getting started with Grafana Loki
> **Note:** You can use [Grafana Cloud](https://grafana.com/products/cloud/features/#cloud-logs) to avoid installing, maintaining, and scaling your own instance of Grafana Loki. The free forever plan includes 50GB of free logs. [Create a free account to get started](https://grafana.com/auth/sign-up/create-user?pg=docs-grafana-install&plcmt=in-text).
diff --git a/docs/sources/getting-started/get-logs-into-loki.md b/docs/sources/getting-started/get-logs-into-loki.md
index 731d9f70ee927..cddc738347df9 100644
--- a/docs/sources/getting-started/get-logs-into-loki.md
+++ b/docs/sources/getting-started/get-logs-into-loki.md
@@ -1,9 +1,10 @@
---
title: Get logs into Loki
+weight: 10
---
-# Get logs into Loki
+# Get logs into Grafana Loki
-After you [install and run Loki](../../installation/local/), you probably want to get logs from other applications into it.
+After you [install and run Grafana Loki](../../installation/local/), you probably want to get logs from other applications into it.
To get application logs into Loki, you need to edit the [Promtail]({{< relref "../clients/promtail" >}}) configuration file.
diff --git a/docs/sources/getting-started/grafana.md b/docs/sources/getting-started/grafana.md
index 6903b57d605ba..f88d8857a168c 100644
--- a/docs/sources/getting-started/grafana.md
+++ b/docs/sources/getting-started/grafana.md
@@ -1,10 +1,11 @@
---
title: Loki in Grafana
+weight: 30
---
# Loki in Grafana
[Grafana 6.0](https://grafana.com/grafana/download/6.0.0) and more recent
-versions have built-in support for Loki.
+versions have built-in support for Grafana Loki.
Use [Grafana 6.3](https://grafana.com/grafana/download/6.3.0) or a more
recent version to take advantage of [LogQL]({{< relref "../logql/_index.md" >}}) functionality.
diff --git a/docs/sources/getting-started/logcli.md b/docs/sources/getting-started/logcli.md
index 5c61fbc4d094f..836f2caf02261 100644
--- a/docs/sources/getting-started/logcli.md
+++ b/docs/sources/getting-started/logcli.md
@@ -1,9 +1,10 @@
---
title: LogCLI
+weight: 20
---
-# LogCLI, Loki's command-line interface
+# LogCLI, Grafana Loki's command-line interface
-LogCLI is the command-line interface to Loki.
+LogCLI is the command-line interface to Grafana Loki.
It facilitates running [LogQL]({{< relref "../logql/_index.md" >}})
queries against a Loki instance.
diff --git a/docs/sources/getting-started/troubleshooting.md b/docs/sources/getting-started/troubleshooting.md
index 3e246781e87c3..95f2e0106b7d2 100644
--- a/docs/sources/getting-started/troubleshooting.md
+++ b/docs/sources/getting-started/troubleshooting.md
@@ -1,11 +1,12 @@
---
title: Troubleshooting
+weight: 40
---
-# Troubleshooting Loki
+# Troubleshooting Grafana Loki
## "Loki: Bad Gateway. 502"
-This error can appear in Grafana when Loki is added as a
+This error can appear in Grafana when Grafana Loki is added as a
datasource, indicating that Grafana in unable to connect to Loki. There may
one of many root causes:
diff --git a/docs/sources/installation/docker.md b/docs/sources/installation/docker.md
index 94cc51b3b1048..a561629d3db2f 100644
--- a/docs/sources/installation/docker.md
+++ b/docs/sources/installation/docker.md
@@ -2,9 +2,9 @@
title: Docker
weight: 30
---
-# Install Loki with Docker or Docker Compose
+# Install Grafana Loki with Docker or Docker Compose
-You can install Loki and Promtail with Docker or Docker Compose if you are evaluating, testing, or developing Loki.
+You can install Grafana Loki and Promtail with Docker or Docker Compose if you are evaluating, testing, or developing Loki.
For production, we recommend installing with Tanka or Helm.
The configuration acquired with these installation instructions run Loki as a single binary.
diff --git a/docs/sources/installation/helm.md b/docs/sources/installation/helm.md
index 7bbda32109432..3234a96c2d401 100644
--- a/docs/sources/installation/helm.md
+++ b/docs/sources/installation/helm.md
@@ -2,9 +2,9 @@
title: Helm
weight: 20
---
-# Install Loki with Helm
+# Install Grafana Loki with Helm
-The Helm installation runs the Loki cluster as a single binary.
+The Helm installation runs the Grafana Loki cluster as a single binary.
## Prerequisites
diff --git a/docs/sources/installation/install-from-source.md b/docs/sources/installation/install-from-source.md
index 3013b2a0c792e..0613c6bbac174 100644
--- a/docs/sources/installation/install-from-source.md
+++ b/docs/sources/installation/install-from-source.md
@@ -4,7 +4,7 @@ weight: 50
---
# Build from source
-Clone the Loki repository and use the provided `Makefile`
+Clone the Grafana Loki repository and use the provided `Makefile`
to build Loki from source.
## Prerequisites
diff --git a/docs/sources/installation/local.md b/docs/sources/installation/local.md
index 4c734d9226fa1..e0e44df8c6b93 100644
--- a/docs/sources/installation/local.md
+++ b/docs/sources/installation/local.md
@@ -2,9 +2,9 @@
title: Local
weight: 40
---
-# Install and run Loki locally
+# Install and run Grafana Loki locally
-In order to log events with Loki, download and install both Promtail and Loki.
+In order to log events with Grafana Loki, download and install both Promtail and Loki.
- Loki is the logging engine.
- Promtail sends logs to Loki.
diff --git a/docs/sources/installation/tanka.md b/docs/sources/installation/tanka.md
index b1c2e56c36a1f..51f837a2de075 100644
--- a/docs/sources/installation/tanka.md
+++ b/docs/sources/installation/tanka.md
@@ -2,11 +2,11 @@
title: Tanka
weight: 10
---
-# Install Loki with Tanka
+# Install Grafana Loki with Tanka
[Tanka](https://tanka.dev) is a reimplementation of
[Ksonnet](https://ksonnet.io) that Grafana Labs created after Ksonnet was
-deprecated. Tanka is used by Grafana Labs to run Loki in production.
+deprecated. Tanka is used by Grafana Labs to run Grafana Loki in production.
The Tanka installation runs the Loki cluster in microservices mode.
diff --git a/docs/sources/logql/_index.md b/docs/sources/logql/_index.md
index 34a132e76996d..225adfe7cc7d5 100644
--- a/docs/sources/logql/_index.md
+++ b/docs/sources/logql/_index.md
@@ -4,7 +4,7 @@ weight: 700
---
# LogQL: Log query language
-LogQL is Loki's PromQL-inspired query language.
+LogQL is Grafana Loki's PromQL-inspired query language.
Queries act as if they are a distributed `grep` to aggregate log sources.
LogQL uses labels and operators for filtering.
diff --git a/docs/sources/logql/log_queries.md b/docs/sources/logql/log_queries.md
index ea135f76d4a64..da8ac6a9a9835 100644
--- a/docs/sources/logql/log_queries.md
+++ b/docs/sources/logql/log_queries.md
@@ -49,7 +49,7 @@ A stream may contain other pairs of labels and values,
but only the specified pairs within the stream selector are used to determine
which streams will be included within the query results.
-The same rules that apply for [Prometheus Label Selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#instant-vector-selectors) apply for Loki log stream selectors.
+The same rules that apply for [Prometheus Label Selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#instant-vector-selectors) apply for Grafana Loki log stream selectors.
The `=` operator after the label name is a **label matching operator**.
The following label matching operators are supported:
diff --git a/docs/sources/logql/metric_queries.md b/docs/sources/logql/metric_queries.md
index 34d1906e413a1..c00eb5f23fad4 100644
--- a/docs/sources/logql/metric_queries.md
+++ b/docs/sources/logql/metric_queries.md
@@ -16,7 +16,7 @@ All labels, including extracted ones, will be available for aggregations and gen
## Range Vector aggregation
LogQL shares the [range vector](https://prometheus.io/docs/prometheus/latest/querying/basics/#range-vector-selectors) concept of Prometheus.
-In Loki, the selected range of samples is a range of selected log or label values.
+In Grafana Loki, the selected range of samples is a range of selected log or label values.
The aggregation is applied over a time duration.
Loki defines [Time Durations](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations) with the same syntax as Prometheus.
diff --git a/docs/sources/logql/template_functions.md b/docs/sources/logql/template_functions.md
index 241cf200aa1d7..aa82f626b31c1 100644
--- a/docs/sources/logql/template_functions.md
+++ b/docs/sources/logql/template_functions.md
@@ -40,7 +40,7 @@ Examples:
`{{ToUpper "This is a string" | ToLower}}`
```
-> **Note:** In Loki 2.1 you can also use respectively [`lower`](#lower) and [`upper`](#upper) shortcut, e.g `{{.request_method | lower }}`.
+> **Note:** In Grafana Loki 2.1 you can also use respectively [`lower`](#lower) and [`upper`](#upper) shortcut, e.g `{{.request_method | lower }}`.
## Replace string
diff --git a/docs/sources/maintaining/_index.md b/docs/sources/maintaining/_index.md
index ff901984017a0..61b53d59da240 100644
--- a/docs/sources/maintaining/_index.md
+++ b/docs/sources/maintaining/_index.md
@@ -2,6 +2,6 @@
title: Maintaining
weight: 1200
---
-# Loki Maintainers Guide
+# Grafana Loki Maintainers' Guide
-This section details information for maintainers of Loki.
+This section details information for maintainers of Grafana Loki.
diff --git a/docs/sources/maintaining/release-loki-build-image.md b/docs/sources/maintaining/release-loki-build-image.md
index ac873b69dbf00..ae29c4109e021 100644
--- a/docs/sources/maintaining/release-loki-build-image.md
+++ b/docs/sources/maintaining/release-loki-build-image.md
@@ -3,7 +3,7 @@ title: Releasing Loki Build Image
---
# Releasing `loki-build-image`
-The [`loki-build-image`](https://github.com/grafana/loki/tree/master/loki-build-image) is the Docker image used to run tests and build Loki binaries in CI.
+The [`loki-build-image`](https://github.com/grafana/loki/tree/master/loki-build-image) is the Docker image used to run tests and build Grafana Loki binaries in CI.
## How To Perform a Release
diff --git a/docs/sources/maintaining/release.md b/docs/sources/maintaining/release.md
index 3995b03b75cd8..3a6fd9f32d5dc 100644
--- a/docs/sources/maintaining/release.md
+++ b/docs/sources/maintaining/release.md
@@ -1,9 +1,9 @@
---
title: Releasing Loki
---
-# Releasing Loki
+# Releasing Grafana Loki
-This document is a series of instructions for core Loki maintainers to be able
+This document is a series of instructions for core Grafana Loki maintainers to be able
to publish a new Loki release.
## Prerequisites
diff --git a/docs/sources/operations/authentication.md b/docs/sources/operations/authentication.md
index 5d5f6228df71b..6642bab3585d6 100644
--- a/docs/sources/operations/authentication.md
+++ b/docs/sources/operations/authentication.md
@@ -2,9 +2,9 @@
title: Authentication
weight: 10
---
-# Authentication with Loki
+# Authentication with Grafana Loki
-Loki does not come with any included authentication layer. Operators are
+Grafana Loki does not come with any included authentication layer. Operators are
expected to run an authenticating reverse proxy in front of your services, such
as NGINX using basic auth or an OAuth2 proxy.
diff --git a/docs/sources/operations/loki-canary.md b/docs/sources/operations/loki-canary.md
index 206517ccee1db..a4f2dd647cb94 100644
--- a/docs/sources/operations/loki-canary.md
+++ b/docs/sources/operations/loki-canary.md
@@ -7,7 +7,7 @@ weight: 60

Loki Canary is a standalone app that audits the log-capturing performance of
-a Loki cluster.
+a Grafana Loki cluster.
Loki Canary generates artificial log lines.
These log lines are sent to the Loki cluster.
diff --git a/docs/sources/operations/multi-tenancy.md b/docs/sources/operations/multi-tenancy.md
index 87d4947380f21..b72df12e155a3 100644
--- a/docs/sources/operations/multi-tenancy.md
+++ b/docs/sources/operations/multi-tenancy.md
@@ -2,9 +2,9 @@
title: Multi-tenancy
weight: 50
---
-# Loki Multi-Tenancy
+# Grafana Loki Multi-Tenancy
-Loki is a multi-tenant system; requests and data for tenant A are isolated from
+Grafana Loki is a multi-tenant system; requests and data for tenant A are isolated from
tenant B. Requests to the Loki API should include an HTTP header
(`X-Scope-OrgID`) that identifies the tenant for the request.
diff --git a/docs/sources/operations/observability.md b/docs/sources/operations/observability.md
index 3c1c95ed126ad..82a419e12d326 100644
--- a/docs/sources/operations/observability.md
+++ b/docs/sources/operations/observability.md
@@ -2,9 +2,9 @@
title: Observability
weight: 20
---
-# Observing Loki
+# Observing Grafana Loki
-Both Loki and Promtail expose a `/metrics` endpoint that expose Prometheus
+Both Grafana Loki and Promtail expose a `/metrics` endpoint that expose Prometheus
metrics. You will need a local Prometheus and add Loki and Promtail as targets.
See [configuring
Prometheus](https://prometheus.io/docs/prometheus/latest/configuration/configuration)
diff --git a/docs/sources/operations/scalability.md b/docs/sources/operations/scalability.md
index b99d968bc673c..5c45330de49cf 100644
--- a/docs/sources/operations/scalability.md
+++ b/docs/sources/operations/scalability.md
@@ -2,10 +2,10 @@
title: Scalability
weight: 30
---
-# Scaling with Loki
+# Scaling with Grafana Loki
See [Loki: Prometheus-inspired, open source logging for cloud natives](https://grafana.com/blog/2018/12/12/loki-prometheus-inspired-open-source-logging-for-cloud-natives/)
-for a discussion about Loki's scalability.
+for a discussion about Grafana Loki's scalability.
When scaling Loki, operators should consider running several Loki processes
partitioned by role (ingester, distributor, querier) rather than a single Loki
diff --git a/docs/sources/operations/storage/_index.md b/docs/sources/operations/storage/_index.md
index da44864374e87..57df3a0a986ef 100644
--- a/docs/sources/operations/storage/_index.md
+++ b/docs/sources/operations/storage/_index.md
@@ -2,11 +2,11 @@
title: Storage
weight: 40
---
-# Loki Storage
+# Grafana Loki Storage
[High level storage overview here]({{< relref "../../storage/_index.md" >}})
-Loki needs to store two different types of data: **chunks** and **indexes**.
+Grafana Loki needs to store two different types of data: **chunks** and **indexes**.
Loki receives logs in separate streams, where each stream is uniquely identified
by its tenant ID and its set of labels. As log entries from a stream arrive,
diff --git a/docs/sources/operations/storage/boltdb-shipper.md b/docs/sources/operations/storage/boltdb-shipper.md
index 788807a9c359b..c745db77740cb 100644
--- a/docs/sources/operations/storage/boltdb-shipper.md
+++ b/docs/sources/operations/storage/boltdb-shipper.md
@@ -3,7 +3,7 @@ title: Single Store (boltdb-shipper)
---
# Single Store Loki (boltdb-shipper index type)
-BoltDB Shipper lets you run Loki without any dependency on NoSQL stores for storing index.
+BoltDB Shipper lets you run Grafana Loki without any dependency on NoSQL stores for storing index.
It locally stores the index in BoltDB files instead and keeps shipping those files to a shared object store i.e the same object store which is being used for storing chunks.
It also keeps syncing BoltDB files from shared object store to a configured local directory for getting index entries created by other services of same Loki cluster.
This helps run Loki with one less dependency and also saves costs in storage since object stores are likely to be much cheaper compared to cost of a hosted NoSQL store or running a self hosted instance of Cassandra.
diff --git a/docs/sources/operations/storage/filesystem.md b/docs/sources/operations/storage/filesystem.md
index 95d0cda2c5222..b98d6c41108bf 100644
--- a/docs/sources/operations/storage/filesystem.md
+++ b/docs/sources/operations/storage/filesystem.md
@@ -3,7 +3,7 @@ title: Filesystem
---
# Filesystem Object Store
-The filesystem object store is the easiest to get started with Loki but there are some pros/cons to this approach.
+The filesystem object store is the easiest to get started with Grafana Loki but there are some pros/cons to this approach.
Very simply it stores all the objects (chunks) in the specified directory:
@@ -15,7 +15,7 @@ storage_config:
A folder is created for every tenant all the chunks for one tenant are stored in that directory.
-If loki is run in single-tenant mode, all the chunks are put in a folder named `fake` which is the synthesized tenant name used for single tenant mode.
+If Loki is run in single-tenant mode, all the chunks are put in a folder named `fake` which is the synthesized tenant name used for single tenant mode.
See [multi-tenancy](../../multi-tenancy/) for more information.
diff --git a/docs/sources/operations/storage/logs-deletion.md b/docs/sources/operations/storage/logs-deletion.md
index 69432d9c285b9..2b107b250818b 100644
--- a/docs/sources/operations/storage/logs-deletion.md
+++ b/docs/sources/operations/storage/logs-deletion.md
@@ -6,7 +6,7 @@ weight: 60
<span style="background-color:#f3f973;">Log entry deletion is experimental. It is only supported for the BoltDB Shipper index store.</span>
-Loki supports the deletion of log entries from specified streams.
+Grafana Loki supports the deletion of log entries from specified streams.
Log entries that fall within a specified time window are those that will be deleted.
The Compactor component exposes REST endpoints that process delete requests.
diff --git a/docs/sources/operations/storage/retention.md b/docs/sources/operations/storage/retention.md
index cf1c04fc6c165..096839c2540d0 100644
--- a/docs/sources/operations/storage/retention.md
+++ b/docs/sources/operations/storage/retention.md
@@ -1,9 +1,9 @@
---
title: Retention
---
-# Loki Storage Retention
+# Grafana Loki Storage Retention
-Retention in Loki is achieved either through the [Table Manager](#table-manager) or the [Compactor](#Compactor).
+Retention in Grafana Loki is achieved either through the [Table Manager](#table-manager) or the [Compactor](#Compactor).
Retention through the [Table Manager](../table-manager/) is achieved by relying on the object store TTL feature, and will work for both [boltdb-shipper](../boltdb-shipper) store and chunk/index store. However retention through the [Compactor](../boltdb-shipper#compactor) is supported only with the [boltdb-shipper](../boltdb-shipper) store.
diff --git a/docs/sources/operations/storage/table-manager.md b/docs/sources/operations/storage/table-manager.md
index 5de996faea4e4..2cd5969e5e173 100644
--- a/docs/sources/operations/storage/table-manager.md
+++ b/docs/sources/operations/storage/table-manager.md
@@ -3,7 +3,7 @@ title: Table manager
---
# Table Manager
-Loki supports storing indexes and chunks in table-based data storages. When
+Grafana Loki supports storing indexes and chunks in table-based data storages. When
such a storage type is used, multiple tables are created over the time: each
table - also called periodic table - contains the data for a specific time
range.
diff --git a/docs/sources/operations/storage/wal.md b/docs/sources/operations/storage/wal.md
index 718a9028f47cc..151b1d30807f3 100644
--- a/docs/sources/operations/storage/wal.md
+++ b/docs/sources/operations/storage/wal.md
@@ -6,7 +6,7 @@ title: Write Ahead Log
Ingesters temporarily store data in memory. In the event of a crash, there could be data loss. The WAL helps fill this gap in reliability.
-The WAL in Loki records incoming data and stores it on the local file system in order to guarantee persistence of acknowledged data in the event of a process crash. Upon restart, Loki will "replay" all of the data in the log before registering itself as ready for subsequent writes. This allows Loki to maintain the performance & cost benefits of buffering data in memory _and_ durability benefits (it won't lose data once a write has been acknowledged).
+The WAL in Grafana Loki records incoming data and stores it on the local file system in order to guarantee persistence of acknowledged data in the event of a process crash. Upon restart, Loki will "replay" all of the data in the log before registering itself as ready for subsequent writes. This allows Loki to maintain the performance & cost benefits of buffering data in memory _and_ durability benefits (it won't lose data once a write has been acknowledged).
This section will use Kubernetes as a reference deployment paradigm in the examples.
diff --git a/docs/sources/rules/_index.md b/docs/sources/rules/_index.md
index 92cc527a53940..217b0491aedbc 100644
--- a/docs/sources/rules/_index.md
+++ b/docs/sources/rules/_index.md
@@ -7,7 +7,7 @@ weight: 700
# Rules and the Ruler
-Loki includes a component called the Ruler, adapted from our upstream project, Cortex. The Ruler is responsible for continually evaluating a set of configurable queries and performing an action based on the result.
+Grafana Loki includes a component called the Ruler, adapted from our upstream project, Cortex. The Ruler is responsible for continually evaluating a set of configurable queries and performing an action based on the result.
This example configuration sources rules from a local disk.
diff --git a/docs/sources/storage/_index.md b/docs/sources/storage/_index.md
index 9c6ab51236684..5bd90947ddcdb 100644
--- a/docs/sources/storage/_index.md
+++ b/docs/sources/storage/_index.md
@@ -4,7 +4,7 @@ weight: 1010
---
# Storage
-Unlike other logging systems, Loki is built around the idea of only indexing
+Unlike other logging systems, Grafana Loki is built around the idea of only indexing
metadata about your logs: labels (just like Prometheus labels). Log data itself
is then compressed and stored in chunks in object stores such as S3 or GCS, or
even locally on the filesystem. A small index and highly compressed chunks
diff --git a/docs/sources/upgrading/_index.md b/docs/sources/upgrading/_index.md
index 87ddc01d132b6..969d4f12fc979 100644
--- a/docs/sources/upgrading/_index.md
+++ b/docs/sources/upgrading/_index.md
@@ -3,9 +3,9 @@ title: Upgrading
weight: 250
---
-# Upgrading Loki
+# Upgrading Grafana Loki
-Every attempt is made to keep Loki backwards compatible, such that upgrades should be low risk and low friction.
+Every attempt is made to keep Grafana Loki backwards compatible, such that upgrades should be low risk and low friction.
Unfortunately Loki is software and software is hard and sometimes we are forced to make decisions between ease of use and ease of maintenance.
type: docs
masked_commit_message: correctly represent product name (#4416)

---

hash: bf3bca3e991bb47f8748e4a6d02d98fda9010112
date: 2022-06-30 14:03:34
author: Joan López de la Franca Beltran
commit_message: tools: add lambda-promtail missing errchecks (#6541)
is_merge: false
git_diff:
diff --git a/tools/lambda-promtail/lambda-promtail/cw.go b/tools/lambda-promtail/lambda-promtail/cw.go
index 3225e32cb22ee..7632a7e1a3295 100644
--- a/tools/lambda-promtail/lambda-promtail/cw.go
+++ b/tools/lambda-promtail/lambda-promtail/cw.go
@@ -31,19 +31,24 @@ func parseCWEvent(ctx context.Context, b *batch, ev *events.CloudwatchLogsEvent)
for _, event := range data.LogEvents {
timestamp := time.UnixMilli(event.Timestamp)
- b.add(ctx, entry{labels, logproto.Entry{
+ if err := b.add(ctx, entry{labels, logproto.Entry{
Line: event.Message,
Timestamp: timestamp,
- }})
+ }}); err != nil {
+ return err
+ }
}
return nil
}
func processCWEvent(ctx context.Context, ev *events.CloudwatchLogsEvent) error {
- batch, _ := newBatch(ctx)
+ batch, err := newBatch(ctx)
+ if err != nil {
+ return err
+ }
- err := parseCWEvent(ctx, batch, ev)
+ err = parseCWEvent(ctx, batch, ev)
if err != nil {
return err
}
@@ -52,5 +57,6 @@ func processCWEvent(ctx context.Context, ev *events.CloudwatchLogsEvent) error {
if err != nil {
return err
}
+
return nil
}
diff --git a/tools/lambda-promtail/lambda-promtail/main.go b/tools/lambda-promtail/lambda-promtail/main.go
index 9aca2126f9d9c..e6690ed954cf1 100644
--- a/tools/lambda-promtail/lambda-promtail/main.go
+++ b/tools/lambda-promtail/lambda-promtail/main.go
@@ -5,12 +5,13 @@ import (
"encoding/json"
"errors"
"fmt"
- "github.com/prometheus/common/model"
"net/url"
"os"
"strconv"
"strings"
+ "github.com/prometheus/common/model"
+
"github.com/aws/aws-lambda-go/events"
"github.com/aws/aws-lambda-go/lambda"
"github.com/aws/aws-sdk-go-v2/service/s3"
@@ -136,11 +137,11 @@ func handler(ctx context.Context, ev map[string]interface{}) error {
return err
}
- switch event.(type) {
+ switch evt := event.(type) {
case *events.S3Event:
- return processS3Event(ctx, event.(*events.S3Event))
+ return processS3Event(ctx, evt)
case *events.CloudwatchLogsEvent:
- return processCWEvent(ctx, event.(*events.CloudwatchLogsEvent))
+ return processCWEvent(ctx, evt)
}
return err
diff --git a/tools/lambda-promtail/lambda-promtail/promtail.go b/tools/lambda-promtail/lambda-promtail/promtail.go
index 64d1348b76d6a..9df78fb61e80e 100644
--- a/tools/lambda-promtail/lambda-promtail/promtail.go
+++ b/tools/lambda-promtail/lambda-promtail/promtail.go
@@ -45,8 +45,9 @@ func newBatch(ctx context.Context, entries ...entry) (*batch, error) {
}
for _, entry := range entries {
- err := b.add(ctx, entry)
- return b, err
+ if err := b.add(ctx, entry); err != nil {
+ return nil, err
+ }
}
return b, nil
diff --git a/tools/lambda-promtail/lambda-promtail/s3.go b/tools/lambda-promtail/lambda-promtail/s3.go
index 0ccb71451413b..2d2ba59906f8c 100644
--- a/tools/lambda-promtail/lambda-promtail/s3.go
+++ b/tools/lambda-promtail/lambda-promtail/s3.go
@@ -75,7 +75,6 @@ func parseS3Log(ctx context.Context, b *batch, labels map[string]string, obj io.
ls = applyExtraLabels(ls)
for scanner.Scan() {
- i := 0
log_line := scanner.Text()
match := timestampRegex.FindStringSubmatch(log_line)
@@ -84,11 +83,12 @@ func parseS3Log(ctx context.Context, b *batch, labels map[string]string, obj io.
return err
}
- b.add(ctx, entry{ls, logproto.Entry{
+ if err := b.add(ctx, entry{ls, logproto.Entry{
Line: log_line,
Timestamp: timestamp,
- }})
- i++
+ }}); err != nil {
+ return err
+ }
}
return nil
@@ -114,8 +114,10 @@ func getLabels(record events.S3EventRecord) (map[string]string, error) {
}
func processS3Event(ctx context.Context, ev *events.S3Event) error {
-
- batch, _ := newBatch(ctx)
+ batch, err := newBatch(ctx)
+ if err != nil {
+ return err
+ }
for _, record := range ev.Records {
labels, err := getLabels(record)
@@ -135,7 +137,7 @@ func processS3Event(ctx context.Context, ev *events.S3Event) error {
}
- err := sendToPromtail(ctx, batch)
+ err = sendToPromtail(ctx, batch)
if err != nil {
return err
}
| tools | add lambda-promtail missing errchecks (#6541)
8ebce009f327068956c13766bab75beb2608edf7 | 2024-06-26 14:35:13 | Salva Corts | refactor(blooms): Implement retry in builder (#13306) | false |
diff --git a/docs/sources/shared/configuration.md b/docs/sources/shared/configuration.md
index 89d4615418459..a3966db2f9af1 100644
--- a/docs/sources/shared/configuration.md
+++ b/docs/sources/shared/configuration.md
@@ -377,6 +377,19 @@ bloom_build:
# CLI flag: -bloom-build.builder.planner-address
[planner_address: <string> | default = ""]
+ backoff_config:
+ # Minimum delay when backing off.
+ # CLI flag: -bloom-build.builder.backoff.backoff-min-period
+ [min_period: <duration> | default = 100ms]
+
+ # Maximum delay when backing off.
+ # CLI flag: -bloom-build.builder.backoff.backoff-max-period
+ [max_period: <duration> | default = 10s]
+
+ # Number of times to backoff and retry before failing.
+ # CLI flag: -bloom-build.builder.backoff.backoff-retries
+ [max_retries: <int> | default = 10]
+
# Experimental: The bloom_gateway block configures the Loki bloom gateway
# server, responsible for serving queries for filtering chunks based on filter
# expressions.
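The flags documented above map onto YAML like this minimal sketch; the `planner_address` value is an illustrative placeholder, and the backoff values are the stated defaults:

```yaml
bloom_build:
  builder:
    planner_address: "bloom-planner.namespace.svc:9095"  # illustrative, set to your planner
    backoff_config:
      min_period: 100ms  # default
      max_period: 10s    # default
      max_retries: 10    # default
```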
diff --git a/pkg/bloombuild/builder/builder.go b/pkg/bloombuild/builder/builder.go
index 3a5638ab46654..f05c1fc08fc3a 100644
--- a/pkg/bloombuild/builder/builder.go
+++ b/pkg/bloombuild/builder/builder.go
@@ -10,6 +10,7 @@ import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/google/uuid"
+ "github.com/grafana/dskit/backoff"
"github.com/grafana/dskit/services"
"github.com/grafana/dskit/user"
"github.com/pkg/errors"
@@ -110,12 +111,36 @@ func (b *Builder) stopping(_ error) error {
}
func (b *Builder) running(ctx context.Context) error {
+ // Retry if the connection to the planner is lost.
+ retries := backoff.New(ctx, b.cfg.BackoffConfig)
+ for retries.Ongoing() {
+ err := b.connectAndBuild(ctx)
+ if err == nil || errors.Is(err, context.Canceled) {
+ break
+ }
+
+ level.Error(b.logger).Log("msg", "failed to connect and build. Retrying", "err", err)
+ retries.Wait()
+ }
+
+ if err := retries.Err(); err != nil {
+ if errors.Is(err, context.Canceled) {
+ return nil
+ }
+ return fmt.Errorf("failed to connect and build: %w", err)
+ }
+
+ return nil
+}
+
+func (b *Builder) connectAndBuild(
+ ctx context.Context,
+) error {
opts, err := b.cfg.GrpcConfig.DialOption(nil, nil)
if err != nil {
return fmt.Errorf("failed to create grpc dial options: %w", err)
}
- // TODO: Wrap hereafter in retry logic
conn, err := grpc.DialContext(ctx, b.cfg.PlannerAddress, opts...)
if err != nil {
return fmt.Errorf("failed to dial bloom planner: %w", err)
@@ -150,8 +175,8 @@ func (b *Builder) builderLoop(c protos.PlannerForBuilder_BuilderLoopClient) erro
}
for b.State() == services.Running {
- // When the planner connection closes or the builder stops, the context
- // will be canceled and the loop will exit.
+ // When the planner connection closes, an EOF or "planner shutting down" error is returned.
+ // When the builder is shutting down, a gRPC context canceled error is returned.
protoTask, err := c.Recv()
if err != nil {
if status.Code(err) == codes.Canceled {
@@ -162,6 +187,8 @@ func (b *Builder) builderLoop(c protos.PlannerForBuilder_BuilderLoopClient) erro
return fmt.Errorf("failed to receive task from planner: %w", err)
}
+ logger := log.With(b.logger, "task", protoTask.Task.Id)
+
b.metrics.taskStarted.Inc()
start := time.Now()
status := statusSuccess
@@ -169,7 +196,7 @@ func (b *Builder) builderLoop(c protos.PlannerForBuilder_BuilderLoopClient) erro
newMetas, err := b.processTask(c.Context(), protoTask.Task)
if err != nil {
status = statusFailure
- level.Error(b.logger).Log("msg", "failed to process task", "err", err)
+ level.Error(logger).Log("msg", "failed to process task", "err", err)
}
b.metrics.taskCompleted.WithLabelValues(status).Inc()
@@ -197,13 +224,25 @@ func (b *Builder) notifyTaskCompletedToPlanner(
CreatedMetas: metas,
}
- // TODO: Implement retry
- if err := c.Send(&protos.BuilderToPlanner{
- BuilderID: b.ID,
- Result: *result.ToProtoTaskResult(),
- }); err != nil {
+ // We have a retry mechanism upper in the stack, but we add another one here
+ // to try our best to avoid losing the task result.
+ retries := backoff.New(c.Context(), b.cfg.BackoffConfig)
+ for retries.Ongoing() {
+ if err := c.Send(&protos.BuilderToPlanner{
+ BuilderID: b.ID,
+ Result: *result.ToProtoTaskResult(),
+ }); err == nil {
+ break
+ }
+
+ level.Error(b.logger).Log("msg", "failed to acknowledge task completion to planner. Retrying", "err", err)
+ retries.Wait()
+ }
+
+ if err := retries.Err(); err != nil {
return fmt.Errorf("failed to acknowledge task completion to planner: %w", err)
}
+
return nil
}
diff --git a/pkg/bloombuild/builder/builder_test.go b/pkg/bloombuild/builder/builder_test.go
index 149e43f3234d3..764e8cb6350f8 100644
--- a/pkg/bloombuild/builder/builder_test.go
+++ b/pkg/bloombuild/builder/builder_test.go
@@ -4,10 +4,12 @@ import (
"context"
"fmt"
"net"
+ "sync"
"testing"
"time"
"github.com/go-kit/log"
+ "github.com/grafana/dskit/backoff"
"github.com/grafana/dskit/flagext"
"github.com/grafana/dskit/services"
"github.com/prometheus/client_golang/prometheus"
@@ -26,6 +28,7 @@ import (
func Test_BuilderLoop(t *testing.T) {
logger := log.NewNopLogger()
+ //logger := log.NewLogfmtLogger(os.Stdout)
schemaCfg := config.SchemaConfig{
Configs: []config.PeriodConfig{
@@ -69,9 +72,17 @@ func Test_BuilderLoop(t *testing.T) {
server, err := newFakePlannerServer(tasks)
require.NoError(t, err)
+ // Start the server so the builder can connect and receive tasks.
+ server.Start()
+
limits := fakeLimits{}
cfg := Config{
PlannerAddress: server.Addr(),
+ BackoffConfig: backoff.Config{
+ MinBackoff: 1 * time.Second,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 5,
+ },
}
flagext.DefaultValues(&cfg.GrpcConfig)
@@ -87,10 +98,28 @@ func Test_BuilderLoop(t *testing.T) {
err = services.StartAndAwaitRunning(context.Background(), builder)
require.NoError(t, err)
+ // Wait for at least one task to be processed.
require.Eventually(t, func() bool {
- return int(server.completedTasks.Load()) == len(tasks)
+ return server.CompletedTasks() > 0
}, 5*time.Second, 100*time.Millisecond)
+ // Right after stop it so connection is broken, and builder will retry.
+ server.Stop()
+
+ // While the server is stopped, the builder should keep retrying to connect but no tasks should be processed.
+ // Note this is just a way to sleep while making sure no tasks are processed.
+ tasksProcessedSoFar := server.CompletedTasks()
+ require.Never(t, func() bool {
+ return server.CompletedTasks() > tasksProcessedSoFar
+ }, 5*time.Second, 500*time.Millisecond)
+
+ // Now we start the server so the builder can connect and receive tasks.
+ server.Start()
+
+ require.Eventually(t, func() bool {
+ return server.CompletedTasks() >= len(tasks)
+ }, 30*time.Second, 500*time.Millisecond)
+
err = services.StopAndAwaitTerminated(context.Background(), builder)
require.NoError(t, err)
@@ -102,41 +131,62 @@ type fakePlannerServer struct {
completedTasks atomic.Int64
shutdownCalled bool
- addr string
+ listenAddr string
grpcServer *grpc.Server
+ wg sync.WaitGroup
}
func newFakePlannerServer(tasks []*protos.ProtoTask) (*fakePlannerServer, error) {
- lis, err := net.Listen("tcp", "localhost:0")
- if err != nil {
- return nil, err
- }
-
server := &fakePlannerServer{
- tasks: tasks,
- addr: lis.Addr().String(),
- grpcServer: grpc.NewServer(),
+ tasks: tasks,
}
- protos.RegisterPlannerForBuilderServer(server.grpcServer, server)
- go func() {
- if err := server.grpcServer.Serve(lis); err != nil {
- panic(err)
- }
- }()
-
return server, nil
}
func (f *fakePlannerServer) Addr() string {
- return f.addr
+ if f.listenAddr == "" {
+ panic("server not started")
+ }
+ return f.listenAddr
}
func (f *fakePlannerServer) Stop() {
- f.grpcServer.Stop()
+ if f.grpcServer != nil {
+ f.grpcServer.Stop()
+ }
+
+ f.wg.Wait()
+}
+
+func (f *fakePlannerServer) Start() {
+ f.Stop()
+
+ lisAddr := "localhost:0"
+ if f.listenAddr != "" {
+ // Reuse the same address if the server was stopped and started again.
+ lisAddr = f.listenAddr
+ }
+
+ lis, err := net.Listen("tcp", lisAddr)
+ if err != nil {
+ panic(err)
+ }
+ f.listenAddr = lis.Addr().String()
+
+ f.grpcServer = grpc.NewServer()
+ protos.RegisterPlannerForBuilderServer(f.grpcServer, f)
+ go func() {
+ if err := f.grpcServer.Serve(lis); err != nil {
+ panic(err)
+ }
+ }()
}
func (f *fakePlannerServer) BuilderLoop(srv protos.PlannerForBuilder_BuilderLoopServer) error {
+ f.wg.Add(1)
+ defer f.wg.Done()
+
// Receive Ready
if _, err := srv.Recv(); err != nil {
return fmt.Errorf("failed to receive ready: %w", err)
@@ -149,7 +199,8 @@ func (f *fakePlannerServer) BuilderLoop(srv protos.PlannerForBuilder_BuilderLoop
if _, err := srv.Recv(); err != nil {
return fmt.Errorf("failed to receive task response: %w", err)
}
- f.completedTasks.Add(1)
+ time.Sleep(10 * time.Millisecond) // Simulate task processing time to add some latency.
+ f.completedTasks.Inc()
}
// No more tasks. Wait until shutdown.
@@ -157,6 +208,10 @@ func (f *fakePlannerServer) BuilderLoop(srv protos.PlannerForBuilder_BuilderLoop
return nil
}
+func (f *fakePlannerServer) CompletedTasks() int {
+ return int(f.completedTasks.Load())
+}
+
func (f *fakePlannerServer) NotifyBuilderShutdown(_ context.Context, _ *protos.NotifyBuilderShutdownRequest) (*protos.NotifyBuilderShutdownResponse, error) {
f.shutdownCalled = true
return &protos.NotifyBuilderShutdownResponse{}, nil
diff --git a/pkg/bloombuild/builder/config.go b/pkg/bloombuild/builder/config.go
index 25cefa4215224..d0c553104b09e 100644
--- a/pkg/bloombuild/builder/config.go
+++ b/pkg/bloombuild/builder/config.go
@@ -4,6 +4,7 @@ import (
"flag"
"fmt"
+ "github.com/grafana/dskit/backoff"
"github.com/grafana/dskit/grpcclient"
)
@@ -11,12 +12,14 @@ import (
type Config struct {
GrpcConfig grpcclient.Config `yaml:"grpc_config"`
PlannerAddress string `yaml:"planner_address"`
+ BackoffConfig backoff.Config `yaml:"backoff_config"`
}
// RegisterFlagsWithPrefix registers flags for the bloom-planner configuration.
func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.StringVar(&cfg.PlannerAddress, prefix+".planner-address", "", "Hostname (and port) of the bloom planner")
cfg.GrpcConfig.RegisterFlagsWithPrefix(prefix+".grpc", f)
+ cfg.BackoffConfig.RegisterFlagsWithPrefix(prefix+".backoff", f)
}
func (cfg *Config) Validate() error {
| refactor | Implement retry in builder (#13306)
9315b3d03d790506cf8e69fb7407b476de9d0ed6 | 2024-08-09 16:08:47 | Grot (@grafanabot) | chore(ci): Update yaml file `./production/helm/loki/values.yaml` (+1 other) (#13832) | false |
diff --git a/docs/sources/setup/install/helm/reference.md b/docs/sources/setup/install/helm/reference.md
index 8136018909684..80b4ab5fe1cbb 100644
--- a/docs/sources/setup/install/helm/reference.md
+++ b/docs/sources/setup/install/helm/reference.md
@@ -2640,7 +2640,7 @@ null
"tolerations": []
},
"useExternalLicense": false,
- "version": "3.1.0"
+ "version": "3.1.1"
}
</pre>
</td>
@@ -5689,6 +5689,7 @@ null
"s3": {
"accessKeyId": null,
"backoff_config": {},
+ "disable_dualstack": false,
"endpoint": null,
"http_config": {},
"insecure": false,
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index ec57c7f404e48..6c9970616fa28 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,13 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # (<AUTOMATED_UPDATES_LOCATOR> : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries bellow this line.)
+## 6.10.0
+
+- [CHANGE] Changed version of Grafana Enterprise Logs to 3.1.1
+- [CHANGE] Changed version of Grafana Loki to 3.1.1
+- [ENHANCEMENT] Added ability to disable AWS S3 dualstack endpoint usage.
+
+
## 6.9.0
- [BUGFIX] Fixed how we set imagePullSecrets for the memcached and provisioner.
diff --git a/production/helm/loki/Chart.yaml b/production/helm/loki/Chart.yaml
index 659d9f516ff7c..4b35a38ae3262 100644
--- a/production/helm/loki/Chart.yaml
+++ b/production/helm/loki/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v2
name: loki
description: Helm chart for Grafana Loki and Grafana Enterprise Logs supporting both simple, scalable and distributed modes.
type: application
-appVersion: 3.1.0
-version: 6.9.0
+appVersion: 3.1.1
+version: 6.10.0
home: https://grafana.github.io/helm-charts
sources:
- https://github.com/grafana/loki
diff --git a/production/helm/loki/README.md b/production/helm/loki/README.md
index 35f6b9a3faab3..e5eee43c87d75 100644
--- a/production/helm/loki/README.md
+++ b/production/helm/loki/README.md
@@ -1,6 +1,6 @@
# loki
-  
+  
Helm chart for Grafana Loki and Grafana Enterprise Logs supporting both simple, scalable and distributed modes.
diff --git a/production/helm/loki/templates/_helpers.tpl b/production/helm/loki/templates/_helpers.tpl
index 91b453efa062a..4ec80d2b4db29 100644
--- a/production/helm/loki/templates/_helpers.tpl
+++ b/production/helm/loki/templates/_helpers.tpl
@@ -237,6 +237,9 @@ s3:
{{- end }}
s3forcepathstyle: {{ .s3ForcePathStyle }}
insecure: {{ .insecure }}
+ {{- with .disable_dualstack }}
+ disable_dualstack: {{ . }}
+ {{- end }}
{{- with .http_config}}
http_config:
{{ toYaml . | indent 4 }}
diff --git a/production/helm/loki/values.yaml b/production/helm/loki/values.yaml
index 7c06497d26d21..bffeca816a62c 100644
--- a/production/helm/loki/values.yaml
+++ b/production/helm/loki/values.yaml
@@ -329,6 +329,7 @@ loki:
http_config: {}
# -- Check https://grafana.com/docs/loki/latest/configure/#s3_storage_config for more info on how to provide a backoff_config
backoff_config: {}
+ disable_dualstack: false
gcs:
chunkBufferSize: 0
requestTimeout: "0s"
@@ -450,7 +451,7 @@ enterprise:
# Enable enterprise features, license must be provided
enabled: false
# Default verion of GEL to deploy
- version: 3.1.0
+ version: 3.1.1
# -- Optional name of the GEL cluster, otherwise will use .Release.Name
# The cluster name must match what is in your GEL license
cluster_name: null
@@ -1012,9 +1013,11 @@ gateway:
htpasswd: >-
{{ if .Values.loki.tenants }}
+
{{- range $t := .Values.loki.tenants }}
{{ htpasswd (required "All tenants must have a 'name' set" $t.name) (required "All tenants must have a 'password' set" $t.password) }}
+
{{- end }}
{{ else }} {{ htpasswd (required "'gateway.basicAuth.username' is required" .Values.gateway.basicAuth.username) (required "'gateway.basicAuth.password' is required" .Values.gateway.basicAuth.password) }} {{ end }}
# -- Existing basic auth secret to use. Must contain '.htpasswd'
| chore | Update yaml file `./production/helm/loki/values.yaml` (+1 other) (#13832)
913e9f93477b5b811fbcf44d0e750f600c9ded69 | 2024-08-17 01:14:40 | Trevor Whitney | feat: aggregate byte and count metrics (#13731) | false |
diff --git a/cmd/loki/loki-local-config.yaml b/cmd/loki/loki-local-config.yaml
index ade3febc5e27e..38efa3f6bf6e7 100644
--- a/cmd/loki/loki-local-config.yaml
+++ b/cmd/loki/loki-local-config.yaml
@@ -42,7 +42,7 @@ pattern_ingester:
enabled: true
metric_aggregation:
enabled: true
- log_push_observations: true
+ loki_address: localhost:3100
ruler:
alertmanager_url: http://localhost:9093
diff --git a/docs/sources/shared/configuration.md b/docs/sources/shared/configuration.md
index d51fc86d5eb0f..3840252f1df69 100644
--- a/docs/sources/shared/configuration.md
+++ b/docs/sources/shared/configuration.md
@@ -612,6 +612,206 @@ pattern_ingester:
# CLI flag: -pattern-ingester.max-eviction-ratio
[max_eviction_ratio: <float> | default = 0.25]
+ # Configures the metric aggregation and storage behavior of the pattern
+ # ingester.
+ metric_aggregation:
+ # Whether the pattern ingester metric aggregation is enabled.
+ # CLI flag: -pattern-ingester.metric-aggregation.enabled
+ [enabled: <boolean> | default = false]
+
+ # How often to downsample metrics from raw push observations.
+ # CLI flag: -pattern-ingester.metric-aggregation.downsample-period
+ [downsample_period: <duration> | default = 10s]
+
+ # The address of the Loki instance to push aggregated metrics to.
+ # CLI flag: -pattern-ingester.metric-aggregation.loki-address
+ [loki_address: <string> | default = ""]
+
+ # The timeout for writing to Loki.
+ # CLI flag: -pattern-ingester.metric-aggregation.timeout
+ [timeout: <duration> | default = 10s]
+
+ # How long to wait in between pushes to Loki.
+ # CLI flag: -pattern-ingester.metric-aggregation.push-period
+ [push_period: <duration> | default = 30s]
+
+ # The HTTP client configuration for pushing metrics to Loki.
+ http_client_config:
+ basic_auth:
+ [username: <string> | default = ""]
+
+ [username_file: <string> | default = ""]
+
+ [username_ref: <string> | default = ""]
+
+ [password: <string> | default = ""]
+
+ [password_file: <string> | default = ""]
+
+ [password_ref: <string> | default = ""]
+
+ authorization:
+ [type: <string> | default = ""]
+
+ [credentials: <string> | default = ""]
+
+ [credentials_file: <string> | default = ""]
+
+ [credentials_ref: <string> | default = ""]
+
+ oauth2:
+ [client_id: <string> | default = ""]
+
+ [client_secret: <string> | default = ""]
+
+ [client_secret_file: <string> | default = ""]
+
+ [client_secret_ref: <string> | default = ""]
+
+ [scopes: <list of strings>]
+
+ [token_url: <string> | default = ""]
+
+ [endpoint_params: <map of string to string>]
+
+ tls_config:
+ [ca: <string> | default = ""]
+
+ [cert: <string> | default = ""]
+
+ [key: <string> | default = ""]
+
+ [ca_file: <string> | default = ""]
+
+ [cert_file: <string> | default = ""]
+
+ [key_file: <string> | default = ""]
+
+ [ca_ref: <string> | default = ""]
+
+ [cert_ref: <string> | default = ""]
+
+ [key_ref: <string> | default = ""]
+
+ [server_name: <string> | default = ""]
+
+ [insecure_skip_verify: <boolean>]
+
+ [min_version: <int>]
+
+ [max_version: <int>]
+
+ proxy_url:
+ [url: <url>]
+
+ [no_proxy: <string> | default = ""]
+
+ [proxy_from_environment: <boolean>]
+
+ [proxy_connect_header: <map of string to list of strings>]
+
+ [bearer_token: <string> | default = ""]
+
+ [bearer_token_file: <string> | default = ""]
+
+ tls_config:
+ [ca: <string> | default = ""]
+
+ [cert: <string> | default = ""]
+
+ [key: <string> | default = ""]
+
+ [ca_file: <string> | default = ""]
+
+ [cert_file: <string> | default = ""]
+
+ [key_file: <string> | default = ""]
+
+ [ca_ref: <string> | default = ""]
+
+ [cert_ref: <string> | default = ""]
+
+ [key_ref: <string> | default = ""]
+
+ [server_name: <string> | default = ""]
+
+ [insecure_skip_verify: <boolean>]
+
+ [min_version: <int>]
+
+ [max_version: <int>]
+
+ [follow_redirects: <boolean>]
+
+ [enable_http2: <boolean>]
+
+ proxy_url:
+ [url: <url>]
+
+ [no_proxy: <string> | default = ""]
+
+ [proxy_from_environment: <boolean>]
+
+ [proxy_connect_header: <map of string to list of strings>]
+
+ http_headers:
+ [: <map of string to Header>]
+
+ # Whether to use TLS for pushing metrics to Loki.
+ # CLI flag: -pattern-ingester.metric-aggregation.tls
+ [use_tls: <boolean> | default = false]
+
+ # The basic auth configuration for pushing metrics to Loki.
+ basic_auth:
+ # Basic auth username for sending aggregations back to Loki.
+ # CLI flag: -pattern-ingester.metric-aggregation.basic-auth.username
+ [username: <string> | default = ""]
+
+ # Basic auth password for sending aggregations back to Loki.
+ # CLI flag: -pattern-ingester.metric-aggregation.basic-auth.password
+ [password: <string> | default = ""]
+
+ # The backoff configuration for pushing metrics to Loki.
+ backoff_config:
+ # Minimum delay when backing off.
+ # CLI flag: -pattern-ingester.metric-aggregation.backoff-min-period
+ [min_period: <duration> | default = 100ms]
+
+ # Maximum delay when backing off.
+ # CLI flag: -pattern-ingester.metric-aggregation.backoff-max-period
+ [max_period: <duration> | default = 10s]
+
+ # Number of times to backoff and retry before failing.
+ # CLI flag: -pattern-ingester.metric-aggregation.backoff-retries
+ [max_retries: <int> | default = 10]
+
+ # Configures the pattern tee which forwards requests to the pattern ingester.
+ tee_config:
+ # The size of the batch of raw logs to send for template mining
+ # CLI flag: -pattern-ingester.tee.batch-size
+ [batch_size: <int> | default = 5000]
+
+ # The max time between batches of raw logs to send for template mining
+ # CLI flag: -pattern-ingester.tee.batch-flush-interval
+ [batch_flush_interval: <duration> | default = 1s]
+
+ # The number of log flushes to queue before dropping
+ # CLI flag: -pattern-ingester.tee.flush-queue-size
+ [flush_queue_size: <int> | default = 1000]
+
+ # the number of concurrent workers sending logs to the template service
+ # CLI flag: -pattern-ingester.tee.flush-worker-count
+ [flush_worker_count: <int> | default = 100]
+
+ # The max time we will try to flush any remaining logs to be mined when the
+ # service is stopped
+ # CLI flag: -pattern-ingester.tee.stop-flush-timeout
+ [stop_flush_timeout: <duration> | default = 30s]
+
+ # Timeout for connections between the Loki and the pattern ingester.
+ # CLI flag: -pattern-ingester.connection-timeout
+ [connection_timeout: <duration> | default = 2s]
+
# The index_gateway block configures the Loki index gateway server, responsible
# for serving index queries without the need to constantly interact with the
# object store.
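Pulling the documented defaults together with the `loki_address` shown in the loki-local-config.yaml change, a minimal sketch of the new block looks like this (the address is the local-dev value from this commit, not a recommendation):

```yaml
pattern_ingester:
  enabled: true
  metric_aggregation:
    enabled: true
    # Local-dev address from loki-local-config.yaml; point this at your Loki.
    loki_address: localhost:3100
    downsample_period: 10s  # default
    push_period: 30s        # default
```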
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index cd9524a9168ec..f88d7424c5ee0 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -60,16 +60,6 @@ const (
ringKey = "distributor"
ringAutoForgetUnhealthyPeriods = 2
-
- levelLabel = "detected_level"
- logLevelDebug = "debug"
- logLevelInfo = "info"
- logLevelWarn = "warn"
- logLevelError = "error"
- logLevelFatal = "fatal"
- logLevelCritical = "critical"
- logLevelTrace = "trace"
- logLevelUnknown = "unknown"
)
var (
@@ -406,9 +396,9 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
} else {
logLevel = detectLogLevelFromLogEntry(entry, structuredMetadata)
}
- if logLevel != logLevelUnknown && logLevel != "" {
+ if logLevel != constants.LogLevelUnknown && logLevel != "" {
entry.StructuredMetadata = append(entry.StructuredMetadata, logproto.LabelAdapter{
- Name: levelLabel,
+ Name: constants.LevelLabel,
Value: logLevel,
})
}
@@ -883,24 +873,24 @@ func detectLogLevelFromLogEntry(entry logproto.Entry, structuredMetadata labels.
if otlpSeverityNumberTxt := structuredMetadata.Get(push.OTLPSeverityNumber); otlpSeverityNumberTxt != "" {
otlpSeverityNumber, err := strconv.Atoi(otlpSeverityNumberTxt)
if err != nil {
- return logLevelInfo
+ return constants.LogLevelInfo
}
if otlpSeverityNumber == int(plog.SeverityNumberUnspecified) {
- return logLevelUnknown
+ return constants.LogLevelUnknown
} else if otlpSeverityNumber <= int(plog.SeverityNumberTrace4) {
- return logLevelTrace
+ return constants.LogLevelTrace
} else if otlpSeverityNumber <= int(plog.SeverityNumberDebug4) {
- return logLevelDebug
+ return constants.LogLevelDebug
} else if otlpSeverityNumber <= int(plog.SeverityNumberInfo4) {
- return logLevelInfo
+ return constants.LogLevelInfo
} else if otlpSeverityNumber <= int(plog.SeverityNumberWarn4) {
- return logLevelWarn
+ return constants.LogLevelWarn
} else if otlpSeverityNumber <= int(plog.SeverityNumberError4) {
- return logLevelError
+ return constants.LogLevelError
} else if otlpSeverityNumber <= int(plog.SeverityNumberFatal4) {
- return logLevelFatal
+ return constants.LogLevelFatal
}
- return logLevelUnknown
+ return constants.LogLevelUnknown
}
return extractLogLevelFromLogLine(entry.Line)
@@ -917,19 +907,19 @@ func extractLogLevelFromLogLine(log string) string {
switch {
case bytes.EqualFold(v, []byte("trace")), bytes.EqualFold(v, []byte("trc")):
- return logLevelTrace
+ return constants.LogLevelTrace
case bytes.EqualFold(v, []byte("debug")), bytes.EqualFold(v, []byte("dbg")):
- return logLevelDebug
+ return constants.LogLevelDebug
case bytes.EqualFold(v, []byte("info")), bytes.EqualFold(v, []byte("inf")):
- return logLevelInfo
+ return constants.LogLevelInfo
case bytes.EqualFold(v, []byte("warn")), bytes.EqualFold(v, []byte("wrn")):
- return logLevelWarn
+ return constants.LogLevelWarn
case bytes.EqualFold(v, []byte("error")), bytes.EqualFold(v, []byte("err")):
- return logLevelError
+ return constants.LogLevelError
case bytes.EqualFold(v, []byte("critical")):
- return logLevelCritical
+ return constants.LogLevelCritical
case bytes.EqualFold(v, []byte("fatal")):
- return logLevelFatal
+ return constants.LogLevelFatal
default:
return detectLevelFromLogLine(log)
}
@@ -984,21 +974,21 @@ func isJSON(line string) bool {
func detectLevelFromLogLine(log string) string {
if strings.Contains(log, "info:") || strings.Contains(log, "INFO:") ||
strings.Contains(log, "info") || strings.Contains(log, "INFO") {
- return logLevelInfo
+ return constants.LogLevelInfo
}
if strings.Contains(log, "err:") || strings.Contains(log, "ERR:") ||
strings.Contains(log, "error") || strings.Contains(log, "ERROR") {
- return logLevelError
+ return constants.LogLevelError
}
if strings.Contains(log, "warn:") || strings.Contains(log, "WARN:") ||
strings.Contains(log, "warning") || strings.Contains(log, "WARNING") {
- return logLevelWarn
+ return constants.LogLevelWarn
}
if strings.Contains(log, "CRITICAL:") || strings.Contains(log, "critical:") {
- return logLevelCritical
+ return constants.LogLevelCritical
}
if strings.Contains(log, "debug:") || strings.Contains(log, "DEBUG:") {
- return logLevelDebug
+ return constants.LogLevelDebug
}
- return logLevelUnknown
+ return constants.LogLevelUnknown
}
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index c21b1e2561cd2..bcd289d3a3dff 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -1485,8 +1485,8 @@ func Test_DetectLogLevels(t *testing.T) {
require.Equal(t, `{foo="bar"}`, topVal.Streams[0].Labels)
require.Equal(t, push.LabelsAdapter{
{
- Name: levelLabel,
- Value: logLevelWarn,
+ Name: constants.LevelLabel,
+ Value: constants.LogLevelWarn,
},
}, topVal.Streams[0].Entries[0].StructuredMetadata)
})
@@ -1502,8 +1502,8 @@ func Test_DetectLogLevels(t *testing.T) {
require.Equal(t, `{foo="bar", level="debug"}`, topVal.Streams[0].Labels)
sm := topVal.Streams[0].Entries[0].StructuredMetadata
require.Len(t, sm, 1)
- require.Equal(t, sm[0].Name, levelLabel)
- require.Equal(t, sm[0].Value, logLevelDebug)
+ require.Equal(t, sm[0].Name, constants.LevelLabel)
+ require.Equal(t, sm[0].Value, constants.LogLevelDebug)
})
t.Run("log level detection enabled but log level already present as structured metadata", func(t *testing.T) {
@@ -1514,7 +1514,7 @@ func Test_DetectLogLevels(t *testing.T) {
writeReq.Streams[0].Entries[0].StructuredMetadata = push.LabelsAdapter{
{
Name: "severity",
- Value: logLevelWarn,
+ Value: constants.LogLevelWarn,
},
}
_, err := distributors[0].Push(ctx, writeReq)
@@ -1525,10 +1525,10 @@ func Test_DetectLogLevels(t *testing.T) {
require.Equal(t, push.LabelsAdapter{
{
Name: "severity",
- Value: logLevelWarn,
+ Value: constants.LogLevelWarn,
}, {
- Name: levelLabel,
- Value: logLevelWarn,
+ Name: constants.LevelLabel,
+ Value: constants.LogLevelWarn,
},
}, sm)
})
@@ -1551,7 +1551,7 @@ func Test_detectLogLevelFromLogEntry(t *testing.T) {
},
},
},
- expectedLogLevel: logLevelDebug,
+ expectedLogLevel: constants.LogLevelDebug,
},
{
name: "invalid severity number should not cause any issues",
@@ -1563,126 +1563,126 @@ func Test_detectLogLevelFromLogEntry(t *testing.T) {
},
},
},
- expectedLogLevel: logLevelInfo,
+ expectedLogLevel: constants.LogLevelInfo,
},
{
name: "non otlp without any of the log level keywords in log line",
entry: logproto.Entry{
Line: "foo",
},
- expectedLogLevel: logLevelUnknown,
+ expectedLogLevel: constants.LogLevelUnknown,
},
{
name: "non otlp with log level keywords in log line",
entry: logproto.Entry{
Line: "this is a warning log",
},
- expectedLogLevel: logLevelWarn,
+ expectedLogLevel: constants.LogLevelWarn,
},
{
name: "json log line with an error",
entry: logproto.Entry{
Line: `{"foo":"bar","msg":"message with keyword error but it should not get picked up","level":"critical"}`,
},
- expectedLogLevel: logLevelCritical,
+ expectedLogLevel: constants.LogLevelCritical,
},
{
name: "json log line with an error",
entry: logproto.Entry{
Line: `{"FOO":"bar","MSG":"message with keyword error but it should not get picked up","LEVEL":"Critical"}`,
},
- expectedLogLevel: logLevelCritical,
+ expectedLogLevel: constants.LogLevelCritical,
},
{
name: "json log line with an warning",
entry: logproto.Entry{
Line: `{"foo":"bar","msg":"message with keyword warn but it should not get picked up","level":"warn"}`,
},
- expectedLogLevel: logLevelWarn,
+ expectedLogLevel: constants.LogLevelWarn,
},
{
name: "json log line with an warning",
entry: logproto.Entry{
Line: `{"foo":"bar","msg":"message with keyword warn but it should not get picked up","SEVERITY":"FATAL"}`,
},
- expectedLogLevel: logLevelFatal,
+ expectedLogLevel: constants.LogLevelFatal,
},
{
name: "json log line with an error in block case",
entry: logproto.Entry{
Line: `{"foo":"bar","msg":"message with keyword warn but it should not get picked up","level":"ERR"}`,
},
- expectedLogLevel: logLevelError,
+ expectedLogLevel: constants.LogLevelError,
},
{
name: "json log line with an INFO in block case",
entry: logproto.Entry{
Line: `{"foo":"bar","msg":"message with keyword INFO get picked up"}`,
},
- expectedLogLevel: logLevelInfo,
+ expectedLogLevel: constants.LogLevelInfo,
},
{
name: "logfmt log line with an INFO and not level returns info log level",
entry: logproto.Entry{
Line: `foo=bar msg="message with info and not level should get picked up"`,
},
- expectedLogLevel: logLevelInfo,
+ expectedLogLevel: constants.LogLevelInfo,
},
{
name: "logfmt log line with a warn",
entry: logproto.Entry{
Line: `foo=bar msg="message with keyword error but it should not get picked up" level=warn`,
},
- expectedLogLevel: logLevelWarn,
+ expectedLogLevel: constants.LogLevelWarn,
},
{
name: "logfmt log line with a warn with camel case",
entry: logproto.Entry{
Line: `foo=bar msg="message with keyword error but it should not get picked up" level=Warn`,
},
- expectedLogLevel: logLevelWarn,
+ expectedLogLevel: constants.LogLevelWarn,
},
{
name: "logfmt log line with a trace",
entry: logproto.Entry{
Line: `foo=bar msg="message with keyword error but it should not get picked up" level=Trace`,
},
- expectedLogLevel: logLevelTrace,
+ expectedLogLevel: constants.LogLevelTrace,
},
{
name: "logfmt log line with some other level returns unknown log level",
entry: logproto.Entry{
Line: `foo=bar msg="message with keyword but it should not get picked up" level=NA`,
},
- expectedLogLevel: logLevelUnknown,
+ expectedLogLevel: constants.LogLevelUnknown,
},
{
name: "logfmt log line with label Severity is allowed for level detection",
entry: logproto.Entry{
Line: `foo=bar msg="message with keyword but it should not get picked up" severity=critical`,
},
- expectedLogLevel: logLevelCritical,
+ expectedLogLevel: constants.LogLevelCritical,
},
{
name: "logfmt log line with label Severity with camelcase is allowed for level detection",
entry: logproto.Entry{
Line: `Foo=bar MSG="Message with keyword but it should not get picked up" Severity=critical`,
},
- expectedLogLevel: logLevelCritical,
+ expectedLogLevel: constants.LogLevelCritical,
},
{
name: "logfmt log line with a info with non standard case",
entry: logproto.Entry{
Line: `foo=bar msg="message with keyword error but it should not get picked up" level=inFO`,
},
- expectedLogLevel: logLevelInfo,
+ expectedLogLevel: constants.LogLevelInfo,
},
{
name: "logfmt log line with a info with non block case for level",
entry: logproto.Entry{
Line: `FOO=bar MSG="message with keyword error but it should not get picked up" LEVEL=inFO`,
},
- expectedLogLevel: logLevelInfo,
+ expectedLogLevel: constants.LogLevelInfo,
},
} {
t.Run(tc.name, func(t *testing.T) {
@@ -1707,7 +1707,7 @@ func Benchmark_extractLogLevelFromLogLine(b *testing.B) {
for i := 0; i < b.N; i++ {
level := extractLogLevelFromLogLine(logLine)
- require.Equal(b, logLevelUnknown, level)
+ require.Equal(b, constants.LogLevelUnknown, level)
}
}
@@ -1716,7 +1716,7 @@ func Benchmark_optParseExtractLogLevelFromLogLineJson(b *testing.B) {
for i := 0; i < b.N; i++ {
level := extractLogLevelFromLogLine(logLine)
- require.Equal(b, logLevelError, level)
+ require.Equal(b, constants.LogLevelError, level)
}
}
@@ -1725,6 +1725,6 @@ func Benchmark_optParseExtractLogLevelFromLogLineLogfmt(b *testing.B) {
for i := 0; i < b.N; i++ {
level := extractLogLevelFromLogLine(logLine)
- require.Equal(b, logLevelInfo, level)
+ require.Equal(b, constants.LogLevelInfo, level)
}
}
diff --git a/pkg/distributor/validator.go b/pkg/distributor/validator.go
index b4f730a58a7fa..fedbc6e8fbc0c 100644
--- a/pkg/distributor/validator.go
+++ b/pkg/distributor/validator.go
@@ -158,6 +158,11 @@ func (v Validator) ValidateLabels(ctx validationContext, ls labels.Labels, strea
return fmt.Errorf(validation.MissingLabelsErrorMsg)
}
+ // Skip validation for aggregated metric streams, as we create those for internal use
+ if ls.Has(push.AggregatedMetricLabel) {
+ return nil
+ }
+
numLabelNames := len(ls)
// This is a special case that's often added by the Loki infrastructure. It may result in allowing one extra label
// if incoming requests already have a service_name
diff --git a/pkg/loghttp/push/push.go b/pkg/loghttp/push/push.go
index a9b174952f286..e048546fb4083 100644
--- a/pkg/loghttp/push/push.go
+++ b/pkg/loghttp/push/push.go
@@ -10,8 +10,6 @@ import (
"net/http"
"time"
- "github.com/grafana/loki/v3/pkg/logql/syntax"
-
"github.com/go-kit/log/level"
"github.com/grafana/loki/pkg/push"
@@ -27,6 +25,7 @@ import (
"github.com/grafana/loki/v3/pkg/analytics"
"github.com/grafana/loki/v3/pkg/loghttp"
"github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
"github.com/grafana/loki/v3/pkg/util"
"github.com/grafana/loki/v3/pkg/util/constants"
"github.com/grafana/loki/v3/pkg/util/unmarshal"
@@ -40,18 +39,18 @@ var (
Namespace: constants.Loki,
Name: "distributor_bytes_received_total",
Help: "The total number of uncompressed bytes received per tenant. Includes structured metadata bytes.",
- }, []string{"tenant", "retention_hours"})
+ }, []string{"tenant", "retention_hours", "aggregated_metric"})
structuredMetadataBytesIngested = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: constants.Loki,
Name: "distributor_structured_metadata_bytes_received_total",
Help: "The total number of uncompressed bytes received per tenant for entries' structured metadata",
- }, []string{"tenant", "retention_hours"})
+ }, []string{"tenant", "retention_hours", "aggregated_metric"})
linesIngested = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: constants.Loki,
Name: "distributor_lines_received_total",
Help: "The total number of lines received per tenant",
- }, []string{"tenant"})
+ }, []string{"tenant", "aggregated_metric"})
bytesReceivedStats = analytics.NewCounter("distributor_bytes_received")
structuredMetadataBytesReceivedStats = analytics.NewCounter("distributor_structured_metadata_bytes_received")
@@ -59,9 +58,10 @@ var (
)
const (
- applicationJSON = "application/json"
- LabelServiceName = "service_name"
- ServiceUnknown = "unknown_service"
+ applicationJSON = "application/json"
+ LabelServiceName = "service_name"
+ ServiceUnknown = "unknown_service"
+ AggregatedMetricLabel = "__aggregated_metric__"
)
type TenantsRetention interface {
@@ -83,8 +83,10 @@ func (EmptyLimits) DiscoverServiceName(string) []string {
return nil
}
-type RequestParser func(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker) (*logproto.PushRequest, *Stats, error)
-type RequestParserWrapper func(inner RequestParser) RequestParser
+type (
+ RequestParser func(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker) (*logproto.PushRequest, *Stats, error)
+ RequestParserWrapper func(inner RequestParser) RequestParser
+)
type Stats struct {
Errs []error
@@ -100,6 +102,8 @@ type Stats struct {
BodySize int64
	// Extra is a place for a wrapped parser to record any interesting stats as key-value pairs to be logged
Extra []any
+
+ IsAggregatedMetric bool
}
func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, pushRequestParser RequestParser, tracker UsageTracker) (*logproto.PushRequest, error) {
@@ -112,10 +116,12 @@ func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRete
entriesSize int64
structuredMetadataSize int64
)
+
+ isAggregatedMetric := fmt.Sprintf("%t", pushStats.IsAggregatedMetric)
+
for retentionPeriod, size := range pushStats.LogLinesBytes {
retentionHours := RetentionPeriodToString(retentionPeriod)
-
- bytesIngested.WithLabelValues(userID, retentionHours).Add(float64(size))
+ bytesIngested.WithLabelValues(userID, retentionHours, isAggregatedMetric).Add(float64(size))
bytesReceivedStats.Inc(size)
entriesSize += size
}
@@ -123,8 +129,8 @@ func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRete
for retentionPeriod, size := range pushStats.StructuredMetadataBytes {
retentionHours := RetentionPeriodToString(retentionPeriod)
- structuredMetadataBytesIngested.WithLabelValues(userID, retentionHours).Add(float64(size))
- bytesIngested.WithLabelValues(userID, retentionHours).Add(float64(size))
+ structuredMetadataBytesIngested.WithLabelValues(userID, retentionHours, isAggregatedMetric).Add(float64(size))
+ bytesIngested.WithLabelValues(userID, retentionHours, isAggregatedMetric).Add(float64(size))
bytesReceivedStats.Inc(size)
structuredMetadataBytesReceivedStats.Inc(size)
@@ -134,7 +140,7 @@ func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRete
// incrementing tenant metrics if we have a tenant.
if pushStats.NumLines != 0 && userID != "" {
- linesIngested.WithLabelValues(userID).Add(float64(pushStats.NumLines))
+ linesIngested.WithLabelValues(userID, isAggregatedMetric).Add(float64(pushStats.NumLines))
}
linesReceivedStats.Inc(pushStats.NumLines)
@@ -237,7 +243,11 @@ func ParseLokiRequest(userID string, r *http.Request, tenantsRetention TenantsRe
return nil, nil, fmt.Errorf("couldn't parse labels: %w", err)
}
- if !lbs.Has(LabelServiceName) && len(discoverServiceName) > 0 {
+ if lbs.Has(AggregatedMetricLabel) {
+ pushStats.IsAggregatedMetric = true
+ }
+
+ if !lbs.Has(LabelServiceName) && len(discoverServiceName) > 0 && !pushStats.IsAggregatedMetric {
serviceName := ServiceUnknown
for _, labelName := range discoverServiceName {
if labelVal := lbs.Get(labelName); labelVal != "" {
diff --git a/pkg/loghttp/push/push_test.go b/pkg/loghttp/push/push_test.go
index 0484afe31c3b0..80e7c5e7eead1 100644
--- a/pkg/loghttp/push/push_test.go
+++ b/pkg/loghttp/push/push_test.go
@@ -60,6 +60,7 @@ func TestParseRequest(t *testing.T) {
expectedLines int
expectedBytesUsageTracker map[string]float64
expectedLabels labels.Labels
+ aggregatedMetric bool
}{
{
path: `/loki/api/v1/push`,
@@ -228,6 +229,18 @@ func TestParseRequest(t *testing.T) {
expectedBytesUsageTracker: map[string]float64{`{foo="bar2"}`: float64(len("fizzbuss"))},
expectedLabels: labels.FromStrings("foo", "bar2", LabelServiceName, ServiceUnknown),
},
+ {
+ path: `/loki/api/v1/push`,
+ body: `{"streams": [{ "stream": { "__aggregated_metric__": "stuff", "foo": "bar2", "job": "stuff" }, "values": [ [ "1570818238000000000", "fizzbuzz" ] ] }]}`,
+ contentType: `application/json`,
+ valid: true,
+ enableServiceDiscovery: true,
+ expectedBytes: len("fizzbuzz"),
+ expectedLines: 1,
+ expectedBytesUsageTracker: map[string]float64{`{__aggregated_metric__="stuff", foo="bar2", job="stuff"}`: float64(len("fizzbuzz"))},
+ expectedLabels: labels.FromStrings("__aggregated_metric__", "stuff", "foo", "bar2", "job", "stuff"),
+ aggregatedMetric: true,
+ },
} {
t.Run(fmt.Sprintf("test %d", index), func(t *testing.T) {
structuredMetadataBytesIngested.Reset()
@@ -259,9 +272,32 @@ func TestParseRequest(t *testing.T) {
require.Equal(t, test.expectedBytes, bytesReceived)
require.Equalf(t, tracker.Total(), float64(bytesReceived), "tracked usage bytes must equal bytes received metric")
require.Equal(t, test.expectedLines, linesReceived)
- require.Equal(t, float64(test.expectedStructuredMetadataBytes), testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "")))
- require.Equal(t, float64(test.expectedBytes), testutil.ToFloat64(bytesIngested.WithLabelValues("fake", "")))
- require.Equal(t, float64(test.expectedLines), testutil.ToFloat64(linesIngested.WithLabelValues("fake")))
+ require.Equal(
+ t,
+ float64(test.expectedStructuredMetadataBytes),
+ testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "", fmt.Sprintf("%t", test.aggregatedMetric))),
+ )
+ require.Equal(
+ t,
+ float64(test.expectedBytes),
+ testutil.ToFloat64(
+ bytesIngested.WithLabelValues(
+ "fake",
+ "",
+ fmt.Sprintf("%t", test.aggregatedMetric),
+ ),
+ ),
+ )
+ require.Equal(
+ t,
+ float64(test.expectedLines),
+ testutil.ToFloat64(
+ linesIngested.WithLabelValues(
+ "fake",
+ fmt.Sprintf("%t", test.aggregatedMetric),
+ ),
+ ),
+ )
require.Equal(t, test.expectedLabels.String(), data.Streams[0].Labels)
require.InDeltaMapValuesf(t, test.expectedBytesUsageTracker, tracker.receivedBytes, 0.0, "%s != %s", test.expectedBytesUsageTracker, tracker.receivedBytes)
} else {
@@ -270,9 +306,9 @@ func TestParseRequest(t *testing.T) {
require.Equal(t, 0, structuredMetadataBytesReceived)
require.Equal(t, 0, bytesReceived)
require.Equal(t, 0, linesReceived)
- require.Equal(t, float64(0), testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "")))
- require.Equal(t, float64(0), testutil.ToFloat64(bytesIngested.WithLabelValues("fake", "")))
- require.Equal(t, float64(0), testutil.ToFloat64(linesIngested.WithLabelValues("fake")))
+ require.Equal(t, float64(0), testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "", fmt.Sprintf("%t", test.aggregatedMetric))))
+ require.Equal(t, float64(0), testutil.ToFloat64(bytesIngested.WithLabelValues("fake", "", fmt.Sprintf("%t", test.aggregatedMetric))))
+ require.Equal(t, float64(0), testutil.ToFloat64(linesIngested.WithLabelValues("fake", fmt.Sprintf("%t", test.aggregatedMetric))))
}
})
}
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index be80e71e9352b..01074ddf80416 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -353,7 +353,7 @@ type Loki struct {
IngesterRF1 ingester_rf1.Interface
IngesterRF1RingClient *ingester_rf1.RingClient
PatternIngester *pattern.Ingester
- PatternRingClient *pattern.RingClient
+ PatternRingClient pattern.RingClient
Querier querier.Querier
cacheGenerationLoader queryrangebase.CacheGenNumberLoader
querierAPI *querier.QuerierAPI
@@ -704,8 +704,9 @@ func (t *Loki) setupModuleManager() error {
mm.RegisterModule(QuerySchedulerRing, t.initQuerySchedulerRing, modules.UserInvisibleModule)
mm.RegisterModule(Analytics, t.initAnalytics)
mm.RegisterModule(CacheGenerationLoader, t.initCacheGenerationLoader)
- mm.RegisterModule(PatternIngester, t.initPatternIngester)
mm.RegisterModule(PatternRingClient, t.initPatternRingClient, modules.UserInvisibleModule)
+ mm.RegisterModule(PatternIngesterTee, t.initPatternIngesterTee, modules.UserInvisibleModule)
+ mm.RegisterModule(PatternIngester, t.initPatternIngester)
mm.RegisterModule(Metastore, t.initMetastore)
mm.RegisterModule(MetastoreClient, t.initMetastoreClient, modules.UserInvisibleModule)
@@ -721,7 +722,7 @@ func (t *Loki) setupModuleManager() error {
Overrides: {RuntimeConfig},
OverridesExporter: {Overrides, Server},
TenantConfigs: {RuntimeConfig},
- Distributor: {Ring, Server, Overrides, TenantConfigs, PatternRingClient, IngesterRF1RingClient, Analytics},
+ Distributor: {Ring, Server, Overrides, TenantConfigs, PatternRingClient, PatternIngesterTee, IngesterRF1RingClient, Analytics},
Store: {Overrides, IndexGatewayRing},
IngesterRF1: {Store, Server, MemberlistKV, TenantConfigs, MetastoreClient, Analytics},
Ingester: {Store, Server, MemberlistKV, TenantConfigs, Analytics},
@@ -739,8 +740,9 @@ func (t *Loki) setupModuleManager() error {
BloomPlanner: {Server, BloomStore, Analytics, Store},
BloomBuilder: {Server, BloomStore, Analytics, Store},
BloomStore: {IndexGatewayRing},
- PatternIngester: {Server, MemberlistKV, Analytics},
PatternRingClient: {Server, MemberlistKV, Analytics},
+ PatternIngesterTee: {Server, MemberlistKV, Analytics, PatternRingClient},
+ PatternIngester: {Server, MemberlistKV, Analytics, PatternRingClient, PatternIngesterTee},
IngesterRF1RingClient: {Server, MemberlistKV, Analytics},
Metastore: {Server, MetastoreClient},
IngesterQuerier: {Ring},
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 5e279845dfd35..60e2683b599ff 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -111,6 +111,7 @@ const (
IngesterRF1 string = "ingester-rf1"
IngesterRF1RingClient string = "ingester-rf1-ring-client"
PatternIngester string = "pattern-ingester"
+ PatternIngesterTee string = "pattern-ingester-tee"
PatternRingClient string = "pattern-ring-client"
IngesterQuerier string = "ingester-querier"
IngesterGRPCInterceptors string = "ingester-query-tags-interceptors"
@@ -333,13 +334,6 @@ func (t *Loki) initTenantConfigs() (_ services.Service, err error) {
}
func (t *Loki) initDistributor() (services.Service, error) {
- if t.Cfg.Pattern.Enabled {
- patternTee, err := pattern.NewTee(t.Cfg.Pattern, t.PatternRingClient, t.Cfg.MetricsNamespace, prometheus.DefaultRegisterer, util_log.Logger)
- if err != nil {
- return nil, err
- }
- t.Tee = distributor.WrapTee(t.Tee, patternTee)
- }
if t.Cfg.IngesterRF1.Enabled {
rf1Tee, err := ingester_rf1.NewTee(t.Cfg.IngesterRF1, t.IngesterRF1RingClient, t.Cfg.MetricsNamespace, prometheus.DefaultRegisterer, util_log.Logger)
if err != nil {
@@ -714,7 +708,13 @@ func (t *Loki) initPatternIngester() (_ services.Service, err error) {
return nil, nil
}
t.Cfg.Pattern.LifecyclerConfig.ListenPort = t.Cfg.Server.GRPCListenPort
- t.PatternIngester, err = pattern.New(t.Cfg.Pattern, t.Cfg.MetricsNamespace, prometheus.DefaultRegisterer, util_log.Logger)
+ t.PatternIngester, err = pattern.New(
+ t.Cfg.Pattern,
+ t.PatternRingClient,
+ t.Cfg.MetricsNamespace,
+ prometheus.DefaultRegisterer,
+ util_log.Logger,
+ )
if err != nil {
return nil, err
}
@@ -740,6 +740,41 @@ func (t *Loki) initPatternRingClient() (_ services.Service, err error) {
return ringClient, nil
}
+func (t *Loki) initPatternIngesterTee() (services.Service, error) {
+ logger := util_log.Logger
+
+ if !t.Cfg.Pattern.Enabled {
+ _ = level.Debug(logger).Log("msg", "pattern ingester tee service disabled")
+ return nil, nil
+ }
+ _ = level.Debug(logger).Log("msg", "initializing pattern ingester tee service...")
+
+ svc, err := pattern.NewTeeService(
+ t.Cfg.Pattern,
+ t.PatternRingClient,
+ t.Cfg.MetricsNamespace,
+ prometheus.DefaultRegisterer,
+ logger,
+ )
+ if err != nil {
+ return nil, err
+ }
+
+ t.Tee = distributor.WrapTee(t.Tee, svc)
+
+ return services.NewBasicService(
+ svc.Start,
+ func(_ context.Context) error {
+ svc.WaitUntilDone()
+ return nil
+ },
+ func(_ error) error {
+ svc.WaitUntilDone()
+ return nil
+ },
+ ), nil
+}
+
func (t *Loki) initTableManager() (services.Service, error) {
level.Warn(util_log.Logger).Log("msg", "table manager is deprecated. Consider migrating to tsdb index which relies on a compactor instead.")
diff --git a/pkg/pattern/aggregation/config.go b/pkg/pattern/aggregation/config.go
new file mode 100644
index 0000000000000..b88eb8499ca73
--- /dev/null
+++ b/pkg/pattern/aggregation/config.go
@@ -0,0 +1,107 @@
+package aggregation
+
+import (
+ "flag"
+ "time"
+
+ "github.com/grafana/dskit/backoff"
+ "github.com/prometheus/common/config"
+)
+
+type Config struct {
+ // TODO(twhitney): This needs to be a per-tenant config
+ Enabled bool `yaml:"enabled,omitempty" doc:"description=Whether the pattern ingester metric aggregation is enabled."`
+ DownsamplePeriod time.Duration `yaml:"downsample_period"`
+ LokiAddr string `yaml:"loki_address,omitempty" doc:"description=The address of the Loki instance to push aggregated metrics to."`
+ WriteTimeout time.Duration `yaml:"timeout,omitempty" doc:"description=The timeout for writing to Loki."`
+ PushPeriod time.Duration `yaml:"push_period,omitempty" doc:"description=How long to wait in between pushes to Loki."`
+ HTTPClientConfig config.HTTPClientConfig `yaml:"http_client_config,omitempty" doc:"description=The HTTP client configuration for pushing metrics to Loki."`
+ UseTLS bool `yaml:"use_tls,omitempty" doc:"description=Whether to use TLS for pushing metrics to Loki."`
+ BasicAuth BasicAuth `yaml:"basic_auth,omitempty" doc:"description=The basic auth configuration for pushing metrics to Loki."`
+ BackoffConfig backoff.Config `yaml:"backoff_config,omitempty" doc:"description=The backoff configuration for pushing metrics to Loki."`
+}
+
+// RegisterFlags registers pattern ingester related flags.
+func (cfg *Config) RegisterFlags(fs *flag.FlagSet) {
+ cfg.RegisterFlagsWithPrefix(fs, "")
+}
+
+func (cfg *Config) RegisterFlagsWithPrefix(fs *flag.FlagSet, prefix string) {
+ fs.BoolVar(
+ &cfg.Enabled,
+ prefix+"metric-aggregation.enabled",
+ false,
+ "Flag to enable or disable metric aggregation.",
+ )
+ fs.DurationVar(
+ &cfg.DownsamplePeriod,
+ prefix+"metric-aggregation.downsample-period",
+ 10*time.Second,
+ "How often to downsample metrics from raw push observations.",
+ )
+ fs.StringVar(
+ &cfg.LokiAddr,
+ prefix+"metric-aggregation.loki-address",
+ "",
+ "Loki address to send aggregated metrics to.",
+ )
+ fs.DurationVar(
+ &cfg.WriteTimeout,
+ prefix+"metric-aggregation.timeout",
+ 10*time.Second,
+ "How long to wait for a write response from Loki.",
+ )
+ fs.DurationVar(
+ &cfg.PushPeriod,
+ prefix+"metric-aggregation.push-period",
+ 30*time.Second,
+ "How long to wait between pushes to Loki.",
+ )
+ fs.BoolVar(
+ &cfg.UseTLS,
+ prefix+"metric-aggregation.tls",
+ false,
+ "Whether to use TLS for pushing metrics to Loki.",
+ )
+
+ cfg.BackoffConfig.RegisterFlagsWithPrefix(prefix+"metric-aggregation", fs)
+ cfg.BasicAuth.RegisterFlagsWithPrefix(prefix+"metric-aggregation.", fs)
+}
+
+// BasicAuth contains basic HTTP authentication credentials.
+type BasicAuth struct {
+ Username string `yaml:"username" json:"username"`
+ // UsernameFile string `yaml:"username_file,omitempty" json:"username_file,omitempty"`
+ Password config.Secret `yaml:"password,omitempty" json:"password,omitempty"`
+ // PasswordFile string `yaml:"password_file,omitempty" json:"password_file,omitempty"`
+}
+
+func (cfg *BasicAuth) RegisterFlagsWithPrefix(prefix string, fs *flag.FlagSet) {
+ fs.StringVar(
+ &cfg.Username,
+ prefix+"basic-auth.username",
+ "",
+ "Basic auth username for sending aggregations back to Loki.",
+ )
+ fs.Var(
+ newSecretValue(config.Secret(""), &cfg.Password),
+ prefix+"basic-auth.password",
+ "Basic auth password for sending aggregations back to Loki.",
+ )
+}
+
+type secretValue string
+
+func newSecretValue(val config.Secret, p *config.Secret) *secretValue {
+ *p = val
+ return (*secretValue)(p)
+}
+
+func (s *secretValue) Set(val string) error {
+ *s = secretValue(val)
+ return nil
+}
+
+func (s *secretValue) Get() any { return string(*s) }
+
+func (s *secretValue) String() string { return string(*s) }
diff --git a/pkg/pattern/aggregation/metrics.go b/pkg/pattern/aggregation/metrics.go
new file mode 100644
index 0000000000000..d777af50b8130
--- /dev/null
+++ b/pkg/pattern/aggregation/metrics.go
@@ -0,0 +1,28 @@
+package aggregation
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+)
+
+type ChunkMetrics struct {
+ chunks *prometheus.GaugeVec
+ samples *prometheus.CounterVec
+}
+
+func NewChunkMetrics(r prometheus.Registerer, metricsNamespace string) *ChunkMetrics {
+ return &ChunkMetrics{
+ chunks: promauto.With(r).NewGaugeVec(prometheus.GaugeOpts{
+ Namespace: metricsNamespace,
+ Subsystem: "pattern_ingester",
+ Name: "metric_chunks",
+ Help: "The total number of chunks in memory.",
+ }, []string{"service_name"}),
+ samples: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: metricsNamespace,
+ Subsystem: "pattern_ingester",
+ Name: "metric_samples",
+ Help: "The total number of samples in memory.",
+ }, []string{"service_name"}),
+ }
+}
diff --git a/pkg/pattern/aggregation/push.go b/pkg/pattern/aggregation/push.go
new file mode 100644
index 0000000000000..9aac2e3a5050d
--- /dev/null
+++ b/pkg/pattern/aggregation/push.go
@@ -0,0 +1,329 @@
+package aggregation
+
+import (
+ "bufio"
+ "bytes"
+ "context"
+ "fmt"
+ "io"
+ "net/http"
+ "net/url"
+ "sync"
+ "time"
+
+ "github.com/dustin/go-humanize"
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/golang/snappy"
+ "github.com/prometheus/common/config"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
+
+ "github.com/grafana/loki/v3/pkg/loghttp/push"
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
+ "github.com/grafana/loki/v3/pkg/util/build"
+
+ "github.com/grafana/dskit/backoff"
+
+ "github.com/gogo/protobuf/proto"
+)
+
+const (
+ defaultContentType = "application/x-protobuf"
+ defaultMaxReponseBufferLen = 1024
+
+ pushEndpoint = "/loki/api/v1/push"
+)
+
+var defaultUserAgent = fmt.Sprintf("pattern-ingester-push/%s", build.GetVersion().Version)
+
+type EntryWriter interface {
+ // WriteEntry handles sending the log to the output
+ // To maintain consistent log timing, WriteEntry is expected to be non-blocking
+ WriteEntry(ts time.Time, entry string, lbls labels.Labels)
+ Stop()
+}
+
+// Push writes given log entries by pushing them directly
+// to the given Loki server URL. Each `Push` instance handles a single tenant.
+// No batching of log lines happens when sending to Loki.
+type Push struct {
+ lokiURL string
+ tenantID string
+ httpClient *http.Client
+ userAgent string
+ contentType string
+ logger log.Logger
+
+ // shutdown channels
+ quit chan struct{}
+
+ // auth
+ username, password string
+
+ // Will add these label to the logs pushed to loki
+ labelName, labelValue, streamName, streamValue string
+
+ // push retry and backoff
+ backoff *backoff.Config
+
+ entries entries
+}
+
+type entry struct {
+ ts time.Time
+ entry string
+ labels labels.Labels
+}
+
+type entries struct {
+ lock sync.Mutex
+ entries []entry
+}
+
+func (e *entries) add(entry entry) {
+ e.lock.Lock()
+ defer e.lock.Unlock()
+ e.entries = append(e.entries, entry)
+}
+
+func (e *entries) reset() []entry {
+ e.lock.Lock()
+ defer e.lock.Unlock()
+ entries := e.entries
+ e.entries = make([]entry, 0, len(entries))
+ return entries
+}
+
+// NewPush creates an instance of `Push` which writes logs directly to given `lokiAddr`
+func NewPush(
+ lokiAddr, tenantID string,
+ timeout time.Duration,
+ pushPeriod time.Duration,
+ cfg config.HTTPClientConfig,
+ username, password string,
+ useTLS bool,
+ backoffCfg *backoff.Config,
+ logger log.Logger,
+) (*Push, error) {
+ client, err := config.NewClientFromConfig(cfg, "pattern-ingester-push", config.WithHTTP2Disabled())
+ if err != nil {
+ return nil, err
+ }
+
+ client.Timeout = timeout
+ scheme := "http"
+
+ // setup tls transport
+ if useTLS {
+ scheme = "https"
+ }
+
+ u := url.URL{
+ Scheme: scheme,
+ Host: lokiAddr,
+ Path: pushEndpoint,
+ }
+
+ p := &Push{
+ lokiURL: u.String(),
+ tenantID: tenantID,
+ httpClient: client,
+ userAgent: defaultUserAgent,
+ contentType: defaultContentType,
+ username: username,
+ password: password,
+ logger: logger,
+ quit: make(chan struct{}),
+ backoff: backoffCfg,
+ entries: entries{
+ entries: make([]entry, 0),
+ },
+ }
+
+ go p.run(pushPeriod)
+ return p, nil
+}
+
+// WriteEntry implements EntryWriter
+func (p *Push) WriteEntry(ts time.Time, e string, lbls labels.Labels) {
+ p.entries.add(entry{ts: ts, entry: e, labels: lbls})
+}
+
+// Stop will cancel any ongoing requests and stop the goroutine listening for requests
+func (p *Push) Stop() {
+ if p.quit != nil {
+ close(p.quit)
+ p.quit = nil
+ }
+}
+
+// buildPayload creates the snappy compressed protobuf to send to Loki
+func (p *Push) buildPayload() ([]byte, error) {
+ entries := p.entries.reset()
+
+ entriesByStream := make(map[string][]logproto.Entry)
+ for _, e := range entries {
+ stream := e.labels.String()
+ entries, ok := entriesByStream[stream]
+ if !ok {
+ entries = make([]logproto.Entry, 0)
+ }
+
+ entries = append(entries, logproto.Entry{
+ Timestamp: e.ts,
+ Line: e.entry,
+ })
+ entriesByStream[stream] = entries
+ }
+
+ streams := make([]logproto.Stream, 0, len(entriesByStream))
+ for s, entries := range entriesByStream {
+ lbls, err := syntax.ParseLabels(s)
+ if err != nil {
+ continue
+ }
+
+ streams = append(streams, logproto.Stream{
+ Labels: s,
+ Entries: entries,
+ Hash: lbls.Hash(),
+ })
+ }
+
+ req := &logproto.PushRequest{
+ Streams: streams,
+ }
+ payload, err := proto.Marshal(req)
+ if err != nil {
+ return []byte{}, fmt.Errorf("failed to marshal payload to protobuf: %w", err)
+ }
+
+ payload = snappy.Encode(nil, payload)
+
+ return payload, nil
+}
+
+// run pulls lines out of the channel and sends them to Loki
+func (p *Push) run(pushPeriod time.Duration) {
+ ctx, cancel := context.WithCancel(context.Background())
+ pushTicker := time.NewTimer(pushPeriod)
+ defer pushTicker.Stop()
+
+ for {
+ select {
+ case <-p.quit:
+ cancel()
+ return
+ case <-pushTicker.C:
+ payload, err := p.buildPayload()
+ if err != nil {
+ level.Error(p.logger).Log("msg", "failed to build payload", "err", err)
+ continue
+ }
+
+ // We will use a timeout within each attempt to send
+ backoff := backoff.New(context.Background(), *p.backoff)
+
+ // send log with retry
+ for {
+ status := 0
+ status, err = p.send(ctx, payload)
+ if err == nil {
+ pushTicker.Reset(pushPeriod)
+ break
+ }
+
+ if status > 0 && status != 429 && status/100 != 5 {
+ level.Error(p.logger).Log("msg", "failed to send entry, server rejected push with a non-retryable status code", "status", status, "err", err)
+ pushTicker.Reset(pushPeriod)
+ break
+ }
+
+ if !backoff.Ongoing() {
+ level.Error(p.logger).Log("msg", "failed to send entry, retries exhausted, entry will be dropped", "status", status, "error", err)
+ pushTicker.Reset(pushPeriod)
+ break
+ }
+ level.Warn(p.logger).
+ Log("msg", "failed to send entry, retrying", "status", status, "error", err)
+ backoff.Wait()
+ }
+
+ }
+ }
+}
+
+// send makes one attempt to send the payload to Loki
+func (p *Push) send(ctx context.Context, payload []byte) (int, error) {
+ var (
+ err error
+ resp *http.Response
+ )
+ // Set a timeout for the request
+ ctx, cancel := context.WithTimeout(ctx, p.httpClient.Timeout)
+ defer cancel()
+ req, err := http.NewRequestWithContext(ctx, "POST", p.lokiURL, bytes.NewReader(payload))
+ if err != nil {
+ return -1, fmt.Errorf("failed to create push request: %w", err)
+ }
+ req.Header.Set("Content-Type", p.contentType)
+ req.Header.Set("User-Agent", p.userAgent)
+
+ // set org-id
+ if p.tenantID != "" {
+ req.Header.Set("X-Scope-OrgID", p.tenantID)
+ }
+
+ // basic auth if provided
+ if p.username != "" {
+ req.SetBasicAuth(p.username, p.password)
+ }
+
+ resp, err = p.httpClient.Do(req)
+ if err != nil {
+ return -1, fmt.Errorf("failed to push payload: %w", err)
+ }
+ status := resp.StatusCode
+ if status/100 != 2 {
+ scanner := bufio.NewScanner(io.LimitReader(resp.Body, defaultMaxReponseBufferLen))
+ line := ""
+ if scanner.Scan() {
+ line = scanner.Text()
+ }
+ err = fmt.Errorf("server returned HTTP status %s (%d): %s", resp.Status, status, line)
+ }
+
+ if err := resp.Body.Close(); err != nil {
+ level.Error(p.logger).Log("msg", "failed to close response body", "error", err)
+ }
+
+ return status, err
+}
+
+func AggregatedMetricEntry(
+ ts model.Time,
+ totalBytes, totalCount uint64,
+ service string,
+ lbls labels.Labels,
+) string {
+ byteString := humanize.Bytes(totalBytes)
+ base := fmt.Sprintf(
+ "ts=%d bytes=%s count=%d %s=%s",
+ ts.UnixNano(),
+ byteString,
+ totalCount,
+ push.LabelServiceName, service,
+ )
+
+ for _, l := range lbls {
+ base += fmt.Sprintf(" %s=%s", l.Name, l.Value)
+ }
+
+ return base
+}
diff --git a/pkg/pattern/aggregation/push_test.go b/pkg/pattern/aggregation/push_test.go
new file mode 100644
index 0000000000000..15f0336b5f7e8
--- /dev/null
+++ b/pkg/pattern/aggregation/push_test.go
@@ -0,0 +1,335 @@
+package aggregation
+
+import (
+ "encoding/base64"
+ "fmt"
+ "math"
+ "net/http"
+ "net/http/httptest"
+ "net/url"
+ "strings"
+ "testing"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/grafana/dskit/backoff"
+ "github.com/prometheus/common/config"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/util"
+)
+
+const (
+ testTenant = "test1"
+ testUsername = "user"
+ testPassword = "secret"
+ LogEntry = "%s %s\n"
+)
+
+func Test_Push(t *testing.T) {
+ lbls := labels.New(labels.Label{Name: "test", Value: "test"})
+
+ // create dummy loki server
+ responses := make(chan response, 1) // buffered not to block the response handler
+ backoff := backoff.Config{
+ MinBackoff: 300 * time.Millisecond,
+ MaxBackoff: 1 * time.Minute,
+ MaxRetries: 1,
+ }
+
+ t.Run("sends log entry to loki server without TLS", func(t *testing.T) {
+ // mock loki server
+ mock := httptest.NewServer(createServerHandler(responses))
+ require.NotNil(t, mock)
+ defer mock.Close()
+
+ // without TLS
+ push, err := NewPush(
+ mock.Listener.Addr().String(),
+ "test1",
+ 2*time.Second,
+ 1*time.Second,
+ config.DefaultHTTPClientConfig,
+ "", "",
+ false,
+ &backoff,
+ log.NewNopLogger(),
+ )
+ require.NoError(t, err)
+ ts, payload := testPayload()
+ push.WriteEntry(ts, payload, lbls)
+ resp := <-responses
+ assertResponse(t, resp, false, labelSet("test", "test"), ts, payload)
+ })
+
+ t.Run("sends log entry to loki server with basic auth", func(t *testing.T) {
+ // mock loki server
+ mock := httptest.NewServer(createServerHandler(responses))
+ require.NotNil(t, mock)
+ defer mock.Close()
+
+ // with basic Auth
+ push, err := NewPush(
+ mock.Listener.Addr().String(),
+ "test1",
+ 2*time.Second,
+ 1*time.Second,
+ config.DefaultHTTPClientConfig,
+ "user", "secret",
+ false,
+ &backoff,
+ log.NewNopLogger(),
+ )
+ require.NoError(t, err)
+ ts, payload := testPayload()
+ push.WriteEntry(ts, payload, lbls)
+ resp := <-responses
+ assertResponse(t, resp, true, labelSet("test", "test"), ts, payload)
+ })
+
+ t.Run("batches push requests", func(t *testing.T) {
+ // mock loki server
+ mock := httptest.NewServer(createServerHandler(responses))
+ require.NotNil(t, mock)
+ defer mock.Close()
+
+ client, err := config.NewClientFromConfig(
+ config.DefaultHTTPClientConfig,
+ "pattern-ingester-push-test",
+ config.WithHTTP2Disabled(),
+ )
+ require.NoError(t, err)
+ client.Timeout = 2 * time.Second
+
+ u := url.URL{
+ Scheme: "http",
+ Host: mock.Listener.Addr().String(),
+ Path: pushEndpoint,
+ }
+
+ p := &Push{
+ lokiURL: u.String(),
+ tenantID: testTenant,
+ httpClient: client,
+ userAgent: defaultUserAgent,
+ contentType: defaultContentType,
+ username: testUsername,
+ password: testPassword,
+ logger: log.NewNopLogger(),
+ quit: make(chan struct{}),
+ backoff: &backoff,
+ entries: entries{},
+ }
+
+ lbls1 := labels.New(labels.Label{Name: "test", Value: "test"})
+ lbls2 := labels.New(
+ labels.Label{Name: "test", Value: "test"},
+ labels.Label{Name: "test2", Value: "test2"},
+ )
+
+ now := time.Now().Truncate(time.Second).UTC()
+ then := now.Add(-1 * time.Minute)
+ wayBack := now.Add(-5 * time.Minute)
+
+ p.WriteEntry(
+ wayBack,
+ AggregatedMetricEntry(model.TimeFromUnix(wayBack.Unix()), 1, 1, "test_service", lbls1),
+ lbls1,
+ )
+ p.WriteEntry(
+ then,
+ AggregatedMetricEntry(model.TimeFromUnix(then.Unix()), 2, 2, "test_service", lbls1),
+ lbls1,
+ )
+ p.WriteEntry(
+ now,
+ AggregatedMetricEntry(model.TimeFromUnix(now.Unix()), 3, 3, "test_service", lbls1),
+ lbls1,
+ )
+
+ p.WriteEntry(
+ wayBack,
+ AggregatedMetricEntry(model.TimeFromUnix(wayBack.Unix()), 1, 1, "test2_service", lbls2),
+ lbls2,
+ )
+ p.WriteEntry(
+ then,
+ AggregatedMetricEntry(model.TimeFromUnix(then.Unix()), 2, 2, "test2_service", lbls2),
+ lbls2,
+ )
+ p.WriteEntry(
+ now,
+ AggregatedMetricEntry(model.TimeFromUnix(now.Unix()), 3, 3, "test2_service", lbls2),
+ lbls2,
+ )
+
+ go p.run(time.Nanosecond)
+
+ select {
+ case resp := <-responses:
+ p.Stop()
+ req := resp.pushReq
+ assert.Len(t, req.Streams, 2)
+
+ var stream1, stream2 logproto.Stream
+ for _, stream := range req.Streams {
+ if stream.Labels == lbls1.String() {
+ stream1 = stream
+ }
+
+ if stream.Labels == lbls2.String() {
+ stream2 = stream
+ }
+ }
+
+ require.Len(t, stream1.Entries, 3)
+ require.Len(t, stream2.Entries, 3)
+
+ require.Equal(t, stream1.Entries[0].Timestamp, wayBack)
+ require.Equal(t, stream1.Entries[1].Timestamp, then)
+ require.Equal(t, stream1.Entries[2].Timestamp, now)
+
+ require.Equal(
+ t,
+ AggregatedMetricEntry(model.TimeFromUnix(wayBack.Unix()), 1, 1, "test_service", lbls1),
+ stream1.Entries[0].Line,
+ )
+ require.Equal(
+ t,
+ AggregatedMetricEntry(model.TimeFromUnix(then.Unix()), 2, 2, "test_service", lbls1),
+ stream1.Entries[1].Line,
+ )
+ require.Equal(
+ t,
+ AggregatedMetricEntry(model.TimeFromUnix(now.Unix()), 3, 3, "test_service", lbls1),
+ stream1.Entries[2].Line,
+ )
+
+ require.Equal(t, stream2.Entries[0].Timestamp, wayBack)
+ require.Equal(t, stream2.Entries[1].Timestamp, then)
+ require.Equal(t, stream2.Entries[2].Timestamp, now)
+
+ require.Equal(
+ t,
+ AggregatedMetricEntry(model.TimeFromUnix(wayBack.Unix()), 1, 1, "test2_service", lbls2),
+ stream2.Entries[0].Line,
+ )
+ require.Equal(
+ t,
+ AggregatedMetricEntry(model.TimeFromUnix(then.Unix()), 2, 2, "test2_service", lbls2),
+ stream2.Entries[1].Line,
+ )
+ require.Equal(
+ t,
+ AggregatedMetricEntry(model.TimeFromUnix(now.Unix()), 3, 3, "test2_service", lbls2),
+ stream2.Entries[2].Line,
+ )
+
+ case <-time.After(5 * time.Second):
+ t.Fatal("timeout")
+ }
+ })
+}
+
+// Test helpers
+
+func assertResponse(t *testing.T, resp response, testAuth bool, labels labels.Labels, ts time.Time, payload string) {
+ t.Helper()
+
+ // assert metadata
+ assert.Equal(t, testTenant, resp.tenantID)
+
+ var expUser, expPass string
+
+ if testAuth {
+ expUser = testUsername
+ expPass = testPassword
+ }
+
+ assert.Equal(t, expUser, resp.username)
+ assert.Equal(t, expPass, resp.password)
+ assert.Equal(t, defaultContentType, resp.contentType)
+ assert.Equal(t, defaultUserAgent, resp.userAgent)
+
+ // assert stream labels
+ require.Len(t, resp.pushReq.Streams, 1)
+ assert.Equal(t, labels.String(), resp.pushReq.Streams[0].Labels)
+ assert.Equal(t, labels.Hash(), resp.pushReq.Streams[0].Hash)
+
+ // assert log entry
+ require.Len(t, resp.pushReq.Streams, 1)
+ require.Len(t, resp.pushReq.Streams[0].Entries, 1)
+ assert.Equal(t, payload, resp.pushReq.Streams[0].Entries[0].Line)
+ assert.Equal(t, ts, resp.pushReq.Streams[0].Entries[0].Timestamp)
+}
+
+type response struct {
+ tenantID string
+ pushReq logproto.PushRequest
+ contentType string
+ userAgent string
+ username, password string
+}
+
+func createServerHandler(responses chan response) http.HandlerFunc {
+ return http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
+ // Parse the request
+ var pushReq logproto.PushRequest
+ if err := util.ParseProtoReader(req.Context(), req.Body, int(req.ContentLength), math.MaxInt32, &pushReq, util.RawSnappy); err != nil {
+ rw.WriteHeader(500)
+ return
+ }
+
+ var username, password string
+
+ basicAuth := req.Header.Get("Authorization")
+ if basicAuth != "" {
+ encoded := strings.TrimPrefix(basicAuth, "Basic ") // now we have just encoded `username:password`
+ decoded, err := base64.StdEncoding.DecodeString(encoded)
+ if err != nil {
+ rw.WriteHeader(500)
+ return
+ }
+ toks := strings.FieldsFunc(string(decoded), func(r rune) bool {
+ return r == ':'
+ })
+ username, password = toks[0], toks[1]
+ }
+
+ responses <- response{
+ tenantID: req.Header.Get("X-Scope-OrgID"),
+ contentType: req.Header.Get("Content-Type"),
+ userAgent: req.Header.Get("User-Agent"),
+ username: username,
+ password: password,
+ pushReq: pushReq,
+ }
+
+ rw.WriteHeader(http.StatusOK)
+ })
+}
+
+func labelSet(keyVals ...string) labels.Labels {
+ if len(keyVals)%2 != 0 {
+ panic("not matching key-value pairs")
+ }
+
+ lbls := labels.Labels{}
+
+ for i := 0; i < len(keyVals)-1; i += 2 {
+ lbls = append(lbls, labels.Label{Name: keyVals[i], Value: keyVals[i+1]})
+ }
+
+ return lbls
+}
+
+func testPayload() (time.Time, string) {
+ ts := time.Now().UTC()
+ payload := fmt.Sprintf(LogEntry, fmt.Sprint(ts.UnixNano()), "pppppp")
+
+ return ts, payload
+}
diff --git a/pkg/pattern/flush_test.go b/pkg/pattern/flush_test.go
index 9ee4bd436992b..ea71f6055d8b1 100644
--- a/pkg/pattern/flush_test.go
+++ b/pkg/pattern/flush_test.go
@@ -10,10 +10,14 @@ import (
"github.com/grafana/dskit/flagext"
"github.com/grafana/dskit/kv"
"github.com/grafana/dskit/ring"
+ ring_client "github.com/grafana/dskit/ring/client"
"github.com/grafana/dskit/services"
"github.com/grafana/dskit/user"
"github.com/prometheus/prometheus/model/labels"
+ "github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/health/grpc_health_v1"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/pattern/iter"
@@ -22,7 +26,22 @@ import (
)
func TestSweepInstance(t *testing.T) {
- ing, err := New(defaultIngesterTestConfig(t), "foo", nil, log.NewNopLogger())
+ replicationSet := ring.ReplicationSet{
+ Instances: []ring.InstanceDesc{
+ {Id: "localhost", Addr: "ingester0"},
+ {Id: "remotehost", Addr: "ingester1"},
+ {Id: "otherhost", Addr: "ingester2"},
+ },
+ }
+
+ fakeRing := &fakeRing{}
+ fakeRing.On("Get", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(replicationSet, nil)
+
+ ringClient := &fakeRingClient{
+ ring: fakeRing,
+ }
+
+ ing, err := New(defaultIngesterTestConfig(t), ringClient, "foo", nil, log.NewNopLogger())
require.NoError(t, err)
defer services.StopAndAwaitTerminated(context.Background(), ing) //nolint:errcheck
err = services.StartAndAwaitRunning(context.Background(), ing)
@@ -95,3 +114,185 @@ func defaultIngesterTestConfig(t testing.TB) Config {
return cfg
}
+
+type fakeRingClient struct {
+ ring ring.ReadRing
+ poolClient ring_client.PoolClient
+}
+
+func (f *fakeRingClient) StartAsync(_ context.Context) error {
+ panic("not implemented")
+}
+
+func (f *fakeRingClient) AwaitRunning(_ context.Context) error {
+ panic("not implemented")
+}
+
+func (f *fakeRingClient) StopAsync() {
+ panic("not implemented")
+}
+
+func (f *fakeRingClient) AwaitTerminated(_ context.Context) error {
+ panic("not implemented")
+}
+
+func (f *fakeRingClient) FailureCase() error {
+ panic("not implemented")
+}
+
+func (f *fakeRingClient) State() services.State {
+ panic("not implemented")
+}
+
+func (f *fakeRingClient) AddListener(_ services.Listener) {
+ panic("not implemented")
+}
+
+func (f *fakeRingClient) Ring() ring.ReadRing {
+ return f.ring
+}
+
+func (f *fakeRingClient) GetClientFor(_ string) (ring_client.PoolClient, error) {
+ return f.poolClient, nil
+}
+
+type fakeRing struct {
+ mock.Mock
+}
+
+// InstancesWithTokensCount returns the number of instances in the ring that have tokens.
+func (f *fakeRing) InstancesWithTokensCount() int {
+ args := f.Called()
+ return args.Int(0)
+}
+
+// InstancesInZoneCount returns the number of instances in the ring that are registered in given zone.
+func (f *fakeRing) InstancesInZoneCount(zone string) int {
+ args := f.Called(zone)
+ return args.Int(0)
+}
+
+// InstancesWithTokensInZoneCount returns the number of instances in the ring that are registered in given zone and have tokens.
+func (f *fakeRing) InstancesWithTokensInZoneCount(zone string) int {
+ args := f.Called(zone)
+ return args.Int(0)
+}
+
+// ZonesCount returns the number of zones for which there's at least 1 instance registered in the ring.
+func (f *fakeRing) ZonesCount() int {
+ args := f.Called()
+ return args.Int(0)
+}
+
+func (f *fakeRing) Get(
+ key uint32,
+ op ring.Operation,
+ bufInstances []ring.InstanceDesc,
+ bufStrings1, bufStrings2 []string,
+) (ring.ReplicationSet, error) {
+ args := f.Called(key, op, bufInstances, bufStrings1, bufStrings2)
+ return args.Get(0).(ring.ReplicationSet), args.Error(1)
+}
+
+func (f *fakeRing) GetAllHealthy(op ring.Operation) (ring.ReplicationSet, error) {
+ args := f.Called(op)
+ return args.Get(0).(ring.ReplicationSet), args.Error(1)
+}
+
+func (f *fakeRing) GetReplicationSetForOperation(op ring.Operation) (ring.ReplicationSet, error) {
+ args := f.Called(op)
+ return args.Get(0).(ring.ReplicationSet), args.Error(1)
+}
+
+func (f *fakeRing) ReplicationFactor() int {
+ args := f.Called()
+ return args.Int(0)
+}
+
+func (f *fakeRing) InstancesCount() int {
+ args := f.Called()
+ return args.Int(0)
+}
+
+func (f *fakeRing) ShuffleShard(identifier string, size int) ring.ReadRing {
+ args := f.Called(identifier, size)
+ return args.Get(0).(ring.ReadRing)
+}
+
+func (f *fakeRing) GetInstanceState(instanceID string) (ring.InstanceState, error) {
+ args := f.Called(instanceID)
+ return args.Get(0).(ring.InstanceState), args.Error(1)
+}
+
+func (f *fakeRing) ShuffleShardWithLookback(
+ identifier string,
+ size int,
+ lookbackPeriod time.Duration,
+ now time.Time,
+) ring.ReadRing {
+ args := f.Called(identifier, size, lookbackPeriod, now)
+ return args.Get(0).(ring.ReadRing)
+}
+
+func (f *fakeRing) HasInstance(instanceID string) bool {
+ args := f.Called(instanceID)
+ return args.Bool(0)
+}
+
+func (f *fakeRing) CleanupShuffleShardCache(identifier string) {
+ f.Called(identifier)
+}
+
+func (f *fakeRing) GetTokenRangesForInstance(identifier string) (ring.TokenRanges, error) {
+ args := f.Called(identifier)
+ return args.Get(0).(ring.TokenRanges), args.Error(1)
+}
+
+type mockPoolClient struct {
+ mock.Mock
+ ctx context.Context
+ req *logproto.PushRequest
+}
+
+func (m *mockPoolClient) Push(
+ ctx context.Context,
+ in *push.PushRequest,
+ _ ...grpc.CallOption,
+) (*push.PushResponse, error) {
+ m.ctx = ctx
+ m.req = in
+ args := m.Called(ctx, in)
+ return args.Get(0).(*push.PushResponse), args.Error(1)
+}
+
+func (m *mockPoolClient) Query(
+ ctx context.Context,
+ in *logproto.QueryPatternsRequest,
+ opts ...grpc.CallOption,
+) (logproto.Pattern_QueryClient, error) {
+ args := m.Called(ctx, in, opts)
+ return args.Get(0).(logproto.Pattern_QueryClient), args.Error(1)
+}
+
+func (m *mockPoolClient) Check(
+ ctx context.Context,
+ in *grpc_health_v1.HealthCheckRequest,
+ opts ...grpc.CallOption,
+) (*grpc_health_v1.HealthCheckResponse, error) {
+ args := m.Called(ctx, in, opts)
+ return args.Get(0).(*grpc_health_v1.HealthCheckResponse), args.Error(1)
+}
+
+func (m *mockPoolClient) Watch(
+ ctx context.Context,
+ in *grpc_health_v1.HealthCheckRequest,
+ opts ...grpc.CallOption,
+) (grpc_health_v1.Health_WatchClient, error) {
+ args := m.Called(ctx, in, opts)
+ return args.Get(0).(grpc_health_v1.Health_WatchClient), args.Error(1)
+}
+
+func (m *mockPoolClient) Close() error {
+ args := m.Called()
+ return args.Error(0)
+}
diff --git a/pkg/pattern/ingester.go b/pkg/pattern/ingester.go
index 8864a03960bc1..bd43908f289d5 100644
--- a/pkg/pattern/ingester.go
+++ b/pkg/pattern/ingester.go
@@ -16,11 +16,13 @@ import (
"github.com/grafana/dskit/services"
"github.com/grafana/dskit/tenant"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/common/model"
"google.golang.org/grpc/health/grpc_health_v1"
ring_client "github.com/grafana/dskit/ring/client"
"github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/pattern/aggregation"
"github.com/grafana/loki/v3/pkg/pattern/clientpool"
"github.com/grafana/loki/v3/pkg/pattern/drain"
"github.com/grafana/loki/v3/pkg/pattern/iter"
@@ -38,6 +40,9 @@ type Config struct {
FlushCheckPeriod time.Duration `yaml:"flush_check_period"`
MaxClusters int `yaml:"max_clusters,omitempty" doc:"description=The maximum number of detected pattern clusters that can be created by streams."`
 MaxEvictionRatio float64 `yaml:"max_eviction_ratio,omitempty" doc:"description=The maximum eviction ratio of patterns per stream. Once that ratio is reached, the stream will be throttled for pattern detection."`
+ MetricAggregation aggregation.Config `yaml:"metric_aggregation,omitempty" doc:"description=Configures the metric aggregation and storage behavior of the pattern ingester."`
+ TeeConfig TeeConfig `yaml:"tee_config,omitempty" doc:"description=Configures the pattern tee which forwards requests to the pattern ingester."`
+ ConnectionTimeout time.Duration `yaml:"connection_timeout"`
// For testing.
factory ring_client.PoolFactory `yaml:"-"`
@@ -47,11 +52,86 @@ type Config struct {
func (cfg *Config) RegisterFlags(fs *flag.FlagSet) {
cfg.LifecyclerConfig.RegisterFlagsWithPrefix("pattern-ingester.", fs, util_log.Logger)
cfg.ClientConfig.RegisterFlags(fs)
- fs.BoolVar(&cfg.Enabled, "pattern-ingester.enabled", false, "Flag to enable or disable the usage of the pattern-ingester component.")
- fs.IntVar(&cfg.ConcurrentFlushes, "pattern-ingester.concurrent-flushes", 32, "How many flushes can happen concurrently from each stream.")
- fs.DurationVar(&cfg.FlushCheckPeriod, "pattern-ingester.flush-check-period", 1*time.Minute, "How often should the ingester see if there are any blocks to flush. The first flush check is delayed by a random time up to 0.8x the flush check period. Additionally, there is +/- 1% jitter added to the interval.")
- fs.IntVar(&cfg.MaxClusters, "pattern-ingester.max-clusters", drain.DefaultConfig().MaxClusters, "The maximum number of detected pattern clusters that can be created by the pattern ingester.")
- fs.Float64Var(&cfg.MaxEvictionRatio, "pattern-ingester.max-eviction-ratio", drain.DefaultConfig().MaxEvictionRatio, "The maximum eviction ratio of patterns per stream. Once that ratio is reached, the stream will be throttled for pattern detection.")
+ cfg.MetricAggregation.RegisterFlagsWithPrefix(fs, "pattern-ingester.")
+ cfg.TeeConfig.RegisterFlags(fs, "pattern-ingester.")
+
+ fs.BoolVar(
+ &cfg.Enabled,
+ "pattern-ingester.enabled",
+ false,
+ "Flag to enable or disable the usage of the pattern-ingester component.",
+ )
+ fs.IntVar(
+ &cfg.ConcurrentFlushes,
+ "pattern-ingester.concurrent-flushes",
+ 32,
+ "How many flushes can happen concurrently from each stream.",
+ )
+ fs.DurationVar(
+ &cfg.FlushCheckPeriod,
+ "pattern-ingester.flush-check-period",
+ 1*time.Minute,
+ "How often should the ingester see if there are any blocks to flush. The first flush check is delayed by a random time up to 0.8x the flush check period. Additionally, there is +/- 1% jitter added to the interval.",
+ )
+ fs.IntVar(
+ &cfg.MaxClusters,
+ "pattern-ingester.max-clusters",
+ drain.DefaultConfig().MaxClusters,
+ "The maximum number of detected pattern clusters that can be created by the pattern ingester.",
+ )
+ fs.Float64Var(
+ &cfg.MaxEvictionRatio,
+ "pattern-ingester.max-eviction-ratio",
+ drain.DefaultConfig().MaxEvictionRatio,
+ "The maximum eviction ratio of patterns per stream. Once that ratio is reached, the stream will be throttled for pattern detection.",
+ )
+ fs.DurationVar(
+ &cfg.ConnectionTimeout,
+ "pattern-ingester.connection-timeout",
+ 2*time.Second,
+ "Timeout for connections between Loki and the pattern ingester.",
+ )
+}
+
+type TeeConfig struct {
+ BatchSize int `yaml:"batch_size"`
+ BatchFlushInterval time.Duration `yaml:"batch_flush_interval"`
+ FlushQueueSize int `yaml:"flush_queue_size"`
+ FlushWorkerCount int `yaml:"flush_worker_count"`
+ StopFlushTimeout time.Duration `yaml:"stop_flush_timeout"`
+}
+
+func (cfg *TeeConfig) RegisterFlags(f *flag.FlagSet, prefix string) {
+ f.IntVar(
+ &cfg.BatchSize,
+ prefix+"tee.batch-size",
+ 5000,
+ "The size of the batch of raw logs to send for template mining",
+ )
+ f.DurationVar(
+ &cfg.BatchFlushInterval,
+ prefix+"tee.batch-flush-interval",
+ time.Second,
+ "The max time between batches of raw logs to send for template mining",
+ )
+ f.IntVar(
+ &cfg.FlushQueueSize,
+ prefix+"tee.flush-queue-size",
+ 1000,
+ "The number of log flushes to queue before dropping",
+ )
+ f.IntVar(
+ &cfg.FlushWorkerCount,
+ prefix+"tee.flush-worker-count",
+ 100,
+ "The number of concurrent workers sending logs to the template service",
+ )
+ f.DurationVar(
+ &cfg.StopFlushTimeout,
+ prefix+"tee.stop-flush-timeout",
+ 30*time.Second,
+ "The max time we will try to flush any remaining logs to be mined when the service is stopped",
+ )
}
func (cfg *Config) Validate() error {
@@ -64,6 +144,7 @@ func (cfg *Config) Validate() error {
type Ingester struct {
services.Service
lifecycler *ring.Lifecycler
+ ringClient RingClient
lifecyclerWatcher *services.FailureWatcher
@@ -87,6 +168,7 @@ type Ingester struct {
func New(
cfg Config,
+ ringClient RingClient,
metricsNamespace string,
registerer prometheus.Registerer,
logger log.Logger,
@@ -100,6 +182,7 @@ func New(
i := &Ingester{
cfg: cfg,
+ ringClient: ringClient,
logger: log.With(logger, "component", "pattern-ingester"),
registerer: registerer,
metrics: metrics,
@@ -165,6 +248,7 @@ func (i *Ingester) stopping(_ error) error {
flushQueue.Close()
}
i.flushQueuesDone.Wait()
+ i.stopWriters()
return err
}
@@ -196,13 +280,29 @@ func (i *Ingester) loop() {
flushTicker := util.NewTickerWithJitter(i.cfg.FlushCheckPeriod, j)
defer flushTicker.Stop()
- for {
- select {
- case <-flushTicker.C:
- i.sweepUsers(false, true)
-
- case <-i.loopQuit:
- return
+ if i.cfg.MetricAggregation.Enabled {
+ downsampleTicker := time.NewTimer(i.cfg.MetricAggregation.DownsamplePeriod)
+ defer downsampleTicker.Stop()
+ for {
+ select {
+ case <-flushTicker.C:
+ i.sweepUsers(false, true)
+ case t := <-downsampleTicker.C:
+ downsampleTicker.Reset(i.cfg.MetricAggregation.DownsamplePeriod)
+ now := model.TimeFromUnixNano(t.UnixNano())
+ i.downsampleMetrics(now)
+ case <-i.loopQuit:
+ return
+ }
+ }
+ } else {
+ for {
+ select {
+ case <-flushTicker.C:
+ i.sweepUsers(false, true)
+ case <-i.loopQuit:
+ return
+ }
}
}
}
@@ -284,11 +384,34 @@ func (i *Ingester) GetOrCreateInstance(instanceID string) (*instance, error) { /
inst, ok = i.instances[instanceID]
if !ok {
var err error
+ var writer aggregation.EntryWriter
+
+ aggCfg := i.cfg.MetricAggregation
+ if aggCfg.Enabled {
+ writer, err = aggregation.NewPush(
+ aggCfg.LokiAddr,
+ instanceID,
+ aggCfg.WriteTimeout,
+ aggCfg.PushPeriod,
+ aggCfg.HTTPClientConfig,
+ aggCfg.BasicAuth.Username,
+ string(aggCfg.BasicAuth.Password),
+ aggCfg.UseTLS,
+ &aggCfg.BackoffConfig,
+ i.logger,
+ )
+ if err != nil {
+ return nil, err
+ }
+ }
inst, err = newInstance(
instanceID,
i.logger,
i.metrics,
i.drainCfg,
+ i.ringClient,
+ i.lifecycler.ID,
+ writer,
)
if err != nil {
return nil, err
@@ -316,3 +439,21 @@ func (i *Ingester) getInstances() []*instance {
}
return instances
}
+
+func (i *Ingester) stopWriters() {
+ instances := i.getInstances()
+
+ for _, instance := range instances {
+ if instance.writer != nil {
+ instance.writer.Stop()
+ }
+ }
+}
+
+func (i *Ingester) downsampleMetrics(ts model.Time) {
+ instances := i.getInstances()
+
+ for _, instance := range instances {
+ instance.Downsample(ts)
+ }
+}
diff --git a/pkg/pattern/ingester_querier.go b/pkg/pattern/ingester_querier.go
index 2220a2ef41d8b..a77dd47b31137 100644
--- a/pkg/pattern/ingester_querier.go
+++ b/pkg/pattern/ingester_querier.go
@@ -27,7 +27,7 @@ type IngesterQuerier struct {
cfg Config
logger log.Logger
- ringClient *RingClient
+ ringClient RingClient
registerer prometheus.Registerer
ingesterQuerierMetrics *ingesterQuerierMetrics
@@ -35,7 +35,7 @@ type IngesterQuerier struct {
func NewIngesterQuerier(
cfg Config,
- ringClient *RingClient,
+ ringClient RingClient,
metricsNamespace string,
registerer prometheus.Registerer,
logger log.Logger,
@@ -128,7 +128,7 @@ func prunePatterns(resp *logproto.QueryPatternsResponse, minClusterSize int64, m
// ForAllIngesters runs f, in parallel, for all ingesters
func (q *IngesterQuerier) forAllIngesters(ctx context.Context, f func(context.Context, logproto.PatternClient) (interface{}, error)) ([]ResponseFromIngesters, error) {
- replicationSet, err := q.ringClient.ring.GetAllHealthy(ring.Read)
+ replicationSet, err := q.ringClient.Ring().GetAllHealthy(ring.Read)
if err != nil {
return nil, err
}
@@ -149,7 +149,7 @@ func (q *IngesterQuerier) forGivenIngesters(ctx context.Context, replicationSet
ingester := ingester
i := i
g.Go(func() error {
- client, err := q.ringClient.pool.GetClientFor(ingester.Addr)
+ client, err := q.ringClient.GetClientFor(ingester.Addr)
if err != nil {
return err
}
diff --git a/pkg/pattern/ingester_test.go b/pkg/pattern/ingester_test.go
index 90b1845a90c3d..a5dd5cdbaaed4 100644
--- a/pkg/pattern/ingester_test.go
+++ b/pkg/pattern/ingester_test.go
@@ -7,24 +7,56 @@ import (
"time"
"github.com/go-kit/log"
+ "github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/labels"
+ "github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
+ "github.com/grafana/dskit/ring"
+
"github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/pattern/aggregation"
"github.com/grafana/loki/v3/pkg/pattern/iter"
+ "github.com/grafana/loki/v3/pkg/util/constants"
"github.com/grafana/loki/v3/pkg/pattern/drain"
+ loghttp_push "github.com/grafana/loki/v3/pkg/loghttp/push"
+
"github.com/grafana/loki/pkg/push"
)
func TestInstancePushQuery(t *testing.T) {
lbs := labels.New(labels.Label{Name: "test", Value: "test"})
+
+ ingesterID := "foo"
+ replicationSet := ring.ReplicationSet{
+ Instances: []ring.InstanceDesc{
+ {Id: ingesterID, Addr: "ingester0"},
+ {Id: "bar", Addr: "ingester1"},
+ {Id: "baz", Addr: "ingester2"},
+ },
+ }
+
+ fakeRing := &fakeRing{}
+ fakeRing.On("Get", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ Return(replicationSet, nil)
+
+ ringClient := &fakeRingClient{
+ ring: fakeRing,
+ }
+
+ mockWriter := &mockEntryWriter{}
+ mockWriter.On("WriteEntry", mock.Anything, mock.Anything, mock.Anything)
+
inst, err := newInstance(
"foo",
log.NewNopLogger(),
newIngesterMetrics(nil, "test"),
drain.DefaultConfig(),
+ ringClient,
+ ingesterID,
+ mockWriter,
)
require.NoError(t, err)
@@ -68,3 +100,239 @@ func TestInstancePushQuery(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 2, len(res.Series))
}
+
+func TestInstancePushAggregateMetrics(t *testing.T) {
+ lbs := labels.New(
+ labels.Label{Name: "test", Value: "test"},
+ labels.Label{Name: "service_name", Value: "test_service"},
+ )
+ lbs2 := labels.New(
+ labels.Label{Name: "foo", Value: "bar"},
+ labels.Label{Name: "service_name", Value: "foo_service"},
+ )
+ lbs3 := labels.New(
+ labels.Label{Name: "foo", Value: "baz"},
+ labels.Label{Name: "service_name", Value: "baz_service"},
+ )
+
+ setup := func() (*instance, *mockEntryWriter) {
+ ingesterID := "foo"
+ replicationSet := ring.ReplicationSet{
+ Instances: []ring.InstanceDesc{
+ {Id: ingesterID, Addr: "ingester0"},
+ {Id: "bar", Addr: "ingester1"},
+ {Id: "baz", Addr: "ingester2"},
+ },
+ }
+
+ fakeRing := &fakeRing{}
+ fakeRing.On("Get", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ Return(replicationSet, nil)
+
+ ringClient := &fakeRingClient{
+ ring: fakeRing,
+ }
+
+ mockWriter := &mockEntryWriter{}
+ mockWriter.On("WriteEntry", mock.Anything, mock.Anything, mock.Anything)
+
+ inst, err := newInstance(
+ "foo",
+ log.NewNopLogger(),
+ newIngesterMetrics(nil, "test"),
+ drain.DefaultConfig(),
+ ringClient,
+ ingesterID,
+ mockWriter,
+ )
+ require.NoError(t, err)
+
+ err = inst.Push(context.Background(), &push.PushRequest{
+ Streams: []push.Stream{
+ {
+ Labels: lbs.String(),
+ Entries: []push.Entry{
+ {
+ Timestamp: time.Unix(20, 0),
+ Line: "ts=1 msg=hello",
+ StructuredMetadata: push.LabelsAdapter{
+ push.LabelAdapter{
+ Name: constants.LevelLabel,
+ Value: "info",
+ },
+ },
+ },
+ },
+ },
+ {
+ Labels: lbs2.String(),
+ Entries: []push.Entry{
+ {
+ Timestamp: time.Unix(20, 0),
+ Line: "ts=1 msg=hello",
+ StructuredMetadata: push.LabelsAdapter{
+ push.LabelAdapter{
+ Name: constants.LevelLabel,
+ Value: "error",
+ },
+ },
+ },
+ },
+ },
+ {
+ Labels: lbs3.String(),
+ Entries: []push.Entry{
+ {
+ Timestamp: time.Unix(20, 0),
+ Line: "error error error",
+ StructuredMetadata: push.LabelsAdapter{
+ push.LabelAdapter{
+ Name: constants.LevelLabel,
+ Value: "error",
+ },
+ },
+ },
+ },
+ },
+ },
+ })
+ require.NoError(t, err)
+ for i := 0; i < 30; i++ {
+ err = inst.Push(context.Background(), &push.PushRequest{
+ Streams: []push.Stream{
+ {
+ Labels: lbs.String(),
+ Entries: []push.Entry{
+ {
+ Timestamp: time.Unix(20, 0),
+ Line: "foo bar foo bar",
+ StructuredMetadata: push.LabelsAdapter{
+ push.LabelAdapter{
+ Name: constants.LevelLabel,
+ Value: "info",
+ },
+ },
+ },
+ },
+ },
+ {
+ Labels: lbs2.String(),
+ Entries: []push.Entry{
+ {
+ Timestamp: time.Unix(20, 0),
+ Line: "foo bar foo bar",
+ StructuredMetadata: push.LabelsAdapter{
+ push.LabelAdapter{
+ Name: constants.LevelLabel,
+ Value: "error",
+ },
+ },
+ },
+ },
+ },
+ },
+ })
+ require.NoError(t, err)
+ }
+ require.NoError(t, err)
+
+ return inst, mockWriter
+ }
+
+ t.Run("accumulates bytes and count for each stream and level on every push", func(t *testing.T) {
+ inst, _ := setup()
+
+ require.Len(t, inst.aggMetricsByStreamAndLevel, 3)
+
+ require.Equal(t, uint64(14+(15*30)), inst.aggMetricsByStreamAndLevel[lbs.String()]["info"].bytes)
+ require.Equal(t, uint64(14+(15*30)), inst.aggMetricsByStreamAndLevel[lbs2.String()]["error"].bytes)
+ require.Equal(t, uint64(17), inst.aggMetricsByStreamAndLevel[lbs3.String()]["error"].bytes)
+
+ require.Equal(
+ t,
+ uint64(31),
+ inst.aggMetricsByStreamAndLevel[lbs.String()]["info"].count,
+ )
+ require.Equal(
+ t,
+ uint64(31),
+ inst.aggMetricsByStreamAndLevel[lbs2.String()]["error"].count,
+ )
+ require.Equal(
+ t,
+ uint64(1),
+ inst.aggMetricsByStreamAndLevel[lbs3.String()]["error"].count,
+ )
+ },
+ )
+
+ t.Run("downsamples aggregated metrics", func(t *testing.T) {
+ inst, mockWriter := setup()
+ now := model.Now()
+ inst.Downsample(now)
+
+ mockWriter.AssertCalled(
+ t,
+ "WriteEntry",
+ now.Time(),
+ aggregation.AggregatedMetricEntry(
+ now,
+ uint64(14+(15*30)),
+ uint64(31),
+ "test_service",
+ lbs,
+ ),
+ labels.New(
+ labels.Label{Name: loghttp_push.AggregatedMetricLabel, Value: "test_service"},
+ labels.Label{Name: "level", Value: "info"},
+ ),
+ )
+
+ mockWriter.AssertCalled(
+ t,
+ "WriteEntry",
+ now.Time(),
+ aggregation.AggregatedMetricEntry(
+ now,
+ uint64(14+(15*30)),
+ uint64(31),
+ "foo_service",
+ lbs2,
+ ),
+ labels.New(
+ labels.Label{Name: loghttp_push.AggregatedMetricLabel, Value: "foo_service"},
+ labels.Label{Name: "level", Value: "error"},
+ ),
+ )
+
+ mockWriter.AssertCalled(
+ t,
+ "WriteEntry",
+ now.Time(),
+ aggregation.AggregatedMetricEntry(
+ now,
+ uint64(17),
+ uint64(1),
+ "baz_service",
+ lbs3,
+ ),
+ labels.New(
+ labels.Label{Name: loghttp_push.AggregatedMetricLabel, Value: "baz_service"},
+ labels.Label{Name: "level", Value: "error"},
+ ),
+ )
+
+ require.Equal(t, 0, len(inst.aggMetricsByStreamAndLevel))
+ })
+}
+
+type mockEntryWriter struct {
+ mock.Mock
+}
+
+func (m *mockEntryWriter) WriteEntry(ts time.Time, entry string, lbls labels.Labels) {
+ _ = m.Called(ts, entry, lbls)
+}
+
+func (m *mockEntryWriter) Stop() {
+ _ = m.Called()
+}
diff --git a/pkg/pattern/instance.go b/pkg/pattern/instance.go
index e19ba040ff71e..719f90d69075c 100644
--- a/pkg/pattern/instance.go
+++ b/pkg/pattern/instance.go
@@ -2,22 +2,31 @@ package pattern
import (
"context"
+ "errors"
"fmt"
"net/http"
+ "strings"
+ "sync"
"github.com/go-kit/log"
+ "github.com/go-kit/log/level"
"github.com/grafana/dskit/httpgrpc"
"github.com/grafana/dskit/multierror"
+ "github.com/grafana/dskit/ring"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/labels"
"github.com/grafana/loki/v3/pkg/ingester"
"github.com/grafana/loki/v3/pkg/ingester/index"
+ "github.com/grafana/loki/v3/pkg/loghttp/push"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql/syntax"
+ "github.com/grafana/loki/v3/pkg/pattern/aggregation"
"github.com/grafana/loki/v3/pkg/pattern/drain"
"github.com/grafana/loki/v3/pkg/pattern/iter"
"github.com/grafana/loki/v3/pkg/util"
+ "github.com/grafana/loki/v3/pkg/util/constants"
+ lokiring "github.com/grafana/loki/v3/pkg/util/ring"
)
const indexShards = 32
@@ -32,21 +41,45 @@ type instance struct {
logger log.Logger
metrics *ingesterMetrics
drainCfg *drain.Config
+ ringClient RingClient
+ ingesterID string
+
+ aggMetricsLock sync.Mutex
+ aggMetricsByStreamAndLevel map[string]map[string]*aggregatedMetrics
+
+ writer aggregation.EntryWriter
+}
+
+type aggregatedMetrics struct {
+ bytes uint64
+ count uint64
}
-func newInstance(instanceID string, logger log.Logger, metrics *ingesterMetrics, drainCfg *drain.Config) (*instance, error) {
+func newInstance(
+ instanceID string,
+ logger log.Logger,
+ metrics *ingesterMetrics,
+ drainCfg *drain.Config,
+ ringClient RingClient,
+ ingesterID string,
+ writer aggregation.EntryWriter,
+) (*instance, error) {
index, err := index.NewBitPrefixWithShards(indexShards)
if err != nil {
return nil, err
}
i := &instance{
- buf: make([]byte, 0, 1024),
- logger: logger,
- instanceID: instanceID,
- streams: newStreamsMap(),
- index: index,
- metrics: metrics,
- drainCfg: drainCfg,
+ buf: make([]byte, 0, 1024),
+ logger: logger,
+ instanceID: instanceID,
+ streams: newStreamsMap(),
+ index: index,
+ metrics: metrics,
+ drainCfg: drainCfg,
+ ringClient: ringClient,
+ ingesterID: ingesterID,
+ aggMetricsByStreamAndLevel: make(map[string]map[string]*aggregatedMetrics),
+ writer: writer,
}
i.mapper = ingester.NewFPMapper(i.getLabelsFromFingerprint)
return i, nil
@@ -58,27 +91,69 @@ func (i *instance) Push(ctx context.Context, req *logproto.PushRequest) error {
appendErr := multierror.New()
for _, reqStream := range req.Streams {
- if reqStream.Entries == nil || len(reqStream.Entries) == 0 {
- continue
- }
- s, _, err := i.streams.LoadOrStoreNew(reqStream.Labels,
- func() (*stream, error) {
- // add stream
- return i.createStream(ctx, reqStream)
- }, nil)
+ // All streams are observed for metrics
+ // TODO(twhitney): this would be better as a queue that drops in response to backpressure
+ i.Observe(reqStream.Labels, reqStream.Entries)
+
+ // But only owned streams are processed for patterns
+ ownedStream, err := i.isOwnedStream(i.ingesterID, reqStream.Labels)
if err != nil {
appendErr.Add(err)
- continue
}
- err = s.Push(ctx, reqStream.Entries)
- if err != nil {
- appendErr.Add(err)
- continue
+
+ if ownedStream {
+ if reqStream.Entries == nil || len(reqStream.Entries) == 0 {
+ continue
+ }
+ s, _, err := i.streams.LoadOrStoreNew(reqStream.Labels,
+ func() (*stream, error) {
+ // add stream
+ return i.createStream(ctx, reqStream)
+ }, nil)
+ if err != nil {
+ appendErr.Add(err)
+ continue
+ }
+ err = s.Push(ctx, reqStream.Entries)
+ if err != nil {
+ appendErr.Add(err)
+ continue
+ }
}
}
+
return appendErr.Err()
}
+func (i *instance) isOwnedStream(ingesterID string, stream string) (bool, error) {
+ var descs [1]ring.InstanceDesc
+ replicationSet, err := i.ringClient.Ring().Get(
+ lokiring.TokenFor(i.instanceID, stream),
+ ring.WriteNoExtend,
+ descs[:0],
+ nil,
+ nil,
+ )
+ if err != nil {
+ return false, fmt.Errorf(
+ "error getting replication set for stream %s: %v",
+ stream,
+ err,
+ )
+ }
+
+ if replicationSet.Instances == nil {
+ return false, errors.New("no instances found")
+ }
+
+ for _, instanceDesc := range replicationSet.Instances {
+ if instanceDesc.Id == ingesterID {
+ return true, nil
+ }
+ }
+ return false, nil
+}
+
// Iterator returns an iterator of pattern samples matching the given query patterns request.
func (i *instance) Iterator(ctx context.Context, req *logproto.QueryPatternsRequest) (iter.Iterator, error) {
matchers, err := syntax.ParseMatchers(req.Query, true)
@@ -174,3 +249,89 @@ func (i *instance) removeStream(s *stream) {
i.index.Delete(s.labels, s.fp)
}
}
+
+func (i *instance) Observe(stream string, entries []logproto.Entry) {
+ i.aggMetricsLock.Lock()
+ defer i.aggMetricsLock.Unlock()
+
+ for _, entry := range entries {
+ lvl := constants.LogLevelUnknown
+ structuredMetadata := logproto.FromLabelAdaptersToLabels(entry.StructuredMetadata)
+ if structuredMetadata.Has(constants.LevelLabel) {
+ lvl = strings.ToLower(structuredMetadata.Get(constants.LevelLabel))
+ }
+
+ streamMetrics, ok := i.aggMetricsByStreamAndLevel[stream]
+
+ if !ok {
+ streamMetrics = make(map[string]*aggregatedMetrics, len(constants.LogLevels))
+ for _, l := range constants.LogLevels {
+ streamMetrics[l] = &aggregatedMetrics{}
+ }
+ }
+
+ if _, ok := streamMetrics[lvl]; !ok {
+ level.Warn(i.logger).Log(
+ "msg", "unknown log level while observing stream",
+ "level", lvl,
+ "stream", stream,
+ )
+
+ lvl = constants.LogLevelUnknown
+ }
+
+ streamMetrics[lvl].bytes += uint64(len(entry.Line))
+ streamMetrics[lvl].count++
+
+ i.aggMetricsByStreamAndLevel[stream] = streamMetrics
+ }
+}
+
+func (i *instance) Downsample(now model.Time) {
+ i.aggMetricsLock.Lock()
+ defer func() {
+ i.aggMetricsByStreamAndLevel = make(map[string]map[string]*aggregatedMetrics)
+ i.aggMetricsLock.Unlock()
+ }()
+
+ for stream, metricsByLevel := range i.aggMetricsByStreamAndLevel {
+ lbls, err := syntax.ParseLabels(stream)
+ if err != nil {
+ continue
+ }
+
+ for level, metrics := range metricsByLevel {
+ // we start with an empty bucket for each level, so only write if we have metrics
+ if metrics.count > 0 {
+ i.writeAggregatedMetrics(now, lbls, level, metrics.bytes, metrics.count)
+ }
+ }
+ }
+}
+
+func (i *instance) writeAggregatedMetrics(
+ now model.Time,
+ streamLbls labels.Labels,
+ level string,
+ totalBytes, totalCount uint64,
+) {
+ service := streamLbls.Get(push.LabelServiceName)
+ if service == "" {
+ service = push.ServiceUnknown
+ }
+
+ newLbls := labels.Labels{
+ labels.Label{Name: push.AggregatedMetricLabel, Value: service},
+ labels.Label{Name: "level", Value: level},
+ }
+
+ if i.writer != nil {
+ i.writer.WriteEntry(
+ now.Time(),
+ aggregation.AggregatedMetricEntry(now, totalBytes, totalCount, service, streamLbls),
+ newLbls,
+ )
+
+ i.metrics.samples.WithLabelValues(service).Inc()
+ }
+}
diff --git a/pkg/pattern/metrics.go b/pkg/pattern/metrics.go
index f6f8289c7d176..25bbd1fd1f6e1 100644
--- a/pkg/pattern/metrics.go
+++ b/pkg/pattern/metrics.go
@@ -11,6 +11,7 @@ type ingesterMetrics struct {
patternsDetectedTotal *prometheus.CounterVec
tokensPerLine *prometheus.HistogramVec
statePerLine *prometheus.HistogramVec
+ samples *prometheus.CounterVec
}
func newIngesterMetrics(r prometheus.Registerer, metricsNamespace string) *ingesterMetrics {
@@ -47,6 +48,12 @@ func newIngesterMetrics(r prometheus.Registerer, metricsNamespace string) *inges
Help: "The number of items of additional state returned alongside tokens for pattern recognition.",
Buckets: []float64{20, 40, 80, 120, 160, 320, 640, 1280},
}, []string{"tenant", "format"}),
+ samples: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: metricsNamespace,
+ Subsystem: "pattern_ingester",
+ Name: "metric_samples",
+ Help: "The total number of samples created to write back to Loki.",
+ }, []string{"service_name"}),
}
}
diff --git a/pkg/pattern/ring_client.go b/pkg/pattern/ring_client.go
index 3ceaf481a3b9b..72739e0c0849e 100644
--- a/pkg/pattern/ring_client.go
+++ b/pkg/pattern/ring_client.go
@@ -13,7 +13,13 @@ import (
"github.com/grafana/loki/v3/pkg/pattern/clientpool"
)
-type RingClient struct {
+type RingClient interface {
+ services.Service
+ Ring() ring.ReadRing
+ GetClientFor(addr string) (ring_client.PoolClient, error)
+}
+
+type ringClient struct {
cfg Config
logger log.Logger
@@ -29,10 +35,10 @@ func NewRingClient(
metricsNamespace string,
registerer prometheus.Registerer,
logger log.Logger,
-) (*RingClient, error) {
+) (RingClient, error) {
var err error
registerer = prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer)
- ringClient := &RingClient{
+ ringClient := &ringClient{
logger: log.With(logger, "component", "pattern-ring-client"),
cfg: cfg,
}
@@ -59,19 +65,55 @@ func NewRingClient(
return ringClient, nil
}
-func (q *RingClient) starting(ctx context.Context) error {
- return services.StartManagerAndAwaitHealthy(ctx, q.subservices)
+func (r *ringClient) starting(ctx context.Context) error {
+ return services.StartManagerAndAwaitHealthy(ctx, r.subservices)
}
-func (q *RingClient) running(ctx context.Context) error {
+func (r *ringClient) running(ctx context.Context) error {
select {
case <-ctx.Done():
return nil
- case err := <-q.subservicesWatcher.Chan():
+ case err := <-r.subservicesWatcher.Chan():
return fmt.Errorf("pattern tee subservices failed: %w", err)
}
}
-func (q *RingClient) stopping(_ error) error {
- return services.StopManagerAndAwaitStopped(context.Background(), q.subservices)
+func (r *ringClient) stopping(_ error) error {
+ return services.StopManagerAndAwaitStopped(context.Background(), r.subservices)
+}
+
+func (r *ringClient) Ring() ring.ReadRing {
+ return r.ring
+}
+
+func (r *ringClient) StartAsync(ctx context.Context) error {
+ return r.ring.StartAsync(ctx)
+}
+
+func (r *ringClient) AwaitRunning(ctx context.Context) error {
+ return r.ring.AwaitRunning(ctx)
+}
+
+func (r *ringClient) StopAsync() {
+ r.ring.StopAsync()
+}
+
+func (r *ringClient) AwaitTerminated(ctx context.Context) error {
+ return r.ring.AwaitTerminated(ctx)
+}
+
+func (r *ringClient) FailureCase() error {
+ return r.ring.FailureCase()
+}
+
+func (r *ringClient) State() services.State {
+ return r.ring.State()
+}
+
+func (r *ringClient) AddListener(listener services.Listener) {
+ r.ring.AddListener(listener)
+}
+
+func (r *ringClient) GetClientFor(addr string) (ring_client.PoolClient, error) {
+ return r.pool.GetClientFor(addr)
}
diff --git a/pkg/pattern/tee.go b/pkg/pattern/tee.go
deleted file mode 100644
index 70fb37e1b6929..0000000000000
--- a/pkg/pattern/tee.go
+++ /dev/null
@@ -1,88 +0,0 @@
-package pattern
-
-import (
- "context"
- "errors"
-
- "github.com/go-kit/log"
- "github.com/go-kit/log/level"
- "github.com/grafana/dskit/ring"
- "github.com/grafana/dskit/user"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/client_golang/prometheus/promauto"
-
- "github.com/grafana/loki/v3/pkg/distributor"
- "github.com/grafana/loki/v3/pkg/logproto"
-)
-
-type Tee struct {
- cfg Config
- logger log.Logger
- ringClient *RingClient
-
- ingesterAppends *prometheus.CounterVec
-}
-
-func NewTee(
- cfg Config,
- ringClient *RingClient,
- metricsNamespace string,
- registerer prometheus.Registerer,
- logger log.Logger,
-) (*Tee, error) {
- registerer = prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer)
-
- t := &Tee{
- logger: log.With(logger, "component", "pattern-tee"),
- ingesterAppends: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
- Name: "pattern_ingester_appends_total",
- Help: "The total number of batch appends sent to pattern ingesters.",
- }, []string{"ingester", "status"}),
- cfg: cfg,
- ringClient: ringClient,
- }
-
- return t, nil
-}
-
-// Duplicate Implements distributor.Tee which is used to tee distributor requests to pattern ingesters.
-func (t *Tee) Duplicate(tenant string, streams []distributor.KeyedStream) {
- for idx := range streams {
- go func(stream distributor.KeyedStream) {
- if err := t.sendStream(tenant, stream); err != nil {
- level.Error(t.logger).Log("msg", "failed to send stream to pattern ingester", "err", err)
- }
- }(streams[idx])
- }
-}
-
-func (t *Tee) sendStream(tenant string, stream distributor.KeyedStream) error {
- var descs [1]ring.InstanceDesc
- replicationSet, err := t.ringClient.ring.Get(stream.HashKey, ring.WriteNoExtend, descs[:0], nil, nil)
- if err != nil {
- return err
- }
- if replicationSet.Instances == nil {
- return errors.New("no instances found")
- }
- addr := replicationSet.Instances[0].Addr
- client, err := t.ringClient.pool.GetClientFor(addr)
- if err != nil {
- return err
- }
- req := &logproto.PushRequest{
- Streams: []logproto.Stream{
- stream.Stream,
- },
- }
-
- ctx, cancel := context.WithTimeout(user.InjectOrgID(context.Background(), tenant), t.cfg.ClientConfig.RemoteTimeout)
- defer cancel()
- _, err = client.(logproto.PatternClient).Push(ctx, req)
- if err != nil {
- t.ingesterAppends.WithLabelValues(addr, "fail").Inc()
- return err
- }
- t.ingesterAppends.WithLabelValues(addr, "success").Inc()
- return nil
-}
diff --git a/pkg/pattern/tee_service.go b/pkg/pattern/tee_service.go
new file mode 100644
index 0000000000000..13058fbaeb468
--- /dev/null
+++ b/pkg/pattern/tee_service.go
@@ -0,0 +1,401 @@
+package pattern
+
+import (
+ "context"
+ "errors"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/prometheus/prometheus/util/pool"
+
+ "github.com/grafana/dskit/instrument"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/user"
+
+ "github.com/grafana/loki/v3/pkg/distributor"
+ "github.com/grafana/loki/v3/pkg/loghttp/push"
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
+ "github.com/grafana/loki/v3/pkg/util/constants"
+
+ ring_client "github.com/grafana/dskit/ring/client"
+)
+
+type TeeService struct {
+ cfg Config
+ logger log.Logger
+ ringClient RingClient
+ wg *sync.WaitGroup
+
+ ingesterAppends *prometheus.CounterVec
+ ingesterMetricAppends *prometheus.CounterVec
+
+ teedStreams *prometheus.CounterVec
+ teedRequests *prometheus.CounterVec
+
+ sendDuration *instrument.HistogramCollector
+
+ flushQueue chan clientRequest
+
+ bufferPool *pool.Pool
+ buffersMutex *sync.Mutex
+ buffers map[string][]distributor.KeyedStream
+}
+
+func NewTeeService(
+ cfg Config,
+ ringClient RingClient,
+ metricsNamespace string,
+ registerer prometheus.Registerer,
+ logger log.Logger,
+) (*TeeService, error) {
+ registerer = prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer)
+
+ t := &TeeService{
+ logger: log.With(logger, "component", "pattern-tee"),
+ ingesterAppends: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
+ Name: "pattern_ingester_appends_total",
+ Help: "The total number of batch appends sent to pattern ingesters.",
+ }, []string{"ingester", "status"}),
+ ingesterMetricAppends: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
+ Name: "pattern_ingester_metric_appends_total",
+ Help: "The total number of metric-only batch appends sent to pattern ingesters. These requests will not be processed for patterns.",
+ }, []string{"status"}),
+ teedStreams: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
+ Name: "pattern_ingester_teed_streams_total",
+ Help: "The total number of streams teed to the pattern ingester.",
+ }, []string{"status"}),
+ teedRequests: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
+ Name: "pattern_ingester_teed_requests_total",
+ Help: "The total number of batch appends sent to fallback pattern ingesters, for not owned streams.",
+ }, []string{"status"}),
+ sendDuration: instrument.NewHistogramCollector(
+ promauto.With(registerer).NewHistogramVec(
+ prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "pattern_ingester_tee_send_duration_seconds",
+ Help: "Time spent sending batches from the tee to the pattern ingester",
+ Buckets: prometheus.DefBuckets,
+ }, instrument.HistogramCollectorBuckets,
+ ),
+ ),
+ cfg: cfg,
+ ringClient: ringClient,
+
+ wg: &sync.WaitGroup{},
+ buffersMutex: &sync.Mutex{},
+ buffers: make(map[string][]distributor.KeyedStream),
+ flushQueue: make(chan clientRequest, cfg.TeeConfig.FlushQueueSize),
+ }
+
+ return t, nil
+}
+
+func (ts *TeeService) Start(runCtx context.Context) error {
+ ts.wg.Add(1)
+
+ // Start all batchSenders. We don't use the Run() context here, because we
+ // want the senders to finish sending any currently in-flight data and the
+ // remaining batches in the queue before the TeeService fully stops.
+ //
+ // Still, we have a maximum amount of time we will wait after the TeeService
+ // is stopped, see cfg.StopFlushTimeout below.
+ senderCtx, senderCancel := context.WithCancel(context.Background())
+
+ sendersWg := &sync.WaitGroup{}
+ sendersWg.Add(ts.cfg.TeeConfig.FlushWorkerCount)
+ for i := 0; i < ts.cfg.TeeConfig.FlushWorkerCount; i++ {
+ go func() {
+ ts.batchSender(senderCtx)
+ sendersWg.Done()
+ }()
+ }
+
+ // We need this to implement the select with StopFlushTimeout below
+ sendersDone := make(chan struct{})
+ go func() {
+ sendersWg.Wait()
+ close(sendersDone)
+ }()
+
+ go func() {
+ // We wait for the Run() context to be done, so we know we are stopping
+ <-runCtx.Done()
+
+ // The senders either stop normally in the allotted time, or we hit the
+ // timeout and cancel their context. In either case, we wait for them to
+ // finish before we consider the service to be done.
+ select {
+ case <-time.After(ts.cfg.TeeConfig.StopFlushTimeout):
+ senderCancel() // Cancel any remaining senders
+ <-sendersDone // Wait for them to be done
+ case <-sendersDone:
+ }
+ ts.wg.Done()
+ }()
+
+ go func() {
+ t := time.NewTicker(ts.cfg.TeeConfig.BatchFlushInterval)
+ defer t.Stop()
+ for {
+ select {
+ case <-t.C:
+ ts.flush()
+ case <-runCtx.Done():
+ // Final flush to send anything currently buffered
+ ts.flush()
+ close(ts.flushQueue) // nothing will write to it anymore
+ return
+ }
+ }
+ }()
+
+ return nil
+}
+
+func (ts *TeeService) WaitUntilDone() {
+ ts.wg.Wait()
+}
+
+func (ts *TeeService) flush() {
+ ts.buffersMutex.Lock()
+ if len(ts.buffers) == 0 {
+ ts.buffersMutex.Unlock()
+ return
+ }
+
+ buffered := ts.buffers
+ ts.buffers = make(map[string][]distributor.KeyedStream)
+ ts.buffersMutex.Unlock()
+
+ batches := make([]map[string]map[string]*logproto.PushRequest, 0, len(buffered))
+ for tenant, streams := range buffered {
+ batches = append(batches, ts.batchesForTenant(tenant, streams))
+ }
+
+ byTenantAndPatternIngester := make(map[string]map[string][]*logproto.PushRequest)
+ for _, b := range batches {
+ for tenant, requests := range b {
+ for addr, req := range requests {
+ byTenant, ok := byTenantAndPatternIngester[tenant]
+ if !ok {
+ byTenant = make(map[string][]*logproto.PushRequest)
+ }
+
+ byTenant[addr] = append(
+ byTenant[addr],
+ req,
+ )
+
+ byTenantAndPatternIngester[tenant] = byTenant
+ }
+ }
+ }
+
+ for tenant, requests := range byTenantAndPatternIngester {
+ for addr, reqs := range requests {
+ select {
+ case ts.flushQueue <- clientRequest{
+ ingesterAddr: addr,
+ tenant: tenant,
+ reqs: reqs,
+ }:
+ ts.teedRequests.WithLabelValues("queued").Inc()
+ default:
+ ts.teedRequests.WithLabelValues("dropped").Inc()
+ }
+ }
+ }
+}
+
+func (ts *TeeService) batchesForTenant(
+ tenant string,
+ streams []distributor.KeyedStream,
+) map[string]map[string]*logproto.PushRequest {
+ batches := map[string]map[string]*logproto.PushRequest{
+ tenant: make(map[string]*logproto.PushRequest),
+ }
+
+ if len(streams) == 0 {
+ return batches
+ }
+
+ for _, stream := range streams {
+ var descs [1]ring.InstanceDesc
+ replicationSet, err := ts.ringClient.Ring().
+ Get(stream.HashKey, ring.WriteNoExtend, descs[:0], nil, nil)
+ if err != nil || len(replicationSet.Instances) == 0 {
+ ts.teedStreams.WithLabelValues("dropped").Inc()
+ continue
+ }
+
+ addr := replicationSet.Instances[0].Addr
+ batch, ok := batches[tenant][addr]
+ if !ok {
+ batch = &logproto.PushRequest{}
+ batches[tenant][addr] = batch
+ }
+
+ if len(stream.Stream.Entries) > 0 {
+ batch.Streams = append(batch.Streams, stream.Stream)
+ ts.teedStreams.WithLabelValues("batched").Inc()
+ }
+ }
+
+ streamCount := uint64(len(streams))
+ level.Debug(ts.logger).Log(
+ "msg", "prepared pattern Tee batches for tenant",
+ "tenant", tenant,
+ "stream_count", streamCount,
+ )
+
+ return batches
+}
+
+type clientRequest struct {
+ ingesterAddr string
+ tenant string
+ reqs []*logproto.PushRequest
+}
+
+func (ts *TeeService) batchSender(ctx context.Context) {
+ for {
+ select {
+ case clientReq, ok := <-ts.flushQueue:
+ if !ok {
+ return // we are done, the queue was closed by Run()
+ }
+ ts.sendBatch(ctx, clientReq)
+ case <-ctx.Done():
+ return
+ }
+ }
+}
+
+func (ts *TeeService) sendBatch(ctx context.Context, clientRequest clientRequest) {
+ ctx, cancel := context.WithTimeout(ctx, ts.cfg.ConnectionTimeout)
+ defer cancel()
+
+ for i := 0; i < len(clientRequest.reqs); i++ {
+ req := clientRequest.reqs[i]
+
+ if len(req.Streams) == 0 {
+ continue
+ }
+
+ // Nothing to do with this error. It's recorded in the metrics that
+ // are gathered by this request
+ _ = instrument.CollectedRequest(
+ ctx,
+ "FlushTeedLogsToPatternIngested",
+ ts.sendDuration,
+ instrument.ErrorCode,
+ func(ctx context.Context) error {
+ client, err := ts.ringClient.GetClientFor(clientRequest.ingesterAddr)
+ if err != nil {
+ return err
+ }
+ ctx, cancel := context.WithTimeout(
+ user.InjectOrgID(ctx, clientRequest.tenant),
+ ts.cfg.ClientConfig.RemoteTimeout,
+ )
+
+ // First try to send the request to the correct pattern ingester
+ defer cancel()
+ _, err = client.(logproto.PatternClient).Push(ctx, req)
+ if err == nil {
+ // Success here means the stream will be processed for both metrics and patterns
+ ts.ingesterAppends.WithLabelValues(clientRequest.ingesterAddr, "success").Inc()
+ ts.ingesterMetricAppends.WithLabelValues("success").Inc()
+ return nil
+ }
+
+ // The pattern ingester appends failed, but we can retry the metric append
+ ts.ingesterAppends.WithLabelValues(clientRequest.ingesterAddr, "fail").Inc()
+ level.Error(ts.logger).Log("msg", "failed to send patterns to pattern ingester", "err", err)
+
+ if !ts.cfg.MetricAggregation.Enabled {
+ return err
+ }
+
+ // Pattern ingesters serve 2 functions, processing patterns and aggregating metrics.
+ // Only owned streams are processed for patterns, however any pattern ingester can
+ // aggregate metrics for any stream. Therefore, if we can't send the owned stream,
+ // try to forward the request to any pattern ingester so we at least capture the metrics.
+ replicationSet, err := ts.ringClient.Ring().
+ GetReplicationSetForOperation(ring.WriteNoExtend)
+ if err != nil || len(replicationSet.Instances) == 0 {
+ ts.ingesterMetricAppends.WithLabelValues("fail").Inc()
+ level.Error(ts.logger).Log(
+ "msg", "failed to send metrics to fallback pattern ingesters",
+ "num_instances", len(replicationSet.Instances),
+ "err", err,
+ )
+ return errors.New("no instances found for fallback")
+ }
+
+ fallbackAddrs := make([]string, 0, len(replicationSet.Instances))
+ for _, instance := range replicationSet.Instances {
+ addr := instance.Addr
+ fallbackAddrs = append(fallbackAddrs, addr)
+
+ var client ring_client.PoolClient
+ client, err = ts.ringClient.GetClientFor(addr)
+ if err == nil {
+ ctx, cancel := context.WithTimeout(
+ user.InjectOrgID(ctx, clientRequest.tenant),
+ ts.cfg.ClientConfig.RemoteTimeout,
+ )
+ defer cancel()
+
+ _, err = client.(logproto.PatternClient).Push(ctx, req)
+ if err != nil {
+ continue
+ }
+
+ ts.ingesterMetricAppends.WithLabelValues("success").Inc()
+ // bail after any success to avoid sending the request more than once
+ return nil
+ }
+ }
+
+ ts.ingesterMetricAppends.WithLabelValues("fail").Inc()
+ level.Error(ts.logger).Log(
+ "msg", "failed to send metrics to fallback pattern ingesters. exhausted all fallback instances",
+ "addresses", strings.Join(fallbackAddrs, ", "),
+ "err", err,
+ )
+ return err
+ })
+ }
+}
+
+// Duplicate implements distributor.Tee, which is used to tee distributor requests to pattern ingesters.
+func (ts *TeeService) Duplicate(tenant string, streams []distributor.KeyedStream) {
+ if !ts.cfg.Enabled {
+ return
+ }
+
+ if len(streams) == 0 {
+ return
+ }
+
+ for _, stream := range streams {
+ lbls, err := syntax.ParseLabels(stream.Stream.Labels)
+ if err != nil || lbls.Has(push.AggregatedMetricLabel) {
+ level.Error(ts.logger).
+ Log("msg", "error parsing stream labels", "labels", stream.Stream.Labels, "err", err)
+
+ continue
+ }
+
+ ts.buffersMutex.Lock()
+ ts.buffers[tenant] = append(ts.buffers[tenant], stream)
+ ts.buffersMutex.Unlock()
+ }
+}
diff --git a/pkg/pattern/tee_service_test.go b/pkg/pattern/tee_service_test.go
new file mode 100644
index 0000000000000..1be8114df0220
--- /dev/null
+++ b/pkg/pattern/tee_service_test.go
@@ -0,0 +1,183 @@
+package pattern
+
+import (
+ "context"
+ "flag"
+ "slices"
+ "testing"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/user"
+ "github.com/stretchr/testify/mock"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/distributor"
+ "github.com/grafana/loki/v3/pkg/logproto"
+
+ "github.com/grafana/loki/pkg/push"
+)
+
+func getTestTee(t *testing.T) (*TeeService, *mockPoolClient) {
+ cfg := Config{}
+ cfg.RegisterFlags(flag.NewFlagSet("test", flag.PanicOnError)) // set up defaults
+
+ cfg.Enabled = true
+
+ response := &logproto.PushResponse{}
+ client := &mockPoolClient{}
+ client.On("Push", mock.Anything, mock.Anything).Return(response, nil)
+
+ replicationSet := ring.ReplicationSet{
+ Instances: []ring.InstanceDesc{
+ {Id: "localhost", Addr: "ingester0"},
+ {Id: "remotehost", Addr: "ingester1"},
+ {Id: "otherhost", Addr: "ingester2"},
+ },
+ }
+
+ fakeRing := &fakeRing{}
+ fakeRing.On("Get", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(replicationSet, nil)
+
+ ringClient := &fakeRingClient{
+ poolClient: client,
+ ring: fakeRing,
+ }
+
+ logsTee, err := NewTeeService(
+ cfg,
+ ringClient,
+ "test",
+ nil,
+ log.NewNopLogger(),
+ )
+ require.NoError(t, err)
+
+ return logsTee, client
+}
+
+func TestPatternTeeBasic(t *testing.T) {
+ tee, client := getTestTee(t)
+
+ ctx, cancel := context.WithCancel(context.Background())
+
+ require.NoError(t, tee.Start(ctx))
+
+ now := time.Now()
+ tee.Duplicate("test-tenant", []distributor.KeyedStream{
+ {HashKey: 123, Stream: push.Stream{
+ Labels: `{foo="bar"}`,
+ Entries: []push.Entry{
+ {Timestamp: now, Line: "foo1"},
+ {Timestamp: now.Add(1 * time.Second), Line: "bar1"},
+ {Timestamp: now.Add(2 * time.Second), Line: "baz1"},
+ },
+ }},
+ })
+
+ tee.Duplicate("test-tenant", []distributor.KeyedStream{
+ {HashKey: 123, Stream: push.Stream{
+ Labels: `{foo="bar"}`,
+ Entries: []push.Entry{
+ {Timestamp: now.Add(3 * time.Second), Line: "foo2"},
+ {Timestamp: now.Add(4 * time.Second), Line: "bar2"},
+ {Timestamp: now.Add(5 * time.Second), Line: "baz2"},
+ },
+ }},
+ })
+
+ tee.Duplicate("test-tenant", []distributor.KeyedStream{
+ {HashKey: 456, Stream: push.Stream{
+ Labels: `{ping="pong"}`,
+ Entries: []push.Entry{
+ {Timestamp: now.Add(1 * time.Second), Line: "ping"},
+ {Timestamp: now.Add(2 * time.Second), Line: "pong"},
+ },
+ }},
+ })
+
+ cancel()
+
+ // This should ensure that everything has been flushed and we have no data races below.
+ tee.WaitUntilDone()
+
+ req := client.req
+ reqCtx := client.ctx
+
+ require.NotNil(t, req)
+ tenant, err := user.ExtractOrgID(reqCtx)
+ require.NoError(t, err)
+
+ require.Equal(t, "test-tenant", tenant)
+
+ require.Len(t, req.Streams, 3)
+
+ fooBarEntries := []push.Entry{}
+ pingPongEntries := []push.Entry{}
+
+ for _, stream := range req.Streams {
+ if stream.Labels == `{foo="bar"}` {
+ fooBarEntries = append(fooBarEntries, stream.Entries...)
+ }
+
+ if stream.Labels == `{ping="pong"}` {
+ pingPongEntries = append(pingPongEntries, stream.Entries...)
+ }
+ }
+
+ slices.SortFunc(fooBarEntries, func(i, j push.Entry) int {
+ return i.Timestamp.Compare(j.Timestamp)
+ })
+
+ slices.SortFunc(pingPongEntries, func(i, j push.Entry) int {
+ return i.Timestamp.Compare(j.Timestamp)
+ })
+
+ require.Equal(t, []push.Entry{
+ {Timestamp: now, Line: "foo1"},
+ {Timestamp: now.Add(1 * time.Second), Line: "bar1"},
+ {Timestamp: now.Add(2 * time.Second), Line: "baz1"},
+ {Timestamp: now.Add(3 * time.Second), Line: "foo2"},
+ {Timestamp: now.Add(4 * time.Second), Line: "bar2"},
+ {Timestamp: now.Add(5 * time.Second), Line: "baz2"},
+ }, fooBarEntries)
+
+ require.Equal(t, []push.Entry{
+ {Timestamp: now.Add(1 * time.Second), Line: "ping"},
+ {Timestamp: now.Add(2 * time.Second), Line: "pong"},
+ }, pingPongEntries)
+}
+
+func TestPatternTeeEmptyStream(t *testing.T) {
+ tee, client := getTestTee(t)
+
+ ctx, cancel := context.WithCancel(context.Background())
+
+ require.NoError(t, tee.Start(ctx))
+
+ tee.Duplicate("test-tenant", []distributor.KeyedStream{
+ {HashKey: 123, Stream: push.Stream{
+ Labels: `{foo="bar"}`,
+ Entries: []push.Entry{},
+ }},
+ })
+
+ tee.Duplicate("test-tenant", []distributor.KeyedStream{
+ {HashKey: 456, Stream: push.Stream{
+ Labels: `{ping="pong"}`,
+ Entries: []push.Entry{},
+ }},
+ })
+
+ cancel()
+
+ // This should ensure that everything has been flushed and we have no data races below.
+ tee.WaitUntilDone()
+
+ req := client.req
+ reqCtx := client.ctx
+
+ require.Nil(t, req)
+ require.Nil(t, reqCtx)
+}
diff --git a/pkg/util/constants/levels.go b/pkg/util/constants/levels.go
new file mode 100644
index 0000000000000..df735f84db40d
--- /dev/null
+++ b/pkg/util/constants/levels.go
@@ -0,0 +1,24 @@
+package constants
+
+const (
+ LevelLabel = "detected_level"
+ LogLevelUnknown = "unknown"
+ LogLevelDebug = "debug"
+ LogLevelInfo = "info"
+ LogLevelWarn = "warn"
+ LogLevelError = "error"
+ LogLevelFatal = "fatal"
+ LogLevelCritical = "critical"
+ LogLevelTrace = "trace"
+)
+
+var LogLevels = []string{
+ LogLevelUnknown,
+ LogLevelDebug,
+ LogLevelInfo,
+ LogLevelWarn,
+ LogLevelError,
+ LogLevelFatal,
+ LogLevelCritical,
+ LogLevelTrace,
+}
|
feat
|
aggregate byte and count metrics (#13731)
|
5b432e3d99f6facbe7d9b008bd9eb119b5824a65
|
2022-07-15 14:04:01
|
Karen Miller
|
docs: Fix bad links in the API section (#6688)
| false
|
diff --git a/docs/sources/api/_index.md b/docs/sources/api/_index.md
index 1259b0cbbe453..f7ef842753dc1 100644
--- a/docs/sources/api/_index.md
+++ b/docs/sources/api/_index.md
@@ -1,5 +1,7 @@
---
title: HTTP API
+menuTitle: "HTTP API"
+description: "Loki exposes REST endpoints for operating on a Loki cluster. This section details the endpoints."
weight: 900
---
@@ -1024,7 +1026,8 @@ POST /loki/api/v1/delete
PUT /loki/api/v1/delete
```
-Create a new delete request for the authenticated tenant. More details can be found in the [logs deletion documentation](../operations/storage/logs-deletion.md#request-log-entry-deletion).
+Create a new delete request for the authenticated tenant.
+The [log entry deletion](../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when the BoltDB Shipper is configured for the index store.
@@ -1062,7 +1065,8 @@ curl -u "Tenant1:$API_TOKEN" \
GET /loki/api/v1/delete
```
-List the existing delete requests for the authenticated tenant. More details can be found in the [logs deletion documentation](../operations/storage/logs-deletion.md#list-delete-requests).
+List the existing delete requests for the authenticated tenant.
+The [log entry deletion](../operations/storage/logs-deletion/) documentation has configuration details.
Log entry deletion is supported _only_ when the BoltDB Shipper is configured for the index store.
@@ -1098,7 +1102,8 @@ curl -u "Tenant1:$API_TOKEN" \
DELETE /loki/api/v1/delete
```
-Remove a delete request for the authenticated tenant. More details can be found in the [logs deletion documentation](../operations/storage/logs-deletion.md#request-cancellation-of-a-delete-request).
+Remove a delete request for the authenticated tenant.
+The [log entry deletion](../operations/storage/logs-deletion/) documentation has configuration details.
Loki allows cancellation of delete requests until the requests are picked up for processing. It is controlled by the `delete_request_cancel_period` YAML configuration or the equivalent command line option when invoking Loki.
diff --git a/docs/sources/operations/storage/logs-deletion.md b/docs/sources/operations/storage/logs-deletion.md
index 967863c6b46ed..a1399a0bb3d09 100644
--- a/docs/sources/operations/storage/logs-deletion.md
+++ b/docs/sources/operations/storage/logs-deletion.md
@@ -1,8 +1,10 @@
---
-title: Log Entry Deletion
+title: Log entry deletion
+menuTitle: "Log entry deletion"
+description: "Log entries from a specified stream may be deleted."
weight: 60
---
-# Log Entry Deletion
+# Log entry deletion
Grafana Loki supports the deletion of log entries from a specified stream.
Log entries that fall within a specified time window and match an optional line filter are those that will be deleted.
|
docs
|
Fix bad links in the API section (#6688)
|
5f50003214969573c9755f81e79cfbc39c7be45d
|
2024-11-19 05:42:45
|
Trevor Whitney
|
ci: fix helm lint (#15001)
| false
|
diff --git a/.github/workflows/helm-ci.yml b/.github/workflows/helm-ci.yml
index 8ecb7ddab3cea..91bdd5d0235ad 100644
--- a/.github/workflows/helm-ci.yml
+++ b/.github/workflows/helm-ci.yml
@@ -11,35 +11,9 @@ env:
jobs:
call-lint:
- name: Lint Helm Chart
- runs-on: ubuntu-latest
- steps:
- - name: Checkout Code
- uses: actions/checkout@v4
-
- - name: Check Docs
- run: |
- docker run --rm --volume "$(pwd):/helm-docs" -u "$(id -u)" jnorwood/helm-docs:v1.11.0
- if ! git diff --exit-code; then
- echo "Documentation not up to date. Please run helm-docs and commit changes!" >&2
- exit 1
- fi
-
- - name: Lint Yaml
- run: make helm-lint
-
- - name: Lint Code Base
- uses: docker://github/super-linter:v3.17.2
- env:
- FILTER_REGEX_EXCLUDE: .*(CHANGELOG\.md|README\.md|Chart\.yaml|NOTES.txt).*
- FILTER_REGEX_INCLUDE: .*production/helm/.*
- VALIDATE_ALL_CODEBASE: false
- VALIDATE_KUBERNETES_KUBEVAL: false
- VALIDATE_YAML: false
- VALIDATE_GO: false
- DEFAULT_BRANCH: main
- GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
- LOG_LEVEL: DEBUG
+ uses: grafana/helm-charts/.github/workflows/linter.yml@main
+ with:
+ filter_regex_include: .*production/helm/loki/.*
call-test:
name: Test Helm Chart
runs-on: ubuntu-latest
diff --git a/.github/workflows/helm-release.yaml b/.github/workflows/helm-release.yaml
index c303939fdf39c..1a22318fcf752 100644
--- a/.github/workflows/helm-release.yaml
+++ b/.github/workflows/helm-release.yaml
@@ -11,7 +11,7 @@ on:
jobs:
call-update-helm-repo:
- uses: grafana/helm-charts/.github/workflows/update-helm-repo.yaml@main
+ uses: grafana/helm-charts/.github/workflows/update-helm-repo.yaml@70dbbb722dee3f2ee126e12684cc0e92a20972ed
with:
charts_dir: production/helm
cr_configfile: production/helm/cr.yaml
|
ci
|
fix helm lint (#15001)
|
6f1d1d73b80fbeb85a0cca21d8c748a142bd1f82
|
2023-04-25 15:21:08
|
Rens Groothuijsen
|
operator: Replace deprecated MinIO environment variables (#9245)
| false
|
diff --git a/operator/config/overlays/development/minio/deployment.yaml b/operator/config/overlays/development/minio/deployment.yaml
index 6b3a16d3eaa29..050681b5d3561 100644
--- a/operator/config/overlays/development/minio/deployment.yaml
+++ b/operator/config/overlays/development/minio/deployment.yaml
@@ -21,9 +21,9 @@ spec:
mkdir -p /storage/loki && \
minio server /storage
env:
- - name: MINIO_ACCESS_KEY
+ - name: MINIO_ROOT_USER
value: minio
- - name: MINIO_SECRET_KEY
+ - name: MINIO_ROOT_PASSWORD
value: minio123
image: minio/minio
name: minio
|
operator
|
Replace deprecated MinIO environment variables (#9245)
|
658fb24311d57dfcdca783f4cf2b64a7a19fa97f
|
2024-12-04 03:31:37
|
Alex Richard Westhaver-Ford
|
docs: fixed typos/grammatical mistakes in metrics.md (#15166)
| false
|
diff --git a/docs/sources/send-data/promtail/stages/metrics.md b/docs/sources/send-data/promtail/stages/metrics.md
index b034bd6d6d6a1..ea1c7b78150c5 100644
--- a/docs/sources/send-data/promtail/stages/metrics.md
+++ b/docs/sources/send-data/promtail/stages/metrics.md
@@ -51,8 +51,8 @@ type: Counter
[max_idle_duration: <string>]
config:
- # If present and true all log lines will be counted without
- # attempting to match the source to the extract map.
+ # If present and true all log lines will be counted without attempting
+ # to match the `value` to the field specified by `source` in the extracted map.
# It is an error to specify `match_all: true` and also specify a `value`
[match_all: <bool>]
@@ -231,7 +231,7 @@ This pipeline first tries to find text in the format `order_status=<value>` in
the log line, pulling out the `<value>` into the extracted map with the key
`order_status`.
-The metric stages creates `successful_orders_total` and `failed_orders_total`
+The metrics stage creates `successful_orders_total` and `failed_orders_total`
metrics that only increment when the value of `order_status` in the extracted
map is `success` or `fail` respectively.
@@ -265,7 +265,7 @@ number in the `retries` field from the extracted map.
- metrics:
http_response_time_seconds:
type: Histogram
- description: "length of each log line"
+ description: "distribution of log response time"
source: response_time
config:
buckets: [0.001,0.0025,0.005,0.010,0.025,0.050]
|
docs
|
fixed typos/grammatical mistakes in metrics.md (#15166)
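The histogram stage edited above declares cumulative buckets (`buckets: [0.001,0.0025,...]`). Each observed value increments every bucket whose upper bound is at or above it; a standalone sketch of that Prometheus-style cumulative bucketing (an illustration of the semantics, not Promtail's actual implementation):

```go
package main

import "fmt"

// bucketCounts increments every cumulative bucket whose upper bound is
// greater than or equal to the observed value, matching Prometheus
// histogram semantics. Values above the largest bound only appear in
// the implicit +Inf bucket, which is omitted here.
func bucketCounts(bounds []float64, observations []float64) []int {
	counts := make([]int, len(bounds))
	for _, v := range observations {
		for i, ub := range bounds {
			if v <= ub {
				counts[i]++
			}
		}
	}
	return counts
}

func main() {
	bounds := []float64{0.001, 0.0025, 0.005, 0.010, 0.025, 0.050}
	fmt.Println(bucketCounts(bounds, []float64{0.002, 0.004, 0.030}))
}
```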
|
77ea11724b438c15acd3890626430729832817f2
|
2021-10-25 22:13:25
|
Dylan Guedes
|
loki: Enable FIFO cache by default (#4519)
| false
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index ec6db7cc15d5b..bd550d543d7f8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,7 @@
* [4415](https://github.com/grafana/loki/pull/4415) **DylanGuedes**: Change default limits to common values
* [4473](https://github.com/grafana/loki/pull/4473) **trevorwhitney**: Config: add object storage configuration to common config
* [4425](https://github.com/grafana/loki/pull/4425) **trevorwhitney** and **slim-bean**: Add a ring for the query scheduler
+* [4519](https://github.com/grafana/loki/pull/4519) **DylanGuedes** and **replay**: Loki: Enable FIFO cache by default
# 2.3.0 (2021/08/06)
diff --git a/docs/sources/configuration/_index.md b/docs/sources/configuration/_index.md
index 705fe71cef33f..b9f3e2c582eee 100644
--- a/docs/sources/configuration/_index.md
+++ b/docs/sources/configuration/_index.md
@@ -1794,7 +1794,7 @@ fifocache:
# Maximum memory size of the cache in bytes. A unit suffix (KB, MB, GB) may be
# applied.
# CLI flag: -<prefix>.fifocache.max-size-bytes
- [max_size_bytes: <string> | default = ""]
+ [max_size_bytes: <string> | default = "1GB"]
# Maximum number of entries in the cache.
# CLI flag: -<prefix>.fifocache.max-size-items
@@ -1802,7 +1802,7 @@ fifocache:
# The expiry duration for the cache.
# CLI flag: -<prefix>.fifocache.duration
- [validity: <duration> | default = 0s]
+ [validity: <duration> | default = 1h]
```
## schema_config
diff --git a/pkg/loki/config_wrapper.go b/pkg/loki/config_wrapper.go
index 9f70b827b8cde..e3f90eb13c0a2 100644
--- a/pkg/loki/config_wrapper.go
+++ b/pkg/loki/config_wrapper.go
@@ -5,13 +5,15 @@ import (
"fmt"
"reflect"
+ cortexcache "github.com/cortexproject/cortex/pkg/chunk/cache"
"github.com/grafana/dskit/flagext"
"github.com/pkg/errors"
+ "github.com/grafana/loki/pkg/storage/chunk/cache"
+ "github.com/grafana/loki/pkg/util/cfg"
+
loki_storage "github.com/grafana/loki/pkg/storage"
chunk_storage "github.com/grafana/loki/pkg/storage/chunk/storage"
-
- "github.com/grafana/loki/pkg/util/cfg"
)
// ConfigWrapper is a struct containing the Loki config along with other values that can be set on the command line
@@ -84,8 +86,8 @@ func (c *ConfigWrapper) ApplyDynamicConfig() cfg.Source {
}
applyMemberlistConfig(r)
- err := applyStorageConfig(r, &defaults)
- if err != nil {
+
+ if err := applyStorageConfig(r, &defaults); err != nil {
return err
}
@@ -93,6 +95,8 @@ func (c *ConfigWrapper) ApplyDynamicConfig() cfg.Source {
betterBoltdbShipperDefaults(r, &defaults)
}
+ applyFIFOCacheConfig(r)
+
return nil
}
}
@@ -229,3 +233,36 @@ func betterBoltdbShipperDefaults(cfg, defaults *ConfigWrapper) {
cfg.CompactorConfig.SharedStoreType = currentSchema.ObjectType
}
}
+
+// applyFIFOCacheConfig turns on FIFO cache for the chunk store and for the query range results,
+// but only if no other cache storage is configured (redis or memcache).
+//
+// This behavior is only applied for the chunk store cache and for the query range results cache
+// (i.e: not applicable for the index queries cache or for the write dedupe cache).
+func applyFIFOCacheConfig(r *ConfigWrapper) {
+ chunkCacheConfig := r.ChunkStoreConfig.ChunkCacheConfig
+ if !cache.IsRedisSet(chunkCacheConfig) && !cache.IsMemcacheSet(chunkCacheConfig) {
+ r.ChunkStoreConfig.ChunkCacheConfig.EnableFifoCache = true
+ }
+
+ resultsCacheConfig := r.QueryRange.ResultsCacheConfig.CacheConfig
+ if !isRedisSet(resultsCacheConfig) && !isMemcacheSet(resultsCacheConfig) {
+ r.QueryRange.ResultsCacheConfig.CacheConfig.EnableFifoCache = true
+ }
+}
+
+// isRedisSet is a duplicate of cache.IsRedisSet.
+//
+// We had to duplicate this implementation because we have code relying on
+// loki/pkg/storage/chunk/cache and cortex/pkg/chunk/cache at the same time.
+func isRedisSet(cfg cortexcache.Config) bool {
+ return cfg.Redis.Endpoint != ""
+}
+
+// isMemcacheSet is a duplicate of cache.IsMemcacheSet.
+//
+// We had to duplicate this implementation because we have code relying on
+// loki/pkg/storage/chunk/cache and cortex/pkg/chunk/cache at the same time.
+func isMemcacheSet(cfg cortexcache.Config) bool {
+ return cfg.MemcacheClient.Addresses != "" || cfg.MemcacheClient.Host != ""
+}
diff --git a/pkg/loki/config_wrapper_test.go b/pkg/loki/config_wrapper_test.go
index 90a28a45036c3..bffdc857e4ae5 100644
--- a/pkg/loki/config_wrapper_test.go
+++ b/pkg/loki/config_wrapper_test.go
@@ -21,41 +21,41 @@ import (
"github.com/grafana/loki/pkg/util/cfg"
)
-func Test_ApplyDynamicConfig(t *testing.T) {
- testContextExposingErrs := func(configFileString string, args []string) (error, ConfigWrapper, ConfigWrapper) {
- config := ConfigWrapper{}
- fs := flag.NewFlagSet(t.Name(), flag.PanicOnError)
-
- file, err := ioutil.TempFile("", "config.yaml")
- defer func() {
- os.Remove(file.Name())
- }()
-
- require.NoError(t, err)
- _, err = file.WriteString(configFileString)
- require.NoError(t, err)
+func configWrapperFromYAML(t *testing.T, configFileString string, args []string) (ConfigWrapper, ConfigWrapper, error) {
+ config := ConfigWrapper{}
+ fs := flag.NewFlagSet(t.Name(), flag.PanicOnError)
+
+ file, err := ioutil.TempFile("", "config.yaml")
+ defer func() {
+ os.Remove(file.Name())
+ }()
+
+ require.NoError(t, err)
+ _, err = file.WriteString(configFileString)
+ require.NoError(t, err)
+
+ configFileArgs := []string{"-config.file", file.Name()}
+ if args == nil {
+ args = configFileArgs
+ } else {
+ args = append(args, configFileArgs...)
+ }
+ err = cfg.DynamicUnmarshal(&config, args, fs)
+ if err != nil {
+ return ConfigWrapper{}, ConfigWrapper{}, err
+ }
- configFileArgs := []string{"-config.file", file.Name()}
- if args == nil {
- args = configFileArgs
- } else {
- args = append(args, configFileArgs...)
- }
- err = cfg.DynamicUnmarshal(&config, args, fs)
- if err != nil {
- return err, ConfigWrapper{}, ConfigWrapper{}
- }
-
- defaults := ConfigWrapper{}
- freshFlags := flag.NewFlagSet(t.Name(), flag.PanicOnError)
- err = cfg.DefaultUnmarshal(&defaults, args, freshFlags)
- require.NoError(t, err)
+ defaults := ConfigWrapper{}
+ freshFlags := flag.NewFlagSet(t.Name(), flag.PanicOnError)
+ err = cfg.DefaultUnmarshal(&defaults, args, freshFlags)
+ require.NoError(t, err)
- return nil, config, defaults
- }
+ return config, defaults, nil
+}
+func Test_ApplyDynamicConfig(t *testing.T) {
testContext := func(configFileString string, args []string) (ConfigWrapper, ConfigWrapper) {
- err, config, defaults := testContextExposingErrs(configFileString, args)
+ config, defaults, err := configWrapperFromYAML(t, configFileString, args)
require.NoError(t, err)
return config, defaults
@@ -208,7 +208,7 @@ memberlist:
chunk_buffer_size: 27
request_timeout: 5m`
- err, _, _ := testContextExposingErrs(multipleConfig, nil)
+ _, _, err := configWrapperFromYAML(t, multipleConfig, nil)
assert.ErrorIs(t, err, ErrTooManyStorageConfigs)
})
@@ -656,7 +656,136 @@ schema_config:
})
}
-// Can't use a totally empty yaml file or it causes weird behavior in the unmarhsalling
+func TestDefaultFIFOCacheBehavior(t *testing.T) {
+ t.Run("for the chunk cache config", func(t *testing.T) {
+ t.Run("no FIFO cache enabled by default if Redis is set", func(t *testing.T) {
+ configFileString := `---
+chunk_store_config:
+ chunk_cache_config:
+ redis:
+ endpoint: endpoint.redis.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "endpoint.redis.org", config.ChunkStoreConfig.ChunkCacheConfig.Redis.Endpoint)
+ assert.False(t, config.ChunkStoreConfig.ChunkCacheConfig.EnableFifoCache)
+ })
+
+ t.Run("no FIFO cache enabled by default if Memcache is set", func(t *testing.T) {
+ configFileString := `---
+chunk_store_config:
+ chunk_cache_config:
+ memcached_client:
+ host: host.memcached.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "host.memcached.org", config.ChunkStoreConfig.ChunkCacheConfig.MemcacheClient.Host)
+ assert.False(t, config.ChunkStoreConfig.ChunkCacheConfig.EnableFifoCache)
+ })
+
+ t.Run("FIFO cache is enabled by default if no other cache is set", func(t *testing.T) {
+ config, _, _ := configWrapperFromYAML(t, minimalConfig, nil)
+ assert.True(t, config.ChunkStoreConfig.ChunkCacheConfig.EnableFifoCache)
+ })
+ })
+
+ t.Run("for the write dedupe cache config", func(t *testing.T) {
+ t.Run("no FIFO cache enabled by default if Redis is set", func(t *testing.T) {
+ configFileString := `---
+chunk_store_config:
+ write_dedupe_cache_config:
+ redis:
+ endpoint: endpoint.redis.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "endpoint.redis.org", config.ChunkStoreConfig.WriteDedupeCacheConfig.Redis.Endpoint)
+ assert.False(t, config.ChunkStoreConfig.WriteDedupeCacheConfig.EnableFifoCache)
+ })
+
+ t.Run("no FIFO cache enabled by default if Memcache is set", func(t *testing.T) {
+ configFileString := `---
+chunk_store_config:
+ write_dedupe_cache_config:
+ memcached_client:
+ host: host.memcached.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "host.memcached.org", config.ChunkStoreConfig.WriteDedupeCacheConfig.MemcacheClient.Host)
+ assert.False(t, config.ChunkStoreConfig.WriteDedupeCacheConfig.EnableFifoCache)
+ })
+
+ t.Run("no FIFO cache is enabled by default even if no other cache is set", func(t *testing.T) {
+ config, _, _ := configWrapperFromYAML(t, minimalConfig, nil)
+ assert.False(t, config.ChunkStoreConfig.WriteDedupeCacheConfig.EnableFifoCache)
+ })
+ })
+
+ t.Run("for the index queries cache config", func(t *testing.T) {
+ t.Run("no FIFO cache enabled by default if Redis is set", func(t *testing.T) {
+ configFileString := `---
+storage_config:
+ index_queries_cache_config:
+ redis:
+ endpoint: endpoint.redis.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "endpoint.redis.org", config.StorageConfig.IndexQueriesCacheConfig.Redis.Endpoint)
+ assert.False(t, config.StorageConfig.IndexQueriesCacheConfig.EnableFifoCache)
+ })
+
+ t.Run("no FIFO cache enabled by default if Memcache is set", func(t *testing.T) {
+ configFileString := `---
+storage_config:
+ index_queries_cache_config:
+ memcached_client:
+ host: host.memcached.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+
+ assert.EqualValues(t, "host.memcached.org", config.StorageConfig.IndexQueriesCacheConfig.MemcacheClient.Host)
+ assert.False(t, config.StorageConfig.IndexQueriesCacheConfig.EnableFifoCache)
+ })
+
+ t.Run("no FIFO cache is enabled by default even if no other cache is set", func(t *testing.T) {
+ config, _, _ := configWrapperFromYAML(t, minimalConfig, nil)
+ assert.False(t, config.StorageConfig.IndexQueriesCacheConfig.EnableFifoCache)
+ })
+ })
+
+ t.Run("for the query range results cache config", func(t *testing.T) {
+ t.Run("no FIFO cache enabled by default if Redis is set", func(t *testing.T) {
+ configFileString := `---
+query_range:
+ results_cache:
+ cache:
+ redis:
+ endpoint: endpoint.redis.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, config.QueryRange.CacheConfig.Redis.Endpoint, "endpoint.redis.org")
+ assert.False(t, config.QueryRange.CacheConfig.EnableFifoCache)
+ })
+
+ t.Run("no FIFO cache enabled by default if Memcache is set", func(t *testing.T) {
+ configFileString := `---
+query_range:
+ results_cache:
+ cache:
+ memcached_client:
+ host: memcached.host.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "memcached.host.org", config.QueryRange.CacheConfig.MemcacheClient.Host)
+ assert.False(t, config.QueryRange.CacheConfig.EnableFifoCache)
+ })
+
+ t.Run("FIFO cache is enabled by default if no other cache is set", func(t *testing.T) {
+ config, _, _ := configWrapperFromYAML(t, minimalConfig, nil)
+ assert.True(t, config.QueryRange.CacheConfig.EnableFifoCache)
+ })
+ })
+}
+
+// Can't use a totally empty yaml file or it causes weird behavior in the unmarhsalling.
const minimalConfig = `---
schema_config:
configs:
diff --git a/pkg/storage/chunk/cache/cache.go b/pkg/storage/chunk/cache/cache.go
index 1f9eff8ecd9a1..10418524bbaf7 100644
--- a/pkg/storage/chunk/cache/cache.go
+++ b/pkg/storage/chunk/cache/cache.go
@@ -49,9 +49,8 @@ func (cfg *Config) RegisterFlagsWithPrefix(prefix string, description string, f
cfg.MemcacheClient.RegisterFlagsWithPrefix(prefix, description, f)
cfg.Redis.RegisterFlagsWithPrefix(prefix, description, f)
cfg.Fifocache.RegisterFlagsWithPrefix(prefix, description, f)
-
- f.BoolVar(&cfg.EnableFifoCache, prefix+"cache.enable-fifocache", false, description+"Enable in-memory cache.")
- f.DurationVar(&cfg.DefaultValidity, prefix+"default-validity", 0, description+"The default validity of entries for caches unless overridden.")
+ f.DurationVar(&cfg.DefaultValidity, prefix+"default-validity", time.Hour, description+"The default validity of entries for caches unless overridden.")
+ f.BoolVar(&cfg.EnableFifoCache, prefix+"cache.enable-fifocache", false, description+"Enable in-memory cache (auto-enabled for the chunks & query results cache if no other cache is configured).")
cfg.Prefix = prefix
}
@@ -60,6 +59,21 @@ func (cfg *Config) Validate() error {
return cfg.Fifocache.Validate()
}
+// IsMemcacheSet returns whether a non empty Memcache config is set or not, based on the configured
+// host or addresses.
+//
+// Internally, this function is used to set Memcache as the cache storage to be used.
+func IsMemcacheSet(cfg Config) bool {
+ return cfg.MemcacheClient.Host != "" || cfg.MemcacheClient.Addresses != ""
+}
+
+// IsRedisSet returns whether a non empty Redis config is set or not, based on the configured endpoint.
+//
+// Internally, this function is used to set Redis as the cache storage to be used.
+func IsRedisSet(cfg Config) bool {
+ return cfg.Redis.Endpoint != ""
+}
+
// New creates a new Cache using Config.
func New(cfg Config, reg prometheus.Registerer, logger log.Logger) (Cache, error) {
if cfg.Cache != nil {
@@ -78,11 +92,11 @@ func New(cfg Config, reg prometheus.Registerer, logger log.Logger) (Cache, error
}
}
- if (cfg.MemcacheClient.Host != "" || cfg.MemcacheClient.Addresses != "") && cfg.Redis.Endpoint != "" {
+ if IsMemcacheSet(cfg) && IsRedisSet(cfg) {
return nil, errors.New("use of multiple cache storage systems is not supported")
}
- if cfg.MemcacheClient.Host != "" || cfg.MemcacheClient.Addresses != "" {
+ if IsMemcacheSet(cfg) {
if cfg.Memcache.Expiration == 0 && cfg.DefaultValidity != 0 {
cfg.Memcache.Expiration = cfg.DefaultValidity
}
@@ -94,7 +108,7 @@ func New(cfg Config, reg prometheus.Registerer, logger log.Logger) (Cache, error
caches = append(caches, NewBackground(cacheName, cfg.Background, Instrument(cacheName, cache, reg), reg))
}
- if cfg.Redis.Endpoint != "" {
+ if IsRedisSet(cfg) {
if cfg.Redis.Expiration == 0 && cfg.DefaultValidity != 0 {
cfg.Redis.Expiration = cfg.DefaultValidity
}
diff --git a/pkg/storage/chunk/cache/fifo_cache.go b/pkg/storage/chunk/cache/fifo_cache.go
index cea0c6413c971..c4f969c8b3484 100644
--- a/pkg/storage/chunk/cache/fifo_cache.go
+++ b/pkg/storage/chunk/cache/fifo_cache.go
@@ -39,9 +39,9 @@ type FifoCacheConfig struct {
// RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet
func (cfg *FifoCacheConfig) RegisterFlagsWithPrefix(prefix, description string, f *flag.FlagSet) {
- f.StringVar(&cfg.MaxSizeBytes, prefix+"fifocache.max-size-bytes", "", description+"Maximum memory size of the cache in bytes. A unit suffix (KB, MB, GB) may be applied.")
+ f.StringVar(&cfg.MaxSizeBytes, prefix+"fifocache.max-size-bytes", "1GB", description+"Maximum memory size of the cache in bytes. A unit suffix (KB, MB, GB) may be applied.")
f.IntVar(&cfg.MaxSizeItems, prefix+"fifocache.max-size-items", 0, description+"Maximum number of entries in the cache.")
- f.DurationVar(&cfg.Validity, prefix+"fifocache.duration", 0, description+"The expiry duration for the cache.")
+ f.DurationVar(&cfg.Validity, prefix+"fifocache.duration", time.Hour, description+"The expiry duration for the cache.")
f.IntVar(&cfg.DeprecatedSize, prefix+"fifocache.size", 0, "Deprecated (use max-size-items or max-size-bytes instead): "+description+"The number of entries to cache. ")
}
diff --git a/pkg/storage/chunk/chunk_store.go b/pkg/storage/chunk/chunk_store.go
index 4dd83268ca71d..16debd630cbd4 100644
--- a/pkg/storage/chunk/chunk_store.go
+++ b/pkg/storage/chunk/chunk_store.go
@@ -72,7 +72,7 @@ type StoreConfig struct {
func (cfg *StoreConfig) RegisterFlags(f *flag.FlagSet) {
cfg.ChunkCacheConfig.RegisterFlagsWithPrefix("store.chunks-cache.", "Cache config for chunks. ", f)
f.BoolVar(&cfg.chunkCacheStubs, "store.chunks-cache.cache-stubs", false, "If true, don't write the full chunk to cache, just a stub entry.")
- cfg.WriteDedupeCacheConfig.RegisterFlagsWithPrefix("store.index-cache-write.", "Cache config for index entry writing. ", f)
+ cfg.WriteDedupeCacheConfig.RegisterFlagsWithPrefix("store.index-cache-write.", "Cache config for index entry writing.", f)
f.Var(&cfg.CacheLookupsOlderThan, "store.cache-lookups-older-than", "Cache index entries older than this period. 0 to disable.")
}
diff --git a/pkg/storage/chunk/chunk_store_test.go b/pkg/storage/chunk/chunk_store_test.go
index 960c7df546f7f..f16933dc3a2bc 100644
--- a/pkg/storage/chunk/chunk_store_test.go
+++ b/pkg/storage/chunk/chunk_store_test.go
@@ -890,9 +890,7 @@ func TestStore_DeleteChunk(t *testing.T) {
nonExistentChunk := dummyChunkForEncoding(model.Now(), metric3, encoding.Varbit, 200)
fooMetricNameMatcher, err := parser.ParseMetricSelector(`foo`)
- if err != nil {
- t.Fatal(err)
- }
+ require.NoError(t, err)
for _, tc := range []struct {
name string
diff --git a/pkg/storage/chunk/storage/factory.go b/pkg/storage/chunk/storage/factory.go
index 677927458b216..fcb65d0b61076 100644
--- a/pkg/storage/chunk/storage/factory.go
+++ b/pkg/storage/chunk/storage/factory.go
@@ -114,7 +114,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.GrpcConfig.RegisterFlags(f)
f.StringVar(&cfg.Engine, "store.engine", "chunks", "The storage engine to use: chunks or blocks.")
- cfg.IndexQueriesCacheConfig.RegisterFlagsWithPrefix("store.index-cache-read.", "Cache config for index entry reading. ", f)
+ cfg.IndexQueriesCacheConfig.RegisterFlagsWithPrefix("store.index-cache-read.", "Cache config for index entry reading.", f)
f.DurationVar(&cfg.IndexCacheValidity, "store.index-cache-validity", 5*time.Minute, "Cache validity for active index entries. Should be no higher than -ingester.max-chunk-idle.")
f.BoolVar(&cfg.DisableBroadIndexQueries, "store.disable-broad-index-queries", false, "Disable broad index queries which results in reduced cache usage and faster query performance at the expense of somewhat higher QPS on the index store.")
}
|
loki
|
Enable FIFO cache by default (#4519)
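The `applyFIFOCacheConfig` change in the diff above enables the in-memory FIFO cache only when neither Redis nor Memcached is configured for a given cache section. The decision reduces to a small predicate over the config; a standalone sketch (the struct here is a simplified stand-in for the relevant fields of Loki's `cache.Config`):

```go
package main

import "fmt"

// cacheConfig is a simplified stand-in for the fields of Loki's
// cache.Config that the default-FIFO decision looks at.
type cacheConfig struct {
	RedisEndpoint   string
	MemcachedHost   string
	MemcachedAddrs  string
	EnableFifoCache bool
}

// applyDefaultFifo mirrors the logic of applyFIFOCacheConfig: turn on
// the in-memory FIFO cache only if no external cache backend is set.
func applyDefaultFifo(c *cacheConfig) {
	redisSet := c.RedisEndpoint != ""
	memcacheSet := c.MemcachedHost != "" || c.MemcachedAddrs != ""
	if !redisSet && !memcacheSet {
		c.EnableFifoCache = true
	}
}

func main() {
	plain := cacheConfig{}
	applyDefaultFifo(&plain)
	redis := cacheConfig{RedisEndpoint: "endpoint.redis.org"}
	applyDefaultFifo(&redis)
	fmt.Println(plain.EnableFifoCache, redis.EnableFifoCache)
}
```

This also explains the test matrix in `config_wrapper_test.go` above: FIFO defaults on only for the chunk and query-range results caches, and only when no Redis or Memcached endpoint is present.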
|
ded8f589df127d008a9557f09e84cc565d128b40
|
2025-03-16 19:01:01
|
George Robinson
|
chore: move adapter to limits.go (#16771)
| false
|
diff --git a/pkg/limits/frontend/frontend.go b/pkg/limits/frontend/frontend.go
index a735025f79523..ec008e42b68c2 100644
--- a/pkg/limits/frontend/frontend.go
+++ b/pkg/limits/frontend/frontend.go
@@ -75,7 +75,7 @@ func New(cfg Config, ringName string, limitsRing ring.ReadRing, limits Limits, l
factory := limits_client.NewPoolFactory(cfg.ClientConfig)
pool := limits_client.NewPool(ringName, cfg.ClientConfig.PoolConfig, limitsRing, factory, logger)
- rateLimiter := limiter.NewRateLimiter(newIngestionRateStrategy(limits), cfg.RecheckPeriod)
+ rateLimiter := limiter.NewRateLimiter(newRateLimitsAdapter(limits), cfg.RecheckPeriod)
limitsSrv := NewRingIngestLimitsService(limitsRing, pool, limits, rateLimiter, logger, reg)
f := &Frontend{
diff --git a/pkg/limits/frontend/limits.go b/pkg/limits/frontend/limits.go
new file mode 100644
index 0000000000000..5197a928fe75f
--- /dev/null
+++ b/pkg/limits/frontend/limits.go
@@ -0,0 +1,28 @@
+package frontend
+
+// Limits contains all limits enforced by the limits frontend.
+type Limits interface {
+ IngestionRateBytes(userID string) float64
+ IngestionBurstSizeBytes(userID string) int
+ MaxGlobalStreamsPerUser(userID string) int
+}
+
+// rateLimitsAdapter implements the dskit.RateLimiterStrategy interface. We use
+// it to load per-tenant rate limits into dskit.RateLimiter.
+type rateLimitsAdapter struct {
+ limits Limits
+}
+
+func newRateLimitsAdapter(limits Limits) *rateLimitsAdapter {
+ return &rateLimitsAdapter{limits: limits}
+}
+
+// Limit implements dskit.RateLimiterStrategy.
+func (s *rateLimitsAdapter) Limit(tenantID string) float64 {
+ return s.limits.IngestionRateBytes(tenantID)
+}
+
+// Burst implements dskit.RateLimiterStrategy.
+func (s *rateLimitsAdapter) Burst(tenantID string) int {
+ return s.limits.IngestionBurstSizeBytes(tenantID)
+}
diff --git a/pkg/limits/frontend/service.go b/pkg/limits/frontend/service.go
index a1e3ab3f5a051..872b0bf465c2b 100644
--- a/pkg/limits/frontend/service.go
+++ b/pkg/limits/frontend/service.go
@@ -28,30 +28,6 @@ const (
RejectedStreamReasonRateLimited = "rate_limited"
)
-// Limits is the interface of the limits configuration
-// builder to be passed to the frontend service.
-type Limits interface {
- MaxGlobalStreamsPerUser(userID string) int
- IngestionRateBytes(userID string) float64
- IngestionBurstSizeBytes(userID string) int
-}
-
-type ingestionRateStrategy struct {
- limits Limits
-}
-
-func newIngestionRateStrategy(limits Limits) *ingestionRateStrategy {
- return &ingestionRateStrategy{limits: limits}
-}
-
-func (s *ingestionRateStrategy) Limit(tenantID string) float64 {
- return s.limits.IngestionRateBytes(tenantID)
-}
-
-func (s *ingestionRateStrategy) Burst(tenantID string) int {
- return s.limits.IngestionBurstSizeBytes(tenantID)
-}
-
// IngestLimitsService is responsible for receiving, processing and
// validating requests, forwarding them to individual limits backends,
// gathering and aggregating their responses (where required), and returning
diff --git a/pkg/limits/frontend/service_test.go b/pkg/limits/frontend/service_test.go
index 5713e68776110..b6205e4d7f85f 100644
--- a/pkg/limits/frontend/service_test.go
+++ b/pkg/limits/frontend/service_test.go
@@ -403,7 +403,7 @@ func TestRingIngestLimitsService_ExceedsLimits(t *testing.T) {
ingestionRate: tt.ingestionRate,
}
- rateLimiter := limiter.NewRateLimiter(newIngestionRateStrategy(mockLimits), 10*time.Second)
+ rateLimiter := limiter.NewRateLimiter(newRateLimitsAdapter(mockLimits), 10*time.Second)
service := NewRingIngestLimitsService(mockRing, mockPool, mockLimits, rateLimiter, log.NewNopLogger(), prometheus.NewRegistry())
|
chore
|
move adapter to limits.go (#16771)
|
ec1a057a323ed1bd8de448e714a672b64140b691
|
2024-04-26 15:06:43
|
Michel Hollands
|
feat: Add a version of the mixin dashboards for meta monitoring (#12700)
| false
|
diff --git a/production/loki-mixin-compiled-ssd/dashboards/loki-deletion.json b/production/loki-mixin-compiled-ssd/dashboards/loki-deletion.json
index e56de2786225a..cec3fe4351fd1 100644
--- a/production/loki-mixin-compiled-ssd/dashboards/loki-deletion.json
+++ b/production/loki-mixin-compiled-ssd/dashboards/loki-deletion.json
@@ -579,7 +579,7 @@
"span": 6,
"targets": [
{
- "expr": "sum(rate(loki_compactor_deleted_lines{cluster=~\"$cluster\", namespace=~\"$namespace\", container=\"loki\", pod=~\"(loki|enterprise-logs)-read.*\"}[$__rate_interval])) by (user)",
+ "expr": "sum(rate(loki_compactor_deleted_lines{cluster=~\"$cluster\", namespace=~\"$namespace\", container=\"loki\", pod=~\"(loki|enterprise-logs)-backend.*\"}[$__rate_interval])) by (user)",
"format": "time_series",
"legendFormat": "{{user}}",
"legendLink": null
@@ -606,7 +606,7 @@
"span": 6,
"targets": [
{
- "expr": "{cluster=~\"$cluster\", namespace=~\"$namespace\", container=\"loki\", pod=~\"(loki|enterprise-logs)-read.*\"} |~ \"Started processing delete request|delete request for user marked as processed\" | logfmt | line_format \"{{.ts}} user={{.user}} delete_request_id={{.delete_request_id}} msg={{.msg}}\" ",
+ "expr": "{cluster=~\"$cluster\", namespace=~\"$namespace\", container=\"loki\", pod=~\"(loki|enterprise-logs)-backend.*\"} |~ \"Started processing delete request|delete request for user marked as processed\" | logfmt | line_format \"{{.ts}} user={{.user}} delete_request_id={{.delete_request_id}} msg={{.msg}}\" ",
"refId": "A"
}
],
@@ -619,7 +619,7 @@
"span": 6,
"targets": [
{
- "expr": "{cluster=~\"$cluster\", namespace=~\"$namespace\", container=\"loki\", pod=~\"(loki|enterprise-logs)-read.*\"} |~ \"delete request for user added\" | logfmt | line_format \"{{.ts}} user={{.user}} query='{{.query}}'\"",
+ "expr": "{cluster=~\"$cluster\", namespace=~\"$namespace\", container=\"loki\", pod=~\"(loki|enterprise-logs)-backend.*\"} |~ \"delete request for user added\" | logfmt | line_format \"{{.ts}} user={{.user}} query='{{.query}}'\"",
"refId": "A"
}
],
diff --git a/production/loki-mixin-compiled-ssd/dashboards/loki-retention.json b/production/loki-mixin-compiled-ssd/dashboards/loki-retention.json
index 1e3edc736160a..94e09efaf85eb 100644
--- a/production/loki-mixin-compiled-ssd/dashboards/loki-retention.json
+++ b/production/loki-mixin-compiled-ssd/dashboards/loki-retention.json
@@ -266,7 +266,7 @@
"span": 4,
"targets": [
{
- "expr": "sum by(pod) (go_memstats_heap_inuse_bytes{cluster=~\"$cluster\", job=~\"($namespace)/(loki|enterprise-logs)-read\"})",
+ "expr": "sum by(pod) (go_memstats_heap_inuse_bytes{cluster=~\"$cluster\", job=~\"($namespace)/(loki|enterprise-logs)-backend\"})",
"format": "time_series",
"legendFormat": "{{pod}}",
"legendLink": null
@@ -1367,7 +1367,7 @@
"span": 12,
"targets": [
{
- "expr": "{cluster=~\"$cluster\", job=~\"($namespace)/(loki|enterprise-logs)-read\"}",
+ "expr": "{cluster=~\"$cluster\", job=~\"($namespace)/(loki|enterprise-logs)-backend\"}",
"refId": "A"
}
],
diff --git a/production/loki-mixin/config.libsonnet b/production/loki-mixin/config.libsonnet
index 1fa22f566cc69..48e7586595c97 100644
--- a/production/loki-mixin/config.libsonnet
+++ b/production/loki-mixin/config.libsonnet
@@ -31,5 +31,10 @@
// The prefix used to match the write and read pods on SSD mode.
pod_prefix_matcher: '(loki|enterprise-logs)',
},
+
+ // Meta-monitoring related configuration
+ meta_monitoring: {
+ enabled: false,
+ },
},
}
diff --git a/production/loki-mixin/dashboards/loki-chunks.libsonnet b/production/loki-mixin/dashboards/loki-chunks.libsonnet
index dcb086977db0e..a048dadf19ada 100644
--- a/production/loki-mixin/dashboards/loki-chunks.libsonnet
+++ b/production/loki-mixin/dashboards/loki-chunks.libsonnet
@@ -6,7 +6,11 @@ local utils = import 'mixin-utils/utils.libsonnet';
local dashboards = self,
'loki-chunks.json': {
local cfg = self,
- labelsSelector:: $._config.per_cluster_label + '="$cluster", job=~"$namespace/%s"' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*'),
+ labelsSelector:: $._config.per_cluster_label + '="$cluster", job=~"$namespace/%s"' % (
+ if $._config.meta_monitoring.enabled
+ then '(ingester.*|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*'
+ ),
} +
$.dashboard('Loki / Chunks', uid='chunks')
.addCluster()
diff --git a/production/loki-mixin/dashboards/loki-deletion.libsonnet b/production/loki-mixin/dashboards/loki-deletion.libsonnet
index 2acd86a8b1fbb..5b8ef5d5bd5a8 100644
--- a/production/loki-mixin/dashboards/loki-deletion.libsonnet
+++ b/production/loki-mixin/dashboards/loki-deletion.libsonnet
@@ -2,7 +2,9 @@ local g = import 'grafana-builder/grafana.libsonnet';
local utils = import 'mixin-utils/utils.libsonnet';
(import 'dashboard-utils.libsonnet') {
- local compactor_matcher = if $._config.ssd.enabled then 'container="loki", pod=~"%s-read.*"' % $._config.ssd.pod_prefix_matcher else 'container="compactor"',
+ local compactor_matcher = if $._config.meta_monitoring.enabled
+ then 'pod=~"(compactor|%s-backend.*|loki-single-binary)"' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then 'container="loki", pod=~"%s-backend.*"' % $._config.ssd.pod_prefix_matcher else 'container="compactor"',
grafanaDashboards+::
{
'loki-deletion.json':
diff --git a/production/loki-mixin/dashboards/loki-operational.libsonnet b/production/loki-mixin/dashboards/loki-operational.libsonnet
index e20d7dc2d5629..f1c8166d7f873 100644
--- a/production/loki-mixin/dashboards/loki-operational.libsonnet
+++ b/production/loki-mixin/dashboards/loki-operational.libsonnet
@@ -24,17 +24,31 @@ local utils = import 'mixin-utils/utils.libsonnet';
jobMatchers:: {
cortexgateway: [utils.selector.re('job', '($namespace)/cortex-gw(-internal)?')],
- distributor: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'distributor'))],
- ingester: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*'))],
- querier: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'querier'))],
- queryFrontend: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'query-frontend'))],
+ distributor: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(distributor|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'distributor'))],
+ ingester: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(ingester|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*'))],
+ querier: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(querier|%s-read|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'querier'))],
+ queryFrontend: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(query-frontend|%s-read|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'query-frontend'))],
},
podMatchers:: {
cortexgateway: [utils.selector.re('pod', 'cortex-gw')],
- distributor: [utils.selector.re('pod', '%s' % (if $._config.ssd.enabled then '%s-write.*' % $._config.ssd.pod_prefix_matcher else 'distributor.*'))],
- ingester: [utils.selector.re('pod', '%s' % (if $._config.ssd.enabled then '%s-write.*' % $._config.ssd.pod_prefix_matcher else 'ingester.*'))],
- querier: [utils.selector.re('pod', '%s' % (if $._config.ssd.enabled then '%s-read.*' % $._config.ssd.pod_prefix_matcher else 'querier.*'))],
+ distributor: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('pod', '(distributor|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('pod', '%s' % (if $._config.ssd.enabled then '%s-write.*' % $._config.ssd.pod_prefix_matcher else 'distributor.*'))],
+ ingester: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('pod', '(ingester|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('pod', '%s' % (if $._config.ssd.enabled then '%s-write.*' % $._config.ssd.pod_prefix_matcher else 'ingester.*'))],
+ querier: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('pod', '(querier|%s-read|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('pod', '%s' % (if $._config.ssd.enabled then '%s-read.*' % $._config.ssd.pod_prefix_matcher else 'querier.*'))],
},
}
+ lokiOperational + {
diff --git a/production/loki-mixin/dashboards/loki-reads-resources.libsonnet b/production/loki-mixin/dashboards/loki-reads-resources.libsonnet
index 1e76718c0ff01..21db04ea2cf88 100644
--- a/production/loki-mixin/dashboards/loki-reads-resources.libsonnet
+++ b/production/loki-mixin/dashboards/loki-reads-resources.libsonnet
@@ -2,11 +2,19 @@ local grafana = import 'grafonnet/grafana.libsonnet';
local utils = import 'mixin-utils/utils.libsonnet';
(import 'dashboard-utils.libsonnet') {
- local index_gateway_pod_matcher = if $._config.ssd.enabled then 'container="loki", pod=~"%s-read.*"' % $._config.ssd.pod_prefix_matcher else 'container="index-gateway"',
- local index_gateway_job_matcher = if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'index-gateway',
+ local index_gateway_pod_matcher = if $._config.meta_monitoring.enabled
+ then 'container=~"loki|index-gateway", pod=~"(index-gateway.*|%s-read.*|loki-single-binary)"' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then 'container="loki", pod=~"%s-read.*"' % $._config.ssd.pod_prefix_matcher else 'container="index-gateway"',
+ local index_gateway_job_matcher = if $._config.meta_monitoring.enabled
+ then '(index-gateway.*|%s-read.*|loki-single-binary)' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'index-gateway',
- local ingester_pod_matcher = if $._config.ssd.enabled then 'container="loki", pod=~"%s-write.*"' % $._config.ssd.pod_prefix_matcher else 'container="ingester"',
- local ingester_job_matcher = if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.+',
+ local ingester_pod_matcher = if $._config.meta_monitoring.enabled
+ then 'container=~"loki|ingester", pod=~"(ingester.*|%s-write.*|loki-single-binary)"' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then 'container="loki", pod=~"%s-write.*"' % $._config.ssd.pod_prefix_matcher else 'container="ingester"',
+ local ingester_job_matcher = if $._config.meta_monitoring.enabled
+ then '(ingester.+|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.+',
grafanaDashboards+::
{
diff --git a/production/loki-mixin/dashboards/loki-reads.libsonnet b/production/loki-mixin/dashboards/loki-reads.libsonnet
index 11c36b7a5ae7c..536beeb67dcac 100644
--- a/production/loki-mixin/dashboards/loki-reads.libsonnet
+++ b/production/loki-mixin/dashboards/loki-reads.libsonnet
@@ -87,13 +87,27 @@ local utils = import 'mixin-utils/utils.libsonnet';
matchers:: {
cortexgateway: [utils.selector.re('job', '($namespace)/cortex-gw(-internal)?')],
- queryFrontend: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'query-frontend'))],
- querier: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'querier'))],
- ingester: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester'))],
- ingesterZoneAware: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester-zone.*'))],
- querierOrIndexGateway: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else '(querier|index-gateway)'))],
- indexGateway: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-backend' % $._config.ssd.pod_prefix_matcher else 'index-gateway'))],
- bloomGateway: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-backend' % $._config.ssd.pod_prefix_matcher else 'bloom-gateway'))],
+ queryFrontend: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(query-frontend|%s-read|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'query-frontend'))],
+ querier: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(querier|%s-read|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'querier'))],
+ ingester: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(ingester|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester'))],
+ ingesterZoneAware: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(ingester-zone-.*|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester-zone.*'))],
+ querierOrIndexGateway: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(querier|index-gateway|%s-read|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else '(querier|index-gateway)'))],
+ indexGateway: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(index-gateway|%s-backend|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-backend' % $._config.ssd.pod_prefix_matcher else 'index-gateway'))],
+ bloomGateway: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(bloom-gateway|%s-backend|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-backend' % $._config.ssd.pod_prefix_matcher else 'bloom-gateway'))],
},
local selector(matcherId) =
diff --git a/production/loki-mixin/dashboards/loki-retention.libsonnet b/production/loki-mixin/dashboards/loki-retention.libsonnet
index 9896a5246881b..bbfaa8630d22f 100644
--- a/production/loki-mixin/dashboards/loki-retention.libsonnet
+++ b/production/loki-mixin/dashboards/loki-retention.libsonnet
@@ -1,8 +1,12 @@
local utils = import 'mixin-utils/utils.libsonnet';
(import 'dashboard-utils.libsonnet') {
- local compactor_pod_matcher = if $._config.ssd.enabled then 'container="loki", pod=~"%s-read.*"' % $._config.ssd.pod_prefix_matcher else 'container="compactor"',
- local compactor_job_matcher = if $._config.ssd.enabled then '%s-read' % $._config.ssd.pod_prefix_matcher else 'compactor',
+ local compactor_pod_matcher = if $._config.meta_monitoring.enabled
+ then 'pod=~"(compactor.*|%s-backend.*|loki-single-binary)"' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then 'container="loki", pod=~"%s-read.*"' % $._config.ssd.pod_prefix_matcher else 'container="compactor"',
+ local compactor_job_matcher = if $._config.meta_monitoring.enabled
+ then '(compactor|%s-backend.*|loki-single-binary)' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then '%s-backend' % $._config.ssd.pod_prefix_matcher else 'compactor',
grafanaDashboards+::
{
'loki-retention.json':
diff --git a/production/loki-mixin/dashboards/loki-writes-resources.libsonnet b/production/loki-mixin/dashboards/loki-writes-resources.libsonnet
index f25aeb4b546b4..1d4c693a9b9bd 100644
--- a/production/loki-mixin/dashboards/loki-writes-resources.libsonnet
+++ b/production/loki-mixin/dashboards/loki-writes-resources.libsonnet
@@ -2,8 +2,12 @@ local grafana = import 'grafonnet/grafana.libsonnet';
local utils = import 'mixin-utils/utils.libsonnet';
(import 'dashboard-utils.libsonnet') {
- local ingester_pod_matcher = if $._config.ssd.enabled then 'container="loki", pod=~"%s-write.*"' % $._config.ssd.pod_prefix_matcher else 'container="ingester"',
- local ingester_job_matcher = if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*',
+ local ingester_pod_matcher = if $._config.meta_monitoring.enabled
+ then 'container=~"loki|ingester", pod=~"(ingester.*|%s-write.*|loki-single-binary)"' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then 'container="loki", pod=~"%s-write.*"' % $._config.ssd.pod_prefix_matcher else 'container="ingester"',
+ local ingester_job_matcher = if $._config.meta_monitoring.enabled
+ then '(ingester.*|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher
+ else if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*',
grafanaDashboards+::
{
diff --git a/production/loki-mixin/dashboards/loki-writes.libsonnet b/production/loki-mixin/dashboards/loki-writes.libsonnet
index 8227cc3834929..8cde24657090f 100644
--- a/production/loki-mixin/dashboards/loki-writes.libsonnet
+++ b/production/loki-mixin/dashboards/loki-writes.libsonnet
@@ -17,10 +17,18 @@ local utils = import 'mixin-utils/utils.libsonnet';
matchers:: {
cortexgateway: [utils.selector.re('job', '($namespace)/cortex-gw(-internal)?')],
- distributor: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'distributor'))],
- ingester: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester'))],
- ingester_zone: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester-zone.*'))],
- any_ingester: [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*'))],
+ distributor: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(distributor|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'distributor'))],
+ ingester: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(ingester|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester'))],
+ ingester_zone: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(ingester-zone.*|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester-zone.*'))],
+ any_ingester: if $._config.meta_monitoring.enabled
+ then [utils.selector.re('job', '($namespace)/(ingester.*|%s-write|loki-single-binary)' % $._config.ssd.pod_prefix_matcher)]
+ else [utils.selector.re('job', '($namespace)/%s' % (if $._config.ssd.enabled then '%s-write' % $._config.ssd.pod_prefix_matcher else 'ingester.*'))],
},
local selector(matcherId) =
diff --git a/production/loki-mixin/mixin-meta-monitoring.libsonnet b/production/loki-mixin/mixin-meta-monitoring.libsonnet
new file mode 100644
index 0000000000000..a721a94148085
--- /dev/null
+++ b/production/loki-mixin/mixin-meta-monitoring.libsonnet
@@ -0,0 +1,20 @@
+// The Meta Monitoring helm chart uses this file to build a version of the dashboards
+// that work with the different deployment modes.
+(import 'dashboards.libsonnet') +
+(import 'alerts.libsonnet') +
+(import 'recording_rules.libsonnet') + {
+ grafanaDashboardFolder: 'Loki Meta Monitoring',
+
+ _config+:: {
+ internal_components: false,
+
+ // The Meta Monitoring helm chart uses Grafana Alloy instead of promtail
+ promtail+: {
+ enabled: false,
+ },
+
+ meta_monitoring+: {
+ enabled: true,
+ },
+ },
+}
|
feat
|
Add a version of the mixin dashboards for meta monitoring (#12700)
|
b9ce005ec1cd8cf1bb448fce7a312dd47037a87b
|
2024-02-22 15:11:39
|
Christian Haudum
|
fix: Ensure working dir for bloomstore exists (#12019)
| false
|
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index fede86484a96b..9a4dea08dba26 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -26,6 +26,7 @@ import (
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/chunk/client/local"
"github.com/grafana/loki/pkg/storage/config"
+ bloomshipperconfig "github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper/config"
lokiring "github.com/grafana/loki/pkg/util/ring"
"github.com/grafana/loki/pkg/validation"
)
@@ -70,6 +71,9 @@ func TestBloomGateway_StartStopService(t *testing.T) {
Configs: []config.PeriodConfig{p},
}
storageCfg := storage.Config{
+ BloomShipperConfig: bloomshipperconfig.Config{
+ WorkingDirectory: t.TempDir(),
+ },
FSConfig: local.FSConfig{
Directory: t.TempDir(),
},
@@ -136,6 +140,9 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
Configs: []config.PeriodConfig{p},
}
storageCfg := storage.Config{
+ BloomShipperConfig: bloomshipperconfig.Config{
+ WorkingDirectory: t.TempDir(),
+ },
FSConfig: local.FSConfig{
Directory: t.TempDir(),
},
diff --git a/pkg/loki/config_wrapper.go b/pkg/loki/config_wrapper.go
index 1914c8ab3edfc..f76e0f75da9f7 100644
--- a/pkg/loki/config_wrapper.go
+++ b/pkg/loki/config_wrapper.go
@@ -407,7 +407,7 @@ func applyPathPrefixDefaults(r, defaults *ConfigWrapper) {
r.CompactorConfig.WorkingDirectory = fmt.Sprintf("%s/compactor", prefix)
}
if r.StorageConfig.BloomShipperConfig.WorkingDirectory == defaults.StorageConfig.BloomShipperConfig.WorkingDirectory {
- r.StorageConfig.BloomShipperConfig.WorkingDirectory = fmt.Sprintf("%s/bloom-shipper", prefix)
+ r.StorageConfig.BloomShipperConfig.WorkingDirectory = fmt.Sprintf("%s/blooms", prefix)
}
}
}
diff --git a/pkg/loki/config_wrapper_test.go b/pkg/loki/config_wrapper_test.go
index 3b1237dad4d1d..60c9223732d05 100644
--- a/pkg/loki/config_wrapper_test.go
+++ b/pkg/loki/config_wrapper_test.go
@@ -100,6 +100,7 @@ common:
assert.EqualValues(t, "/opt/loki/rules-temp", config.Ruler.RulePath)
assert.EqualValues(t, "/opt/loki/wal", config.Ingester.WAL.Dir)
assert.EqualValues(t, "/opt/loki/compactor", config.CompactorConfig.WorkingDirectory)
+ assert.EqualValues(t, "/opt/loki/blooms", config.StorageConfig.BloomShipperConfig.WorkingDirectory)
})
t.Run("accepts paths both with and without trailing slash", func(t *testing.T) {
@@ -111,6 +112,7 @@ common:
assert.EqualValues(t, "/opt/loki/rules-temp", config.Ruler.RulePath)
assert.EqualValues(t, "/opt/loki/wal", config.Ingester.WAL.Dir)
assert.EqualValues(t, "/opt/loki/compactor", config.CompactorConfig.WorkingDirectory)
+ assert.EqualValues(t, "/opt/loki/blooms", config.StorageConfig.BloomShipperConfig.WorkingDirectory)
})
t.Run("does not rewrite custom (non-default) paths passed via config file", func(t *testing.T) {
diff --git a/pkg/loki/modules_test.go b/pkg/loki/modules_test.go
index 0d07242b75370..047ba5f838a52 100644
--- a/pkg/loki/modules_test.go
+++ b/pkg/loki/modules_test.go
@@ -2,7 +2,6 @@ package loki
import (
"fmt"
- "path"
"path/filepath"
"testing"
"time"
@@ -17,6 +16,7 @@ import (
"github.com/grafana/loki/pkg/storage"
"github.com/grafana/loki/pkg/storage/chunk/client/local"
"github.com/grafana/loki/pkg/storage/config"
+ bloomshipperconfig "github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper/config"
"github.com/grafana/loki/pkg/storage/stores/shipper/indexshipper"
"github.com/grafana/loki/pkg/storage/stores/shipper/indexshipper/boltdb"
"github.com/grafana/loki/pkg/storage/stores/shipper/indexshipper/indexgateway"
@@ -366,10 +366,13 @@ func minimalWorkingConfig(t *testing.T, dir, target string, cfgTransformers ...f
// This would be overwritten by the default values setting.
cfg.StorageConfig = storage.Config{
FSConfig: local.FSConfig{Directory: dir},
+ BloomShipperConfig: bloomshipperconfig.Config{
+ WorkingDirectory: filepath.Join(dir, "blooms"),
+ },
BoltDBShipperConfig: boltdb.IndexCfg{
Config: indexshipper.Config{
- ActiveIndexDirectory: path.Join(dir, "index"),
- CacheLocation: path.Join(dir, "cache"),
+ ActiveIndexDirectory: filepath.Join(dir, "index"),
+ CacheLocation: filepath.Join(dir, "cache"),
Mode: indexshipper.ModeWriteOnly,
ResyncInterval: 24 * time.Hour,
},
@@ -402,7 +405,7 @@ func minimalWorkingConfig(t *testing.T, dir, target string, cfgTransformers ...f
cfg.BloomCompactor.Ring.InstanceAddr = localhost
cfg.BloomGateway.Ring.InstanceAddr = localhost
cfg.CompactorConfig.CompactorRing.InstanceAddr = localhost
- cfg.CompactorConfig.WorkingDirectory = path.Join(dir, "compactor")
+ cfg.CompactorConfig.WorkingDirectory = filepath.Join(dir, "compactor")
cfg.Ruler.Config.Ring.InstanceAddr = localhost
cfg.Ruler.Config.StoreConfig.Type = config.StorageTypeLocal
diff --git a/pkg/storage/chunk/client/util/util.go b/pkg/storage/chunk/client/util/util.go
index 10237cc456da5..e49fad20136fb 100644
--- a/pkg/storage/chunk/client/util/util.go
+++ b/pkg/storage/chunk/client/util/util.go
@@ -72,6 +72,8 @@ func EnsureDirectory(dir string) error {
return os.MkdirAll(dir, 0o777)
} else if err == nil && !info.IsDir() {
return fmt.Errorf("not a directory: %s", dir)
+ } else if err == nil && info.Mode()&0700 != 0700 {
+ return fmt.Errorf("insufficient permissions: %s %s", dir, info.Mode())
}
return err
}
diff --git a/pkg/storage/stores/shipper/bloomshipper/shipper_test.go b/pkg/storage/stores/shipper/bloomshipper/shipper_test.go
index e03d72c26ba37..86e8ed90a174c 100644
--- a/pkg/storage/stores/shipper/bloomshipper/shipper_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/shipper_test.go
@@ -142,7 +142,7 @@ func TestBloomShipper_IsOutsideRange(t *testing.T) {
func TestBloomShipper_ForEach(t *testing.T) {
blockRefs := make([]BlockRef, 0, 3)
- store, _ := newMockBloomStore(t)
+ store, _, _ := newMockBloomStore(t)
for i := 0; i < len(blockRefs); i++ {
block, err := createBlockInStorage(t, store, "tenant", model.Time(i*24*int(time.Hour)), 0x0000, 0x00ff)
require.NoError(t, err)
diff --git a/pkg/storage/stores/shipper/bloomshipper/store.go b/pkg/storage/stores/shipper/bloomshipper/store.go
index d5cfa24b11ed5..56bfb3ebe97ab 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store.go
@@ -15,6 +15,7 @@ import (
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/chunk/cache"
"github.com/grafana/loki/pkg/storage/chunk/client"
+ "github.com/grafana/loki/pkg/storage/chunk/client/util"
"github.com/grafana/loki/pkg/storage/config"
)
@@ -172,6 +173,10 @@ func NewBloomStore(
numWorkers: storageConfig.BloomShipperConfig.BlocksDownloadingQueue.WorkersCount,
}
+ if err := util.EnsureDirectory(cfg.workingDir); err != nil {
+ return nil, errors.Wrapf(err, "failed to create working directory for bloom store: '%s'", cfg.workingDir)
+ }
+
for _, periodicConfig := range periodicConfigs {
objectClient, err := storage.NewObjectClient(periodicConfig.ObjectType, storageConfig, clientMetrics)
if err != nil {
@@ -323,10 +328,10 @@ func (b *BloomStore) FetchBlocks(ctx context.Context, blocks []BlockRef) ([]*Clo
results := make([]*CloseableBlockQuerier, 0, len(blocks))
for i := range fetchers {
res, err := fetchers[i].FetchBlocks(ctx, refs[i])
- results = append(results, res...)
if err != nil {
return results, err
}
+ results = append(results, res...)
}
// sort responses (results []*CloseableBlockQuerier) based on requests (blocks []BlockRef)
diff --git a/pkg/storage/stores/shipper/bloomshipper/store_test.go b/pkg/storage/stores/shipper/bloomshipper/store_test.go
index 59d8eee464053..48ab81cc45027 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store_test.go
@@ -5,6 +5,7 @@ import (
"context"
"encoding/json"
"os"
+ "path/filepath"
"testing"
"time"
@@ -20,9 +21,12 @@ import (
"github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper/config"
)
-func newMockBloomStore(t *testing.T) (*BloomStore, string) {
+func newMockBloomStore(t *testing.T) (*BloomStore, string, error) {
workDir := t.TempDir()
+ return newMockBloomStoreWithWorkDir(t, workDir)
+}
+func newMockBloomStoreWithWorkDir(t *testing.T, workDir string) (*BloomStore, string, error) {
periodicConfigs := []storageconfig.PeriodConfig{
{
ObjectType: storageconfig.StorageTypeInMemory,
@@ -63,11 +67,13 @@ func newMockBloomStore(t *testing.T) (*BloomStore, string) {
metasCache := cache.NewMockCache()
blocksCache := NewBlocksCache(storageConfig.BloomShipperConfig.BlocksCache, prometheus.NewPedanticRegistry(), logger)
+
store, err := NewBloomStore(periodicConfigs, storageConfig, metrics, metasCache, blocksCache, logger)
- require.NoError(t, err)
- t.Cleanup(store.Stop)
+ if err == nil {
+ t.Cleanup(store.Stop)
+ }
- return store, workDir
+ return store, workDir, err
}
func createMetaInStorage(store *BloomStore, tenant string, start model.Time, minFp, maxFp model.Fingerprint) (Meta, error) {
@@ -123,7 +129,8 @@ func createBlockInStorage(t *testing.T, store *BloomStore, tenant string, start
}
func TestBloomStore_ResolveMetas(t *testing.T) {
- store, _ := newMockBloomStore(t)
+ store, _, err := newMockBloomStore(t)
+ require.NoError(t, err)
// schema 1
// outside of interval, outside of bounds
@@ -178,7 +185,8 @@ func TestBloomStore_ResolveMetas(t *testing.T) {
}
func TestBloomStore_FetchMetas(t *testing.T) {
- store, _ := newMockBloomStore(t)
+ store, _, err := newMockBloomStore(t)
+ require.NoError(t, err)
// schema 1
// outside of interval, outside of bounds
@@ -231,7 +239,8 @@ func TestBloomStore_FetchMetas(t *testing.T) {
}
func TestBloomStore_FetchBlocks(t *testing.T) {
- store, _ := newMockBloomStore(t)
+ store, _, err := newMockBloomStore(t)
+ require.NoError(t, err)
// schema 1
b1, _ := createBlockInStorage(t, store, "tenant", parseTime("2024-01-20 00:00"), 0x00000000, 0x0000ffff)
@@ -259,3 +268,33 @@ func TestBloomStore_FetchBlocks(t *testing.T) {
[]BlockRef{bqs[0].BlockRef, bqs[1].BlockRef, bqs[2].BlockRef, bqs[3].BlockRef},
)
}
+
+func TestBloomShipper_WorkingDir(t *testing.T) {
+ t.Run("insufficient permissions on directory yields error", func(t *testing.T) {
+ base := t.TempDir()
+ wd := filepath.Join(base, "notpermitted")
+ err := os.MkdirAll(wd, 0500)
+ require.NoError(t, err)
+ fi, _ := os.Stat(wd)
+ t.Log("working directory", wd, fi.Mode())
+
+ _, _, err = newMockBloomStoreWithWorkDir(t, wd)
+ require.ErrorContains(t, err, "insufficient permissions")
+ })
+
+ t.Run("not existing directory will be created", func(t *testing.T) {
+ base := t.TempDir()
+ // if the base directory does not exist, it will be created
+ wd := filepath.Join(base, "doesnotexist")
+ t.Log("working directory", wd)
+
+ store, _, err := newMockBloomStoreWithWorkDir(t, wd)
+ require.NoError(t, err)
+ b, err := createBlockInStorage(t, store, "tenant", parseTime("2024-01-20 00:00"), 0x00000000, 0x0000ffff)
+ require.NoError(t, err)
+
+ ctx := context.Background()
+ _, err = store.FetchBlocks(ctx, []BlockRef{b.BlockRef})
+ require.NoError(t, err)
+ })
+}
|
fix
|
Ensure working dir for bloomstore exists (#12019)
|
53c2b2c22a2701b5b4dcbb42d5c6f518fdbe8c9f
|
2023-09-07 19:30:19
|
Bayan Taani
|
operator: Use a condition to warn when labels for zone-awareness are empty (#10418)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 5c0a617e08a1f..e39eab8ab6030 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [10418](https://github.com/grafana/loki/pull/10418) **btaani**: Use a condition to warn when labels for zone-awareness are empty
- [9468](https://github.com/grafana/loki/pull/9468) **periklis**: Add support for reconciling loki-mixin dashboards on OpenShift Console
- [9942](https://github.com/grafana/loki/pull/9942) **btaani**: Use a condition to warn when there are no nodes with matching labels for zone-awareness
diff --git a/operator/apis/loki/v1/lokistack_types.go b/operator/apis/loki/v1/lokistack_types.go
index dbc597d48121e..4149af172d790 100644
--- a/operator/apis/loki/v1/lokistack_types.go
+++ b/operator/apis/loki/v1/lokistack_types.go
@@ -990,8 +990,10 @@ const (
ReasonFailedCertificateRotation LokiStackConditionReason = "FailedCertificateRotation"
// ReasonQueryTimeoutInvalid when the QueryTimeout can not be parsed.
ReasonQueryTimeoutInvalid LokiStackConditionReason = "ReasonQueryTimeoutInvalid"
- // ReasonNoZoneAwareNodes when the cluster does not contain any nodes with the labels needed for zone-awareness.
- ReasonNoZoneAwareNodes LokiStackConditionReason = "ReasonNoZoneAwareNodes"
+ // ReasonZoneAwareNodesMissing when the cluster does not contain any nodes with the labels needed for zone-awareness.
+ ReasonZoneAwareNodesMissing LokiStackConditionReason = "ReasonZoneAwareNodesMissing"
+ // ReasonZoneAwareEmptyLabel when the node-label used for zone-awareness has an empty value.
+ ReasonZoneAwareEmptyLabel LokiStackConditionReason = "ReasonZoneAwareEmptyLabel"
)
// PodStatusMap defines the type for mapping pod status to pod name.
diff --git a/operator/docs/operator/api.md b/operator/docs/operator/api.md
index 3a513fcb26d0b..99e180078dea4 100644
--- a/operator/docs/operator/api.md
+++ b/operator/docs/operator/api.md
@@ -1657,9 +1657,6 @@ storage is missing.</p>
<td><p>ReasonMissingRulerSecret when the required secret to authorization remote write connections
for the ruler is missing.</p>
</td>
-</tr><tr><td><p>"ReasonNoZoneAwareNodes"</p></td>
-<td><p>ReasonNoZoneAwareNodes when the cluster does not contain any nodes with the labels needed for zone-awareness.</p>
-</td>
</tr><tr><td><p>"PendingComponents"</p></td>
<td><p>ReasonPendingComponents when all/some LokiStack components pending dependencies</p>
</td>
@@ -1669,6 +1666,12 @@ for the ruler is missing.</p>
</tr><tr><td><p>"ReadyComponents"</p></td>
<td><p>ReasonReadyComponents when all LokiStack components are ready to serve traffic.</p>
</td>
+</tr><tr><td><p>"ReasonZoneAwareEmptyLabel"</p></td>
+<td><p>ReasonZoneAwareEmptyLabel when the node-label used for zone-awareness has an empty value.</p>
+</td>
+</tr><tr><td><p>"ReasonZoneAwareNodesMissing"</p></td>
+<td><p>ReasonZoneAwareNodesMissing when the cluster does not contain any nodes with the labels needed for zone-awareness.</p>
+</td>
</tr></tbody>
</table>
diff --git a/operator/internal/status/lokistack.go b/operator/internal/status/lokistack.go
index a8597dde24705..a97c2150544de 100644
--- a/operator/internal/status/lokistack.go
+++ b/operator/internal/status/lokistack.go
@@ -17,10 +17,11 @@ import (
)
const (
- messageReady = "All components ready"
- messageFailed = "Some LokiStack components failed"
- messagePending = "Some LokiStack components pending on dependencies"
- messageDegradedNodeLabels = "Cluster contains no nodes matching the labels used for zone-awareness"
+ messageReady = "All components ready"
+ messageFailed = "Some LokiStack components failed"
+ messagePending = "Some LokiStack components pending on dependencies"
+ messageDegradedMissingNodes = "Cluster contains no nodes matching the labels used for zone-awareness"
+ messageDegradedEmptyNodeLabel = "No value for the labels used for zone-awareness"
)
var (
@@ -41,8 +42,13 @@ var (
}
conditionDegradedNodeLabels = metav1.Condition{
Type: string(lokiv1.ConditionDegraded),
- Message: messageDegradedNodeLabels,
- Reason: string(lokiv1.ReasonNoZoneAwareNodes),
+ Message: messageDegradedMissingNodes,
+ Reason: string(lokiv1.ReasonZoneAwareNodesMissing),
+ }
+ conditionDegradedEmptyNodeLabel = metav1.Condition{
+ Type: string(lokiv1.ConditionDegraded),
+ Message: messageDegradedEmptyNodeLabel,
+ Reason: string(lokiv1.ReasonZoneAwareEmptyLabel),
}
)
@@ -97,7 +103,7 @@ func generateCondition(ctx context.Context, cs *lokiv1.LokiStackComponentStatus,
if stack.Spec.Replication != nil && len(stack.Spec.Replication.Zones) > 0 {
// When there are pending pods and zone-awareness is enabled check if there are any nodes
// that can satisfy the constraints and emit a condition if not.
- nodesOk, err := checkForZoneawareNodes(ctx, k, stack.Spec.Replication.Zones)
+ nodesOk, labelsOk, err := checkForZoneawareNodes(ctx, k, stack.Spec.Replication.Zones)
if err != nil {
return metav1.Condition{}, err
}
@@ -105,6 +111,10 @@ func generateCondition(ctx context.Context, cs *lokiv1.LokiStackComponentStatus,
if !nodesOk {
return conditionDegradedNodeLabels, nil
}
+
+ if !labelsOk {
+ return conditionDegradedEmptyNodeLabel, nil
+ }
}
return conditionPending, nil
@@ -113,7 +123,7 @@ func generateCondition(ctx context.Context, cs *lokiv1.LokiStackComponentStatus,
return conditionReady, nil
}
-func checkForZoneawareNodes(ctx context.Context, k client.Client, zones []lokiv1.ZoneSpec) (bool, error) {
+func checkForZoneawareNodes(ctx context.Context, k client.Client, zones []lokiv1.ZoneSpec) (nodesOk bool, labelsOk bool, err error) {
nodeLabels := client.HasLabels{}
for _, z := range zones {
nodeLabels = append(nodeLabels, z.TopologyKey)
@@ -121,10 +131,22 @@ func checkForZoneawareNodes(ctx context.Context, k client.Client, zones []lokiv1
nodeList := &corev1.NodeList{}
if err := k.List(ctx, nodeList, nodeLabels); err != nil {
- return false, err
+ return false, false, err
+ }
+
+ if len(nodeList.Items) == 0 {
+ return false, false, nil
+ }
+
+ for _, node := range nodeList.Items {
+ for _, nodeLabel := range nodeLabels {
+ if node.Labels[nodeLabel] == "" {
+ return true, false, nil
+ }
+ }
}
- return len(nodeList.Items) > 0, nil
+ return true, true, nil
}
func updateCondition(ctx context.Context, k k8s.Client, req ctrl.Request, condition metav1.Condition) error {
diff --git a/operator/internal/status/lokistack_test.go b/operator/internal/status/lokistack_test.go
index 64726795e5332..8bdc9fadc7cf6 100644
--- a/operator/internal/status/lokistack_test.go
+++ b/operator/internal/status/lokistack_test.go
@@ -227,10 +227,21 @@ func TestGenerateCondition_ZoneAwareLokiStack(t *testing.T) {
{
desc: "nodes available",
nodes: []corev1.Node{
- {},
+ {ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{"topology-key": "value"},
+ }},
},
wantCondition: conditionPending,
},
+ {
+ desc: "nodes available but empty label value",
+ nodes: []corev1.Node{
+ {ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{"topology-key": ""},
+ }},
+ },
+ wantCondition: conditionDegradedEmptyNodeLabel,
+ },
{
desc: "no nodes available",
nodes: []corev1.Node{},
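The check this commit introduces boils down to two booleans: do any nodes carry the topology labels at all, and do those labels have non-empty values. A minimal standalone sketch of that logic, using plain maps in place of the Kubernetes node API (all names here are illustrative, not Loki's actual types):

```go
package main

import "fmt"

// checkZoneAwareNodes mirrors the two-result shape of checkForZoneawareNodes:
// nodesOk reports that at least one node carries all topology labels,
// labelsOk reports that none of those labels has an empty value.
func checkZoneAwareNodes(nodes []map[string]string, topologyKeys []string) (nodesOk, labelsOk bool) {
	// Keep only nodes that have every topology key present.
	var matching []map[string]string
	for _, labels := range nodes {
		hasAll := true
		for _, key := range topologyKeys {
			if _, ok := labels[key]; !ok {
				hasAll = false
				break
			}
		}
		if hasAll {
			matching = append(matching, labels)
		}
	}
	if len(matching) == 0 {
		return false, false
	}
	// A label that is present but empty still breaks zone spreading,
	// which is exactly the degraded condition this commit adds.
	for _, labels := range matching {
		for _, key := range topologyKeys {
			if labels[key] == "" {
				return true, false
			}
		}
	}
	return true, true
}

func main() {
	keys := []string{"topology.kubernetes.io/zone"}
	fmt.Println(checkZoneAwareNodes(nil, keys))
	fmt.Println(checkZoneAwareNodes([]map[string]string{{"topology.kubernetes.io/zone": ""}}, keys))
	fmt.Println(checkZoneAwareNodes([]map[string]string{{"topology.kubernetes.io/zone": "a"}}, keys))
}
```

The three calls cover the three conditions in the commit: no matching nodes (degraded, missing nodes), nodes with an empty label value (the new degraded condition), and healthy nodes (pending/ready path).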
|
operator
|
Use a condition to warn when labels for zone-awareness are empty (#10418)
|
f568dda136b5f0649a4c1d8e2e92dc9c935b0088
|
2024-10-21 13:40:38
|
Christian Haudum
|
chore: Improve logging of jumphash server selector (#14306)
| false
|
diff --git a/pkg/bloomgateway/client_pool.go b/pkg/bloomgateway/client_pool.go
index 4b45292bef889..b784511e201bf 100644
--- a/pkg/bloomgateway/client_pool.go
+++ b/pkg/bloomgateway/client_pool.go
@@ -54,7 +54,7 @@ type AddressProvider interface {
}
func NewJumpHashClientPool(clientFactory ClientFactory, dnsProvider AddressProvider, updateInterval time.Duration, logger log.Logger) (*JumpHashClientPool, error) {
- selector := jumphash.DefaultSelector()
+ selector := jumphash.DefaultSelector("bloomgateway")
err := selector.SetServers(dnsProvider.Addresses()...)
if err != nil {
level.Warn(logger).Log("msg", "error updating servers", "err", err)
diff --git a/pkg/storage/chunk/cache/memcached_client.go b/pkg/storage/chunk/cache/memcached_client.go
index 995e896fbcfee..ffdc817b68b42 100644
--- a/pkg/storage/chunk/cache/memcached_client.go
+++ b/pkg/storage/chunk/cache/memcached_client.go
@@ -114,7 +114,7 @@ func (cfg *MemcachedClientConfig) RegisterFlagsWithPrefix(prefix, description st
func NewMemcachedClient(cfg MemcachedClientConfig, name string, r prometheus.Registerer, logger log.Logger, metricsNamespace string) MemcachedClient {
var selector serverSelector
if cfg.ConsistentHash {
- selector = jumphash.DefaultSelector()
+ selector = jumphash.DefaultSelector("memcached")
} else {
selector = &memcache.ServerList{}
}
diff --git a/pkg/util/jumphash/memcached_client_selector.go b/pkg/util/jumphash/memcached_client_selector.go
index ccec90fa0dda2..7eec90a3de706 100644
--- a/pkg/util/jumphash/memcached_client_selector.go
+++ b/pkg/util/jumphash/memcached_client_selector.go
@@ -7,6 +7,7 @@ import (
"github.com/cespare/xxhash"
"github.com/facette/natsort"
+ "github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/gomemcache/memcache"
@@ -23,6 +24,7 @@ import (
// with consistent DNS names where the naturally sorted order
// is predictable.
type Selector struct {
+ logger log.Logger
mu sync.RWMutex
addrs []net.Addr
resolveUnixAddr UnixResolver
@@ -33,15 +35,17 @@ type UnixResolver func(network, address string) (*net.UnixAddr, error)
type TCPResolver func(network, address string) (*net.TCPAddr, error)
-func NewSelector(resolveUnixAddr UnixResolver, resolveTCPAddr TCPResolver) *Selector {
+func NewSelector(name string, resolveUnixAddr UnixResolver, resolveTCPAddr TCPResolver) *Selector {
return &Selector{
+ logger: log.With(util_log.Logger, "name", name),
resolveUnixAddr: resolveUnixAddr,
resolveTCPAddr: resolveTCPAddr,
}
}
-func DefaultSelector() *Selector {
+func DefaultSelector(name string) *Selector {
return &Selector{
+ logger: log.With(util_log.Logger, "name", name),
resolveUnixAddr: net.ResolveUnixAddr,
resolveTCPAddr: net.ResolveTCPAddr,
}
@@ -102,7 +106,7 @@ func (s *Selector) SetServers(servers ...string) error {
}
}
- level.Debug(util_log.Logger).Log("msg", "updating memcached servers", "servers", strings.Join(addresses(naddrs), ","), "count", len(naddrs))
+ level.Debug(util_log.Logger).Log("msg", "updating servers", "servers", strings.Join(addresses(naddrs), ","), "count", len(naddrs))
s.mu.Lock()
defer s.mu.Unlock()
diff --git a/pkg/util/jumphash/memcached_client_selector_test.go b/pkg/util/jumphash/memcached_client_selector_test.go
index 939106ad5aac8..06beca0f8800c 100644
--- a/pkg/util/jumphash/memcached_client_selector_test.go
+++ b/pkg/util/jumphash/memcached_client_selector_test.go
@@ -57,6 +57,7 @@ var mockTCPResolver = func(_, address string) (*net.TCPAddr, error) {
func TestMemcachedJumpHashSelector_PickSever(t *testing.T) {
s := NewSelector(
+ "test",
mockUnixResolver,
mockTCPResolver,
)
@@ -84,6 +85,7 @@ func TestMemcachedJumpHashSelector_PickSever(t *testing.T) {
func TestMemcachedJumpHashSelector_PickSever_ErrNoServers(t *testing.T) {
s := NewSelector(
+ "test",
mockUnixResolver,
mockTCPResolver,
)
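The selector gaining a logger name here is a jump-hash server selector. The underlying technique is jump consistent hashing, which maps a 64-bit key onto one of N buckets so that growing N only relocates roughly 1/N of the keys; this is what keeps memcached and bloom-gateway routing stable as servers come and go. A self-contained sketch of the algorithm, independent of the Loki package:

```go
package main

import "fmt"

// jumpHash maps key onto a bucket in [0, numBuckets). When numBuckets
// grows, only about 1/numBuckets of keys change bucket, which is why
// it suits cache-server selection.
func jumpHash(key uint64, numBuckets int) int {
	var b, j int64 = -1, 0
	for j < int64(numBuckets) {
		b = j
		key = key*2862933555777941757 + 1
		j = int64(float64(b+1) * (float64(int64(1)<<31) / float64((key>>33)+1)))
	}
	return int(b)
}

func main() {
	// Same key, same server, as long as the server count is stable.
	fmt.Println(jumpHash(42, 10) == jumpHash(42, 10)) // true
	// Adding a server moves only a small fraction of keys.
	moved := 0
	for k := uint64(0); k < 1000; k++ {
		if jumpHash(k, 10) != jumpHash(k, 11) {
			moved++
		}
	}
	fmt.Println(moved < 200) // true: roughly 1/11 of keys move
}
```

In the real selector, the key would be a hash of the cache key (the commit's surrounding code uses xxhash) and the bucket index picks an address from the sorted server list.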
|
chore
|
Improve logging of jumphash server selector (#14306)
|
b577cf5ea5901393a37427f8503b5f647ace288d
|
2023-10-10 14:16:44
|
Kaviraj Kanagaraj
|
chore(renovate): Move the config to `.github` directory (#10816)
| false
|
diff --git a/renovate.json b/.github/renovate.json
similarity index 100%
rename from renovate.json
rename to .github/renovate.json
diff --git a/dependabot.yml b/dependabot.yml
deleted file mode 100644
index 19365cd27771f..0000000000000
--- a/dependabot.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-version: 2
-updates:
- - package-ecosystem: "gomod"
- ignore:
- - dependency-name: "github.com/mattn/go-ieproxy"
- versions: ["0.0.9"]
|
chore
|
Move the config to `.github` directory (#10816)
|
2592e6a5eed644a55c3feb3cd0607b1def24deb1
|
2025-03-19 06:23:45
|
renovate[bot]
|
fix(deps): update module go.opentelemetry.io/collector/pdata to v1.28.1 (main) (#16824)
| false
|
diff --git a/go.mod b/go.mod
index 89737601cc3d3..d3515905d957f 100644
--- a/go.mod
+++ b/go.mod
@@ -150,7 +150,7 @@ require (
github.com/twmb/franz-go/plugin/kotel v1.5.0
github.com/twmb/franz-go/plugin/kprom v1.1.0
github.com/willf/bloom v2.0.3+incompatible
- go.opentelemetry.io/collector/pdata v1.28.0
+ go.opentelemetry.io/collector/pdata v1.28.1
go4.org/netipx v0.0.0-20230125063823-8449b0a6169f
golang.org/x/oauth2 v0.28.0
golang.org/x/text v0.23.0
diff --git a/go.sum b/go.sum
index 59c3d23615987..d65db9860d3bd 100644
--- a/go.sum
+++ b/go.sum
@@ -1296,8 +1296,8 @@ go.opentelemetry.io/collector/consumer/consumertest v0.118.0 h1:8AAS9ejQapP1zqt0
go.opentelemetry.io/collector/consumer/consumertest v0.118.0/go.mod h1:spRM2wyGr4QZzqMHlLmZnqRCxqXN4Wd0piogC4Qb5PQ=
go.opentelemetry.io/collector/consumer/xconsumer v0.118.0 h1:guWnzzRqgCInjnYlOQ1BPrimppNGIVvnknAjlIbWXuY=
go.opentelemetry.io/collector/consumer/xconsumer v0.118.0/go.mod h1:C5V2d6Ys/Fi6k3tzjBmbdZ9v3J/rZSAMlhx4KVcMIIg=
-go.opentelemetry.io/collector/pdata v1.28.0 h1:xSZyvTOOc2Wmz4PoxrVqeQfodLgs9k7gowLAnzZN0eU=
-go.opentelemetry.io/collector/pdata v1.28.0/go.mod h1:asKE8MD/4SOKz1mCrGdAz4VO2U2HUNg8A6094uK7pq0=
+go.opentelemetry.io/collector/pdata v1.28.1 h1:ORl5WLpQJvjzBVpHu12lqKMdcf/qDBwRXMcUubhybiQ=
+go.opentelemetry.io/collector/pdata v1.28.1/go.mod h1:asKE8MD/4SOKz1mCrGdAz4VO2U2HUNg8A6094uK7pq0=
go.opentelemetry.io/collector/pdata/pprofile v0.118.0 h1:VK/fr65VFOwEhsSGRPj5c3lCv0yIK1Kt0sZxv9WZBb8=
go.opentelemetry.io/collector/pdata/pprofile v0.118.0/go.mod h1:eJyP/vBm179EghV3dPSnamGAWQwLyd+4z/3yG54YFoQ=
go.opentelemetry.io/collector/pdata/testdata v0.118.0 h1:5N0w1SX9KIRkwvtkrpzQgXy9eGk3vfNG0ds6mhEPMIM=
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 0d54cfc6d66aa..00b5f702678ea 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1913,7 +1913,7 @@ go.opentelemetry.io/collector/config/configtelemetry
## explicit; go 1.22.0
go.opentelemetry.io/collector/consumer
go.opentelemetry.io/collector/consumer/internal
-# go.opentelemetry.io/collector/pdata v1.28.0
+# go.opentelemetry.io/collector/pdata v1.28.1
## explicit; go 1.23.0
go.opentelemetry.io/collector/pdata/internal
go.opentelemetry.io/collector/pdata/internal/data
|
fix
|
update module go.opentelemetry.io/collector/pdata to v1.28.1 (main) (#16824)
|
c394ce94622411e9f4c4a6c5a554a2145fc4bbdd
|
2020-11-04 18:36:14
|
jkellerer
|
logql: Add unwrap bytes() conversion function (#2876)
| false
|
diff --git a/docs/sources/logql/_index.md b/docs/sources/logql/_index.md
index 824a53b014923..3c2a83ff1a740 100644
--- a/docs/sources/logql/_index.md
+++ b/docs/sources/logql/_index.md
@@ -492,7 +492,9 @@ The unwrap expression is noted `| unwrap label_identifier` where the label ident
Since label values are string, by default a conversion into a float (64bits) will be attempted, in case of failure the `__error__` label is added to the sample.
Optionally the label identifier can be wrapped by a conversion function `| unwrap <function>(label_identifier)`, which will attempt to convert the label value from a specific format.
-We currently support only the function `duration_seconds` (or its short equivalent `duration`) which will convert the label value in seconds from the [go duration format](https://golang.org/pkg/time/#ParseDuration) (e.g `5m`, `24s30ms`).
+We currently support the functions:
+- `duration_seconds(label_identifier)` (or its short equivalent `duration`) which will convert the label value in seconds from the [go duration format](https://golang.org/pkg/time/#ParseDuration) (e.g `5m`, `24s30ms`).
+- `bytes(label_identifier)` which will convert the label value to raw bytes applying the bytes unit (e.g. `5 MiB`, `3k`, `1G`).
Supported function for operating over unwrapped ranges are:
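The new `bytes()` conversion documented above turns human-readable size strings such as `5 MiB`, `3k`, or `1G` into raw byte counts for use in unwrapped range aggregations. A minimal standalone sketch of that kind of unit parser (illustrative only; it is not Loki's actual implementation and supports only a handful of units):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBytes converts strings like "5 MiB", "3k", "1G" into a byte count.
// Decimal suffixes (k, M, G) use powers of 1000; binary suffixes
// (KiB, MiB, GiB) use powers of 1024.
func parseBytes(s string) (float64, error) {
	s = strings.TrimSpace(s)
	units := map[string]float64{
		"": 1, "b": 1,
		"k": 1e3, "kb": 1e3, "kib": 1 << 10,
		"m": 1e6, "mb": 1e6, "mib": 1 << 20,
		"g": 1e9, "gb": 1e9, "gib": 1 << 30,
	}
	// Find where the numeric prefix ends and the unit suffix begins.
	i := len(s)
	for i > 0 {
		c := s[i-1]
		if c >= '0' && c <= '9' || c == '.' {
			break
		}
		i--
	}
	num, err := strconv.ParseFloat(strings.TrimSpace(s[:i]), 64)
	if err != nil {
		return 0, fmt.Errorf("bad number in %q: %w", s, err)
	}
	mult, ok := units[strings.ToLower(strings.TrimSpace(s[i:]))]
	if !ok {
		return 0, fmt.Errorf("unknown unit in %q", s)
	}
	return num * mult, nil
}

func main() {
	for _, s := range []string{"5 MiB", "3k", "1G"} {
		v, _ := parseBytes(s)
		fmt.Printf("%s -> %.0f bytes\n", s, v)
	}
}
```

In LogQL terms, a failed conversion is the case where the `__error__` label gets attached to the sample instead of a value.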
diff --git a/pkg/logql/ast.go b/pkg/logql/ast.go
index 3705e3a44e02c..1902d1bfcfcac 100644
--- a/pkg/logql/ast.go
+++ b/pkg/logql/ast.go
@@ -528,6 +528,7 @@ const (
OpUnwrap = "unwrap"
// conversion Op
+ OpConvBytes = "bytes"
OpConvDuration = "duration"
OpConvDurationSeconds = "duration_seconds"
)
diff --git a/pkg/logql/ast_test.go b/pkg/logql/ast_test.go
index 1a6672fa1e792..7ebe137aaced2 100644
--- a/pkg/logql/ast_test.go
+++ b/pkg/logql/ast_test.go
@@ -85,6 +85,11 @@ func Test_SampleExpr_String(t *testing.T) {
/
count_over_time({namespace="tns"} | logfmt | label_format foo=bar[5m])
)`,
+ `sum by (job) (
+ sum_over_time({namespace="tns"} |= "level=error" | json | foo=5 and bar<25ms | unwrap bytes(latency)[5m])
+ /
+ count_over_time({namespace="tns"} | logfmt | label_format foo=bar[5m])
+ )`,
`sum by (job) (
sum_over_time(
{namespace="tns"} |= "level=error" | json | avg=5 and bar<25ms | unwrap duration(latency) [5m]
diff --git a/pkg/logql/expr.y b/pkg/logql/expr.y
index a01241059fbdd..e43e15f484f9a 100644
--- a/pkg/logql/expr.y
+++ b/pkg/logql/expr.y
@@ -89,7 +89,7 @@ import (
%token <val> MATCHERS LABELS EQ RE NRE OPEN_BRACE CLOSE_BRACE OPEN_BRACKET CLOSE_BRACKET COMMA DOT PIPE_MATCH PIPE_EXACT
OPEN_PARENTHESIS CLOSE_PARENTHESIS BY WITHOUT COUNT_OVER_TIME RATE SUM AVG MAX MIN COUNT STDDEV STDVAR BOTTOMK TOPK
BYTES_OVER_TIME BYTES_RATE BOOL JSON REGEXP LOGFMT PIPE LINE_FMT LABEL_FMT UNWRAP AVG_OVER_TIME SUM_OVER_TIME MIN_OVER_TIME
- MAX_OVER_TIME STDVAR_OVER_TIME STDDEV_OVER_TIME QUANTILE_OVER_TIME DURATION_CONV DURATION_SECONDS_CONV
+ MAX_OVER_TIME STDVAR_OVER_TIME STDDEV_OVER_TIME QUANTILE_OVER_TIME BYTES_CONV DURATION_CONV DURATION_SECONDS_CONV
// Operators are listed with increasing precedence.
%left <binOp> OR
@@ -146,7 +146,8 @@ unwrapExpr:
;
convOp:
- DURATION_CONV { $$ = OpConvDuration }
+ BYTES_CONV { $$ = OpConvBytes }
+ | DURATION_CONV { $$ = OpConvDuration }
| DURATION_SECONDS_CONV { $$ = OpConvDurationSeconds }
;
diff --git a/pkg/logql/expr.y.go b/pkg/logql/expr.y.go
index 1b99d052d7919..5bdaf8120b7f2 100644
--- a/pkg/logql/expr.y.go
+++ b/pkg/logql/expr.y.go
@@ -1,18 +1,15 @@
// Code generated by goyacc -p expr -o pkg/logql/expr.y.go pkg/logql/expr.y. DO NOT EDIT.
-//line pkg/logql/expr.y:2
package logql
import __yyfmt__ "fmt"
-//line pkg/logql/expr.y:2
import (
"github.com/grafana/loki/pkg/logql/log"
"github.com/prometheus/prometheus/pkg/labels"
"time"
)
-//line pkg/logql/expr.y:12
type exprSymType struct {
yys int
Expr Expr
@@ -104,23 +101,24 @@ const MAX_OVER_TIME = 57393
const STDVAR_OVER_TIME = 57394
const STDDEV_OVER_TIME = 57395
const QUANTILE_OVER_TIME = 57396
-const DURATION_CONV = 57397
-const DURATION_SECONDS_CONV = 57398
-const OR = 57399
-const AND = 57400
-const UNLESS = 57401
-const CMP_EQ = 57402
-const NEQ = 57403
-const LT = 57404
-const LTE = 57405
-const GT = 57406
-const GTE = 57407
-const ADD = 57408
-const SUB = 57409
-const MUL = 57410
-const DIV = 57411
-const MOD = 57412
-const POW = 57413
+const BYTES_CONV = 57397
+const DURATION_CONV = 57398
+const DURATION_SECONDS_CONV = 57399
+const OR = 57400
+const AND = 57401
+const UNLESS = 57402
+const CMP_EQ = 57403
+const NEQ = 57404
+const LT = 57405
+const LTE = 57406
+const GT = 57407
+const GTE = 57408
+const ADD = 57409
+const SUB = 57410
+const MUL = 57411
+const DIV = 57412
+const MOD = 57413
+const POW = 57414
var exprToknames = [...]string{
"$end",
@@ -177,6 +175,7 @@ var exprToknames = [...]string{
"STDVAR_OVER_TIME",
"STDDEV_OVER_TIME",
"QUANTILE_OVER_TIME",
+ "BYTES_CONV",
"DURATION_CONV",
"DURATION_SECONDS_CONV",
"OR",
@@ -201,9 +200,6 @@ const exprEofCode = 1
const exprErrCode = 2
const exprInitialStackSize = 16
-//line pkg/logql/expr.y:345
-
-//line yacctab:1
var exprExca = [...]int{
-1, 1,
1, -1,
@@ -212,142 +208,142 @@ var exprExca = [...]int{
const exprPrivate = 57344
-const exprLast = 396
+const exprLast = 397
var exprAct = [...]int{
70, 171, 53, 153, 145, 4, 179, 100, 63, 2,
52, 45, 61, 56, 5, 217, 120, 214, 235, 66,
- 14, 40, 41, 42, 43, 44, 45, 249, 11, 42,
- 43, 44, 45, 252, 76, 235, 6, 213, 256, 244,
+ 14, 40, 41, 42, 43, 44, 45, 250, 11, 42,
+ 43, 44, 45, 253, 235, 213, 6, 76, 257, 213,
17, 18, 28, 29, 31, 32, 30, 33, 34, 35,
- 36, 19, 20, 214, 225, 91, 116, 118, 119, 227,
- 94, 21, 22, 23, 24, 25, 26, 27, 92, 215,
- 214, 224, 214, 247, 59, 124, 155, 118, 119, 15,
- 16, 57, 58, 122, 129, 176, 130, 131, 132, 133,
+ 36, 19, 20, 214, 242, 91, 116, 118, 119, 245,
+ 94, 21, 22, 23, 24, 25, 26, 27, 92, 214,
+ 214, 71, 72, 224, 214, 124, 155, 118, 119, 176,
+ 15, 16, 167, 122, 129, 167, 130, 131, 132, 133,
134, 135, 136, 137, 138, 139, 140, 141, 142, 143,
- 69, 128, 71, 72, 173, 117, 71, 72, 150, 46,
+ 69, 167, 71, 72, 232, 111, 117, 221, 150, 46,
47, 50, 51, 48, 49, 40, 41, 42, 43, 44,
- 45, 60, 162, 59, 161, 156, 159, 160, 157, 158,
- 57, 58, 178, 172, 225, 181, 170, 11, 174, 226,
- 175, 59, 111, 110, 127, 123, 126, 68, 57, 58,
- 186, 182, 183, 184, 37, 38, 39, 46, 47, 50,
+ 45, 110, 162, 168, 248, 161, 156, 159, 160, 157,
+ 158, 128, 178, 172, 127, 181, 215, 225, 174, 59,
+ 175, 59, 227, 126, 68, 225, 57, 58, 57, 58,
+ 226, 182, 183, 184, 37, 38, 39, 46, 47, 50,
51, 48, 49, 40, 41, 42, 43, 44, 45, 209,
- 60, 173, 211, 177, 216, 91, 219, 222, 94, 169,
- 115, 212, 187, 223, 122, 220, 210, 167, 60, 167,
+ 106, 173, 211, 186, 216, 91, 219, 222, 94, 177,
+ 169, 212, 115, 223, 122, 220, 210, 60, 103, 60,
228, 38, 39, 46, 47, 50, 51, 48, 49, 40,
- 41, 42, 43, 44, 45, 255, 213, 251, 106, 232,
- 106, 221, 167, 250, 233, 91, 236, 240, 106, 234,
- 166, 241, 243, 91, 147, 215, 103, 125, 103, 185,
- 59, 74, 147, 246, 168, 11, 103, 57, 58, 73,
- 242, 214, 248, 6, 165, 253, 164, 17, 18, 28,
- 29, 31, 32, 30, 33, 34, 35, 36, 19, 20,
- 173, 106, 148, 146, 230, 231, 238, 239, 21, 22,
- 23, 24, 25, 26, 27, 59, 170, 60, 163, 103,
- 151, 59, 57, 58, 149, 59, 15, 16, 57, 58,
- 113, 218, 57, 58, 75, 106, 106, 97, 99, 98,
- 121, 104, 105, 217, 112, 173, 144, 114, 11, 147,
- 147, 173, 109, 103, 103, 55, 123, 192, 101, 164,
- 193, 191, 60, 189, 3, 163, 190, 188, 60, 254,
- 245, 62, 60, 77, 78, 79, 80, 81, 82, 83,
- 84, 85, 86, 87, 88, 89, 90, 148, 146, 146,
- 106, 207, 152, 204, 208, 206, 205, 203, 201, 96,
- 198, 202, 200, 199, 197, 180, 195, 95, 103, 196,
- 194, 229, 65, 67, 154, 67, 154, 54, 107, 102,
- 108, 93, 10, 9, 13, 8, 97, 99, 98, 237,
- 104, 105, 12, 7, 64, 1,
+ 41, 42, 43, 44, 45, 121, 97, 99, 98, 187,
+ 104, 105, 215, 11, 233, 91, 236, 59, 113, 234,
+ 74, 123, 244, 91, 57, 58, 170, 243, 125, 11,
+ 73, 59, 112, 247, 256, 114, 11, 123, 57, 58,
+ 252, 218, 251, 249, 6, 106, 254, 173, 17, 18,
+ 28, 29, 31, 32, 30, 33, 34, 35, 36, 19,
+ 20, 173, 106, 103, 241, 60, 238, 239, 240, 21,
+ 22, 23, 24, 25, 26, 27, 147, 59, 170, 60,
+ 103, 230, 231, 59, 57, 58, 59, 166, 15, 16,
+ 57, 58, 165, 57, 58, 207, 75, 106, 208, 206,
+ 106, 204, 164, 106, 205, 203, 192, 173, 164, 193,
+ 191, 147, 163, 173, 147, 103, 55, 147, 103, 185,
+ 189, 103, 163, 190, 188, 60, 201, 151, 255, 202,
+ 200, 60, 149, 144, 60, 77, 78, 79, 80, 81,
+ 82, 83, 84, 85, 86, 87, 88, 89, 90, 106,
+ 148, 146, 198, 148, 146, 199, 197, 146, 195, 3,
+ 229, 196, 194, 154, 101, 109, 62, 103, 65, 246,
+ 180, 67, 67, 154, 152, 96, 95, 54, 107, 102,
+ 108, 93, 10, 9, 13, 97, 99, 98, 8, 104,
+ 105, 217, 237, 12, 7, 64, 1,
}
var exprPact = [...]int{
- 13, -1000, 97, -1000, -1000, 271, 13, -1000, -1000, -1000,
- -1000, 370, 124, 77, -1000, 232, 224, -1000, -1000, -1000,
+ 13, -1000, 96, -1000, -1000, 272, 13, -1000, -1000, -1000,
+ -1000, 366, 121, 77, -1000, 223, 213, -1000, -1000, -1000,
-1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000,
- -1000, -1000, -1000, -1000, -1000, -1000, -1000, -6, -6, -6,
- -6, -6, -6, -6, -6, -6, -6, -6, -6, -6,
- -6, -6, 271, -1000, 109, 345, 306, -1000, -1000, -1000,
- -1000, 119, 118, 97, 288, 164, -1000, 44, 293, 220,
- 123, 121, 78, -1000, -1000, 13, -1000, 13, 13, 13,
+ -1000, -1000, -1000, -1000, -1000, -1000, -1000, -3, -3, -3,
+ -3, -3, -3, -3, -3, -3, -3, -3, -3, -3,
+ -3, -3, 272, -1000, 125, 165, 359, -1000, -1000, -1000,
+ -1000, 97, 81, 96, 216, 166, -1000, 44, 198, 221,
+ 120, 111, 108, -1000, -1000, 13, -1000, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
- 13, -1000, 300, -1000, 290, -1000, -1000, -1000, -1000, 278,
- -1000, -1000, -1000, 203, 274, 371, 64, -1000, -1000, -1000,
- -1000, -1000, -1000, -1000, 368, -1000, 272, 240, 238, 214,
- 210, 160, 127, 122, 61, 154, 13, 360, 360, 133,
- 49, 49, -39, -39, -60, -60, -60, -60, -45, -45,
- -45, -45, -45, -45, -1000, 290, 203, 203, 203, -1000,
- 205, -1000, 131, -1000, 170, 319, 313, 362, 356, 354,
- 349, 347, -1000, -1000, -1000, -1000, -1000, -1000, 81, 122,
- 261, 28, 60, 256, 267, 187, 81, 13, 47, 115,
- -1000, 35, 213, 290, 291, -1000, 369, 259, -1000, -1000,
+ 13, -1000, 327, -1000, 292, -1000, -1000, -1000, -1000, 326,
+ -1000, -1000, -1000, 240, 321, 368, 64, -1000, -1000, -1000,
+ -1000, -1000, -1000, -1000, 367, -1000, 306, 296, 286, 281,
+ 99, 161, 269, 214, 55, 160, 13, 365, 365, 132,
+ 48, 48, -40, -40, -61, -61, -61, -61, -46, -46,
+ -46, -46, -46, -46, -1000, 292, 240, 240, 240, -1000,
+ 295, -1000, 154, -1000, 197, 316, 302, 354, 348, 322,
+ 297, 291, -1000, -1000, -1000, -1000, -1000, -1000, 46, 214,
+ 263, 26, 127, 344, 217, 83, 46, 13, 49, 126,
+ -1000, 118, 257, 292, 298, -1000, 358, 276, -1000, -1000,
-1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000,
-1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000,
- 185, -27, 261, -1000, 203, -1000, 26, 211, 208, 197,
- 216, -1000, -1000, 15, -1000, 325, -1000, -1000, -1000, -1000,
- -1000, -1000, 81, -27, 290, -1000, -1000, 50, -1000, -1000,
- -17, 204, 198, 9, 81, -1000, -1000, 324, -27, -32,
- -1000, -1000, 196, -1000, 14, -1000, -1000,
+ 80, -27, 263, -1000, 240, -1000, 25, 211, 255, 30,
+ 203, -1000, -1000, 35, -1000, 364, -1000, -1000, -1000, -1000,
+ -1000, -1000, 46, -27, 292, -1000, -1000, 101, -1000, -1000,
+ -1000, -17, 233, 231, 9, 46, -1000, -1000, 323, -27,
+ -32, -1000, -1000, 225, -1000, 14, -1000, -1000,
}
var exprPgo = [...]int{
- 0, 395, 8, 13, 0, 6, 324, 5, 16, 7,
- 394, 393, 392, 389, 14, 385, 384, 383, 382, 294,
- 381, 10, 2, 380, 379, 378, 4, 377, 367, 359,
- 3, 352, 1, 318,
+ 0, 396, 8, 13, 0, 6, 359, 5, 16, 7,
+ 395, 394, 393, 392, 14, 388, 384, 383, 382, 296,
+ 381, 10, 2, 380, 379, 378, 4, 377, 376, 375,
+ 3, 374, 1, 364,
}
var exprR1 = [...]int{
0, 1, 2, 2, 7, 7, 7, 7, 7, 6,
6, 6, 8, 8, 8, 8, 8, 8, 8, 8,
8, 8, 8, 8, 8, 8, 32, 32, 32, 13,
- 13, 11, 11, 11, 11, 15, 15, 15, 15, 15,
- 3, 3, 3, 3, 14, 14, 14, 10, 10, 9,
- 9, 9, 9, 21, 21, 22, 22, 22, 22, 22,
- 27, 27, 20, 20, 20, 28, 30, 30, 31, 31,
- 31, 29, 26, 26, 26, 26, 26, 26, 26, 26,
- 33, 33, 25, 25, 25, 25, 25, 25, 25, 23,
- 23, 23, 23, 23, 23, 23, 24, 24, 24, 24,
- 24, 24, 24, 17, 17, 17, 17, 17, 17, 17,
- 17, 17, 17, 17, 17, 17, 17, 17, 19, 19,
- 18, 18, 18, 16, 16, 16, 16, 16, 16, 16,
- 16, 16, 12, 12, 12, 12, 12, 12, 12, 12,
- 12, 12, 12, 5, 5, 4, 4,
+ 13, 13, 11, 11, 11, 11, 15, 15, 15, 15,
+ 15, 3, 3, 3, 3, 14, 14, 14, 10, 10,
+ 9, 9, 9, 9, 21, 21, 22, 22, 22, 22,
+ 22, 27, 27, 20, 20, 20, 28, 30, 30, 31,
+ 31, 31, 29, 26, 26, 26, 26, 26, 26, 26,
+ 26, 33, 33, 25, 25, 25, 25, 25, 25, 25,
+ 23, 23, 23, 23, 23, 23, 23, 24, 24, 24,
+ 24, 24, 24, 24, 17, 17, 17, 17, 17, 17,
+ 17, 17, 17, 17, 17, 17, 17, 17, 17, 19,
+ 19, 18, 18, 18, 16, 16, 16, 16, 16, 16,
+ 16, 16, 16, 12, 12, 12, 12, 12, 12, 12,
+ 12, 12, 12, 12, 5, 5, 4, 4,
}
var exprR2 = [...]int{
0, 1, 1, 1, 1, 1, 1, 1, 3, 1,
2, 3, 2, 4, 3, 5, 3, 5, 3, 5,
4, 6, 3, 4, 3, 2, 3, 6, 3, 1,
- 1, 4, 6, 5, 7, 4, 5, 5, 6, 7,
- 1, 1, 1, 1, 3, 3, 3, 1, 3, 3,
- 3, 3, 3, 1, 2, 1, 2, 2, 2, 2,
- 2, 3, 1, 1, 2, 2, 3, 3, 1, 3,
- 3, 2, 1, 1, 1, 3, 2, 3, 3, 3,
- 1, 1, 3, 3, 3, 3, 3, 3, 3, 3,
+ 1, 1, 4, 6, 5, 7, 4, 5, 5, 6,
+ 7, 1, 1, 1, 1, 3, 3, 3, 1, 3,
+ 3, 3, 3, 3, 1, 2, 1, 2, 2, 2,
+ 2, 2, 3, 1, 1, 2, 2, 3, 3, 1,
+ 3, 3, 2, 1, 1, 1, 3, 2, 3, 3,
+ 3, 1, 1, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
- 3, 3, 3, 4, 4, 4, 4, 4, 4, 4,
- 4, 4, 4, 4, 4, 4, 4, 4, 0, 1,
- 1, 2, 2, 1, 1, 1, 1, 1, 1, 1,
+ 3, 3, 3, 3, 4, 4, 4, 4, 4, 4,
+ 4, 4, 4, 4, 4, 4, 4, 4, 4, 0,
+ 1, 1, 2, 2, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
- 1, 1, 1, 1, 3, 4, 4,
+ 1, 1, 1, 1, 1, 3, 4, 4,
}
var exprChk = [...]int{
-1000, -1, -2, -6, -7, -14, 23, -11, -15, -17,
- -18, 15, -12, -16, 7, 66, 67, 27, 28, 38,
+ -18, 15, -12, -16, 7, 67, 68, 27, 28, 38,
39, 48, 49, 50, 51, 52, 53, 54, 29, 30,
- 33, 31, 32, 34, 35, 36, 37, 57, 58, 59,
- 66, 67, 68, 69, 70, 71, 60, 61, 64, 65,
- 62, 63, -21, -22, -27, 44, -3, 21, 22, 14,
- 61, -7, -6, -2, -10, 2, -9, 5, 23, 23,
+ 33, 31, 32, 34, 35, 36, 37, 58, 59, 60,
+ 67, 68, 69, 70, 71, 72, 61, 62, 65, 66,
+ 63, 64, -21, -22, -27, 44, -3, 21, 22, 14,
+ 62, -7, -6, -2, -10, 2, -9, 5, 23, 23,
-4, 25, 26, 7, 7, -19, 40, -19, -19, -19,
-19, -19, -19, -19, -19, -19, -19, -19, -19, -19,
-19, -22, -3, -20, -26, -28, -29, 41, 43, 42,
-9, -33, -24, 23, 45, 46, 5, -25, -23, 6,
- 24, 24, 16, 2, 19, 16, 12, 61, 13, 14,
+ 24, 24, 16, 2, 19, 16, 12, 62, 13, 14,
-8, 7, -14, 23, -7, 7, 23, 23, 23, -2,
-2, -2, -2, -2, -2, -2, -2, -2, -2, -2,
- -2, -2, -2, -2, 6, -26, 58, 19, 57, 6,
- -26, 6, -31, -30, 5, 12, 61, 64, 65, 62,
- 63, 60, -9, 6, 6, 6, 6, 2, 24, 19,
+ -2, -2, -2, -2, 6, -26, 59, 19, 58, 6,
+ -26, 6, -31, -30, 5, 12, 62, 65, 66, 63,
+ 64, 61, -9, 6, 6, 6, 6, 2, 24, 19,
9, -32, -21, 44, -14, -8, 24, 19, -7, -5,
5, -5, -26, -26, -26, 24, 19, 12, 8, 4,
7, 8, 4, 7, 8, 4, 7, 8, 4, 7,
@@ -355,37 +351,37 @@ var exprChk = [...]int{
-8, -32, -21, 9, 44, 9, -32, 47, 24, -32,
-21, 24, -4, -7, 24, 19, 24, 24, -30, 2,
5, 6, 24, -32, -26, 9, 5, -13, 55, 56,
- 9, 24, 24, -32, 24, 5, -4, 23, -32, 44,
- 9, 9, 24, -4, 5, 9, 24,
+ 57, 9, 24, 24, -32, 24, 5, -4, 23, -32,
+ 44, 9, 9, 24, -4, 5, 9, 24,
}
var exprDef = [...]int{
0, -2, 1, 2, 3, 9, 0, 4, 5, 6,
- 7, 0, 0, 0, 120, 0, 0, 132, 133, 134,
- 135, 136, 137, 138, 139, 140, 141, 142, 123, 124,
- 125, 126, 127, 128, 129, 130, 131, 118, 118, 118,
- 118, 118, 118, 118, 118, 118, 118, 118, 118, 118,
- 118, 118, 10, 53, 55, 0, 0, 40, 41, 42,
- 43, 3, 2, 0, 0, 0, 47, 0, 0, 0,
- 0, 0, 0, 121, 122, 0, 119, 0, 0, 0,
+ 7, 0, 0, 0, 121, 0, 0, 133, 134, 135,
+ 136, 137, 138, 139, 140, 141, 142, 143, 124, 125,
+ 126, 127, 128, 129, 130, 131, 132, 119, 119, 119,
+ 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
+ 119, 119, 10, 54, 56, 0, 0, 41, 42, 43,
+ 44, 3, 2, 0, 0, 0, 48, 0, 0, 0,
+ 0, 0, 0, 122, 123, 0, 120, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 54, 0, 56, 57, 58, 59, 62, 63, 0,
- 72, 73, 74, 0, 0, 0, 0, 80, 81, 60,
- 8, 11, 44, 45, 0, 46, 0, 0, 0, 0,
- 0, 0, 0, 0, 3, 120, 0, 0, 0, 103,
- 104, 105, 106, 107, 108, 109, 110, 111, 112, 113,
- 114, 115, 116, 117, 61, 76, 0, 0, 0, 64,
- 0, 65, 71, 68, 0, 0, 0, 0, 0, 0,
- 0, 0, 48, 49, 50, 51, 52, 25, 31, 0,
- 12, 0, 0, 0, 0, 0, 35, 0, 3, 0,
- 143, 0, 77, 78, 79, 75, 0, 0, 87, 94,
- 101, 86, 93, 100, 82, 89, 96, 83, 90, 97,
- 84, 91, 98, 85, 92, 99, 88, 95, 102, 33,
+ 0, 55, 0, 57, 58, 59, 60, 63, 64, 0,
+ 73, 74, 75, 0, 0, 0, 0, 81, 82, 61,
+ 8, 11, 45, 46, 0, 47, 0, 0, 0, 0,
+ 0, 0, 0, 0, 3, 121, 0, 0, 0, 104,
+ 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
+ 115, 116, 117, 118, 62, 77, 0, 0, 0, 65,
+ 0, 66, 72, 69, 0, 0, 0, 0, 0, 0,
+ 0, 0, 49, 50, 51, 52, 53, 25, 32, 0,
+ 12, 0, 0, 0, 0, 0, 36, 0, 3, 0,
+ 144, 0, 78, 79, 80, 76, 0, 0, 88, 95,
+ 102, 87, 94, 101, 83, 90, 97, 84, 91, 98,
+ 85, 92, 99, 86, 93, 100, 89, 96, 103, 34,
0, 14, 22, 16, 0, 18, 0, 0, 0, 0,
- 0, 24, 37, 3, 36, 0, 145, 146, 69, 70,
- 66, 67, 32, 23, 28, 20, 26, 0, 29, 30,
- 13, 0, 0, 0, 38, 144, 34, 0, 15, 0,
- 17, 19, 0, 39, 0, 21, 27,
+ 0, 24, 38, 3, 37, 0, 146, 147, 70, 71,
+ 67, 68, 33, 23, 28, 20, 26, 0, 29, 30,
+ 31, 13, 0, 0, 0, 39, 145, 35, 0, 15,
+ 0, 17, 19, 0, 40, 0, 21, 27,
}
var exprTok1 = [...]int{
@@ -400,6 +396,7 @@ var exprTok2 = [...]int{
42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
+ 72,
}
var exprTok3 = [...]int{
0,
@@ -411,8 +408,6 @@ var exprErrorMessages = [...]struct {
msg string
}{}
-//line yaccpar:1
-
/* parser for yacc output */
var (
@@ -744,864 +739,725 @@ exprdefault:
case 1:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:104
{
exprlex.(*lexer).expr = exprDollar[1].Expr
}
case 2:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:107
{
exprVAL.Expr = exprDollar[1].LogExpr
}
case 3:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:108
{
exprVAL.Expr = exprDollar[1].MetricExpr
}
case 4:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:112
{
exprVAL.MetricExpr = exprDollar[1].RangeAggregationExpr
}
case 5:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:113
{
exprVAL.MetricExpr = exprDollar[1].VectorAggregationExpr
}
case 6:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:114
{
exprVAL.MetricExpr = exprDollar[1].BinOpExpr
}
case 7:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:115
{
exprVAL.MetricExpr = exprDollar[1].LiteralExpr
}
case 8:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:116
{
exprVAL.MetricExpr = exprDollar[2].MetricExpr
}
case 9:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:120
{
exprVAL.LogExpr = newMatcherExpr(exprDollar[1].Selector)
}
case 10:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:121
{
exprVAL.LogExpr = newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr)
}
case 11:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:122
{
exprVAL.LogExpr = exprDollar[2].LogExpr
}
case 12:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:126
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].duration, nil)
}
case 13:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:127
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[4].duration, nil)
}
case 14:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:128
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].duration, exprDollar[3].UnwrapExpr)
}
case 15:
exprDollar = exprS[exprpt-5 : exprpt+1]
-//line pkg/logql/expr.y:129
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[4].duration, exprDollar[5].UnwrapExpr)
}
case 16:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:130
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[3].duration, exprDollar[2].UnwrapExpr)
}
case 17:
exprDollar = exprS[exprpt-5 : exprpt+1]
-//line pkg/logql/expr.y:131
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[5].duration, exprDollar[3].UnwrapExpr)
}
case 18:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:132
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr), exprDollar[3].duration, nil)
}
case 19:
exprDollar = exprS[exprpt-5 : exprpt+1]
-//line pkg/logql/expr.y:133
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[2].Selector), exprDollar[3].PipelineExpr), exprDollar[5].duration, nil)
}
case 20:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:134
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr), exprDollar[4].duration, exprDollar[3].UnwrapExpr)
}
case 21:
exprDollar = exprS[exprpt-6 : exprpt+1]
-//line pkg/logql/expr.y:135
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[2].Selector), exprDollar[3].PipelineExpr), exprDollar[6].duration, exprDollar[4].UnwrapExpr)
}
case 22:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:136
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[3].PipelineExpr), exprDollar[2].duration, nil)
}
case 23:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:137
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[3].PipelineExpr), exprDollar[2].duration, exprDollar[4].UnwrapExpr)
}
case 24:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:138
{
exprVAL.LogRangeExpr = exprDollar[2].LogRangeExpr
}
case 26:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:143
{
exprVAL.UnwrapExpr = newUnwrapExpr(exprDollar[3].str, "")
}
case 27:
exprDollar = exprS[exprpt-6 : exprpt+1]
-//line pkg/logql/expr.y:144
{
exprVAL.UnwrapExpr = newUnwrapExpr(exprDollar[5].str, exprDollar[3].ConvOp)
}
case 28:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:145
{
exprVAL.UnwrapExpr = exprDollar[1].UnwrapExpr.addPostFilter(exprDollar[3].LabelFilter)
}
case 29:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:149
{
- exprVAL.ConvOp = OpConvDuration
+ exprVAL.ConvOp = OpConvBytes
}
case 30:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:150
{
- exprVAL.ConvOp = OpConvDurationSeconds
+ exprVAL.ConvOp = OpConvDuration
}
case 31:
+ exprDollar = exprS[exprpt-1 : exprpt+1]
+ {
+ exprVAL.ConvOp = OpConvDurationSeconds
+ }
+ case 32:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:154
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[3].LogRangeExpr, exprDollar[1].RangeOp, nil, nil)
}
- case 32:
+ case 33:
exprDollar = exprS[exprpt-6 : exprpt+1]
-//line pkg/logql/expr.y:155
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[5].LogRangeExpr, exprDollar[1].RangeOp, nil, &exprDollar[3].str)
}
- case 33:
+ case 34:
exprDollar = exprS[exprpt-5 : exprpt+1]
-//line pkg/logql/expr.y:156
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[3].LogRangeExpr, exprDollar[1].RangeOp, exprDollar[5].Grouping, nil)
}
- case 34:
+ case 35:
exprDollar = exprS[exprpt-7 : exprpt+1]
-//line pkg/logql/expr.y:157
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[5].LogRangeExpr, exprDollar[1].RangeOp, exprDollar[7].Grouping, &exprDollar[3].str)
}
- case 35:
+ case 36:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:162
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[3].MetricExpr, exprDollar[1].VectorOp, nil, nil)
}
- case 36:
+ case 37:
exprDollar = exprS[exprpt-5 : exprpt+1]
-//line pkg/logql/expr.y:163
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[4].MetricExpr, exprDollar[1].VectorOp, exprDollar[2].Grouping, nil)
}
- case 37:
+ case 38:
exprDollar = exprS[exprpt-5 : exprpt+1]
-//line pkg/logql/expr.y:164
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[3].MetricExpr, exprDollar[1].VectorOp, exprDollar[5].Grouping, nil)
}
- case 38:
+ case 39:
exprDollar = exprS[exprpt-6 : exprpt+1]
-//line pkg/logql/expr.y:166
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[5].MetricExpr, exprDollar[1].VectorOp, nil, &exprDollar[3].str)
}
- case 39:
+ case 40:
exprDollar = exprS[exprpt-7 : exprpt+1]
-//line pkg/logql/expr.y:167
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[5].MetricExpr, exprDollar[1].VectorOp, exprDollar[7].Grouping, &exprDollar[3].str)
}
- case 40:
+ case 41:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:171
{
exprVAL.Filter = labels.MatchRegexp
}
- case 41:
+ case 42:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:172
{
exprVAL.Filter = labels.MatchEqual
}
- case 42:
+ case 43:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:173
{
exprVAL.Filter = labels.MatchNotRegexp
}
- case 43:
+ case 44:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:174
{
exprVAL.Filter = labels.MatchNotEqual
}
- case 44:
+ case 45:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:178
{
exprVAL.Selector = exprDollar[2].Matchers
}
- case 45:
+ case 46:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:179
{
exprVAL.Selector = exprDollar[2].Matchers
}
- case 46:
+ case 47:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:180
{
}
- case 47:
+ case 48:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:184
{
exprVAL.Matchers = []*labels.Matcher{exprDollar[1].Matcher}
}
- case 48:
+ case 49:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:185
{
exprVAL.Matchers = append(exprDollar[1].Matchers, exprDollar[3].Matcher)
}
- case 49:
+ case 50:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:189
{
exprVAL.Matcher = mustNewMatcher(labels.MatchEqual, exprDollar[1].str, exprDollar[3].str)
}
- case 50:
+ case 51:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:190
{
exprVAL.Matcher = mustNewMatcher(labels.MatchNotEqual, exprDollar[1].str, exprDollar[3].str)
}
- case 51:
+ case 52:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:191
{
exprVAL.Matcher = mustNewMatcher(labels.MatchRegexp, exprDollar[1].str, exprDollar[3].str)
}
- case 52:
+ case 53:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:192
{
exprVAL.Matcher = mustNewMatcher(labels.MatchNotRegexp, exprDollar[1].str, exprDollar[3].str)
}
- case 53:
+ case 54:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:196
{
exprVAL.PipelineExpr = MultiStageExpr{exprDollar[1].PipelineStage}
}
- case 54:
+ case 55:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:197
{
exprVAL.PipelineExpr = append(exprDollar[1].PipelineExpr, exprDollar[2].PipelineStage)
}
- case 55:
+ case 56:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:201
{
exprVAL.PipelineStage = exprDollar[1].LineFilters
}
- case 56:
+ case 57:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:202
{
exprVAL.PipelineStage = exprDollar[2].LabelParser
}
- case 57:
+ case 58:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:203
{
exprVAL.PipelineStage = &labelFilterExpr{LabelFilterer: exprDollar[2].LabelFilter}
}
- case 58:
+ case 59:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:204
{
exprVAL.PipelineStage = exprDollar[2].LineFormatExpr
}
- case 59:
+ case 60:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:205
{
exprVAL.PipelineStage = exprDollar[2].LabelFormatExpr
}
- case 60:
+ case 61:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:209
{
exprVAL.LineFilters = newLineFilterExpr(nil, exprDollar[1].Filter, exprDollar[2].str)
}
- case 61:
+ case 62:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:210
{
exprVAL.LineFilters = newLineFilterExpr(exprDollar[1].LineFilters, exprDollar[2].Filter, exprDollar[3].str)
}
- case 62:
+ case 63:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:213
{
exprVAL.LabelParser = newLabelParserExpr(OpParserTypeJSON, "")
}
- case 63:
+ case 64:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:214
{
exprVAL.LabelParser = newLabelParserExpr(OpParserTypeLogfmt, "")
}
- case 64:
+ case 65:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:215
{
exprVAL.LabelParser = newLabelParserExpr(OpParserTypeRegexp, exprDollar[2].str)
}
- case 65:
+ case 66:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:218
{
exprVAL.LineFormatExpr = newLineFmtExpr(exprDollar[2].str)
}
- case 66:
+ case 67:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:221
{
exprVAL.LabelFormat = log.NewRenameLabelFmt(exprDollar[1].str, exprDollar[3].str)
}
- case 67:
+ case 68:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:222
{
exprVAL.LabelFormat = log.NewTemplateLabelFmt(exprDollar[1].str, exprDollar[3].str)
}
- case 68:
+ case 69:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:226
{
exprVAL.LabelsFormat = []log.LabelFmt{exprDollar[1].LabelFormat}
}
- case 69:
+ case 70:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:227
{
exprVAL.LabelsFormat = append(exprDollar[1].LabelsFormat, exprDollar[3].LabelFormat)
}
- case 71:
+ case 72:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:231
{
exprVAL.LabelFormatExpr = newLabelFmtExpr(exprDollar[2].LabelsFormat)
}
- case 72:
+ case 73:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:234
{
exprVAL.LabelFilter = log.NewStringLabelFilter(exprDollar[1].Matcher)
}
- case 73:
+ case 74:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:235
{
exprVAL.LabelFilter = exprDollar[1].UnitFilter
}
- case 74:
+ case 75:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:236
{
exprVAL.LabelFilter = exprDollar[1].NumberFilter
}
- case 75:
+ case 76:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:237
{
exprVAL.LabelFilter = exprDollar[2].LabelFilter
}
- case 76:
+ case 77:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:238
{
exprVAL.LabelFilter = log.NewAndLabelFilter(exprDollar[1].LabelFilter, exprDollar[2].LabelFilter)
}
- case 77:
+ case 78:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:239
{
exprVAL.LabelFilter = log.NewAndLabelFilter(exprDollar[1].LabelFilter, exprDollar[3].LabelFilter)
}
- case 78:
+ case 79:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:240
{
exprVAL.LabelFilter = log.NewAndLabelFilter(exprDollar[1].LabelFilter, exprDollar[3].LabelFilter)
}
- case 79:
+ case 80:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:241
{
exprVAL.LabelFilter = log.NewOrLabelFilter(exprDollar[1].LabelFilter, exprDollar[3].LabelFilter)
}
- case 80:
+ case 81:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:245
{
exprVAL.UnitFilter = exprDollar[1].DurationFilter
}
- case 81:
+ case 82:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:246
{
exprVAL.UnitFilter = exprDollar[1].BytesFilter
}
- case 82:
- exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:249
- {
- exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, exprDollar[3].duration)
- }
case 83:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:250
{
- exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, exprDollar[3].duration)
+ exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, exprDollar[3].duration)
}
case 84:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:251
{
- exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, exprDollar[3].duration)
+ exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 85:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:252
{
- exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, exprDollar[3].duration)
+ exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, exprDollar[3].duration)
}
case 86:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:253
{
- exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, exprDollar[3].duration)
+ exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 87:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:254
{
- exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].duration)
+ exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 88:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:255
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 89:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:259
{
- exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, exprDollar[3].bytes)
+ exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 90:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:260
{
- exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, exprDollar[3].bytes)
+ exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, exprDollar[3].bytes)
}
case 91:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:261
{
- exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, exprDollar[3].bytes)
+ exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 92:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:262
{
- exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, exprDollar[3].bytes)
+ exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, exprDollar[3].bytes)
}
case 93:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:263
{
- exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, exprDollar[3].bytes)
+ exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 94:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:264
{
- exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].bytes)
+ exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 95:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:265
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 96:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:269
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 97:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:270
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
}
case 98:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:271
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
}
case 99:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:272
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
}
case 100:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:273
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
}
case 101:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:274
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
}
case 102:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:275
{
exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
}
case 103:
+ exprDollar = exprS[exprpt-3 : exprpt+1]
+ {
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ }
+ case 104:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:281
{
exprVAL.BinOpExpr = mustNewBinOpExpr("or", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 104:
+ case 105:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:282
{
exprVAL.BinOpExpr = mustNewBinOpExpr("and", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 105:
+ case 106:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:283
{
exprVAL.BinOpExpr = mustNewBinOpExpr("unless", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 106:
+ case 107:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:284
{
exprVAL.BinOpExpr = mustNewBinOpExpr("+", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 107:
+ case 108:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:285
{
exprVAL.BinOpExpr = mustNewBinOpExpr("-", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 108:
+ case 109:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:286
{
exprVAL.BinOpExpr = mustNewBinOpExpr("*", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 109:
+ case 110:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:287
{
exprVAL.BinOpExpr = mustNewBinOpExpr("/", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 110:
+ case 111:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:288
{
exprVAL.BinOpExpr = mustNewBinOpExpr("%", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 111:
+ case 112:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:289
{
exprVAL.BinOpExpr = mustNewBinOpExpr("^", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 112:
+ case 113:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:290
{
exprVAL.BinOpExpr = mustNewBinOpExpr("==", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 113:
+ case 114:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:291
{
exprVAL.BinOpExpr = mustNewBinOpExpr("!=", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 114:
+ case 115:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:292
{
exprVAL.BinOpExpr = mustNewBinOpExpr(">", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 115:
+ case 116:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:293
{
exprVAL.BinOpExpr = mustNewBinOpExpr(">=", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 116:
+ case 117:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:294
{
exprVAL.BinOpExpr = mustNewBinOpExpr("<", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 117:
+ case 118:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:295
{
exprVAL.BinOpExpr = mustNewBinOpExpr("<=", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
- case 118:
+ case 119:
exprDollar = exprS[exprpt-0 : exprpt+1]
-//line pkg/logql/expr.y:299
{
exprVAL.BinOpModifier = BinOpOptions{}
}
- case 119:
+ case 120:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:300
{
exprVAL.BinOpModifier = BinOpOptions{ReturnBool: true}
}
- case 120:
+ case 121:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:304
{
exprVAL.LiteralExpr = mustNewLiteralExpr(exprDollar[1].str, false)
}
- case 121:
+ case 122:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:305
{
exprVAL.LiteralExpr = mustNewLiteralExpr(exprDollar[2].str, false)
}
- case 122:
+ case 123:
exprDollar = exprS[exprpt-2 : exprpt+1]
-//line pkg/logql/expr.y:306
{
exprVAL.LiteralExpr = mustNewLiteralExpr(exprDollar[2].str, true)
}
- case 123:
+ case 124:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:310
{
exprVAL.VectorOp = OpTypeSum
}
- case 124:
+ case 125:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:311
{
exprVAL.VectorOp = OpTypeAvg
}
- case 125:
+ case 126:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:312
{
exprVAL.VectorOp = OpTypeCount
}
- case 126:
+ case 127:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:313
{
exprVAL.VectorOp = OpTypeMax
}
- case 127:
+ case 128:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:314
{
exprVAL.VectorOp = OpTypeMin
}
- case 128:
+ case 129:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:315
{
exprVAL.VectorOp = OpTypeStddev
}
- case 129:
+ case 130:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:316
{
exprVAL.VectorOp = OpTypeStdvar
}
- case 130:
+ case 131:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:317
{
exprVAL.VectorOp = OpTypeBottomK
}
- case 131:
+ case 132:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:318
{
exprVAL.VectorOp = OpTypeTopK
}
- case 132:
+ case 133:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:322
{
exprVAL.RangeOp = OpRangeTypeCount
}
- case 133:
+ case 134:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:323
{
exprVAL.RangeOp = OpRangeTypeRate
}
- case 134:
+ case 135:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:324
{
exprVAL.RangeOp = OpRangeTypeBytes
}
- case 135:
+ case 136:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:325
{
exprVAL.RangeOp = OpRangeTypeBytesRate
}
- case 136:
+ case 137:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:326
{
exprVAL.RangeOp = OpRangeTypeAvg
}
- case 137:
+ case 138:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:327
{
exprVAL.RangeOp = OpRangeTypeSum
}
- case 138:
+ case 139:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:328
{
exprVAL.RangeOp = OpRangeTypeMin
}
- case 139:
+ case 140:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:329
{
exprVAL.RangeOp = OpRangeTypeMax
}
- case 140:
+ case 141:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:330
{
exprVAL.RangeOp = OpRangeTypeStdvar
}
- case 141:
+ case 142:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:331
{
exprVAL.RangeOp = OpRangeTypeStddev
}
- case 142:
+ case 143:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:332
{
exprVAL.RangeOp = OpRangeTypeQuantile
}
- case 143:
+ case 144:
exprDollar = exprS[exprpt-1 : exprpt+1]
-//line pkg/logql/expr.y:337
{
exprVAL.Labels = []string{exprDollar[1].str}
}
- case 144:
+ case 145:
exprDollar = exprS[exprpt-3 : exprpt+1]
-//line pkg/logql/expr.y:338
{
exprVAL.Labels = append(exprDollar[1].Labels, exprDollar[3].str)
}
- case 145:
+ case 146:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:342
{
exprVAL.Grouping = &grouping{without: false, groups: exprDollar[3].Labels}
}
- case 146:
+ case 147:
exprDollar = exprS[exprpt-4 : exprpt+1]
-//line pkg/logql/expr.y:343
{
exprVAL.Grouping = &grouping{without: true, groups: exprDollar[3].Labels}
}
diff --git a/pkg/logql/functions.go b/pkg/logql/functions.go
index 9cf56d4dd95bf..4cd7cf0bdffbc 100644
--- a/pkg/logql/functions.go
+++ b/pkg/logql/functions.go
@@ -50,6 +50,8 @@ func (r rangeAggregationExpr) extractor(gr *grouping, all bool) (log.SampleExtra
if r.left.unwrap != nil {
var convOp string
switch r.left.unwrap.operation {
+ case OpConvBytes:
+ convOp = log.ConvertBytes
case OpConvDuration, OpConvDurationSeconds:
convOp = log.ConvertDuration
default:
diff --git a/pkg/logql/lex.go b/pkg/logql/lex.go
index 336b56674afc5..c977d008d81f3 100644
--- a/pkg/logql/lex.go
+++ b/pkg/logql/lex.go
@@ -86,6 +86,7 @@ var functionTokens = map[string]int{
OpTypeTopK: TOPK,
// conversion Op
+ OpConvBytes: BYTES_CONV,
OpConvDuration: DURATION_CONV,
OpConvDurationSeconds: DURATION_SECONDS_CONV,
}
diff --git a/pkg/logql/log/metrics_extraction.go b/pkg/logql/log/metrics_extraction.go
index 7c0791c8c482c..98149dc3506fa 100644
--- a/pkg/logql/log/metrics_extraction.go
+++ b/pkg/logql/log/metrics_extraction.go
@@ -7,9 +7,12 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/prometheus/pkg/labels"
+
+ "github.com/dustin/go-humanize"
)
const (
+ ConvertBytes = "bytes"
ConvertDuration = "duration"
ConvertFloat = "float"
)
@@ -117,6 +120,8 @@ func LabelExtractorWithStages(
) (SampleExtractor, error) {
var convFn convertionFn
switch conversion {
+ case ConvertBytes:
+ convFn = convertBytes
case ConvertDuration:
convFn = convertDuration
case ConvertFloat:
@@ -195,3 +200,11 @@ func convertDuration(v string) (float64, error) {
}
return d.Seconds(), nil
}
+
+func convertBytes(v string) (float64, error) {
+ b, err := humanize.ParseBytes(v)
+ if err != nil {
+ return 0, err
+ }
+ return float64(b), nil
+}
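The `convertBytes` helper added above delegates to `humanize.ParseBytes` to turn a human-readable size such as `"13 MiB"` into a byte count. As a rough illustration of what that conversion does, here is a minimal self-contained sketch handling only a few common IEC and SI suffixes; it is not the real go-humanize implementation, which supports many more unit spellings.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBytes converts a human-readable size like "13 MiB" into a
// float64 byte count. Minimal sketch: longer suffixes are checked
// first so "MiB" is not mistaken for a bare "B".
func parseBytes(v string) (float64, error) {
	v = strings.TrimSpace(v)
	suffixes := []struct {
		unit string
		mult float64
	}{
		{"KiB", 1 << 10}, {"MiB", 1 << 20}, {"GiB", 1 << 30},
		{"KB", 1e3}, {"MB", 1e6}, {"GB", 1e9},
		{"B", 1},
	}
	for _, s := range suffixes {
		if strings.HasSuffix(v, s.unit) {
			num := strings.TrimSpace(strings.TrimSuffix(v, s.unit))
			f, err := strconv.ParseFloat(num, 64)
			if err != nil {
				return 0, err
			}
			return f * s.mult, nil
		}
	}
	// No suffix: treat the value as a plain byte count.
	return strconv.ParseFloat(v, 64)
}

func main() {
	b, _ := parseBytes("13 MiB")
	fmt.Println(int64(b)) // 13631488, matching the 13 * 1024 * 1024 expectation in the test above
}
```

This is why the test case added in `metrics_extraction_test.go` expects `13 * 1024 * 1024` for the label value `"13 MiB"`.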
diff --git a/pkg/logql/log/metrics_extraction_test.go b/pkg/logql/log/metrics_extraction_test.go
index c6133c6bb81d5..a0b5d53fc5d6e 100644
--- a/pkg/logql/log/metrics_extraction_test.go
+++ b/pkg/logql/log/metrics_extraction_test.go
@@ -90,6 +90,24 @@ func Test_labelSampleExtractor_Extract(t *testing.T) {
},
true,
},
+ {
+ "convert bytes",
+ mustSampleExtractor(LabelExtractorWithStages(
+ "foo", ConvertBytes, []string{"bar", "buzz"}, false, false, nil, NoopStage,
+ )),
+ labels.Labels{
+ {Name: "foo", Value: "13 MiB"},
+ {Name: "bar", Value: "foo"},
+ {Name: "buzz", Value: "blip"},
+ {Name: "namespace", Value: "dev"},
+ },
+ 13 * 1024 * 1024,
+ labels.Labels{
+ {Name: "bar", Value: "foo"},
+ {Name: "buzz", Value: "blip"},
+ },
+ true,
+ },
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
diff --git a/pkg/logql/parser_test.go b/pkg/logql/parser_test.go
index 7ee139bcb549d..10cde5bcf152c 100644
--- a/pkg/logql/parser_test.go
+++ b/pkg/logql/parser_test.go
@@ -1228,6 +1228,27 @@ func TestParse(t *testing.T) {
OpRangeTypeStdvar, nil, nil,
),
},
+ {
+ in: `sum_over_time({namespace="tns"} |= "level=error" | json |foo>=5,bar<25ms| unwrap bytes(foo) [5m])`,
+ exp: newRangeAggregationExpr(
+ newLogRange(&pipelineExpr{
+ left: newMatcherExpr([]*labels.Matcher{{Type: labels.MatchEqual, Name: "namespace", Value: "tns"}}),
+ pipeline: MultiStageExpr{
+ newLineFilterExpr(nil, labels.MatchEqual, "level=error"),
+ newLabelParserExpr(OpParserTypeJSON, ""),
+ &labelFilterExpr{
+ LabelFilterer: log.NewAndLabelFilter(
+ log.NewNumericLabelFilter(log.LabelFilterGreaterThanOrEqual, "foo", 5),
+ log.NewDurationLabelFilter(log.LabelFilterLesserThan, "bar", 25*time.Millisecond),
+ ),
+ },
+ },
+ },
+ 5*time.Minute,
+ newUnwrapExpr("foo", OpConvBytes)),
+ OpRangeTypeSum, nil, nil,
+ ),
+ },
{
in: `sum_over_time({namespace="tns"} |= "level=error" | json |foo>=5,bar<25ms| unwrap latency [5m])`,
exp: newRangeAggregationExpr(
| logql | Add unwrap bytes() conversion function (#2876) |
| ede6941c6ff0f40d836b288e167a26c34c2a9437 | 2024-06-22 02:22:11 | Owen Diehl | fix(blooms): ignores bloom filtering errors in bounded shard query planning (#13285) | false |
diff --git a/pkg/indexgateway/gateway.go b/pkg/indexgateway/gateway.go
index e2850e8c9317f..7b49490a012ef 100644
--- a/pkg/indexgateway/gateway.go
+++ b/pkg/indexgateway/gateway.go
@@ -465,12 +465,15 @@ func (g *Gateway) boundedShards(
// 2) filter via blooms if enabled
filters := syntax.ExtractLineFilters(p.Plan().AST)
if g.bloomQuerier != nil && len(filters) > 0 {
- filtered, err = g.bloomQuerier.FilterChunkRefs(ctx, instanceID, req.From, req.Through, refs, p.Plan())
+ xs, err := g.bloomQuerier.FilterChunkRefs(ctx, instanceID, req.From, req.Through, refs, p.Plan())
if err != nil {
- return err
+ level.Error(logger).Log("msg", "failed to filter chunk refs", "err", err)
+ } else {
+ filtered = xs
}
sp.LogKV(
"stage", "queried bloom gateway",
+ "err", err,
)
}
| fix | ignores bloom filtering errors in bounded shard query planning (#13285) |
| 655ab05b915d0961e0154180cae28917ff27f445 | 2022-12-09 13:11:14 | dependabot[bot] | build(deps): bump golang.org/x/crypto from 0.1.0 to 0.4.0 (#7883) | false |
diff --git a/go.mod b/go.mod
index b6f90271f3d87..c8dab99ac68e8 100644
--- a/go.mod
+++ b/go.mod
@@ -97,10 +97,10 @@ require (
go.etcd.io/bbolt v1.3.6
go.uber.org/atomic v1.10.0
go.uber.org/goleak v1.2.0
- golang.org/x/crypto v0.1.0
- golang.org/x/net v0.1.0
+ golang.org/x/crypto v0.4.0
+ golang.org/x/net v0.3.0
golang.org/x/sync v0.1.0
- golang.org/x/sys v0.1.0
+ golang.org/x/sys v0.3.0
golang.org/x/time v0.1.0
google.golang.org/api v0.102.0
google.golang.org/grpc v1.50.1
@@ -119,7 +119,7 @@ require (
github.com/thanos-io/objstore v0.0.0-20220715165016-ce338803bc1e
github.com/willf/bloom v2.0.3+incompatible
golang.org/x/oauth2 v0.1.0
- golang.org/x/text v0.4.0
+ golang.org/x/text v0.5.0
)
require (
@@ -283,7 +283,7 @@ require (
go4.org/unsafe/assume-no-moving-gc v0.0.0-20220617031537-928513b29760 // indirect
golang.org/x/exp v0.0.0-20221031165847-c99f073a8326 // indirect
golang.org/x/mod v0.6.0 // indirect
- golang.org/x/term v0.1.0 // indirect
+ golang.org/x/term v0.3.0 // indirect
golang.org/x/tools v0.2.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/appengine v1.6.7 // indirect
diff --git a/go.sum b/go.sum
index b3141b106e713..e163e46d59578 100644
--- a/go.sum
+++ b/go.sum
@@ -1520,8 +1520,8 @@ golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20221012134737-56aed061732a/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
-golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
+golang.org/x/crypto v0.4.0 h1:UVQgzMY87xqpKNgb+kDsll2Igd33HszWHFLmpaRMq/8=
+golang.org/x/crypto v0.4.0/go.mod h1:3quD/ATkf6oY+rnes5c3ExXTbLc8mueNue5/DoinL80=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -1642,8 +1642,8 @@ golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220907135653-1e95f45603a7/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
-golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0=
-golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
+golang.org/x/net v0.3.0 h1:VWL6FNY2bEEmsGVKabSlHu5Irp34xmMRoqb/9lF9lxk=
+golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
golang.org/x/oauth2 v0.0.0-20170807180024-9a379c6b3e95/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1796,13 +1796,13 @@ golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908150016-7ac13a9a928d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
-golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.3.0 h1:w8ZOecv6NaNa/zC8944JTU3vz4u6Lagfk4RPQxv92NQ=
+golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
-golang.org/x/term v0.1.0 h1:g6Z6vPFA9dYBAF7DWcH6sCcOntplXsDKcliusYijMlw=
-golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.3.0 h1:qoo4akIqOcDME5bhc/NgxUdovd6BSS2uMsVjB56q1xI=
+golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1813,8 +1813,8 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
-golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
-golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.5.0 h1:OLmvp0KP+FVG99Ct/qFiL/Fhk4zp4QQnZ7b2U+5piUM=
+golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
diff --git a/vendor/golang.org/x/crypto/bcrypt/bcrypt.go b/vendor/golang.org/x/crypto/bcrypt/bcrypt.go
index aeb73f81a14c8..addf56b435a34 100644
--- a/vendor/golang.org/x/crypto/bcrypt/bcrypt.go
+++ b/vendor/golang.org/x/crypto/bcrypt/bcrypt.go
@@ -50,7 +50,7 @@ func (ih InvalidHashPrefixError) Error() string {
type InvalidCostError int
func (ic InvalidCostError) Error() string {
- return fmt.Sprintf("crypto/bcrypt: cost %d is outside allowed range (%d,%d)", int(ic), int(MinCost), int(MaxCost))
+ return fmt.Sprintf("crypto/bcrypt: cost %d is outside allowed range (%d,%d)", int(ic), MinCost, MaxCost)
}
const (
diff --git a/vendor/golang.org/x/crypto/md4/md4block.go b/vendor/golang.org/x/crypto/md4/md4block.go
index 3fed475f3f600..5ea1ba966ea4d 100644
--- a/vendor/golang.org/x/crypto/md4/md4block.go
+++ b/vendor/golang.org/x/crypto/md4/md4block.go
@@ -8,9 +8,11 @@
package md4
-var shift1 = []uint{3, 7, 11, 19}
-var shift2 = []uint{3, 5, 9, 13}
-var shift3 = []uint{3, 9, 11, 15}
+import "math/bits"
+
+var shift1 = []int{3, 7, 11, 19}
+var shift2 = []int{3, 5, 9, 13}
+var shift3 = []int{3, 9, 11, 15}
var xIndex2 = []uint{0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15}
var xIndex3 = []uint{0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15}
@@ -48,7 +50,7 @@ func _Block(dig *digest, p []byte) int {
s := shift1[i%4]
f := ((c ^ d) & b) ^ d
a += f + X[x]
- a = a<<s | a>>(32-s)
+ a = bits.RotateLeft32(a, s)
a, b, c, d = d, a, b, c
}
@@ -58,7 +60,7 @@ func _Block(dig *digest, p []byte) int {
s := shift2[i%4]
g := (b & c) | (b & d) | (c & d)
a += g + X[x] + 0x5a827999
- a = a<<s | a>>(32-s)
+ a = bits.RotateLeft32(a, s)
a, b, c, d = d, a, b, c
}
@@ -68,7 +70,7 @@ func _Block(dig *digest, p []byte) int {
s := shift3[i%4]
h := b ^ c ^ d
a += h + X[x] + 0x6ed9eba1
- a = a<<s | a>>(32-s)
+ a = bits.RotateLeft32(a, s)
a, b, c, d = d, a, b, c
}
diff --git a/vendor/golang.org/x/crypto/pkcs12/internal/rc2/rc2.go b/vendor/golang.org/x/crypto/pkcs12/internal/rc2/rc2.go
index 7499e3fb69d2d..05de9cc2cdcc5 100644
--- a/vendor/golang.org/x/crypto/pkcs12/internal/rc2/rc2.go
+++ b/vendor/golang.org/x/crypto/pkcs12/internal/rc2/rc2.go
@@ -14,6 +14,7 @@ package rc2
import (
"crypto/cipher"
"encoding/binary"
+ "math/bits"
)
// The rc2 block size in bytes
@@ -80,10 +81,6 @@ func expandKey(key []byte, t1 int) [64]uint16 {
return k
}
-func rotl16(x uint16, b uint) uint16 {
- return (x >> (16 - b)) | (x << b)
-}
-
func (c *rc2Cipher) Encrypt(dst, src []byte) {
r0 := binary.LittleEndian.Uint16(src[0:])
@@ -96,22 +93,22 @@ func (c *rc2Cipher) Encrypt(dst, src []byte) {
for j <= 16 {
// mix r0
r0 = r0 + c.k[j] + (r3 & r2) + ((^r3) & r1)
- r0 = rotl16(r0, 1)
+ r0 = bits.RotateLeft16(r0, 1)
j++
// mix r1
r1 = r1 + c.k[j] + (r0 & r3) + ((^r0) & r2)
- r1 = rotl16(r1, 2)
+ r1 = bits.RotateLeft16(r1, 2)
j++
// mix r2
r2 = r2 + c.k[j] + (r1 & r0) + ((^r1) & r3)
- r2 = rotl16(r2, 3)
+ r2 = bits.RotateLeft16(r2, 3)
j++
// mix r3
r3 = r3 + c.k[j] + (r2 & r1) + ((^r2) & r0)
- r3 = rotl16(r3, 5)
+ r3 = bits.RotateLeft16(r3, 5)
j++
}
@@ -124,22 +121,22 @@ func (c *rc2Cipher) Encrypt(dst, src []byte) {
for j <= 40 {
// mix r0
r0 = r0 + c.k[j] + (r3 & r2) + ((^r3) & r1)
- r0 = rotl16(r0, 1)
+ r0 = bits.RotateLeft16(r0, 1)
j++
// mix r1
r1 = r1 + c.k[j] + (r0 & r3) + ((^r0) & r2)
- r1 = rotl16(r1, 2)
+ r1 = bits.RotateLeft16(r1, 2)
j++
// mix r2
r2 = r2 + c.k[j] + (r1 & r0) + ((^r1) & r3)
- r2 = rotl16(r2, 3)
+ r2 = bits.RotateLeft16(r2, 3)
j++
// mix r3
r3 = r3 + c.k[j] + (r2 & r1) + ((^r2) & r0)
- r3 = rotl16(r3, 5)
+ r3 = bits.RotateLeft16(r3, 5)
j++
}
@@ -152,22 +149,22 @@ func (c *rc2Cipher) Encrypt(dst, src []byte) {
for j <= 60 {
// mix r0
r0 = r0 + c.k[j] + (r3 & r2) + ((^r3) & r1)
- r0 = rotl16(r0, 1)
+ r0 = bits.RotateLeft16(r0, 1)
j++
// mix r1
r1 = r1 + c.k[j] + (r0 & r3) + ((^r0) & r2)
- r1 = rotl16(r1, 2)
+ r1 = bits.RotateLeft16(r1, 2)
j++
// mix r2
r2 = r2 + c.k[j] + (r1 & r0) + ((^r1) & r3)
- r2 = rotl16(r2, 3)
+ r2 = bits.RotateLeft16(r2, 3)
j++
// mix r3
r3 = r3 + c.k[j] + (r2 & r1) + ((^r2) & r0)
- r3 = rotl16(r3, 5)
+ r3 = bits.RotateLeft16(r3, 5)
j++
}
@@ -188,22 +185,22 @@ func (c *rc2Cipher) Decrypt(dst, src []byte) {
for j >= 44 {
// unmix r3
- r3 = rotl16(r3, 16-5)
+ r3 = bits.RotateLeft16(r3, 16-5)
r3 = r3 - c.k[j] - (r2 & r1) - ((^r2) & r0)
j--
// unmix r2
- r2 = rotl16(r2, 16-3)
+ r2 = bits.RotateLeft16(r2, 16-3)
r2 = r2 - c.k[j] - (r1 & r0) - ((^r1) & r3)
j--
// unmix r1
- r1 = rotl16(r1, 16-2)
+ r1 = bits.RotateLeft16(r1, 16-2)
r1 = r1 - c.k[j] - (r0 & r3) - ((^r0) & r2)
j--
// unmix r0
- r0 = rotl16(r0, 16-1)
+ r0 = bits.RotateLeft16(r0, 16-1)
r0 = r0 - c.k[j] - (r3 & r2) - ((^r3) & r1)
j--
}
@@ -215,22 +212,22 @@ func (c *rc2Cipher) Decrypt(dst, src []byte) {
for j >= 20 {
// unmix r3
- r3 = rotl16(r3, 16-5)
+ r3 = bits.RotateLeft16(r3, 16-5)
r3 = r3 - c.k[j] - (r2 & r1) - ((^r2) & r0)
j--
// unmix r2
- r2 = rotl16(r2, 16-3)
+ r2 = bits.RotateLeft16(r2, 16-3)
r2 = r2 - c.k[j] - (r1 & r0) - ((^r1) & r3)
j--
// unmix r1
- r1 = rotl16(r1, 16-2)
+ r1 = bits.RotateLeft16(r1, 16-2)
r1 = r1 - c.k[j] - (r0 & r3) - ((^r0) & r2)
j--
// unmix r0
- r0 = rotl16(r0, 16-1)
+ r0 = bits.RotateLeft16(r0, 16-1)
r0 = r0 - c.k[j] - (r3 & r2) - ((^r3) & r1)
j--
@@ -243,22 +240,22 @@ func (c *rc2Cipher) Decrypt(dst, src []byte) {
for j >= 0 {
// unmix r3
- r3 = rotl16(r3, 16-5)
+ r3 = bits.RotateLeft16(r3, 16-5)
r3 = r3 - c.k[j] - (r2 & r1) - ((^r2) & r0)
j--
// unmix r2
- r2 = rotl16(r2, 16-3)
+ r2 = bits.RotateLeft16(r2, 16-3)
r2 = r2 - c.k[j] - (r1 & r0) - ((^r1) & r3)
j--
// unmix r1
- r1 = rotl16(r1, 16-2)
+ r1 = bits.RotateLeft16(r1, 16-2)
r1 = r1 - c.k[j] - (r0 & r3) - ((^r0) & r2)
j--
// unmix r0
- r0 = rotl16(r0, 16-1)
+ r0 = bits.RotateLeft16(r0, 16-1)
r0 = r0 - c.k[j] - (r3 & r2) - ((^r3) & r1)
j--
diff --git a/vendor/golang.org/x/crypto/sha3/keccakf.go b/vendor/golang.org/x/crypto/sha3/keccakf.go
index 0f4ae8bacff54..e5faa375c04e8 100644
--- a/vendor/golang.org/x/crypto/sha3/keccakf.go
+++ b/vendor/golang.org/x/crypto/sha3/keccakf.go
@@ -7,6 +7,8 @@
package sha3
+import "math/bits"
+
// rc stores the round constants for use in the ι step.
var rc = [24]uint64{
0x0000000000000001,
@@ -60,13 +62,13 @@ func keccakF1600(a *[25]uint64) {
bc0 = a[0] ^ d0
t = a[6] ^ d1
- bc1 = t<<44 | t>>(64-44)
+ bc1 = bits.RotateLeft64(t, 44)
t = a[12] ^ d2
- bc2 = t<<43 | t>>(64-43)
+ bc2 = bits.RotateLeft64(t, 43)
t = a[18] ^ d3
- bc3 = t<<21 | t>>(64-21)
+ bc3 = bits.RotateLeft64(t, 21)
t = a[24] ^ d4
- bc4 = t<<14 | t>>(64-14)
+ bc4 = bits.RotateLeft64(t, 14)
a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i]
a[6] = bc1 ^ (bc3 &^ bc2)
a[12] = bc2 ^ (bc4 &^ bc3)
@@ -74,15 +76,15 @@ func keccakF1600(a *[25]uint64) {
a[24] = bc4 ^ (bc1 &^ bc0)
t = a[10] ^ d0
- bc2 = t<<3 | t>>(64-3)
+ bc2 = bits.RotateLeft64(t, 3)
t = a[16] ^ d1
- bc3 = t<<45 | t>>(64-45)
+ bc3 = bits.RotateLeft64(t, 45)
t = a[22] ^ d2
- bc4 = t<<61 | t>>(64-61)
+ bc4 = bits.RotateLeft64(t, 61)
t = a[3] ^ d3
- bc0 = t<<28 | t>>(64-28)
+ bc0 = bits.RotateLeft64(t, 28)
t = a[9] ^ d4
- bc1 = t<<20 | t>>(64-20)
+ bc1 = bits.RotateLeft64(t, 20)
a[10] = bc0 ^ (bc2 &^ bc1)
a[16] = bc1 ^ (bc3 &^ bc2)
a[22] = bc2 ^ (bc4 &^ bc3)
@@ -90,15 +92,15 @@ func keccakF1600(a *[25]uint64) {
a[9] = bc4 ^ (bc1 &^ bc0)
t = a[20] ^ d0
- bc4 = t<<18 | t>>(64-18)
+ bc4 = bits.RotateLeft64(t, 18)
t = a[1] ^ d1
- bc0 = t<<1 | t>>(64-1)
+ bc0 = bits.RotateLeft64(t, 1)
t = a[7] ^ d2
- bc1 = t<<6 | t>>(64-6)
+ bc1 = bits.RotateLeft64(t, 6)
t = a[13] ^ d3
- bc2 = t<<25 | t>>(64-25)
+ bc2 = bits.RotateLeft64(t, 25)
t = a[19] ^ d4
- bc3 = t<<8 | t>>(64-8)
+ bc3 = bits.RotateLeft64(t, 8)
a[20] = bc0 ^ (bc2 &^ bc1)
a[1] = bc1 ^ (bc3 &^ bc2)
a[7] = bc2 ^ (bc4 &^ bc3)
@@ -106,15 +108,15 @@ func keccakF1600(a *[25]uint64) {
a[19] = bc4 ^ (bc1 &^ bc0)
t = a[5] ^ d0
- bc1 = t<<36 | t>>(64-36)
+ bc1 = bits.RotateLeft64(t, 36)
t = a[11] ^ d1
- bc2 = t<<10 | t>>(64-10)
+ bc2 = bits.RotateLeft64(t, 10)
t = a[17] ^ d2
- bc3 = t<<15 | t>>(64-15)
+ bc3 = bits.RotateLeft64(t, 15)
t = a[23] ^ d3
- bc4 = t<<56 | t>>(64-56)
+ bc4 = bits.RotateLeft64(t, 56)
t = a[4] ^ d4
- bc0 = t<<27 | t>>(64-27)
+ bc0 = bits.RotateLeft64(t, 27)
a[5] = bc0 ^ (bc2 &^ bc1)
a[11] = bc1 ^ (bc3 &^ bc2)
a[17] = bc2 ^ (bc4 &^ bc3)
@@ -122,15 +124,15 @@ func keccakF1600(a *[25]uint64) {
a[4] = bc4 ^ (bc1 &^ bc0)
t = a[15] ^ d0
- bc3 = t<<41 | t>>(64-41)
+ bc3 = bits.RotateLeft64(t, 41)
t = a[21] ^ d1
- bc4 = t<<2 | t>>(64-2)
+ bc4 = bits.RotateLeft64(t, 2)
t = a[2] ^ d2
- bc0 = t<<62 | t>>(64-62)
+ bc0 = bits.RotateLeft64(t, 62)
t = a[8] ^ d3
- bc1 = t<<55 | t>>(64-55)
+ bc1 = bits.RotateLeft64(t, 55)
t = a[14] ^ d4
- bc2 = t<<39 | t>>(64-39)
+ bc2 = bits.RotateLeft64(t, 39)
a[15] = bc0 ^ (bc2 &^ bc1)
a[21] = bc1 ^ (bc3 &^ bc2)
a[2] = bc2 ^ (bc4 &^ bc3)
@@ -151,13 +153,13 @@ func keccakF1600(a *[25]uint64) {
bc0 = a[0] ^ d0
t = a[16] ^ d1
- bc1 = t<<44 | t>>(64-44)
+ bc1 = bits.RotateLeft64(t, 44)
t = a[7] ^ d2
- bc2 = t<<43 | t>>(64-43)
+ bc2 = bits.RotateLeft64(t, 43)
t = a[23] ^ d3
- bc3 = t<<21 | t>>(64-21)
+ bc3 = bits.RotateLeft64(t, 21)
t = a[14] ^ d4
- bc4 = t<<14 | t>>(64-14)
+ bc4 = bits.RotateLeft64(t, 14)
a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i+1]
a[16] = bc1 ^ (bc3 &^ bc2)
a[7] = bc2 ^ (bc4 &^ bc3)
@@ -165,15 +167,15 @@ func keccakF1600(a *[25]uint64) {
a[14] = bc4 ^ (bc1 &^ bc0)
t = a[20] ^ d0
- bc2 = t<<3 | t>>(64-3)
+ bc2 = bits.RotateLeft64(t, 3)
t = a[11] ^ d1
- bc3 = t<<45 | t>>(64-45)
+ bc3 = bits.RotateLeft64(t, 45)
t = a[2] ^ d2
- bc4 = t<<61 | t>>(64-61)
+ bc4 = bits.RotateLeft64(t, 61)
t = a[18] ^ d3
- bc0 = t<<28 | t>>(64-28)
+ bc0 = bits.RotateLeft64(t, 28)
t = a[9] ^ d4
- bc1 = t<<20 | t>>(64-20)
+ bc1 = bits.RotateLeft64(t, 20)
a[20] = bc0 ^ (bc2 &^ bc1)
a[11] = bc1 ^ (bc3 &^ bc2)
a[2] = bc2 ^ (bc4 &^ bc3)
@@ -181,15 +183,15 @@ func keccakF1600(a *[25]uint64) {
a[9] = bc4 ^ (bc1 &^ bc0)
t = a[15] ^ d0
- bc4 = t<<18 | t>>(64-18)
+ bc4 = bits.RotateLeft64(t, 18)
t = a[6] ^ d1
- bc0 = t<<1 | t>>(64-1)
+ bc0 = bits.RotateLeft64(t, 1)
t = a[22] ^ d2
- bc1 = t<<6 | t>>(64-6)
+ bc1 = bits.RotateLeft64(t, 6)
t = a[13] ^ d3
- bc2 = t<<25 | t>>(64-25)
+ bc2 = bits.RotateLeft64(t, 25)
t = a[4] ^ d4
- bc3 = t<<8 | t>>(64-8)
+ bc3 = bits.RotateLeft64(t, 8)
a[15] = bc0 ^ (bc2 &^ bc1)
a[6] = bc1 ^ (bc3 &^ bc2)
a[22] = bc2 ^ (bc4 &^ bc3)
@@ -197,15 +199,15 @@ func keccakF1600(a *[25]uint64) {
a[4] = bc4 ^ (bc1 &^ bc0)
t = a[10] ^ d0
- bc1 = t<<36 | t>>(64-36)
+ bc1 = bits.RotateLeft64(t, 36)
t = a[1] ^ d1
- bc2 = t<<10 | t>>(64-10)
+ bc2 = bits.RotateLeft64(t, 10)
t = a[17] ^ d2
- bc3 = t<<15 | t>>(64-15)
+ bc3 = bits.RotateLeft64(t, 15)
t = a[8] ^ d3
- bc4 = t<<56 | t>>(64-56)
+ bc4 = bits.RotateLeft64(t, 56)
t = a[24] ^ d4
- bc0 = t<<27 | t>>(64-27)
+ bc0 = bits.RotateLeft64(t, 27)
a[10] = bc0 ^ (bc2 &^ bc1)
a[1] = bc1 ^ (bc3 &^ bc2)
a[17] = bc2 ^ (bc4 &^ bc3)
@@ -213,15 +215,15 @@ func keccakF1600(a *[25]uint64) {
a[24] = bc4 ^ (bc1 &^ bc0)
t = a[5] ^ d0
- bc3 = t<<41 | t>>(64-41)
+ bc3 = bits.RotateLeft64(t, 41)
t = a[21] ^ d1
- bc4 = t<<2 | t>>(64-2)
+ bc4 = bits.RotateLeft64(t, 2)
t = a[12] ^ d2
- bc0 = t<<62 | t>>(64-62)
+ bc0 = bits.RotateLeft64(t, 62)
t = a[3] ^ d3
- bc1 = t<<55 | t>>(64-55)
+ bc1 = bits.RotateLeft64(t, 55)
t = a[19] ^ d4
- bc2 = t<<39 | t>>(64-39)
+ bc2 = bits.RotateLeft64(t, 39)
a[5] = bc0 ^ (bc2 &^ bc1)
a[21] = bc1 ^ (bc3 &^ bc2)
a[12] = bc2 ^ (bc4 &^ bc3)
@@ -242,13 +244,13 @@ func keccakF1600(a *[25]uint64) {
bc0 = a[0] ^ d0
t = a[11] ^ d1
- bc1 = t<<44 | t>>(64-44)
+ bc1 = bits.RotateLeft64(t, 44)
t = a[22] ^ d2
- bc2 = t<<43 | t>>(64-43)
+ bc2 = bits.RotateLeft64(t, 43)
t = a[8] ^ d3
- bc3 = t<<21 | t>>(64-21)
+ bc3 = bits.RotateLeft64(t, 21)
t = a[19] ^ d4
- bc4 = t<<14 | t>>(64-14)
+ bc4 = bits.RotateLeft64(t, 14)
a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i+2]
a[11] = bc1 ^ (bc3 &^ bc2)
a[22] = bc2 ^ (bc4 &^ bc3)
@@ -256,15 +258,15 @@ func keccakF1600(a *[25]uint64) {
a[19] = bc4 ^ (bc1 &^ bc0)
t = a[15] ^ d0
- bc2 = t<<3 | t>>(64-3)
+ bc2 = bits.RotateLeft64(t, 3)
t = a[1] ^ d1
- bc3 = t<<45 | t>>(64-45)
+ bc3 = bits.RotateLeft64(t, 45)
t = a[12] ^ d2
- bc4 = t<<61 | t>>(64-61)
+ bc4 = bits.RotateLeft64(t, 61)
t = a[23] ^ d3
- bc0 = t<<28 | t>>(64-28)
+ bc0 = bits.RotateLeft64(t, 28)
t = a[9] ^ d4
- bc1 = t<<20 | t>>(64-20)
+ bc1 = bits.RotateLeft64(t, 20)
a[15] = bc0 ^ (bc2 &^ bc1)
a[1] = bc1 ^ (bc3 &^ bc2)
a[12] = bc2 ^ (bc4 &^ bc3)
@@ -272,15 +274,15 @@ func keccakF1600(a *[25]uint64) {
a[9] = bc4 ^ (bc1 &^ bc0)
t = a[5] ^ d0
- bc4 = t<<18 | t>>(64-18)
+ bc4 = bits.RotateLeft64(t, 18)
t = a[16] ^ d1
- bc0 = t<<1 | t>>(64-1)
+ bc0 = bits.RotateLeft64(t, 1)
t = a[2] ^ d2
- bc1 = t<<6 | t>>(64-6)
+ bc1 = bits.RotateLeft64(t, 6)
t = a[13] ^ d3
- bc2 = t<<25 | t>>(64-25)
+ bc2 = bits.RotateLeft64(t, 25)
t = a[24] ^ d4
- bc3 = t<<8 | t>>(64-8)
+ bc3 = bits.RotateLeft64(t, 8)
a[5] = bc0 ^ (bc2 &^ bc1)
a[16] = bc1 ^ (bc3 &^ bc2)
a[2] = bc2 ^ (bc4 &^ bc3)
@@ -288,15 +290,15 @@ func keccakF1600(a *[25]uint64) {
a[24] = bc4 ^ (bc1 &^ bc0)
t = a[20] ^ d0
- bc1 = t<<36 | t>>(64-36)
+ bc1 = bits.RotateLeft64(t, 36)
t = a[6] ^ d1
- bc2 = t<<10 | t>>(64-10)
+ bc2 = bits.RotateLeft64(t, 10)
t = a[17] ^ d2
- bc3 = t<<15 | t>>(64-15)
+ bc3 = bits.RotateLeft64(t, 15)
t = a[3] ^ d3
- bc4 = t<<56 | t>>(64-56)
+ bc4 = bits.RotateLeft64(t, 56)
t = a[14] ^ d4
- bc0 = t<<27 | t>>(64-27)
+ bc0 = bits.RotateLeft64(t, 27)
a[20] = bc0 ^ (bc2 &^ bc1)
a[6] = bc1 ^ (bc3 &^ bc2)
a[17] = bc2 ^ (bc4 &^ bc3)
@@ -304,15 +306,15 @@ func keccakF1600(a *[25]uint64) {
a[14] = bc4 ^ (bc1 &^ bc0)
t = a[10] ^ d0
- bc3 = t<<41 | t>>(64-41)
+ bc3 = bits.RotateLeft64(t, 41)
t = a[21] ^ d1
- bc4 = t<<2 | t>>(64-2)
+ bc4 = bits.RotateLeft64(t, 2)
t = a[7] ^ d2
- bc0 = t<<62 | t>>(64-62)
+ bc0 = bits.RotateLeft64(t, 62)
t = a[18] ^ d3
- bc1 = t<<55 | t>>(64-55)
+ bc1 = bits.RotateLeft64(t, 55)
t = a[4] ^ d4
- bc2 = t<<39 | t>>(64-39)
+ bc2 = bits.RotateLeft64(t, 39)
a[10] = bc0 ^ (bc2 &^ bc1)
a[21] = bc1 ^ (bc3 &^ bc2)
a[7] = bc2 ^ (bc4 &^ bc3)
@@ -333,13 +335,13 @@ func keccakF1600(a *[25]uint64) {
bc0 = a[0] ^ d0
t = a[1] ^ d1
- bc1 = t<<44 | t>>(64-44)
+ bc1 = bits.RotateLeft64(t, 44)
t = a[2] ^ d2
- bc2 = t<<43 | t>>(64-43)
+ bc2 = bits.RotateLeft64(t, 43)
t = a[3] ^ d3
- bc3 = t<<21 | t>>(64-21)
+ bc3 = bits.RotateLeft64(t, 21)
t = a[4] ^ d4
- bc4 = t<<14 | t>>(64-14)
+ bc4 = bits.RotateLeft64(t, 14)
a[0] = bc0 ^ (bc2 &^ bc1) ^ rc[i+3]
a[1] = bc1 ^ (bc3 &^ bc2)
a[2] = bc2 ^ (bc4 &^ bc3)
@@ -347,15 +349,15 @@ func keccakF1600(a *[25]uint64) {
a[4] = bc4 ^ (bc1 &^ bc0)
t = a[5] ^ d0
- bc2 = t<<3 | t>>(64-3)
+ bc2 = bits.RotateLeft64(t, 3)
t = a[6] ^ d1
- bc3 = t<<45 | t>>(64-45)
+ bc3 = bits.RotateLeft64(t, 45)
t = a[7] ^ d2
- bc4 = t<<61 | t>>(64-61)
+ bc4 = bits.RotateLeft64(t, 61)
t = a[8] ^ d3
- bc0 = t<<28 | t>>(64-28)
+ bc0 = bits.RotateLeft64(t, 28)
t = a[9] ^ d4
- bc1 = t<<20 | t>>(64-20)
+ bc1 = bits.RotateLeft64(t, 20)
a[5] = bc0 ^ (bc2 &^ bc1)
a[6] = bc1 ^ (bc3 &^ bc2)
a[7] = bc2 ^ (bc4 &^ bc3)
@@ -363,15 +365,15 @@ func keccakF1600(a *[25]uint64) {
a[9] = bc4 ^ (bc1 &^ bc0)
t = a[10] ^ d0
- bc4 = t<<18 | t>>(64-18)
+ bc4 = bits.RotateLeft64(t, 18)
t = a[11] ^ d1
- bc0 = t<<1 | t>>(64-1)
+ bc0 = bits.RotateLeft64(t, 1)
t = a[12] ^ d2
- bc1 = t<<6 | t>>(64-6)
+ bc1 = bits.RotateLeft64(t, 6)
t = a[13] ^ d3
- bc2 = t<<25 | t>>(64-25)
+ bc2 = bits.RotateLeft64(t, 25)
t = a[14] ^ d4
- bc3 = t<<8 | t>>(64-8)
+ bc3 = bits.RotateLeft64(t, 8)
a[10] = bc0 ^ (bc2 &^ bc1)
a[11] = bc1 ^ (bc3 &^ bc2)
a[12] = bc2 ^ (bc4 &^ bc3)
@@ -379,15 +381,15 @@ func keccakF1600(a *[25]uint64) {
a[14] = bc4 ^ (bc1 &^ bc0)
t = a[15] ^ d0
- bc1 = t<<36 | t>>(64-36)
+ bc1 = bits.RotateLeft64(t, 36)
t = a[16] ^ d1
- bc2 = t<<10 | t>>(64-10)
+ bc2 = bits.RotateLeft64(t, 10)
t = a[17] ^ d2
- bc3 = t<<15 | t>>(64-15)
+ bc3 = bits.RotateLeft64(t, 15)
t = a[18] ^ d3
- bc4 = t<<56 | t>>(64-56)
+ bc4 = bits.RotateLeft64(t, 56)
t = a[19] ^ d4
- bc0 = t<<27 | t>>(64-27)
+ bc0 = bits.RotateLeft64(t, 27)
a[15] = bc0 ^ (bc2 &^ bc1)
a[16] = bc1 ^ (bc3 &^ bc2)
a[17] = bc2 ^ (bc4 &^ bc3)
@@ -395,15 +397,15 @@ func keccakF1600(a *[25]uint64) {
a[19] = bc4 ^ (bc1 &^ bc0)
t = a[20] ^ d0
- bc3 = t<<41 | t>>(64-41)
+ bc3 = bits.RotateLeft64(t, 41)
t = a[21] ^ d1
- bc4 = t<<2 | t>>(64-2)
+ bc4 = bits.RotateLeft64(t, 2)
t = a[22] ^ d2
- bc0 = t<<62 | t>>(64-62)
+ bc0 = bits.RotateLeft64(t, 62)
t = a[23] ^ d3
- bc1 = t<<55 | t>>(64-55)
+ bc1 = bits.RotateLeft64(t, 55)
t = a[24] ^ d4
- bc2 = t<<39 | t>>(64-39)
+ bc2 = bits.RotateLeft64(t, 39)
a[20] = bc0 ^ (bc2 &^ bc1)
a[21] = bc1 ^ (bc3 &^ bc2)
a[22] = bc2 ^ (bc4 &^ bc3)
diff --git a/vendor/golang.org/x/net/http2/headermap.go b/vendor/golang.org/x/net/http2/headermap.go
index 9e12941da4c3d..149b3dd20e45f 100644
--- a/vendor/golang.org/x/net/http2/headermap.go
+++ b/vendor/golang.org/x/net/http2/headermap.go
@@ -27,7 +27,14 @@ func buildCommonHeaderMaps() {
"accept-language",
"accept-ranges",
"age",
+ "access-control-allow-credentials",
+ "access-control-allow-headers",
+ "access-control-allow-methods",
"access-control-allow-origin",
+ "access-control-expose-headers",
+ "access-control-max-age",
+ "access-control-request-headers",
+ "access-control-request-method",
"allow",
"authorization",
"cache-control",
@@ -53,6 +60,7 @@ func buildCommonHeaderMaps() {
"link",
"location",
"max-forwards",
+ "origin",
"proxy-authenticate",
"proxy-authorization",
"range",
@@ -68,6 +76,8 @@ func buildCommonHeaderMaps() {
"vary",
"via",
"www-authenticate",
+ "x-forwarded-for",
+ "x-forwarded-proto",
}
commonLowerHeader = make(map[string]string, len(common))
commonCanonHeader = make(map[string]string, len(common))
@@ -85,3 +95,11 @@ func lowerHeader(v string) (lower string, ascii bool) {
}
return asciiToLower(v)
}
+
+func canonicalHeader(v string) string {
+ buildCommonHeaderMapsOnce()
+ if s, ok := commonCanonHeader[v]; ok {
+ return s
+ }
+ return http.CanonicalHeaderKey(v)
+}
diff --git a/vendor/golang.org/x/net/http2/hpack/encode.go b/vendor/golang.org/x/net/http2/hpack/encode.go
index 6886dc163cba5..46219da2b01b2 100644
--- a/vendor/golang.org/x/net/http2/hpack/encode.go
+++ b/vendor/golang.org/x/net/http2/hpack/encode.go
@@ -116,6 +116,11 @@ func (e *Encoder) SetMaxDynamicTableSize(v uint32) {
e.dynTab.setMaxSize(v)
}
+// MaxDynamicTableSize returns the current dynamic header table size.
+func (e *Encoder) MaxDynamicTableSize() (v uint32) {
+ return e.dynTab.maxSize
+}
+
// SetMaxDynamicTableSizeLimit changes the maximum value that can be
// specified in SetMaxDynamicTableSize to v. By default, it is set to
// 4096, which is the same size of the default dynamic header table
diff --git a/vendor/golang.org/x/net/http2/hpack/static_table.go b/vendor/golang.org/x/net/http2/hpack/static_table.go
new file mode 100644
index 0000000000000..754a1eb919e9d
--- /dev/null
+++ b/vendor/golang.org/x/net/http2/hpack/static_table.go
@@ -0,0 +1,188 @@
+// go generate gen.go
+// Code generated by the command above; DO NOT EDIT.
+
+package hpack
+
+var staticTable = &headerFieldTable{
+ evictCount: 0,
+ byName: map[string]uint64{
+ ":authority": 1,
+ ":method": 3,
+ ":path": 5,
+ ":scheme": 7,
+ ":status": 14,
+ "accept-charset": 15,
+ "accept-encoding": 16,
+ "accept-language": 17,
+ "accept-ranges": 18,
+ "accept": 19,
+ "access-control-allow-origin": 20,
+ "age": 21,
+ "allow": 22,
+ "authorization": 23,
+ "cache-control": 24,
+ "content-disposition": 25,
+ "content-encoding": 26,
+ "content-language": 27,
+ "content-length": 28,
+ "content-location": 29,
+ "content-range": 30,
+ "content-type": 31,
+ "cookie": 32,
+ "date": 33,
+ "etag": 34,
+ "expect": 35,
+ "expires": 36,
+ "from": 37,
+ "host": 38,
+ "if-match": 39,
+ "if-modified-since": 40,
+ "if-none-match": 41,
+ "if-range": 42,
+ "if-unmodified-since": 43,
+ "last-modified": 44,
+ "link": 45,
+ "location": 46,
+ "max-forwards": 47,
+ "proxy-authenticate": 48,
+ "proxy-authorization": 49,
+ "range": 50,
+ "referer": 51,
+ "refresh": 52,
+ "retry-after": 53,
+ "server": 54,
+ "set-cookie": 55,
+ "strict-transport-security": 56,
+ "transfer-encoding": 57,
+ "user-agent": 58,
+ "vary": 59,
+ "via": 60,
+ "www-authenticate": 61,
+ },
+ byNameValue: map[pairNameValue]uint64{
+ {name: ":authority", value: ""}: 1,
+ {name: ":method", value: "GET"}: 2,
+ {name: ":method", value: "POST"}: 3,
+ {name: ":path", value: "/"}: 4,
+ {name: ":path", value: "/index.html"}: 5,
+ {name: ":scheme", value: "http"}: 6,
+ {name: ":scheme", value: "https"}: 7,
+ {name: ":status", value: "200"}: 8,
+ {name: ":status", value: "204"}: 9,
+ {name: ":status", value: "206"}: 10,
+ {name: ":status", value: "304"}: 11,
+ {name: ":status", value: "400"}: 12,
+ {name: ":status", value: "404"}: 13,
+ {name: ":status", value: "500"}: 14,
+ {name: "accept-charset", value: ""}: 15,
+ {name: "accept-encoding", value: "gzip, deflate"}: 16,
+ {name: "accept-language", value: ""}: 17,
+ {name: "accept-ranges", value: ""}: 18,
+ {name: "accept", value: ""}: 19,
+ {name: "access-control-allow-origin", value: ""}: 20,
+ {name: "age", value: ""}: 21,
+ {name: "allow", value: ""}: 22,
+ {name: "authorization", value: ""}: 23,
+ {name: "cache-control", value: ""}: 24,
+ {name: "content-disposition", value: ""}: 25,
+ {name: "content-encoding", value: ""}: 26,
+ {name: "content-language", value: ""}: 27,
+ {name: "content-length", value: ""}: 28,
+ {name: "content-location", value: ""}: 29,
+ {name: "content-range", value: ""}: 30,
+ {name: "content-type", value: ""}: 31,
+ {name: "cookie", value: ""}: 32,
+ {name: "date", value: ""}: 33,
+ {name: "etag", value: ""}: 34,
+ {name: "expect", value: ""}: 35,
+ {name: "expires", value: ""}: 36,
+ {name: "from", value: ""}: 37,
+ {name: "host", value: ""}: 38,
+ {name: "if-match", value: ""}: 39,
+ {name: "if-modified-since", value: ""}: 40,
+ {name: "if-none-match", value: ""}: 41,
+ {name: "if-range", value: ""}: 42,
+ {name: "if-unmodified-since", value: ""}: 43,
+ {name: "last-modified", value: ""}: 44,
+ {name: "link", value: ""}: 45,
+ {name: "location", value: ""}: 46,
+ {name: "max-forwards", value: ""}: 47,
+ {name: "proxy-authenticate", value: ""}: 48,
+ {name: "proxy-authorization", value: ""}: 49,
+ {name: "range", value: ""}: 50,
+ {name: "referer", value: ""}: 51,
+ {name: "refresh", value: ""}: 52,
+ {name: "retry-after", value: ""}: 53,
+ {name: "server", value: ""}: 54,
+ {name: "set-cookie", value: ""}: 55,
+ {name: "strict-transport-security", value: ""}: 56,
+ {name: "transfer-encoding", value: ""}: 57,
+ {name: "user-agent", value: ""}: 58,
+ {name: "vary", value: ""}: 59,
+ {name: "via", value: ""}: 60,
+ {name: "www-authenticate", value: ""}: 61,
+ },
+ ents: []HeaderField{
+ {Name: ":authority", Value: "", Sensitive: false},
+ {Name: ":method", Value: "GET", Sensitive: false},
+ {Name: ":method", Value: "POST", Sensitive: false},
+ {Name: ":path", Value: "/", Sensitive: false},
+ {Name: ":path", Value: "/index.html", Sensitive: false},
+ {Name: ":scheme", Value: "http", Sensitive: false},
+ {Name: ":scheme", Value: "https", Sensitive: false},
+ {Name: ":status", Value: "200", Sensitive: false},
+ {Name: ":status", Value: "204", Sensitive: false},
+ {Name: ":status", Value: "206", Sensitive: false},
+ {Name: ":status", Value: "304", Sensitive: false},
+ {Name: ":status", Value: "400", Sensitive: false},
+ {Name: ":status", Value: "404", Sensitive: false},
+ {Name: ":status", Value: "500", Sensitive: false},
+ {Name: "accept-charset", Value: "", Sensitive: false},
+ {Name: "accept-encoding", Value: "gzip, deflate", Sensitive: false},
+ {Name: "accept-language", Value: "", Sensitive: false},
+ {Name: "accept-ranges", Value: "", Sensitive: false},
+ {Name: "accept", Value: "", Sensitive: false},
+ {Name: "access-control-allow-origin", Value: "", Sensitive: false},
+ {Name: "age", Value: "", Sensitive: false},
+ {Name: "allow", Value: "", Sensitive: false},
+ {Name: "authorization", Value: "", Sensitive: false},
+ {Name: "cache-control", Value: "", Sensitive: false},
+ {Name: "content-disposition", Value: "", Sensitive: false},
+ {Name: "content-encoding", Value: "", Sensitive: false},
+ {Name: "content-language", Value: "", Sensitive: false},
+ {Name: "content-length", Value: "", Sensitive: false},
+ {Name: "content-location", Value: "", Sensitive: false},
+ {Name: "content-range", Value: "", Sensitive: false},
+ {Name: "content-type", Value: "", Sensitive: false},
+ {Name: "cookie", Value: "", Sensitive: false},
+ {Name: "date", Value: "", Sensitive: false},
+ {Name: "etag", Value: "", Sensitive: false},
+ {Name: "expect", Value: "", Sensitive: false},
+ {Name: "expires", Value: "", Sensitive: false},
+ {Name: "from", Value: "", Sensitive: false},
+ {Name: "host", Value: "", Sensitive: false},
+ {Name: "if-match", Value: "", Sensitive: false},
+ {Name: "if-modified-since", Value: "", Sensitive: false},
+ {Name: "if-none-match", Value: "", Sensitive: false},
+ {Name: "if-range", Value: "", Sensitive: false},
+ {Name: "if-unmodified-since", Value: "", Sensitive: false},
+ {Name: "last-modified", Value: "", Sensitive: false},
+ {Name: "link", Value: "", Sensitive: false},
+ {Name: "location", Value: "", Sensitive: false},
+ {Name: "max-forwards", Value: "", Sensitive: false},
+ {Name: "proxy-authenticate", Value: "", Sensitive: false},
+ {Name: "proxy-authorization", Value: "", Sensitive: false},
+ {Name: "range", Value: "", Sensitive: false},
+ {Name: "referer", Value: "", Sensitive: false},
+ {Name: "refresh", Value: "", Sensitive: false},
+ {Name: "retry-after", Value: "", Sensitive: false},
+ {Name: "server", Value: "", Sensitive: false},
+ {Name: "set-cookie", Value: "", Sensitive: false},
+ {Name: "strict-transport-security", Value: "", Sensitive: false},
+ {Name: "transfer-encoding", Value: "", Sensitive: false},
+ {Name: "user-agent", Value: "", Sensitive: false},
+ {Name: "vary", Value: "", Sensitive: false},
+ {Name: "via", Value: "", Sensitive: false},
+ {Name: "www-authenticate", Value: "", Sensitive: false},
+ },
+}
diff --git a/vendor/golang.org/x/net/http2/hpack/tables.go b/vendor/golang.org/x/net/http2/hpack/tables.go
index a66cfbea69d91..8cbdf3f019cb1 100644
--- a/vendor/golang.org/x/net/http2/hpack/tables.go
+++ b/vendor/golang.org/x/net/http2/hpack/tables.go
@@ -96,8 +96,7 @@ func (t *headerFieldTable) evictOldest(n int) {
// meaning t.ents is reversed for dynamic tables. Hence, when t is a dynamic
// table, the return value i actually refers to the entry t.ents[t.len()-i].
//
-// All tables are assumed to be a dynamic tables except for the global
-// staticTable pointer.
+// All tables are assumed to be a dynamic tables except for the global staticTable.
//
// See Section 2.3.3.
func (t *headerFieldTable) search(f HeaderField) (i uint64, nameValueMatch bool) {
@@ -125,81 +124,6 @@ func (t *headerFieldTable) idToIndex(id uint64) uint64 {
return k + 1
}
-// http://tools.ietf.org/html/draft-ietf-httpbis-header-compression-07#appendix-B
-var staticTable = newStaticTable()
-var staticTableEntries = [...]HeaderField{
- {Name: ":authority"},
- {Name: ":method", Value: "GET"},
- {Name: ":method", Value: "POST"},
- {Name: ":path", Value: "/"},
- {Name: ":path", Value: "/index.html"},
- {Name: ":scheme", Value: "http"},
- {Name: ":scheme", Value: "https"},
- {Name: ":status", Value: "200"},
- {Name: ":status", Value: "204"},
- {Name: ":status", Value: "206"},
- {Name: ":status", Value: "304"},
- {Name: ":status", Value: "400"},
- {Name: ":status", Value: "404"},
- {Name: ":status", Value: "500"},
- {Name: "accept-charset"},
- {Name: "accept-encoding", Value: "gzip, deflate"},
- {Name: "accept-language"},
- {Name: "accept-ranges"},
- {Name: "accept"},
- {Name: "access-control-allow-origin"},
- {Name: "age"},
- {Name: "allow"},
- {Name: "authorization"},
- {Name: "cache-control"},
- {Name: "content-disposition"},
- {Name: "content-encoding"},
- {Name: "content-language"},
- {Name: "content-length"},
- {Name: "content-location"},
- {Name: "content-range"},
- {Name: "content-type"},
- {Name: "cookie"},
- {Name: "date"},
- {Name: "etag"},
- {Name: "expect"},
- {Name: "expires"},
- {Name: "from"},
- {Name: "host"},
- {Name: "if-match"},
- {Name: "if-modified-since"},
- {Name: "if-none-match"},
- {Name: "if-range"},
- {Name: "if-unmodified-since"},
- {Name: "last-modified"},
- {Name: "link"},
- {Name: "location"},
- {Name: "max-forwards"},
- {Name: "proxy-authenticate"},
- {Name: "proxy-authorization"},
- {Name: "range"},
- {Name: "referer"},
- {Name: "refresh"},
- {Name: "retry-after"},
- {Name: "server"},
- {Name: "set-cookie"},
- {Name: "strict-transport-security"},
- {Name: "transfer-encoding"},
- {Name: "user-agent"},
- {Name: "vary"},
- {Name: "via"},
- {Name: "www-authenticate"},
-}
-
-func newStaticTable() *headerFieldTable {
- t := &headerFieldTable{}
- t.init()
- for _, e := range staticTableEntries[:] {
- t.addEntry(e)
- }
- return t
-}
-
var huffmanCodes = [256]uint32{
0x1ff8,
0x7fffd8,
diff --git a/vendor/golang.org/x/net/http2/server.go b/vendor/golang.org/x/net/http2/server.go
index 43cc2a34ad021..e35a76c07b733 100644
--- a/vendor/golang.org/x/net/http2/server.go
+++ b/vendor/golang.org/x/net/http2/server.go
@@ -98,6 +98,19 @@ type Server struct {
// the HTTP/2 spec's recommendations.
MaxConcurrentStreams uint32
+ // MaxDecoderHeaderTableSize optionally specifies the http2
+ // SETTINGS_HEADER_TABLE_SIZE to send in the initial settings frame. It
+ // informs the remote endpoint of the maximum size of the header compression
+ // table used to decode header blocks, in octets. If zero, the default value
+ // of 4096 is used.
+ MaxDecoderHeaderTableSize uint32
+
+ // MaxEncoderHeaderTableSize optionally specifies an upper limit for the
+ // header compression table used for encoding request headers. Received
+ // SETTINGS_HEADER_TABLE_SIZE settings are capped at this limit. If zero,
+ // the default value of 4096 is used.
+ MaxEncoderHeaderTableSize uint32
+
// MaxReadFrameSize optionally specifies the largest frame
// this server is willing to read. A valid value is between
// 16k and 16M, inclusive. If zero or otherwise invalid, a
@@ -170,6 +183,20 @@ func (s *Server) maxConcurrentStreams() uint32 {
return defaultMaxStreams
}
+func (s *Server) maxDecoderHeaderTableSize() uint32 {
+ if v := s.MaxDecoderHeaderTableSize; v > 0 {
+ return v
+ }
+ return initialHeaderTableSize
+}
+
+func (s *Server) maxEncoderHeaderTableSize() uint32 {
+ if v := s.MaxEncoderHeaderTableSize; v > 0 {
+ return v
+ }
+ return initialHeaderTableSize
+}
+
// maxQueuedControlFrames is the maximum number of control frames like
// SETTINGS, PING and RST_STREAM that will be queued for writing before
// the connection is closed to prevent memory exhaustion attacks.
@@ -394,7 +421,6 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
advMaxStreams: s.maxConcurrentStreams(),
initialStreamSendWindowSize: initialWindowSize,
maxFrameSize: initialMaxFrameSize,
- headerTableSize: initialHeaderTableSize,
serveG: newGoroutineLock(),
pushEnabled: true,
sawClientPreface: opts.SawClientPreface,
@@ -424,12 +450,13 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) {
sc.flow.add(initialWindowSize)
sc.inflow.add(initialWindowSize)
sc.hpackEncoder = hpack.NewEncoder(&sc.headerWriteBuf)
+ sc.hpackEncoder.SetMaxDynamicTableSizeLimit(s.maxEncoderHeaderTableSize())
fr := NewFramer(sc.bw, c)
if s.CountError != nil {
fr.countError = s.CountError
}
- fr.ReadMetaHeaders = hpack.NewDecoder(initialHeaderTableSize, nil)
+ fr.ReadMetaHeaders = hpack.NewDecoder(s.maxDecoderHeaderTableSize(), nil)
fr.MaxHeaderListSize = sc.maxHeaderListSize()
fr.SetMaxReadFrameSize(s.maxReadFrameSize())
sc.framer = fr
@@ -559,7 +586,6 @@ type serverConn struct {
streams map[uint32]*stream
initialStreamSendWindowSize int32
maxFrameSize int32
- headerTableSize uint32
peerMaxHeaderListSize uint32 // zero means unknown (default)
canonHeader map[string]string // http2-lower-case -> Go-Canonical-Case
writingFrame bool // started writing a frame (on serve goroutine or separate)
@@ -622,7 +648,9 @@ type stream struct {
resetQueued bool // RST_STREAM queued for write; set by sc.resetStream
gotTrailerHeader bool // HEADER frame for trailers was seen
wroteHeaders bool // whether we wrote headers (not status 100)
+ readDeadline *time.Timer // nil if unused
writeDeadline *time.Timer // nil if unused
+ closeErr error // set before cw is closed
trailer http.Header // accumulated trailers
reqTrailer http.Header // handler's Request.Trailer
@@ -862,6 +890,7 @@ func (sc *serverConn) serve() {
{SettingMaxFrameSize, sc.srv.maxReadFrameSize()},
{SettingMaxConcurrentStreams, sc.advMaxStreams},
{SettingMaxHeaderListSize, sc.maxHeaderListSize()},
+ {SettingHeaderTableSize, sc.srv.maxDecoderHeaderTableSize()},
{SettingInitialWindowSize, uint32(sc.srv.initialStreamRecvWindowSize())},
},
})
@@ -869,7 +898,9 @@ func (sc *serverConn) serve() {
// Each connection starts with initialWindowSize inflow tokens.
// If a higher value is configured, we add more tokens.
- sc.sendWindowUpdate(nil)
+ if diff := sc.srv.initialConnRecvWindowSize() - initialWindowSize; diff > 0 {
+ sc.sendWindowUpdate(nil, int(diff))
+ }
if err := sc.readPreface(); err != nil {
sc.condlogf(err, "http2: server: error reading preface from client %v: %v", sc.conn.RemoteAddr(), err)
@@ -946,6 +977,8 @@ func (sc *serverConn) serve() {
}
case *startPushRequest:
sc.startPush(v)
+ case func(*serverConn):
+ v(sc)
default:
panic(fmt.Sprintf("unexpected type %T", v))
}
@@ -1459,6 +1492,21 @@ func (sc *serverConn) processFrame(f Frame) error {
sc.sawFirstSettings = true
}
+ // Discard frames for streams initiated after the identified last
+ // stream sent in a GOAWAY, or all frames after sending an error.
+ // We still need to return connection-level flow control for DATA frames.
+ // RFC 9113 Section 6.8.
+ if sc.inGoAway && (sc.goAwayCode != ErrCodeNo || f.Header().StreamID > sc.maxClientStreamID) {
+
+ if f, ok := f.(*DataFrame); ok {
+ if sc.inflow.available() < int32(f.Length) {
+ return sc.countError("data_flow", streamError(f.Header().StreamID, ErrCodeFlowControl))
+ }
+ sc.sendWindowUpdate(nil, int(f.Length)) // conn-level
+ }
+ return nil
+ }
+
switch f := f.(type) {
case *SettingsFrame:
return sc.processSettings(f)
@@ -1501,9 +1549,6 @@ func (sc *serverConn) processPing(f *PingFrame) error {
// PROTOCOL_ERROR."
return sc.countError("ping_on_stream", ConnectionError(ErrCodeProtocol))
}
- if sc.inGoAway && sc.goAwayCode != ErrCodeNo {
- return nil
- }
sc.writeFrame(FrameWriteRequest{write: writePingAck{f}})
return nil
}
@@ -1565,6 +1610,9 @@ func (sc *serverConn) closeStream(st *stream, err error) {
panic(fmt.Sprintf("invariant; can't close stream in state %v", st.state))
}
st.state = stateClosed
+ if st.readDeadline != nil {
+ st.readDeadline.Stop()
+ }
if st.writeDeadline != nil {
st.writeDeadline.Stop()
}
@@ -1586,10 +1634,18 @@ func (sc *serverConn) closeStream(st *stream, err error) {
if p := st.body; p != nil {
// Return any buffered unread bytes worth of conn-level flow control.
// See golang.org/issue/16481
- sc.sendWindowUpdate(nil)
+ sc.sendWindowUpdate(nil, p.Len())
p.CloseWithError(err)
}
+ if e, ok := err.(StreamError); ok {
+ if e.Cause != nil {
+ err = e.Cause
+ } else {
+ err = errStreamClosed
+ }
+ }
+ st.closeErr = err
st.cw.Close() // signals Handler's CloseNotifier, unblocks writes, etc
sc.writeSched.CloseStream(st.id)
}
@@ -1632,7 +1688,6 @@ func (sc *serverConn) processSetting(s Setting) error {
}
switch s.ID {
case SettingHeaderTableSize:
- sc.headerTableSize = s.Val
sc.hpackEncoder.SetMaxDynamicTableSize(s.Val)
case SettingEnablePush:
sc.pushEnabled = s.Val != 0
@@ -1686,16 +1741,6 @@ func (sc *serverConn) processSettingInitialWindowSize(val uint32) error {
func (sc *serverConn) processData(f *DataFrame) error {
sc.serveG.check()
id := f.Header().StreamID
- if sc.inGoAway && (sc.goAwayCode != ErrCodeNo || id > sc.maxClientStreamID) {
- // Discard all DATA frames if the GOAWAY is due to an
- // error, or:
- //
- // Section 6.8: After sending a GOAWAY frame, the sender
- // can discard frames for streams initiated by the
- // receiver with identifiers higher than the identified
- // last stream.
- return nil
- }
data := f.Data()
state, st := sc.state(id)
@@ -1734,7 +1779,7 @@ func (sc *serverConn) processData(f *DataFrame) error {
// sendWindowUpdate, which also schedules sending the
// frames.
sc.inflow.take(int32(f.Length))
- sc.sendWindowUpdate(nil) // conn-level
+ sc.sendWindowUpdate(nil, int(f.Length)) // conn-level
if st != nil && st.resetQueued {
// Already have a stream error in flight. Don't send another.
@@ -1752,7 +1797,7 @@ func (sc *serverConn) processData(f *DataFrame) error {
return sc.countError("data_flow", streamError(id, ErrCodeFlowControl))
}
sc.inflow.take(int32(f.Length))
- sc.sendWindowUpdate(nil) // conn-level
+ sc.sendWindowUpdate(nil, int(f.Length)) // conn-level
st.body.CloseWithError(fmt.Errorf("sender tried to send more than declared Content-Length of %d bytes", st.declBodyBytes))
// RFC 7540, sec 8.1.2.6: A request or response is also malformed if the
@@ -1770,7 +1815,7 @@ func (sc *serverConn) processData(f *DataFrame) error {
if len(data) > 0 {
wrote, err := st.body.Write(data)
if err != nil {
- sc.sendWindowUpdate32(nil, int32(f.Length)-int32(wrote))
+ sc.sendWindowUpdate(nil, int(f.Length)-wrote)
return sc.countError("body_write_err", streamError(id, ErrCodeStreamClosed))
}
if wrote != len(data) {
@@ -1838,19 +1883,27 @@ func (st *stream) copyTrailersToHandlerRequest() {
}
}
+// onReadTimeout is run on its own goroutine (from time.AfterFunc)
+// when the stream's ReadTimeout has fired.
+func (st *stream) onReadTimeout() {
+ // Wrap the ErrDeadlineExceeded to avoid callers depending on us
+ // returning the bare error.
+ st.body.CloseWithError(fmt.Errorf("%w", os.ErrDeadlineExceeded))
+}
+
// onWriteTimeout is run on its own goroutine (from time.AfterFunc)
// when the stream's WriteTimeout has fired.
func (st *stream) onWriteTimeout() {
- st.sc.writeFrameFromHandler(FrameWriteRequest{write: streamError(st.id, ErrCodeInternal)})
+ st.sc.writeFrameFromHandler(FrameWriteRequest{write: StreamError{
+ StreamID: st.id,
+ Code: ErrCodeInternal,
+ Cause: os.ErrDeadlineExceeded,
+ }})
}
func (sc *serverConn) processHeaders(f *MetaHeadersFrame) error {
sc.serveG.check()
id := f.StreamID
- if sc.inGoAway {
- // Ignore.
- return nil
- }
// http://tools.ietf.org/html/rfc7540#section-5.1.1
// Streams initiated by a client MUST use odd-numbered stream
// identifiers. [...] An endpoint that receives an unexpected
@@ -1953,6 +2006,9 @@ func (sc *serverConn) processHeaders(f *MetaHeadersFrame) error {
// (in Go 1.8), though. That's a more sane option anyway.
if sc.hs.ReadTimeout != 0 {
sc.conn.SetReadDeadline(time.Time{})
+ if st.body != nil {
+ st.readDeadline = time.AfterFunc(sc.hs.ReadTimeout, st.onReadTimeout)
+ }
}
go sc.runHandler(rw, req, handler)
@@ -2021,9 +2077,6 @@ func (sc *serverConn) checkPriority(streamID uint32, p PriorityParam) error {
}
func (sc *serverConn) processPriority(f *PriorityFrame) error {
- if sc.inGoAway {
- return nil
- }
if err := sc.checkPriority(f.StreamID, f.PriorityParam); err != nil {
return err
}
@@ -2322,39 +2375,24 @@ func (sc *serverConn) noteBodyReadFromHandler(st *stream, n int, err error) {
func (sc *serverConn) noteBodyRead(st *stream, n int) {
sc.serveG.check()
- sc.sendWindowUpdate(nil) // conn-level
+ sc.sendWindowUpdate(nil, n) // conn-level
if st.state != stateHalfClosedRemote && st.state != stateClosed {
// Don't send this WINDOW_UPDATE if the stream is closed
// remotely.
- sc.sendWindowUpdate(st)
+ sc.sendWindowUpdate(st, n)
}
}
// st may be nil for conn-level
-func (sc *serverConn) sendWindowUpdate(st *stream) {
+func (sc *serverConn) sendWindowUpdate(st *stream, n int) {
sc.serveG.check()
-
- var n int32
- if st == nil {
- if avail, windowSize := sc.inflow.available(), sc.srv.initialConnRecvWindowSize(); avail > windowSize/2 {
- return
- } else {
- n = windowSize - avail
- }
- } else {
- if avail, windowSize := st.inflow.available(), sc.srv.initialStreamRecvWindowSize(); avail > windowSize/2 {
- return
- } else {
- n = windowSize - avail
- }
- }
// "The legal range for the increment to the flow control
// window is 1 to 2^31-1 (2,147,483,647) octets."
// A Go Read call on 64-bit machines could in theory read
// a larger Read than this. Very unlikely, but we handle it here
// rather than elsewhere for now.
const maxUint31 = 1<<31 - 1
- for n >= maxUint31 {
+ for n > maxUint31 {
sc.sendWindowUpdate32(st, maxUint31)
n -= maxUint31
}
@@ -2474,7 +2512,15 @@ type responseWriterState struct {
type chunkWriter struct{ rws *responseWriterState }
-func (cw chunkWriter) Write(p []byte) (n int, err error) { return cw.rws.writeChunk(p) }
+func (cw chunkWriter) Write(p []byte) (n int, err error) {
+ n, err = cw.rws.writeChunk(p)
+ if err == errStreamClosed {
+ // If writing failed because the stream has been closed,
+ // return the reason it was closed.
+ err = cw.rws.stream.closeErr
+ }
+ return n, err
+}
func (rws *responseWriterState) hasTrailers() bool { return len(rws.trailers) > 0 }
@@ -2668,23 +2714,85 @@ func (rws *responseWriterState) promoteUndeclaredTrailers() {
}
}
+func (w *responseWriter) SetReadDeadline(deadline time.Time) error {
+ st := w.rws.stream
+ if !deadline.IsZero() && deadline.Before(time.Now()) {
+ // If we're setting a deadline in the past, reset the stream immediately
+ // so writes after SetWriteDeadline returns will fail.
+ st.onReadTimeout()
+ return nil
+ }
+ w.rws.conn.sendServeMsg(func(sc *serverConn) {
+ if st.readDeadline != nil {
+ if !st.readDeadline.Stop() {
+ // Deadline already exceeded, or stream has been closed.
+ return
+ }
+ }
+ if deadline.IsZero() {
+ st.readDeadline = nil
+ } else if st.readDeadline == nil {
+ st.readDeadline = time.AfterFunc(deadline.Sub(time.Now()), st.onReadTimeout)
+ } else {
+ st.readDeadline.Reset(deadline.Sub(time.Now()))
+ }
+ })
+ return nil
+}
+
+func (w *responseWriter) SetWriteDeadline(deadline time.Time) error {
+ st := w.rws.stream
+ if !deadline.IsZero() && deadline.Before(time.Now()) {
+ // If we're setting a deadline in the past, reset the stream immediately
+ // so writes after SetWriteDeadline returns will fail.
+ st.onWriteTimeout()
+ return nil
+ }
+ w.rws.conn.sendServeMsg(func(sc *serverConn) {
+ if st.writeDeadline != nil {
+ if !st.writeDeadline.Stop() {
+ // Deadline already exceeded, or stream has been closed.
+ return
+ }
+ }
+ if deadline.IsZero() {
+ st.writeDeadline = nil
+ } else if st.writeDeadline == nil {
+ st.writeDeadline = time.AfterFunc(deadline.Sub(time.Now()), st.onWriteTimeout)
+ } else {
+ st.writeDeadline.Reset(deadline.Sub(time.Now()))
+ }
+ })
+ return nil
+}
+
func (w *responseWriter) Flush() {
+ w.FlushError()
+}
+
+func (w *responseWriter) FlushError() error {
rws := w.rws
if rws == nil {
panic("Header called after Handler finished")
}
+ var err error
if rws.bw.Buffered() > 0 {
- if err := rws.bw.Flush(); err != nil {
- // Ignore the error. The frame writer already knows.
- return
- }
+ err = rws.bw.Flush()
} else {
// The bufio.Writer won't call chunkWriter.Write
// (writeChunk with zero bytes), so we have to do it
// ourselves to force the HTTP response header and/or
// final DATA frame (with END_STREAM) to be sent.
- rws.writeChunk(nil)
+ _, err = chunkWriter{rws}.Write(nil)
+ if err == nil {
+ select {
+ case <-rws.stream.cw:
+ err = rws.stream.closeErr
+ default:
+ }
+ }
}
+ return err
}
func (w *responseWriter) CloseNotify() <-chan bool {
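The new `SetReadDeadline`/`SetWriteDeadline` methods on `responseWriter` are the hooks that let `net/http`'s per-request deadline API work over HTTP/2. A stdlib-only sketch of that caller-side surface via `http.ResponseController` (Go 1.20+; this uses an HTTP/1 `httptest` server, so it exercises the same interface rather than this exact code path, and `fetchWithDeadline` is our own name):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

// fetchWithDeadline spins up a test server whose handler arms per-request
// read and write deadlines through http.ResponseController — the net/http
// surface that the stream-level SetReadDeadline/SetWriteDeadline methods
// above plug into for HTTP/2 connections.
func fetchWithDeadline() string {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rc := http.NewResponseController(w)
		// Generous deadlines for this one request; a deadline already in
		// the past would instead fail the request immediately.
		_ = rc.SetReadDeadline(time.Now().Add(5 * time.Second))
		_ = rc.SetWriteDeadline(time.Now().Add(5 * time.Second))
		fmt.Fprint(w, "ok")
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(fetchWithDeadline())
}
```

On the HTTP/2 side, a read deadline surfaces to the handler as an error wrapping `os.ErrDeadlineExceeded` from the request body, and a write deadline resets the stream with `ErrCodeInternal`, as the `onReadTimeout`/`onWriteTimeout` hunks above show.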
diff --git a/vendor/golang.org/x/net/http2/transport.go b/vendor/golang.org/x/net/http2/transport.go
index c5d005bba7cc4..30f706e6cb81b 100644
--- a/vendor/golang.org/x/net/http2/transport.go
+++ b/vendor/golang.org/x/net/http2/transport.go
@@ -16,6 +16,7 @@ import (
"errors"
"fmt"
"io"
+ "io/fs"
"log"
"math"
mathrand "math/rand"
@@ -117,6 +118,28 @@ type Transport struct {
// to mean no limit.
MaxHeaderListSize uint32
+ // MaxReadFrameSize is the http2 SETTINGS_MAX_FRAME_SIZE to send in the
+ // initial settings frame. It is the size in bytes of the largest frame
+ // payload that the sender is willing to receive. If 0, no setting is
+ // sent, and the value is provided by the peer, which should be 16384
+ // according to the spec:
+ // https://datatracker.ietf.org/doc/html/rfc7540#section-6.5.2.
+ // Values are bounded in the range 16k to 16M.
+ MaxReadFrameSize uint32
+
+ // MaxDecoderHeaderTableSize optionally specifies the http2
+ // SETTINGS_HEADER_TABLE_SIZE to send in the initial settings frame. It
+ // informs the remote endpoint of the maximum size of the header compression
+ // table used to decode header blocks, in octets. If zero, the default value
+ // of 4096 is used.
+ MaxDecoderHeaderTableSize uint32
+
+ // MaxEncoderHeaderTableSize optionally specifies an upper limit for the
+ // header compression table used for encoding request headers. Received
+ // SETTINGS_HEADER_TABLE_SIZE settings are capped at this limit. If zero,
+ // the default value of 4096 is used.
+ MaxEncoderHeaderTableSize uint32
+
// StrictMaxConcurrentStreams controls whether the server's
// SETTINGS_MAX_CONCURRENT_STREAMS should be respected
// globally. If false, new TCP connections are created to the
@@ -170,6 +193,19 @@ func (t *Transport) maxHeaderListSize() uint32 {
return t.MaxHeaderListSize
}
+func (t *Transport) maxFrameReadSize() uint32 {
+ if t.MaxReadFrameSize == 0 {
+ return 0 // use the default provided by the peer
+ }
+ if t.MaxReadFrameSize < minMaxFrameSize {
+ return minMaxFrameSize
+ }
+ if t.MaxReadFrameSize > maxFrameSize {
+ return maxFrameSize
+ }
+ return t.MaxReadFrameSize
+}
+
func (t *Transport) disableCompression() bool {
return t.DisableCompression || (t.t1 != nil && t.t1.DisableCompression)
}
@@ -292,10 +328,11 @@ type ClientConn struct {
lastActive time.Time
lastIdle time.Time // time last idle
// Settings from peer: (also guarded by wmu)
- maxFrameSize uint32
- maxConcurrentStreams uint32
- peerMaxHeaderListSize uint64
- initialWindowSize uint32
+ maxFrameSize uint32
+ maxConcurrentStreams uint32
+ peerMaxHeaderListSize uint64
+ peerMaxHeaderTableSize uint32
+ initialWindowSize uint32
// reqHeaderMu is a 1-element semaphore channel controlling access to sending new requests.
// Write to reqHeaderMu to lock it, read from it to unlock.
@@ -501,6 +538,15 @@ func authorityAddr(scheme string, authority string) (addr string) {
return net.JoinHostPort(host, port)
}
+var retryBackoffHook func(time.Duration) *time.Timer
+
+func backoffNewTimer(d time.Duration) *time.Timer {
+ if retryBackoffHook != nil {
+ return retryBackoffHook(d)
+ }
+ return time.NewTimer(d)
+}
+
// RoundTripOpt is like RoundTrip, but takes options.
func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Response, error) {
if !(req.URL.Scheme == "https" || (req.URL.Scheme == "http" && t.AllowHTTP)) {
@@ -526,11 +572,14 @@ func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Res
}
backoff := float64(uint(1) << (uint(retry) - 1))
backoff += backoff * (0.1 * mathrand.Float64())
+ d := time.Second * time.Duration(backoff)
+ timer := backoffNewTimer(d)
select {
- case <-time.After(time.Second * time.Duration(backoff)):
+ case <-timer.C:
t.vlogf("RoundTrip retrying after failure: %v", err)
continue
case <-req.Context().Done():
+ timer.Stop()
err = req.Context().Err()
}
}
@@ -668,6 +717,20 @@ func (t *Transport) expectContinueTimeout() time.Duration {
return t.t1.ExpectContinueTimeout
}
+func (t *Transport) maxDecoderHeaderTableSize() uint32 {
+ if v := t.MaxDecoderHeaderTableSize; v > 0 {
+ return v
+ }
+ return initialHeaderTableSize
+}
+
+func (t *Transport) maxEncoderHeaderTableSize() uint32 {
+ if v := t.MaxEncoderHeaderTableSize; v > 0 {
+ return v
+ }
+ return initialHeaderTableSize
+}
+
func (t *Transport) NewClientConn(c net.Conn) (*ClientConn, error) {
return t.newClientConn(c, t.disableKeepAlives())
}
@@ -708,15 +771,19 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro
})
cc.br = bufio.NewReader(c)
cc.fr = NewFramer(cc.bw, cc.br)
+ if t.maxFrameReadSize() != 0 {
+ cc.fr.SetMaxReadFrameSize(t.maxFrameReadSize())
+ }
if t.CountError != nil {
cc.fr.countError = t.CountError
}
- cc.fr.ReadMetaHeaders = hpack.NewDecoder(initialHeaderTableSize, nil)
+ maxHeaderTableSize := t.maxDecoderHeaderTableSize()
+ cc.fr.ReadMetaHeaders = hpack.NewDecoder(maxHeaderTableSize, nil)
cc.fr.MaxHeaderListSize = t.maxHeaderListSize()
- // TODO: SetMaxDynamicTableSize, SetMaxDynamicTableSizeLimit on
- // henc in response to SETTINGS frames?
cc.henc = hpack.NewEncoder(&cc.hbuf)
+ cc.henc.SetMaxDynamicTableSizeLimit(t.maxEncoderHeaderTableSize())
+ cc.peerMaxHeaderTableSize = initialHeaderTableSize
if t.AllowHTTP {
cc.nextStreamID = 3
@@ -731,9 +798,15 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro
{ID: SettingEnablePush, Val: 0},
{ID: SettingInitialWindowSize, Val: transportDefaultStreamFlow},
}
+ if max := t.maxFrameReadSize(); max != 0 {
+ initialSettings = append(initialSettings, Setting{ID: SettingMaxFrameSize, Val: max})
+ }
if max := t.maxHeaderListSize(); max != 0 {
initialSettings = append(initialSettings, Setting{ID: SettingMaxHeaderListSize, Val: max})
}
+ if maxHeaderTableSize != initialHeaderTableSize {
+ initialSettings = append(initialSettings, Setting{ID: SettingHeaderTableSize, Val: maxHeaderTableSize})
+ }
cc.bw.Write(clientPreface)
cc.fr.WriteSettings(initialSettings...)
@@ -1075,7 +1148,7 @@ var errRequestCanceled = errors.New("net/http: request canceled")
func commaSeparatedTrailers(req *http.Request) (string, error) {
keys := make([]string, 0, len(req.Trailer))
for k := range req.Trailer {
- k = http.CanonicalHeaderKey(k)
+ k = canonicalHeader(k)
switch k {
case "Transfer-Encoding", "Trailer", "Content-Length":
return "", fmt.Errorf("invalid Trailer key %q", k)
@@ -1612,7 +1685,7 @@ func (cs *clientStream) writeRequestBody(req *http.Request) (err error) {
var sawEOF bool
for !sawEOF {
- n, err := body.Read(buf[:len(buf)])
+ n, err := body.Read(buf)
if hasContentLen {
remainLen -= int64(n)
if remainLen == 0 && err == nil {
@@ -1915,7 +1988,7 @@ func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trail
// Header list size is ok. Write the headers.
enumerateHeaders(func(name, value string) {
- name, ascii := asciiToLower(name)
+ name, ascii := lowerHeader(name)
if !ascii {
// Skip writing invalid headers. Per RFC 7540, Section 8.1.2, header
// field names have to be ASCII characters (just as in HTTP/1.x).
@@ -1968,7 +2041,7 @@ func (cc *ClientConn) encodeTrailers(trailer http.Header) ([]byte, error) {
}
for k, vv := range trailer {
- lowKey, ascii := asciiToLower(k)
+ lowKey, ascii := lowerHeader(k)
if !ascii {
// Skip writing invalid headers. Per RFC 7540, Section 8.1.2, header
// field names have to be ASCII characters (just as in HTTP/1.x).
@@ -2301,7 +2374,7 @@ func (rl *clientConnReadLoop) handleResponse(cs *clientStream, f *MetaHeadersFra
Status: status + " " + http.StatusText(statusCode),
}
for _, hf := range regularFields {
- key := http.CanonicalHeaderKey(hf.Name)
+ key := canonicalHeader(hf.Name)
if key == "Trailer" {
t := res.Trailer
if t == nil {
@@ -2309,7 +2382,7 @@ func (rl *clientConnReadLoop) handleResponse(cs *clientStream, f *MetaHeadersFra
res.Trailer = t
}
foreachHeaderElement(hf.Value, func(v string) {
- t[http.CanonicalHeaderKey(v)] = nil
+ t[canonicalHeader(v)] = nil
})
} else {
vv := header[key]
@@ -2414,7 +2487,7 @@ func (rl *clientConnReadLoop) processTrailers(cs *clientStream, f *MetaHeadersFr
trailer := make(http.Header)
for _, hf := range f.RegularFields() {
- key := http.CanonicalHeaderKey(hf.Name)
+ key := canonicalHeader(hf.Name)
trailer[key] = append(trailer[key], hf.Value)
}
cs.trailer = trailer
@@ -2760,8 +2833,10 @@ func (rl *clientConnReadLoop) processSettingsNoWrite(f *SettingsFrame) error {
cc.cond.Broadcast()
cc.initialWindowSize = s.Val
+ case SettingHeaderTableSize:
+ cc.henc.SetMaxDynamicTableSize(s.Val)
+ cc.peerMaxHeaderTableSize = s.Val
default:
- // TODO(bradfitz): handle more settings? SETTINGS_HEADER_TABLE_SIZE probably.
cc.vlogf("Unhandled Setting: %v", s)
}
return nil
@@ -2985,7 +3060,11 @@ func (gz *gzipReader) Read(p []byte) (n int, err error) {
}
func (gz *gzipReader) Close() error {
- return gz.body.Close()
+ if err := gz.body.Close(); err != nil {
+ return err
+ }
+ gz.zerr = fs.ErrClosed
+ return nil
}
type errorReader struct{ err error }
diff --git a/vendor/golang.org/x/net/publicsuffix/data/children b/vendor/golang.org/x/net/publicsuffix/data/children
new file mode 100644
index 0000000000000..1038c561ade46
Binary files /dev/null and b/vendor/golang.org/x/net/publicsuffix/data/children differ
diff --git a/vendor/golang.org/x/net/publicsuffix/data/nodes b/vendor/golang.org/x/net/publicsuffix/data/nodes
new file mode 100644
index 0000000000000..34751cd5b9d1b
Binary files /dev/null and b/vendor/golang.org/x/net/publicsuffix/data/nodes differ
diff --git a/vendor/golang.org/x/net/publicsuffix/data/text b/vendor/golang.org/x/net/publicsuffix/data/text
new file mode 100644
index 0000000000000..124dcd61f4023
--- /dev/null
+++ b/vendor/golang.org/x/net/publicsuffix/data/text
@@ -0,0 +1 @@
+billustrationionjukudoyamakeupowiathletajimageandsoundandvision-riopretobishimagentositecnologiabiocelotenkawabipanasonicatfoodnetworkinggroupperbirdartcenterprisecloudaccesscamdvrcampaniabirkenesoddtangenovarahkkeravjuegoshikikiraraholtalenishikatakazakindependent-revieweirbirthplaceu-1bitbucketrzynishikatsuragirlyuzawabitternidiscoverybjarkoybjerkreimdbaltimore-og-romsdalp1bjugnishikawazukamishihoronobeautydalwaysdatabaseballangenkainanaejrietisalatinabenogatabitorderblackfridaybloombergbauernishimerabloxcms3-website-us-west-2blushakotanishinomiyashironocparachutingjovikarateu-2bmoattachmentsalangenishinoomotegovtattoolforgerockartuzybmsalon-1bmwellbeingzoneu-3bnrwesteuropenairbusantiquesaltdalomzaporizhzhedmarkaratsuginamikatagamilanotairesistanceu-4bondigitaloceanspacesaludishangrilanciabonnishinoshimatsusakahoginankokubunjindianapolis-a-bloggerbookonlinewjerseyboomlahppiacenzachpomorskienishiokoppegardiskussionsbereichattanooganordkapparaglidinglassassinationalheritageu-north-1boschaefflerdalondonetskarelianceu-south-1bostik-serveronagasukevje-og-hornnesalvadordalibabalatinord-aurdalipaywhirlondrinaplesknsalzburgleezextraspace-to-rentalstomakomaibarabostonakijinsekikogentappssejnyaarparalleluxembourglitcheltenham-radio-opensocialorenskogliwicebotanicalgardeno-staginglobodoes-itcouldbeworldisrechtranakamurataiwanairforcechireadthedocsxeroxfinitybotanicgardenishitosashimizunaminamiawajikindianmarketinglogowestfalenishiwakindielddanuorrindigenamsskoganeindustriabotanyanagawallonieruchomoscienceandindustrynissandiegoddabouncemerckmsdnipropetrovskjervoyageorgeorgiabounty-fullensakerrypropertiesamegawaboutiquebecommerce-shopselectaxihuanissayokkaichintaifun-dnsaliasamnangerboutireservditchyouriparasiteboyfriendoftheinternetflixjavaldaostathellevangerbozen-sudtirolottokorozawabozen-suedtirolouvreisenissedalovepoparisor-fronisshingucciprianiigataipeidsvollovesickariyakumodumeloyalistoragebplaceducatorprojectcmembersampalermomahaccapooguybrandywinevalleybrasiliadboxosa
scoli-picenorddalpusercontentcp4bresciaokinawashirosatobamagazineuesamsclubartowestus2brindisibenikitagataikikuchikumagayagawalmartgorybristoloseyouriparliamentjeldsundivtasvuodnakaniikawatanagurabritishcolumbialowiezaganiyodogawabroadcastlebtimnetzlgloomy-routerbroadwaybroke-itvedestrandivttasvuotnakanojohanamakindlefrakkestadiybrokerbrothermesaverdeatnulmemergencyachtsamsungloppennebrowsersafetymarketsandnessjoenl-ams-1brumunddalublindesnesandoybrunelastxn--0trq7p7nnbrusselsandvikcoromantovalle-daostavangerbruxellesanfranciscofreakunekobayashikaoirmemorialucaniabryanskodjedugit-pagespeedmobilizeroticagliaricoharuovatlassian-dev-builderscbglugsjcbnpparibashkiriabrynewmexicoacharterbuzzwfarmerseinebwhalingmbhartiffany-2bzhitomirbzzcodyn-vpndnsantacruzsantafedjeffersoncoffeedbackdropocznordlandrudupontariobranconavstackasaokamikoaniikappudownloadurbanamexhibitioncogretakamatsukawacollectioncolognewyorkshirebungoonordre-landurhamburgrimstadynamisches-dnsantamariakecolonialwilliamsburgripeeweeklylotterycoloradoplateaudnedalncolumbusheycommunexus-3community-prochowicecomobaravendbambleborkapsicilyonagoyauthgear-stagingivestbyglandroverhallair-traffic-controlleyombomloabaths-heilbronnoysunddnslivegarsheiheijibigawaustraliaustinnfshostrolekamisatokaizukameyamatotakadaustevollivornowtv-infolldalolipopmcdircompanychipstmncomparemarkerryhotelsantoandrepbodynaliasnesoddenmarkhangelskjakdnepropetrovskiervaapsteigenflfannefrankfurtjxn--12cfi8ixb8lutskashibatakashimarshallstatebankashiharacomsecaaskimitsubatamibuildingriwatarailwaycondoshichinohealth-carereformemsettlersanukindustriesteamfamberlevagangaviikanonjinfinitigotembaixadaconferenceconstructionconsuladogadollsaobernardomniweatherchanneluxuryconsultanthropologyconsultingroks-thisayamanobeokakegawacontactkmaxxn--12co0c3b4evalled-aostamayukinsuregruhostingrondarcontagematsubaravennaharimalborkashiwaracontemporaryarteducationalchikugodonnakaiwamizawashtenawsmppl-wawdev-myqnapcloudcontrolledogawarabikomaezakirunoopschlesisch
esaogoncartoonartdecologiacontractorskenconventureshinodearthickashiwazakiyosatokamachilloutsystemscloudsitecookingchannelsdvrdnsdojogaszkolancashirecifedexetercoolblogdnsfor-better-thanawassamukawatarikuzentakatairavpagecooperativano-frankivskygearapparochernigovernmentksatxn--1ck2e1bananarepublic-inquiryggeebinatsukigatajimidsundevelopmentatarantours3-external-1copenhagencyclopedichiropracticatholicaxiashorokanaiecoproductionsaotomeinforumzcorporationcorsicahcesuoloanswatch-and-clockercorvettenrissagaeroclubmedecincinnativeamericanantiquest-le-patron-k3sapporomuracosenzamamidorittoeigersundynathomebuiltwithdarkasserverrankoshigayaltakasugaintelligencecosidnshome-webservercellikescandypoppdaluzerncostumedicallynxn--1ctwolominamatargets-itlon-2couchpotatofriesardegnarutomobegetmyiparsardiniacouncilvivanovoldacouponsarlcozoracq-acranbrookuwanalyticsarpsborgrongausdalcrankyowariasahikawatchandclockasukabeauxartsandcraftsarufutsunomiyawakasaikaitabashijonawatecrdyndns-at-homedepotaruinterhostsolutionsasayamatta-varjjatmpartinternationalfirearmsaseboknowsitallcreditcardyndns-at-workshoppingrossetouchigasakitahiroshimansionsaskatchewancreditunioncremonashgabadaddjaguarqcxn--1lqs03ncrewhmessinarashinomutashinaintuitoyosatoyokawacricketnedalcrimeast-kazakhstanangercrotonecrownipartsassarinuyamashinazawacrsaudacruisesauheradyndns-blogsitextilegnicapetownnews-stagingroundhandlingroznycuisinellancasterculturalcentertainmentoyotapartysvardocuneocupcakecuritibabymilk3curvallee-d-aosteinkjerusalempresashibetsurugashimaringatlantajirinvestmentsavannahgacutegirlfriendyndns-freeboxoslocalzonecymrulvikasumigaurawa-mazowszexnetlifyinzairtrafficplexus-1cyonabarumesswithdnsaveincloudyndns-homednsaves-the-whalessandria-trani-barletta-andriatranibarlettaandriacyouthruherecipescaracaltanissettaishinomakilovecollegefantasyleaguernseyfembetsukumiyamazonawsglobalacceleratorahimeshimabaridagawatchesciencecentersciencehistoryfermockasuyamegurownproviderferraraferraris-a-catererferrerotikagoshi
malopolskanlandyndns-picsaxofetsundyndns-remotewdyndns-ipasadenaroyfgujoinvilleitungsenfhvalerfidontexistmein-iservschulegallocalhostrodawarafieldyndns-serverdalfigueresindevicenzaolkuszczytnoipirangalsaceofilateliafilegear-augustowhoswholdingsmall-webthingscientistordalfilegear-debianfilegear-gbizfilegear-iefilegear-jpmorganfilegear-sg-1filminamiechizenfinalfinancefineartscrapper-sitefinlandyndns-weblikes-piedmonticellocus-4finnoyfirebaseappaviancarrdyndns-wikinkobearalvahkijoetsuldalvdalaskanittedallasalleasecuritytacticschoenbrunnfirenetoystre-slidrettozawafirenzefirestonefirewebpaascrappingulenfirmdaleikangerfishingoldpoint2thisamitsukefitjarvodkafjordyndns-workangerfitnessettlementozsdellogliastradingunmanxn--1qqw23afjalerfldrvalleeaosteflekkefjordyndns1flesberguovdageaidnunjargaflickragerogerscrysecretrosnubar0flierneflirfloginlinefloppythonanywhereggio-calabriafloraflorencefloridatsunangojomedicinakamagayahabackplaneapplinzis-a-celticsfanfloripadoval-daostavalleyfloristanohatakahamalselvendrellflorokunohealthcareerscwienflowerservehalflifeinsurancefltrani-andria-barletta-trani-andriaflynnhosting-clusterfnchiryukyuragifuchungbukharanzanfndynnschokokekschokoladenfnwkaszubytemarkatowicefoolfor-ourfor-somedio-campidano-mediocampidanomediofor-theaterforexrothachijolsterforgotdnservehttpbin-butterforli-cesena-forlicesenaforlillesandefjordynservebbscholarshipschoolbusinessebyforsaleirfjordynuniversityforsandasuolodingenfortalfortefortmissoulangevagrigentomologyeonggiehtavuoatnagahamaroygardencowayfortworthachinoheavyfosneservehumourfotraniandriabarlettatraniandriafoxfordecampobassociatest-iserveblogsytemp-dnserveirchitachinakagawashingtondchernivtsiciliafozfr-par-1fr-par-2franamizuhobby-sitefrancaiseharafranziskanerimalvikatsushikabedzin-addrammenuorochesterfredrikstadtvserveminecraftranoyfreeddnsfreebox-oservemp3freedesktopfizerfreemasonryfreemyiphosteurovisionfreesitefreetlservep2pgfoggiafreiburgushikamifuranorfolkebibleksvikatsuyamarugame-hostyhostingxn--2m4a15ef
renchkisshikirkeneservepicservequakefreseniuscultureggio-emilia-romagnakasatsunairguardiannakadomarinebraskaunicommbankaufentigerfribourgfriuli-v-giuliafriuli-ve-giuliafriuli-vegiuliafriuli-venezia-giuliafriuli-veneziagiuliafriuli-vgiuliafriuliv-giuliafriulive-giuliafriulivegiuliafriulivenezia-giuliafriuliveneziagiuliafriulivgiuliafrlfroganservesarcasmatartanddesignfrognfrolandynv6from-akrehamnfrom-alfrom-arfrom-azurewebsiteshikagamiishibukawakepnoorfrom-capitalonewportransipharmacienservicesevastopolefrom-coalfrom-ctranslatedynvpnpluscountryestateofdelawareclaimschoolsztynsettsupportoyotomiyazakis-a-candidatefrom-dchitosetodayfrom-dediboxafrom-flandersevenassisienarvikautokeinoticeablewismillerfrom-gaulardalfrom-hichisochikuzenfrom-iafrom-idyroyrvikingruenoharafrom-ilfrom-in-berlindasewiiheyaizuwakamatsubushikusakadogawafrom-ksharpharmacyshawaiijimarcheapartmentshellaspeziafrom-kyfrom-lanshimokawafrom-mamurogawatsonfrom-mdfrom-medizinhistorischeshimokitayamattelekommunikationfrom-mifunefrom-mnfrom-modalenfrom-mshimonitayanagit-reposts-and-telecommunicationshimonosekikawafrom-mtnfrom-nchofunatoriginstantcloudfrontdoorfrom-ndfrom-nefrom-nhktistoryfrom-njshimosuwalkis-a-chefarsundyndns-mailfrom-nminamifuranofrom-nvalleedaostefrom-nynysagamiharafrom-ohdattorelayfrom-oketogolffanshimotsukefrom-orfrom-padualstackazoologicalfrom-pratogurafrom-ris-a-conservativegashimotsumayfirstockholmestrandfrom-schmidtre-gauldalfrom-sdscloudfrom-tnfrom-txn--2scrj9chonanbunkyonanaoshimakanegasakikugawaltervistailscaleforcefrom-utsiracusaikirovogradoyfrom-vald-aostarostwodzislawildlifestylefrom-vtransportefrom-wafrom-wiardwebview-assetshinichinanfrom-wvanylvenneslaskerrylogisticshinjournalismartlabelingfrom-wyfrosinonefrostalowa-wolawafroyal-commissionfruskydivingfujiiderafujikawaguchikonefujiminokamoenairkitapps-auction-rancherkasydneyfujinomiyadattowebhoptogakushimotoganefujiokayamandalfujisatoshonairlinedre-eikerfujisawafujishiroishidakabiratoridedyn-berlincolnfujitsuruokazakiryuohkura
fujiyoshidavvenjargap-east-1fukayabeardubaiduckdnsncfdfukuchiyamadavvesiidappnodebalancertmgrazimutheworkpccwilliamhillfukudomigawafukuis-a-cpalacefukumitsubishigakisarazure-mobileirvikazteleportlligatransurlfukuokakamigaharafukuroishikarikaturindalfukusakishiwadazaifudaigokaseljordfukuyamagatakaharunusualpersonfunabashiriuchinadafunagatakahashimamakisofukushimangonnakatombetsumy-gatewayfunahashikamiamakusatsumasendaisenergyfundaciofunkfeuerfuoiskujukuriyamangyshlakasamatsudoomdnstracefuosskoczowinbar1furubirafurudonostiaafurukawajimaniwakuratefusodegaurafussaintlouis-a-anarchistoireggiocalabriafutabayamaguchinomihachimanagementrapaniizafutboldlygoingnowhere-for-morenakatsugawafuttsurutaharafuturecmshinjukumamotoyamashikefuturehostingfuturemailingfvghamurakamigoris-a-designerhandcraftedhandsonyhangglidinghangoutwentehannanmokuizumodenaklodzkochikuseihidorahannorthwesternmutualhanyuzenhapmircloudletshintokushimahappounzenharvestcelebrationhasamap-northeast-3hasaminami-alpshintomikasaharahashbangryhasudahasura-apphiladelphiaareadmyblogspotrdhasvikfh-muensterhatogayahoooshikamaishimofusartshinyoshitomiokamisunagawahatoyamazakitakatakanabeatshiojirishirifujiedahatsukaichikaiseiyoichimkentrendhostinghattfjelldalhayashimamotobusellfylkesbiblackbaudcdn-edgestackhero-networkisboringhazuminobushistoryhelplfinancialhelsinkitakyushuaiahembygdsforbundhemneshioyanaizuerichardlimanowarudahemsedalhepforgeblockshirahamatonbetsurgeonshalloffameiwamasoyheroyhetemlbfanhgtvaohigashiagatsumagoianiahigashichichibuskerudhigashihiroshimanehigashiizumozakitamigrationhigashikagawahigashikagurasoedahigashikawakitaaikitamotosunndalhigashikurumeeresinstaginghigashimatsushimarburghigashimatsuyamakitaakitadaitoigawahigashimurayamamotorcycleshirakokonoehigashinarusells-for-lesshiranukamitondabayashiogamagoriziahigashinehigashiomitamanortonsberghigashiosakasayamanakakogawahigashishirakawamatakanezawahigashisumiyoshikawaminamiaikitanakagusukumodernhigashitsunosegawahigashiurausukitashiobarahigashiya
matokoriyamanashifteditorxn--30rr7yhigashiyodogawahigashiyoshinogaris-a-doctorhippyhiraizumisatohnoshoohirakatashinagawahiranairportland-4-salernogiessennanjobojis-a-financialadvisor-aurdalhirarahiratsukaerusrcfastlylbanzaicloudappspotagerhirayaitakaokalmykiahistorichouseshiraois-a-geekhakassiahitachiomiyagildeskaliszhitachiotagonohejis-a-greenhitraeumtgeradegreehjartdalhjelmelandholeckodairaholidayholyhomegoodshiraokamitsuehomeiphilatelyhomelinkyard-cloudjiffyresdalhomelinuxn--32vp30hachiojiyahikobierzycehomeofficehomesecuritymacaparecidahomesecuritypchoseikarugamvikarlsoyhomesenseeringhomesklepphilipsynology-diskstationhomeunixn--3bst00minamiiserniahondahongooglecodebergentinghonjyoitakarazukaluganskharkivaporcloudhornindalhorsells-for-ustkanmakiwielunnerhortendofinternet-dnshiratakahagitapphoenixn--3ds443ghospitalhoteleshishikuis-a-guruhotelwithflightshisognehotmailhoyangerhoylandetakasagophonefosshisuifuettertdasnetzhumanitieshitaramahungryhurdalhurumajis-a-hard-workershizukuishimogosenhyllestadhyogoris-a-hunterhyugawarahyundaiwafuneis-into-carsiiitesilkharkovaresearchaeologicalvinklein-the-bandairtelebitbridgestoneenebakkeshibechambagricultureadymadealstahaugesunderseaportsinfolionetworkdalaheadjudygarlandis-into-cartoonsimple-urlis-into-gamesserlillyis-leetrentin-suedtirolis-lostre-toteneis-a-lawyeris-not-certifiedis-savedis-slickhersonis-uberleetrentino-a-adigeis-very-badajozis-a-liberalis-very-evillageis-very-goodyearis-very-niceis-very-sweetpepperugiais-with-thebandovre-eikerisleofmanaustdaljellybeanjenv-arubahccavuotnagaragusabaerobaticketsirdaljeonnamerikawauejetztrentino-aadigejevnakershusdecorativeartslupskhmelnytskyivarggatrentino-alto-adigejewelryjewishartgalleryjfkhplaystation-cloudyclusterjgorajlljls-sto1jls-sto2jls-sto3jmphotographysiojnjaworznospamproxyjoyentrentino-altoadigejoyokaichibajddarchitecturealtorlandjpnjprslzjurkotohiradomainstitutekotourakouhokutamamurakounosupabasembokukizunokunimilitarykouyamarylhurstjordalshalsenkouzushimasfjordenko
zagawakozakis-a-llamarnardalkozowindowskrakowinnersnoasakatakkokamiminersokndalkpnkppspbarcelonagawakkanaibetsubamericanfamilyds3-fips-us-gov-west-1krasnikahokutokashikis-a-musiciankrasnodarkredstonekrelliankristiansandcatsolarssonkristiansundkrodsheradkrokstadelvalle-aostatic-accessolognekryminamiizukaminokawanishiaizubangekumanotteroykumatorinovecoregontrailroadkumejimashikis-a-nascarfankumenantokonamegatakatoris-a-nursells-itrentin-sud-tirolkunisakis-a-painteractivelvetrentin-sudtirolkunitachiaraindropilotsolundbecknx-serversellsyourhomeftphxn--3e0b707ekunitomigusukuleuvenetokigawakunneppuboliviajessheimpertrixcdn77-secureggioemiliaromagnamsosnowiechristiansburgminakamichiharakunstsammlungkunstunddesignkuokgroupimientaketomisatoolsomakurehabmerkurgankurobeeldengeluidkurogimimatakatsukis-a-patsfankuroisoftwarezzoologykuromatsunais-a-personaltrainerkuronkurotakikawasakis-a-photographerokussldkushirogawakustanais-a-playershiftcryptonomichigangwonkusupersalezajskomakiyosemitekutchanelkutnowruzhgorodeokuzumakis-a-republicanonoichinomiyakekvafjordkvalsundkvamscompute-1kvanangenkvinesdalkvinnheradkviteseidatingkvitsoykwpspdnsomnatalkzmisakis-a-soxfanmisasaguris-a-studentalmisawamisconfusedmishimasudamissilemisugitokuyamatsumaebashikshacknetrentino-sued-tirolmitakeharamitourismilemitoyoakemiuramiyazurecontainerdpolicemiyotamatsukuris-a-teacherkassyno-dshowamjondalenmonstermontrealestatefarmequipmentrentino-suedtirolmonza-brianzapposor-odalmonza-e-della-brianzaptokyotangotpantheonsitemonzabrianzaramonzaebrianzamonzaedellabrianzamoonscalebookinghostedpictetrentinoa-adigemordoviamoriyamatsumotofukemoriyoshiminamiashigaramormonmouthachirogatakamoriokakudamatsuemoroyamatsunomortgagemoscowiosor-varangermoseushimodatemosjoenmoskenesorfoldmossorocabalena-devicesorreisahayakawakamiichikawamisatottoris-a-techietis-a-landscaperspectakasakitchenmosvikomatsushimarylandmoteginowaniihamatamakinoharamoviemovimientolgamozilla-iotrentinoaadigemtranbytomaritimekeepingmuginozawaonsensiosite
muikaminoyamaxunispacemukoebenhavnmulhouseoullensvanguardmunakatanemuncienciamuosattemupinbarclaycards3-sa-east-1murmanskomforbar2murotorcraftrentinoalto-adigemusashinoharamuseetrentinoaltoadigemuseumverenigingmusicargodaddyn-o-saurlandesortlandmutsuzawamy-wanggoupilemyactivedirectorymyamazeplaymyasustor-elvdalmycdmycloudnsoruminamimakis-a-rockstarachowicemydattolocalcertificationmyddnsgeekgalaxymydissentrentinos-tirolmydobissmarterthanyoumydrobofageologymydsoundcastronomy-vigorlicemyeffectrentinostirolmyfastly-terrariuminamiminowamyfirewalledreplittlestargardmyforuminamioguni5myfritzmyftpaccessouthcarolinaturalhistorymuseumcentermyhome-servermyjinomykolaivencloud66mymailermymediapchristmasakillucernemyokohamamatsudamypepinkommunalforbundmypetsouthwest1-uslivinghistorymyphotoshibalashovhadanorth-kazakhstanmypicturestaurantrentinosud-tirolmypsxn--3pxu8kommunemysecuritycamerakermyshopblocksowamyshopifymyspreadshopwarendalenugmythic-beastspectruminamisanrikubetsuppliesoomytis-a-bookkeepermaritimodspeedpartnermytuleap-partnersphinxn--41amyvnchromediatechnologymywirepaircraftingvollohmusashimurayamashikokuchuoplantationplantspjelkavikomorotsukagawaplatformsharis-a-therapistoiaplatter-appinokofuefukihaboromskogplatterpioneerplazaplcube-serversicherungplumbingoplurinacionalpodhalepodlasiellaktyubinskiptveterinairealmpmnpodzonepohlpoivronpokerpokrovskomvuxn--3hcrj9choyodobashichikashukujitawaraumalatvuopmicrosoftbankarmoypoliticarrierpolitiendapolkowicepoltavalle-d-aostaticspydebergpomorzeszowitdkongsbergponpesaro-urbino-pesarourbinopesaromasvuotnarusawapordenonepornporsangerporsangugeporsgrunnanyokoshibahikariwanumatakinouepoznanpraxis-a-bruinsfanprdpreservationpresidioprgmrprimetelemarkongsvingerprincipeprivatizehealthinsuranceprofesionalprogressivestfoldpromombetsupplypropertyprotectionprotonetrentinosued-tirolprudentialpruszkowithgoogleapiszprvcyberprzeworskogpulawypunyufuelveruminamiuonumassa-carrara-massacarraramassabuyshousesopotrentino-sud-tirolpupugliapussycatering
ebuzentsujiiepvhadselfiphdfcbankazunoticiashinkamigototalpvtrentinosuedtirolpwchungnamdalseidsbergmodellingmxn--11b4c3dray-dnsupdaterpzqhaebaruericssongdalenviknakayamaoris-a-cubicle-slavellinodeobjectshinshinotsurfashionstorebaselburguidefinimamateramochizukimobetsumidatlantichirurgiens-dentistes-en-franceqldqotoyohashimotoshimatsuzakis-an-accountantshowtimelbourneqponiatowadaqslgbtrentinsud-tirolqualifioappippueblockbusternopilawaquickconnectrentinsudtirolquicksytesrhtrentinsued-tirolquipelementsrltunestuff-4-saletunkonsulatrobeebyteappigboatsmolaquilanxessmushcdn77-sslingturystykaniepcetuscanytushuissier-justicetuvalleaostaverntuxfamilytwmailvestvagoyvevelstadvibo-valentiavibovalentiavideovillastufftoread-booksnestorfjordvinnicasadelamonedagestangevinnytsiavipsinaappiwatevirginiavirtual-uservecounterstrikevirtualcloudvirtualservervirtualuserveexchangevirtuelvisakuhokksundviterbolognagasakikonaikawagoevivianvivolkenkundenvixn--42c2d9avlaanderennesoyvladikavkazimierz-dolnyvladimirvlogintoyonezawavminanovologdanskonyveloftrentino-stirolvolvolkswagentstuttgartrentinsuedtirolvolyngdalvoorlopervossevangenvotevotingvotoyonovps-hostrowiecircustomer-ocimmobilienwixsitewloclawekoobindalwmcloudwmflabsurnadalwoodsidelmenhorstabackyardsurreyworse-thandawowithyoutuberspacekitagawawpdevcloudwpenginepoweredwphostedmailwpmucdnpixolinodeusercontentrentinosudtirolwpmudevcdnaccessokanagawawritesthisblogoipizzawroclawiwatsukiyonoshiroomgwtcirclerkstagewtfastvps-serverisignwuozuwzmiuwajimaxn--4gbriminingxn--4it168dxn--4it797kooris-a-libertarianxn--4pvxs4allxn--54b7fta0ccivilaviationredumbrellajollamericanexpressexyxn--55qw42gxn--55qx5dxn--5dbhl8dxn--5js045dxn--5rtp49civilisationrenderxn--5rtq34koperviklabudhabikinokawachinaganoharamcocottempurlxn--5su34j936bgsgxn--5tzm5gxn--6btw5axn--6frz82gxn--6orx2rxn--6qq986b3xlxn--7t0a264civilizationthewifiatmallorcafederation-webspacexn--80aaa0cvacationsusonoxn--80adxhksuzakananiimiharuxn--80ao21axn--80aqecdr1axn--80asehdbarclays3-us-east-2xn--80
aswgxn--80aukraanghkembuchikujobservableusercontentrevisohughestripperxn--8dbq2axn--8ltr62koryokamikawanehonbetsuwanouchijiwadeliveryxn--8pvr4uxn--8y0a063axn--90a1affinitylotterybnikeisenbahnxn--90a3academiamicable-modemoneyxn--90aeroportalabamagasakishimabaraffleentry-snowplowiczeladzxn--90aishobarakawaharaoxn--90amckinseyxn--90azhytomyrxn--9dbhblg6dietritonxn--9dbq2axn--9et52uxn--9krt00axn--andy-iraxn--aroport-byandexcloudxn--asky-iraxn--aurskog-hland-jnbarefootballooningjerstadgcapebretonamicrolightingjesdalombardiadembroideryonagunicloudiherokuappanamasteiermarkaracoldwarszawauthgearappspacehosted-by-previderxn--avery-yuasakuragawaxn--b-5gaxn--b4w605ferdxn--balsan-sdtirol-nsbsuzukanazawaxn--bck1b9a5dre4civilwarmiasadoesntexisteingeekarpaczest-a-la-maisondre-landrayddns5yxn--bdddj-mrabdxn--bearalvhki-y4axn--berlevg-jxaxn--bhcavuotna-s4axn--bhccavuotna-k7axn--bidr-5nachikatsuuraxn--bievt-0qa2xn--bjarky-fyaotsurgeryxn--bjddar-ptargithubpreviewsaitohmannore-og-uvdalxn--blt-elabourxn--bmlo-graingerxn--bod-2naturalsciencesnaturellesuzukis-an-actorxn--bozen-sdtirol-2obanazawaxn--brnny-wuacademy-firewall-gatewayxn--brnnysund-m8accident-investigation-acornxn--brum-voagatroandinosaureportrentoyonakagyokutoyakomaganexn--btsfjord-9zaxn--bulsan-sdtirol-nsbaremetalpha-myqnapcloud9guacuiababia-goracleaningitpagexlimoldell-ogliastraderxn--c1avgxn--c2br7gxn--c3s14mincomcastreserve-onlinexn--cck2b3bargainstances3-us-gov-west-1xn--cckwcxetdxn--cesena-forl-mcbremangerxn--cesenaforl-i8axn--cg4bkis-an-actresshwindmillxn--ciqpnxn--clchc0ea0b2g2a9gcdxn--comunicaes-v6a2oxn--correios-e-telecomunicaes-ghc29axn--czr694barreaudiblebesbydgoszczecinemagnethnologyoriikaragandauthordalandroiddnss3-ap-southeast-2ix4432-balsan-suedtirolimiteddnskinggfakefurniturecreationavuotnaritakoelnayorovigotsukisosakitahatakahatakaishimoichinosekigaharaurskog-holandingitlaborxn--czrs0trogstadxn--czru2dxn--czrw28barrel-of-knowledgeappgafanquanpachicappacificurussiautomotivelandds3-ca-central-16-balsan-sudtiro
llagdenesnaaseinet-freaks3-ap-southeast-123websiteleaf-south-123webseiteckidsmynasushiobarackmazerbaijan-mayen-rootaribeiraogakibichuobiramusementdllpages3-ap-south-123sitewebhareidfjordvagsoyerhcloudd-dnsiskinkyolasiteastcoastaldefenceastus2038xn--d1acj3barrell-of-knowledgecomputerhistoryofscience-fictionfabricafjs3-us-west-1xn--d1alfaromeoxn--d1atromsakegawaxn--d5qv7z876clanbibaidarmeniaxn--davvenjrga-y4axn--djrs72d6uyxn--djty4kosaigawaxn--dnna-grajewolterskluwerxn--drbak-wuaxn--dyry-iraxn--e1a4cldmailukowhitesnow-dnsangohtawaramotoineppubtlsanjotelulubin-brbambinagisobetsuitagajoburgjerdrumcprequalifymein-vigorgebetsukuibmdeveloperauniteroizumizakinderoyomitanobninskanzakiyokawaraustrheimatunduhrennebulsan-suedtirololitapunk123kotisivultrobjectselinogradimo-siemenscaledekaascolipiceno-ipifony-1337xn--eckvdtc9dxn--efvn9svalbardunloppaderbornxn--efvy88hagakhanamigawaxn--ehqz56nxn--elqq16hagebostadxn--eveni-0qa01gaxn--f6qx53axn--fct429kosakaerodromegallupaasdaburxn--fhbeiarnxn--finny-yuaxn--fiq228c5hsvchurchaseljeepsondriodejaneirockyotobetsuliguriaxn--fiq64barsycenterprisesakievennodesadistcgrouplidlugolekagaminord-frontierxn--fiqs8sveioxn--fiqz9svelvikoninjambylxn--fjord-lraxn--fjq720axn--fl-ziaxn--flor-jraxn--flw351exn--forl-cesena-fcbssvizzeraxn--forlcesena-c8axn--fpcrj9c3dxn--frde-grandrapidsvn-repostorjcloud-ver-jpchowderxn--frna-woaraisaijosoyroroswedenxn--frya-hraxn--fzc2c9e2cleverappsannanxn--fzys8d69uvgmailxn--g2xx48clicketcloudcontrolapparmatsuuraxn--gckr3f0fauskedsmokorsetagayaseralingenoamishirasatogliattipschulserverxn--gecrj9clickrisinglesannohekinannestadraydnsanokaruizawaxn--ggaviika-8ya47haibarakitakamiizumisanofidelitysfjordxn--gildeskl-g0axn--givuotna-8yasakaiminatoyookaneyamazoexn--gjvik-wuaxn--gk3at1exn--gls-elacaixaxn--gmq050is-an-anarchistoricalsocietysnesigdalxn--gmqw5axn--gnstigbestellen-zvbrplsbxn--45br5cylxn--gnstigliefern-wobihirosakikamijimatsushigexn--h-2failxn--h1aeghair-surveillancexn--h1ahnxn--h1alizxn--h2breg3eveneswidnicasacampina
grandebungotakadaemongolianxn--h2brj9c8clinichippubetsuikilatironporterxn--h3cuzk1digickoseis-a-linux-usershoujis-a-knightpointtohoboleslawieconomiastalbanshizuokamogawaxn--hbmer-xqaxn--hcesuolo-7ya35barsyonlinewhampshirealtychyattorneyagawakuyabukihokumakogeniwaizumiotsurugimbalsfjordeportexaskoyabeagleboardetroitskypecorivneatonoshoes3-eu-west-3utilitiesquare7xn--hebda8basicserversaillesjabbottateshinanomachildrensgardenhlfanhsbc66xn--hery-iraxn--hgebostad-g3axn--hkkinen-5waxn--hmmrfeasta-s4accident-prevention-aptibleangaviikadenaamesjevuemielnoboribetsuckswidnikkolobrzegersundxn--hnefoss-q1axn--hobl-iraxn--holtlen-hxaxn--hpmir-xqaxn--hxt814exn--hyanger-q1axn--hylandet-54axn--i1b6b1a6a2exn--imr513nxn--indery-fyasugithubusercontentromsojamisonxn--io0a7is-an-artistgstagexn--j1adpkomonotogawaxn--j1aefbsbxn--1lqs71dyndns-office-on-the-webhostingrpassagensavonarviikamiokameokamakurazakiwakunigamihamadaxn--j1ael8basilicataniautoscanadaeguambulancentralus-2xn--j1amhakatanorthflankddiamondshinshiroxn--j6w193gxn--jlq480n2rgxn--jlq61u9w7basketballfinanzgorzeleccodespotenzakopanewspaperxn--jlster-byasuokannamihokkaidopaaskvollxn--jrpeland-54axn--jvr189miniserversusakis-a-socialistg-builderxn--k7yn95exn--karmy-yuaxn--kbrq7oxn--kcrx77d1x4axn--kfjord-iuaxn--klbu-woaxn--klt787dxn--kltp7dxn--kltx9axn--klty5xn--45brj9cistrondheimperiaxn--koluokta-7ya57hakodatexn--kprw13dxn--kpry57dxn--kput3is-an-engineeringxn--krager-gyatominamibosogndalxn--kranghke-b0axn--krdsherad-m8axn--krehamn-dxaxn--krjohka-hwab49jdevcloudfunctionsimplesitexn--ksnes-uuaxn--kvfjord-nxaxn--kvitsy-fyatsukanoyakagexn--kvnangen-k0axn--l-1fairwindswiebodzin-dslattuminamiyamashirokawanabeepilepsykkylvenicexn--l1accentureklamborghinikolaeventswinoujscienceandhistoryxn--laheadju-7yatsushiroxn--langevg-jxaxn--lcvr32dxn--ldingen-q1axn--leagaviika-52batochigifts3-us-west-2xn--lesund-huaxn--lgbbat1ad8jdfaststackschulplattformetacentrumeteorappassenger-associationxn--lgrd-poacctrusteexn--lhppi-xqaxn--linds-pramericanartrve
stnestudioxn--lns-qlavagiskexn--loabt-0qaxn--lrdal-sraxn--lrenskog-54axn--lt-liacliniquedapliexn--lten-granexn--lury-iraxn--m3ch0j3axn--mely-iraxn--merker-kuaxn--mgb2ddeswisstpetersburgxn--mgb9awbfbx-ostrowwlkpmguitarschwarzgwangjuifminamidaitomanchesterxn--mgba3a3ejtrycloudflarevistaplestudynamic-dnsrvaroyxn--mgba3a4f16axn--mgba3a4fra1-deloittevaksdalxn--mgba7c0bbn0axn--mgbaakc7dvfstdlibestadxn--mgbaam7a8hakonexn--mgbab2bdxn--mgbah1a3hjkrdxn--mgbai9a5eva00batsfjordiscordsays3-website-ap-northeast-1xn--mgbai9azgqp6jejuniperxn--mgbayh7gpalmaseratis-an-entertainerxn--mgbbh1a71exn--mgbc0a9azcgxn--mgbca7dzdoxn--mgbcpq6gpa1axn--mgberp4a5d4a87gxn--mgberp4a5d4arxn--mgbgu82axn--mgbi4ecexposedxn--mgbpl2fhskosherbrookegawaxn--mgbqly7c0a67fbclintonkotsukubankarumaifarmsteadrobaknoluoktachikawakayamadridvallee-aosteroyxn--mgbqly7cvafr-1xn--mgbt3dhdxn--mgbtf8flapymntrysiljanxn--mgbtx2bauhauspostman-echocolatemasekd1xn--mgbx4cd0abbvieeexn--mix082fbxoschweizxn--mix891fedorainfraclouderaxn--mjndalen-64axn--mk0axin-vpnclothingdustdatadetectjmaxxxn--12c1fe0bradescotlandrrxn--mk1bu44cn-northwest-1xn--mkru45is-bykleclerchoshibuyachiyodancexn--mlatvuopmi-s4axn--mli-tlavangenxn--mlselv-iuaxn--moreke-juaxn--mori-qsakurais-certifiedxn--mosjen-eyawaraxn--mot-tlazioxn--mre-og-romsdal-qqbuseranishiaritakurashikis-foundationxn--msy-ula0hakubaghdadultravelchannelxn--mtta-vrjjat-k7aflakstadaokagakicks-assnasaarlandxn--muost-0qaxn--mxtq1minisitexn--ngbc5azdxn--ngbe9e0axn--ngbrxn--45q11citadelhicampinashikiminohostfoldnavyxn--nit225koshimizumakiyosunnydayxn--nmesjevuemie-tcbalestrandabergamoarekeymachineustarnbergxn--nnx388axn--nodessakyotanabelaudiopsysynology-dstreamlitappittsburghofficialxn--nqv7fs00emaxn--nry-yla5gxn--ntso0iqx3axn--ntsq17gxn--nttery-byaeserveftplanetariuminamitanexn--nvuotna-hwaxn--nyqy26axn--o1achernihivgubsxn--o3cw4hakuis-a-democratravelersinsurancexn--o3cyx2axn--od0algxn--od0aq3belementorayoshiokanumazuryukuhashimojibxos3-website-ap-southeast-1xn--ogbpf8flatangerxn--oppegrd
-ixaxn--ostery-fyawatahamaxn--osyro-wuaxn--otu796dxn--p1acfedorapeoplegoismailillehammerfeste-ipatriaxn--p1ais-gonexn--pgbs0dhlx3xn--porsgu-sta26fedoraprojectoyotsukaidoxn--pssu33lxn--pssy2uxn--q7ce6axn--q9jyb4cngreaterxn--qcka1pmcpenzaporizhzhiaxn--qqqt11minnesotaketakayamassivegridxn--qxa6axn--qxamsterdamnserverbaniaxn--rady-iraxn--rdal-poaxn--rde-ulaxn--rdy-0nabaris-into-animeetrentin-sued-tirolxn--rennesy-v1axn--rhkkervju-01afeiraquarelleasingujaratoyouraxn--rholt-mragowoltlab-democraciaxn--rhqv96gxn--rht27zxn--rht3dxn--rht61exn--risa-5naturbruksgymnxn--risr-iraxn--rland-uuaxn--rlingen-mxaxn--rmskog-byaxn--rny31hakusanagochihayaakasakawaiishopitsitexn--rovu88bellevuelosangeles3-website-ap-southeast-2xn--rros-granvindafjordxn--rskog-uuaxn--rst-0naturhistorischesxn--rsta-framercanvasxn--rvc1e0am3exn--ryken-vuaxn--ryrvik-byaxn--s-1faithaldenxn--s9brj9cnpyatigorskolecznagatorodoyxn--sandnessjen-ogbellunord-odalombardyn53xn--sandy-yuaxn--sdtirol-n2axn--seral-lraxn--ses554gxn--sgne-graphoxn--4dbgdty6citichernovtsyncloudrangedaluccarbonia-iglesias-carboniaiglesiascarboniaxn--skierv-utazasxn--skjervy-v1axn--skjk-soaxn--sknit-yqaxn--sknland-fxaxn--slat-5natuurwetenschappenginexn--slt-elabcieszynh-servebeero-stageiseiroumuenchencoreapigeelvinckoshunantankmpspawnextdirectrentino-s-tirolxn--smla-hraxn--smna-gratangentlentapisa-geekosugexn--snase-nraxn--sndre-land-0cbeneventochiokinoshimaintenancebinordreisa-hockeynutazurestaticappspaceusercontentateyamaveroykenglandeltaitogitsumitakagiizeasypanelblagrarchaeologyeongbuk0emmafann-arboretumbriamallamaceiobbcg123homepagefrontappchizip61123minsidaarborteaches-yogasawaracingroks-theatree123hjemmesidealerimo-i-rana4u2-localhistorybolzano-altoadigeometre-experts-comptables3-ap-northeast-123miwebcambridgehirn4t3l3p0rtarumizusawabogadobeaemcloud-fr123paginaweberkeleyokosukanrabruzzombieidskoguchikushinonsenasakuchinotsuchiurakawafaicloudineat-url-o-g-i-naval-d-aosta-valleyokote164-b-datacentermezproxyzgoraetnabudejjudaicadaquest-mon-
blogueurodirumaceratabuseating-organicbcn-north-123saitamakawabartheshopencraftrainingdyniajuedischesapeakebayernavigationavoi234lima-cityeats3-ap-northeast-20001wwwedeployokozeastasiamunemurorangecloudplatform0xn--snes-poaxn--snsa-roaxn--sr-aurdal-l8axn--sr-fron-q1axn--sr-odal-q1axn--sr-varanger-ggbentleyurihonjournalistjohnikonanporovnobserverxn--srfold-byaxn--srreisa-q1axn--srum-gratis-a-bulls-fanxn--stfold-9xaxn--stjrdal-s1axn--stjrdalshalsen-sqbeppublishproxyusuharavocatanzarowegroweiboltashkentatamotorsitestingivingjemnes3-eu-central-1kappleadpages-12hpalmspringsakerxn--stre-toten-zcbeskidyn-ip24xn--t60b56axn--tckweddingxn--tiq49xqyjelasticbeanstalkhmelnitskiyamarumorimachidaxn--tjme-hraxn--tn0agrocerydxn--tnsberg-q1axn--tor131oxn--trany-yuaxn--trentin-sd-tirol-rzbestbuyshoparenagareyamaizurugbyenvironmentalconservationflashdrivefsnillfjordiscordsezjampaleoceanographics3-website-eu-west-1xn--trentin-sdtirol-7vbetainaboxfuseekloges3-website-sa-east-1xn--trentino-sd-tirol-c3bhzcasertainaioirasebastopologyeongnamegawafflecellclstagemologicaliforniavoues3-eu-west-1xn--trentino-sdtirol-szbielawalbrzycharitypedreamhostersvp4xn--trentinosd-tirol-rzbiellaakesvuemieleccebizenakanotoddeninoheguriitatebayashiibahcavuotnagaivuotnagaokakyotambabybluebitelevisioncilla-speziaxarnetbank8s3-eu-west-2xn--trentinosdtirol-7vbieszczadygeyachimataijiiyamanouchikuhokuryugasakitaurayasudaxn--trentinsd-tirol-6vbievat-band-campaignieznombrendlyngengerdalces3-website-us-east-1xn--trentinsdtirol-nsbifukagawalesundiscountypeformelhusgardeninomiyakonojorpelandiscourses3-website-us-west-1xn--trgstad-r1axn--trna-woaxn--troms-zuaxn--tysvr-vraxn--uc0atvestre-slidrexn--uc0ay4axn--uist22halsakakinokiaxn--uisz3gxn--unjrga-rtarnobrzegyptianxn--unup4yxn--uuwu58axn--vads-jraxn--valle-aoste-ebbtularvikonskowolayangroupiemontexn--valle-d-aoste-ehboehringerikexn--valleaoste-e7axn--valledaoste-ebbvadsoccerxn--vard-jraxn--vegrshei-c0axn--vermgensberater-ctb-hostingxn--vermgensberatung-pwbigvalledaostaoba
omoriguchiharag-cloud-championshiphoplixboxenirasakincheonishiazaindependent-commissionishigouvicasinordeste-idclkarasjohkamikitayamatsurindependent-inquest-a-la-masionishiharaxn--vestvgy-ixa6oxn--vg-yiabkhaziaxn--vgan-qoaxn--vgsy-qoa0jelenia-goraxn--vgu402cnsantabarbaraxn--vhquvestre-totennishiawakuraxn--vler-qoaxn--vre-eiker-k8axn--vrggt-xqadxn--vry-yla5gxn--vuq861biharstadotsubetsugaruhrxn--w4r85el8fhu5dnraxn--w4rs40lxn--wcvs22dxn--wgbh1cntjomeldaluroyxn--wgbl6axn--xhq521bihorologyusuisservegame-serverxn--xkc2al3hye2axn--xkc2dl3a5ee0hammarfeastafricaravantaaxn--y9a3aquariumintereitrentino-sudtirolxn--yer-znaumburgxn--yfro4i67oxn--ygarden-p1axn--ygbi2ammxn--4dbrk0cexn--ystre-slidre-ujbikedaejeonbukarasjokarasuyamarriottatsunoceanographiquehimejindependent-inquiryuufcfanishiizunazukindependent-panelomoliseminemrxn--zbx025dxn--zf0ao64axn--zf0avxlxn--zfr164bilbaogashimadachicagoboavistanbulsan-sudtirolbia-tempio-olbiatempioolbialystokkeliwebredirectme-south-1xnbayxz
\ No newline at end of file
diff --git a/vendor/golang.org/x/net/publicsuffix/list.go b/vendor/golang.org/x/net/publicsuffix/list.go
index 7caeeaa696d47..d56e9e7624457 100644
--- a/vendor/golang.org/x/net/publicsuffix/list.go
+++ b/vendor/golang.org/x/net/publicsuffix/list.go
@@ -101,10 +101,10 @@ loop:
break
}
- u := uint32(nodeValue(f) >> (nodesBitsTextOffset + nodesBitsTextLength))
+ u := uint32(nodes.get(f) >> (nodesBitsTextOffset + nodesBitsTextLength))
icannNode = u&(1<<nodesBitsICANN-1) != 0
u >>= nodesBitsICANN
- u = children[u&(1<<nodesBitsChildren-1)]
+ u = children.get(u & (1<<nodesBitsChildren - 1))
lo = u & (1<<childrenBitsLo - 1)
u >>= childrenBitsLo
hi = u & (1<<childrenBitsHi - 1)
@@ -154,18 +154,9 @@ func find(label string, lo, hi uint32) uint32 {
return notFound
}
-func nodeValue(i uint32) uint64 {
- off := uint64(i * (nodesBits / 8))
- return uint64(nodes[off])<<32 |
- uint64(nodes[off+1])<<24 |
- uint64(nodes[off+2])<<16 |
- uint64(nodes[off+3])<<8 |
- uint64(nodes[off+4])
-}
-
// nodeLabel returns the label for the i'th node.
func nodeLabel(i uint32) string {
- x := nodeValue(i)
+ x := nodes.get(i)
length := x & (1<<nodesBitsTextLength - 1)
x >>= nodesBitsTextLength
offset := x & (1<<nodesBitsTextOffset - 1)
@@ -189,3 +180,24 @@ func EffectiveTLDPlusOne(domain string) (string, error) {
}
return domain[1+strings.LastIndex(domain[:i], "."):], nil
}
+
+type uint32String string
+
+func (u uint32String) get(i uint32) uint32 {
+ off := i * 4
+ return (uint32(u[off])<<24 |
+ uint32(u[off+1])<<16 |
+ uint32(u[off+2])<<8 |
+ uint32(u[off+3]))
+}
+
+type uint40String string
+
+func (u uint40String) get(i uint32) uint64 {
+ off := uint64(i * (nodesBits / 8))
+ return uint64(u[off])<<32 |
+ uint64(u[off+1])<<24 |
+ uint64(u[off+2])<<16 |
+ uint64(u[off+3])<<8 |
+ uint64(u[off+4])
+}
diff --git a/vendor/golang.org/x/net/publicsuffix/table.go b/vendor/golang.org/x/net/publicsuffix/table.go
index 8b2e07243f71b..6bdadcc448b7f 100644
--- a/vendor/golang.org/x/net/publicsuffix/table.go
+++ b/vendor/golang.org/x/net/publicsuffix/table.go
@@ -2,7 +2,9 @@
package publicsuffix
-const version = "publicsuffix.org's public_suffix_list.dat, git revision 3c213aab32b3c014f171b1673d4ce9b5cd72bf1c (2021-11-26T23:05:53Z)"
+import _ "embed"
+
+const version = "publicsuffix.org's public_suffix_list.dat, git revision e248cbc92a527a166454afe9914c4c1b4253893f (2022-11-15T18:02:38Z)"
const (
nodesBits = 40
@@ -24,522 +26,17 @@ const (
)
// numTLD is the number of top level domains.
-const numTLD = 1504
+const numTLD = 1494
-// Text is the combined text of all labels.
-const text = "9guacuiababia-goracleaningroks-theatree164-balsfjordd-dnshome-we" +
- "bservercellikes-piedmonticellocalzoneastasiaetnaamesjevuemielnod" +
- "umcpeastcoastaldefenceastus2038birdartcenterprisecloudaccesscamb" +
- "ridgeiseiroumuenchenishiazaindielddanuorrindigenamsosnowiecherni" +
- "vtsiciliabirkenesoddtangenovaragusarts3-website-eu-west-1birthpl" +
- "acebitbucketrzynishigovtatsunocelotenkawabjarkoyoshiokanumazuryu" +
- "kindowapblogsiteleafamilycompany-2bjerkreimbaltimore-og-romsdalp" +
- "ha-myqnapcloud66bjugnieznorddalombardynalias3-website-sa-east-1b" +
- "lackfridayukuhashimoichinosekigaharabloombergbauernishiharabloxc" +
- "ms3-website-us-east-1bluebitemasekd1bmoattachments3-website-us-w" +
- "est-1bms3-website-us-west-2bmweeklylotteryurihonjournalistjohnis" +
- "hiizunazukindustriabnrwegroweibolognagareyamakeupowiathletajimag" +
- "eandsoundandvision-riopretochigiftsalangenishikatakatsukindustri" +
- "esteamfamberkeleyusuharabomloabaths-heilbronnoysundivttasvuotnak" +
- "aniikawatanagurabondigitaloceanspacesalon-1bonnishikatsuragit-re" +
- "posts-and-telecommunicationsaltdalomzaporizhzhegurinfinitinsureg" +
- "ruhostingloboavistanbulsan-sudtirolondonetskaratsuginamikatagami" +
- "hokkaidovre-eikerbookinghostedpictetnedalondrinamsskoganeintelli" +
- "gencebookonlinewjerseyusuisservegame-serverboomlajollamericanexp" +
- "ressexyuufcfanishikawazukamisatokaizukameyamatotakadaboschaeffle" +
- "rdalorenskoglogoweirbostik-serveronagasakikuchikuseihicampobasso" +
- "ciatest-iservecounterstrikebostonakijinsekikogentappsselfiparach" +
- "utingloppenzaolbia-tempio-olbiatempioolbialystokkeliwebhostinglu" +
- "gsjcbnpparibashkiriabotanicalgardeno-stagingmbhartipschlesisches" +
- "aludiyuzawabotanicgardenishimerabotanychernovtsyncloudrangedalot" +
- "tokorozawabouncemerckmsdnipropetrovskjervoyageometre-experts-com" +
- "ptablesalvadordalibabalena-devicesalzburgminakamichiharabounty-f" +
- "ullensakerrypropertiesamegawaboutiquebecommerce-shopitsitemp-dns" +
- "watch-and-clockerboutireserve-onlinewmexicodyn-o-saurlandesamnan" +
- "gerbozen-sudtirolouvreisenishinomiyashironocparaglidingmodelling" +
- "mxboxfordelmenhorstalbansampaleoddabozen-suedtirolpusercontentat" +
- "toolforgerockartuzybplaceducatorprojectaxihuanishinoomotegohtawa" +
- "ramotoineppubtlsamsclubartowellbeingzonebrandywinevalleybrasilia" +
- "bresciabrindisibenikikugawashtenawdevcdnaccessobetsuitagajobserv" +
- "ableusercontentcmeloyalistoragebristoloseyouriparisor-fronishino" +
- "shimatsumotofukebritishcolumbialowiezaganquannefrankfurtcp4broad" +
- "castlebtimnetzlgretakaharussiabroadwaybroke-itvedestrandray-dnst" +
- "racebrokerbrothermesaverdealerbrowsersafetymarketsamsungrimstadr" +
- "ayddns5ybrumunddalublindesnesandnessjoenishiokoppegardraydnsupda" +
- "terbrunelastxenishitosashimizunaminamibosognebrusselsandoybruxel" +
- "lesandvikcoromantovalle-daostavangerbryanskodjedugit-pagespeedmo" +
- "bilizeroticagliaricoharuhrbrynewportgorybuskerudrobaknoluoktachi" +
- "kawafflecellclstagehirnishiwakinterhostsolutionsanfranciscofreak" +
- "unekobayashikaoirmembersangomniweatherchannelucaniabuzentsujiieb" +
- "uzzwesteuropenairbusantiquest-a-la-maisondre-landroidrrbwestfale" +
- "nissandiegomurabzhitomirbzzcoloradoplateaudiopsysantacruzsantafe" +
- "djeffersoncolumbusheycommunecommunity-prochowicecomobaranzancomp" +
- "aremarkerryhotelsantamariakecomsecaaskoyabearalvahkievennodesaba" +
- "erobaticketsantoandreamhostersanukintuitjxjavaldaostathellevange" +
- "rcondoshichinohealth-carereformemergencyahabaghdadultkmaxxn--0tr" +
- "q7p7nnconferenceconstructionconsuladogadollsaobernardoconsultant" +
- "hropologyconsultingrossetouchihayaakasakawaharacontactksatxn--11" +
- "b4c3dyndns-blogdnsaogoncarriercontagematsubaraumalatvuopmicrosof" +
- "tbankasaokamikoaniihamatamakawajimaritimodumemorialcontemporarya" +
- "rteducationalchikugodonnagatorogersvp4contractorskenconventuresh" +
- "inodearthruherecipescaracalvinklein-berlindaskvollcookingchannel" +
- "sdvrdnsdojoetsuwanouchikujogaszkolancashireclaimsaotomeiwamashik" +
- "okuchuocoolcooperativano-frankivskygearapparochernigovernmentlon" +
- "-2copenhagencyclopedichitosetoeidsvollucernecoproductionsapporoc" +
- "orporationcorsicahcesuoloansardegnaroycorvettempurlcosenzakopane" +
- "lblagrarchaeologyeongbuk0cosidnsfor-better-thanawatchandclockash" +
- "ibatakasakiwakunigamilanotairestaurantmparsardiniacostumedicalta" +
- "nissettaipeigersundyndns-freeboxosascoli-picenordlandyndns-homed" +
- "nsarlcouchpotatofriesarpsborgroundhandlingroznycoukashiharacounc" +
- "ilcouponsarufutsunomiyawakasaikaitabashijonawatecozoravennaharim" +
- "alborkashiwaracqcxn--12c1fe0bradescotlandyndns-ipartinuyamashina" +
- "tsukigatakaokalmykiacranbrookuwanalyticsxn--12cfi8ixb8lcrdyndns-" +
- "mailcreditcardyndns-office-on-the-webercreditunioncremonashgabad" +
- "addjaguarqhachinohedmarkashiwazakiwielunnercrewfarsundyndns-pics" +
- "asayamatta-varjjatoyosatoyokawacricketoyotapartsasebofagemologic" +
- "allynxn--12co0c3b4evalled-aostakinouecrimeast-kazakhstanangercro" +
- "tonecrownipartycrsaskatchewancruisesassarinvestmentsaudacuisinel" +
- "lancasterculturalcentertainmentoyotomiyazakinzais-a-candidatecun" +
- "eocupcakecuritibackyardsauheradyndns-remotewdyndns-serverdalcurv" +
- "alledaostakkokonoecymruovatmallorcafederation-webpaashorokanaiec" +
- "yonabarumemsettlersavannahgacyouthachiojiyaitakahashimamakisosak" +
- "itagawaferraraferrarivneferrerotikagoshimalopolskanlandyndns-wik" +
- "irafetsundyndns-workshoparenakanojohanamakinoharafgujoinvilleitu" +
- "ngsenfhvalerfidoomdnsiskinkyotobetsulikescandyn53fieldyndns1figu" +
- "eresinstagingulenfilateliafilegear-audnedalnfilegear-dealstahaug" +
- "esunderseaportsinfolionetworkangerfilegear-gbizfilegear-iefilege" +
- "ar-jpmorganfilegear-sg-1filminamifuranofinalfinancefineartschule" +
- "finlandynnsaveincloudyndns-webhareidsbergentingrpasadenarashinof" +
- "innoyfirebaseappassenger-associationfirenetoyourafirenzefireston" +
- "efirewebhopocznordreisa-hockeynutazurestaticappspaceusercontento" +
- "ystre-slidrettozawafirmdalegoldpoint2thisamitsukefishingolffansc" +
- "hulserverfitjarvodkagaminogiessennanjobojis-a-catererfitnessettl" +
- "ementozsdeloittenrissagaeroclubmedecincinnativeamericanantiquest" +
- "-mon-blogueurodirumaceratabitorderimo-siemenscaledekaascolipicen" +
- "oboribetsuckschwarzgwangjuifminamiiserniafjalerfldrvallee-aoster" +
- "oyflekkefjordynservebbsaves-the-whalessandria-trani-barletta-and" +
- "riatranibarlettaandriaflesbergunmaniwakurateflickragerokunohealt" +
- "hcareerschweizflirfloginlinefloraflorencefloridatsunangojomedici" +
- "nakaiwamizawatchesciencecentersciencehistoryfloripaderbornfloris" +
- "tanohataitogliattis-a-celticsfanfloromskoguovdageaidnulvikasukab" +
- "edzin-addrammenuorochesterflowerscientistordalfltrani-andria-bar" +
- "letta-trani-andriaflynnhosting-clusterfndynulmetacentrumeteorapp" +
- "assagensavonarusawafnwkasumigaurayasudafoodnetworkdalfor-ourfor-" +
- "somedio-campidano-mediocampidanomediofor-theaterforexrothachirog" +
- "atakahatakaishimogosenforgotdnscjohnsonforli-cesena-forlicesenaf" +
- "orlillehammerfeste-ipatriaforsaleikangerforsandasuoloftraniandri" +
- "abarlettatraniandriafortalfortexascrapper-sitefortmissoulanciafo" +
- "rtworthadanorfolkebibleluxembourgushikamifuranore-og-uvdalfosnes" +
- "crappingwiddleksvikasuyanaizuerichardlillyfotranoyfoxafozfranami" +
- "zuhobby-sitextileirfjordynv6francaiseharafranziskanerimaringatla" +
- "ntaiwanairforcechireadthedocscbgxn--1ctwolominamataobaomoriguchi" +
- "haraffleentry-snowplowiczeladzfredrikstadtvscrysecuritytacticser" +
- "vehalflifeinsurancefreeddnsfreebox-oservehttpbin-butterfreedeskt" +
- "oppdalfreemasonryfreemyiphosteurovisionfreesitefreetlservehumour" +
- "freiburgfreseniuscultureggio-calabriafribourgfriuli-v-giuliafriu" +
- "li-ve-giuliafriuli-vegiuliafriuli-venezia-giuliafriuli-veneziagi" +
- "uliafriuli-vgiuliafriuliv-giuliafriulive-giuliafriulivegiuliafri" +
- "ulivenezia-giuliafriuliveneziagiuliafriulivgiuliafrlfroganservei" +
- "rchonanbulsan-suedtirolukowestus2frognfrolandynvpnpluscountryest" +
- "ateofdelawarecreationfrom-akrehamnfrom-alfrom-arfrom-azimuthatog" +
- "ayabukihokumakogenglandyroyrvikingruenoharafrom-capetownnews-sta" +
- "gingfrom-coffeedbackplaneappaviancargodaddyn-vpndnserveminecraft" +
- "ranslatefrom-ctransportefrom-dchoseikarugamvikariyaltakasagotsuk" +
- "isofukushimangyshlakasamatsudopaasnesoddenmarkhangelskjakdneprop" +
- "etrovskiervaapsteiermarkarlsoyfrom-deatnuniversityfrom-flanderse" +
- "rvemp3from-gaulardalfrom-hichisodegaurafrom-iafrom-idfrom-ilfrom" +
- "-in-brbar0from-kservep2pfizerfrom-kyowariasahikawafrom-langevagr" +
- "igentomologyeonggiehtavuoatnabudapest-a-la-masion-rancherkasydne" +
- "yfrom-malselvendrellfrom-mdfrom-medizinhistorischeservepicserveq" +
- "uakefrom-midsundfrom-mnfrom-modalenfrom-mservesarcasmatartanddes" +
- "ignfrom-mtnfrom-nchoshibuyachtsanjotelulubindaluroyfrom-ndfrom-n" +
- "efrom-nhktransurlfrom-njservicesevastopolefrom-nminamiizukaminok" +
- "awanishiaizubangefrom-nvallee-d-aosteigenfrom-nynysagamiharafrom" +
- "-ohdattorelayfrom-oketogonohejis-a-chefastly-terrariuminamiechiz" +
- "enfrom-orfrom-padoval-daostavalleyfrom-pratogurafrom-ris-a-conse" +
- "rvativegasevenassisicilyfrom-schoenbrunnfrom-sdscloudfrom-tnfrom" +
- "-txn--1lqs03nfrom-utsiracusaikirovogradoyfrom-vald-aostarostwodz" +
- "islawhalingfrom-vtrapaniizafrom-wafrom-wiardwebspacefrom-wvallee" +
- "aosteinkjerusalembroideryfrom-wyfrosinonefrostaplesewhoswholding" +
- "small-webredirectmeeresistancefroyahooguyfruskydivingfstcgroupgf" +
- "oggiafujiiderafujikawaguchikonefujiminokamoenairguardiannakadoma" +
- "rineat-urlfujinomiyadattowebcampinashikiminohostfoldnavyfujiokay" +
- "amalvikaszubyfujisatoshonairlinebraskaunicommbankatowicefujisawa" +
- "fujishiroishidakabiratoridebianfujitsurugashimamurogawafujiyoshi" +
- "davvenjargap-northeast-3fukayabeatsharis-a-cpadualstackatsushika" +
- "beebyteapplinzis-a-cubicle-slavellinodeobjectsharpharmacienshawa" +
- "iijimarburgfukuchiyamadavvesiidappnodebalancertificationfukudomi" +
- "gawafukuis-a-democratravelchannelfukumitsubishigakiryuohkurafuku" +
- "okazakisarazure-mobileirvikatsuyamarriottravelersinsurancefukuro" +
- "ishikarikaturindalfukusakishiwadazaifudaigokaseljordfukuyamagata" +
- "jimifunefunabashiriuchinadafunagatajiris-a-designerfunahashikami" +
- "amakusatsumasendaisenergyfundaciofunkfeuerfuoiskujukuriyamandalf" +
- "uosskoczowienfurnitureggio-emilia-romagnakasatsunairportland-4-s" +
- "alernogatabusebastopologyeongnamegawafaicloudinedre-eikerfurubir" +
- "afurudonostiaafurukawairtelebitbridgestoneen-rootaruis-a-doctorf" +
- "usoftwarezzoologyfussaintlouis-a-anarchistoireggiocalabriafutaba" +
- "yamaguchinomihachimanagementrdfutboldlygoingnowhere-for-morenaka" +
- "tombetsumitakagiizefuttsurugimperiafuturecmshellaspeziafuturehos" +
- "tingfuturemailingfvghangglidinghangoutsystemscloudsitehannanmoku" +
- "izumodenakayamansionshimojis-a-greenhannorthwesternmutualhanyuze" +
- "nhapmircloudletshimokawahappounjargaharstadharvestcelebrationhas" +
- "amanxn--1lqs71dhasaminami-alpshimokitayamattelekommunikationhash" +
- "banghasudahasura-appharmacyshimonitayanagitapphdfcbankazohasvika" +
- "zteleportlligatrendhostinghatoyamazakitahiroshimaoris-a-gurunusu" +
- "alpersonhatsukaichikaiseiyoichippubetsubetsugarunzenhattfjelldal" +
- "hayashimamotobungotakadagestangeorgeorgiahazuminobusellfylkesbib" +
- "lackbaudcdn-edgestackhero-networkinggroupliguriahelsinkitakamiiz" +
- "umisanofidelitysvardontexistmein-iservebeero-stagehembygdsforbun" +
- "dhemneshimonosekikawahemsedalhepforgeblockshimosuwalkis-a-hard-w" +
- "orkershimotsukeheroyhgtvalleedaostehidorahigashiagatsumagoianiah" +
- "igashichichibunkyonanaoshimakanegasakilatironrenderhigashihirosh" +
- "imanehigashiizumozakitakatakamoriokakudamatsuehigashikagawahigas" +
- "hikagurasoedahigashikawakitaaikitakyushuaiahigashikurumeetrentin" +
- "-sud-tirolhigashimatsushimapartmentshimotsumayfirstockholmestran" +
- "dhigashimatsuyamakitaakitadaitoigawahigashimurayamamotorcycleshi" +
- "nichinanhigashinarusells-for-lesshinjournalismailillesandefjordh" +
- "igashinehigashiomitamamurausukitamihamadahigashiosakasayamanakak" +
- "ogawahigashishirakawamatakanabeautysfjordhigashisumiyoshikawamin" +
- "amiaikitamotosumy-gatewayhigashitsunortonhigashiurawa-mazowszexn" +
- "etlifyis-a-hunterhigashiyamatokoriyamanashifteditorxn--1qqw23ahi" +
- "gashiyodogawahigashiyoshinogaris-a-knightpointtohoboleslawiecono" +
- "miastalowa-wolawawsmpplanetariuminamimakis-a-landscaperugiahirai" +
- "zumisatohnoshoooshikamaishimodatehirakatashinagawahiranairtraffi" +
- "cplexus-1hirarahiratsukaerusrcfastlylbananarepublic66hirayaizuwa" +
- "kamatsubushikusakadogawahistorichouseshinjukumamotoyamasfjordenh" +
- "itachiomiyagildeskaliszhitachiotagophiladelphiaareadmyblogsytehi" +
- "traeumtgeradell-ogliastraderhjartdalhjelmelandholeckochikushinon" +
- "senasakuchinotsuchiurakawaholidayhomegoodshinkamigototalhomeiphi" +
- "latelyhomelinkyard-cloudjiffyresdalhomelinuxn--2m4a15ehomeoffice" +
- "homesecuritymacaparecidahomesecuritypchoyodobashichikashukujitaw" +
- "araholtalenissayokkaichiropractichirurgiens-dentistes-en-franceh" +
- "omesenseeringhomesklepphilipsynology-diskstationhomeunixn--2scrj" +
- "9christiansburgripehondahongotembaixadahonjyoitakanezawahorninda" +
- "lhorsells-for-ustkanmakitaurahortendofinternet-dnshinshinotsurge" +
- "onshalloffamelbournehospitalhoteleshinshirohotelwithflightshinto" +
- "kushimahotmailhoyangerhoylandetroitskazunoticiashintomikasaharah" +
- "umanitieshinyoshitomiokamishihoronobeauxartsandcraftshiojirishir" +
- "ifujiedahurdalhurumajis-a-lawyerhyllestadhyogoris-a-liberalhyuga" +
- "warahyundaiwafuneis-uberleetrentin-suedtirolis-very-badajozis-a-" +
- "nursells-itrentin-sudtirolis-very-evillageis-very-goodyearis-ver" +
- "y-niceis-very-sweetpepperis-with-thebandownloadisleofmanaustdalj" +
- "env-arubajddarchitecturealtorlandjeonnamerikawauejetztrentino-a-" +
- "adigejevnakershusdecorativeartshitaramajewelryjewishartgalleryjf" +
- "kharkivanylvenneslaskerrylogisticshizukuishimofusakakinokiajgora" +
- "jlljls-sto1jls-sto2jls-sto3jmphoenixn--30rr7yjnjaworznoshiroomgj" +
- "oyentrentino-aadigejoyokaichibalashovhadselburgjpnjprshizuokamit" +
- "suejurkoshimizumakiyosatokamachintaifun-dnsaliashoujis-a-persona" +
- "ltrainerkoshunantankhmelnitskiyamarshallstatebankharkovaokosugek" +
- "otohiradomainstitutekotourakouhokutamakiyosemitekounosupabasells" +
- "yourhomeftphotographysiokouyamarylandkouzushimarylhurstjordalsha" +
- "lsenkozagawakozakiyosunndalkozowiiheyakagekpnkppspbar2krasnikaho" +
- "kutokashikizunokunimilitarykrasnodarkredstonekrelliankristiansan" +
- "dcatshowakristiansundkrodsheradkrokstadelvalle-aostatic-accessho" +
- "wtimeldalkryminamioguni5kumanotteroykumatorinovecoregontrailroad" +
- "kumejimashikekumenantokonamegatakashimashikis-a-photographerokus" +
- "sldkunisakis-a-playershiftcryptonomichigangwonkunitachiarailwayk" +
- "unitomigusukukis-a-republicancerresearchaeologicaliforniakunnepp" +
- "uboliviajessheimpertrixcdn77-secureggioemiliaromagnaklodzkodaira" +
- "kunstsammlungkunstunddesignkuokgrouphxn--3bst00minamisanrikubets" +
- "upplykurehabmerkurgankurobeepilepsykkylvenicekurogimimatakasugai" +
- "s-a-rockstarachowicekuroisogndalkuromatsunais-a-socialistdlibest" +
- "adkurotakikawasakis-a-soxfankushirogawakustanais-a-studentalkusu" +
- "pplieshwildlifestylekutchanelkutnow-dnsienarutomobelementoraykuz" +
- "umakis-a-teacherkassyno-dshirakofuefukihabororoshiranukamisunaga" +
- "wakvafjordkvalsundkvamlidlugolekafjordvagsoygardendoftheinternet" +
- "flixilovecollegefantasyleaguernseykvanangenkvinesdalkvinnheradkv" +
- "iteseidatingkvitsoykwpspdnsigdalkzmisasaguris-an-accountantshira" +
- "ois-a-linux-usershioyandexcloudmisawamisconfusedmishimassa-carra" +
- "ra-massacarraramassabusinessebykleclerchromediatechnologymissile" +
- "zajskhmelnytskyivaporcloudmisugitokuyamassivegridmitakeharamitou" +
- "rismilemitoyoakemiuramiyazurecontainerdpolicemiyotamanomjondalen" +
- "mlbfanmontrealestatefarmequipmentrentino-s-tirolmonza-brianzappo" +
- "siiitesilkhplaystation-cloudyclustermonza-e-della-brianzaptokyot" +
- "angouvichungnamdalseidfjordurbanamexhibitionissedalutskarmoymonz" +
- "abrianzaramonzaebrianzamonzaedellabrianzamoonscaleforcemordoviam" +
- "oriyamasudamoriyoshiminamiashigaramormonstermoroyamatsumaebashik" +
- "shacknetrentino-stirolmortgagemoscowilliamhillmoseushistorymosjo" +
- "enmoskenesimple-urlmossirdalmosviklabudhabikinokawabarthaebaruer" +
- "icssongdalenviknakatsugawamoteginowaniigatakahamangooglecodespot" +
- "rentino-sud-tirolmoviemovimientolgamozilla-iotrentino-sudtirolmt" +
- "ranbymuginozawaonsensiositemuikaminoyamaxunispacemukoebenhavnmul" +
- "houseminemunakatanemuncienciamuosattemupiemontemurmanskmpspawnex" +
- "tdirectrentino-alto-adigemurotorcraftrentino-sued-tirolmusashino" +
- "haramuseetrentino-suedtirolmuseumverenigingmusicarbonia-iglesias" +
- "-carboniaiglesiascarboniamutsuzawamy-vigorlicemy-wanggoupilemyac" +
- "tivedirectorymyasustor-elvdalmycdmycloudnslupsknx-serversicherun" +
- "gmydattolocalhistorymyddnsgeekgalaxymydissentrentinoa-adigemydob" +
- "isshikis-an-actormydroboehringerikemydslzmyeffectrentinoaadigemy" +
- "fastblogermyfirewallonieruchomoscienceandindustrynmyforuminamita" +
- "nemyfritzmyftpaccessmolaquilansmushcdn77-sslingmyhome-servermyji" +
- "nomykolaivarggatrentinoalto-adigemymailermymediapchurchaseljeeps" +
- "ondriodejaneirodoymyokohamamatsudamypepilotsnoasakataketomisatos" +
- "himatsuzakis-an-actresshiraokamitondabayashiogamagoriziamypetsok" +
- "ndalmyphotoshibalatinoopencraftrainingmypicturesolarssonmypsxn--" +
- "3ds443gmysecuritycamerakermyshopblocksolognemyshopifymyspreadsho" +
- "ppingmythic-beastsolundbeckomaganemytis-a-bookkeeperspectakarazu" +
- "kaluganskomakiyokawaramytuleap-partnersomamyvncircustomer-ocimdb" +
- "amblebesbyeniwaizumiotsukumiyamazonawsglobalacceleratorahimeshim" +
- "abaridagawakuyachimataijibmdevelopmentashkentatamotorsitestingla" +
- "dedyn-berlincolnavigationavoizumizakiitatebayashiibahccavuotnaga" +
- "rag-cloud-charitydalipaywhirlimitedgcanonoichinomiyakebinagisoch" +
- "ildrensgardenavuotnapleskns3-eu-west-2mywirepaircraftingvollolip" +
- "opimientakayamatsuuraplatter-appinbarcelonagawalbrzycharternopil" +
- "awalesundiscountysnes3-eu-west-3utilities-1platterpinkomatsushim" +
- "arugame-hostyhostingplazaplcube-serverplumbingoplurinacionalpodh" +
- "alepodlasiellaktyubinskiptveterinairealmpmnpodzonepohlpoivronpok" +
- "erpokrovskommunalforbundpoliticarrdpolitiendapolkowicepoltavalle" +
- "-d-aostaticsopotrentinos-tirolpomorzeszowinbarclaycards3-externa" +
- "l-1ponpesaro-urbino-pesarourbinopesaromasvuotnaritakoelnponypord" +
- "enonepornporsangerporsangugeporsgrunnanyokoshibahikariwanumataka" +
- "zakis-an-artistgstagepoznanpraxis-a-bruinsfanprdpreservationpres" +
- "idioprgmrprimetelemarkommuneprincipeprivatizehealthinsuranceprof" +
- "esionalprogressivestnesor-odalpromombetsupportrentinostirolprope" +
- "rtyprotectionprotonetrentinosud-tirolprudentialpruszkowindmillpr" +
- "vcyberlevagangaviikanonjis-an-engineeringprzeworskogpugliapulawy" +
- "pupioneerpvhagebostadpvtrentinosudtirolpwcistrondheimmobilieniss" +
- "hingucciprianidurhamburgriwataraidynathomebuiltwithdarkarpaczest" +
- "-le-patroniyodogawapythonanywherepbodynamic-dnsor-varangerpzqldq" +
- "otoyohashimotoolsorfoldqponiatowadaqslgbtrentinosued-tirolqualif" +
- "ioappippueblockbusterniiminamiawajikis-an-anarchistoricalsociety" +
- "quickconnectrentinosuedtirolquicksytesorocabalestrandabergamoare" +
- "keymachineustargardquipelementsorreisahayakawakamiichikawamisato" +
- "ttoris-an-entertainerswedenswidnicartoonartdecologiaswidnikkokam" +
- "iminersouthcarolinarvikomonotogawaswiebodzin-dslattuminanoswinou" +
- "jscienceandhistoryswissmarterthanyoutwentesynology-dsouthwest1-u" +
- "slivinghistorytularvikongsbergtunesowatunkongsvingerturystykaney" +
- "amazoetuscanytushuissier-justicetuvalleaostaverntuxfamilytwmailv" +
- "ibo-valentiavibovalentiavideovillaspectruminamiyamashirokawanabe" +
- "laudibleasingvinnicasacamdvrcampinagrandebuilderschmidtre-gaulda" +
- "lvinnytsiavipsinaappittsburghofficialvirginiavirtual-userveexcha" +
- "ngevirtualcloudvirtualservervirtualuserveftpiwatevirtuelvisakuho" +
- "kksundviterboknowsitallvivolkenkundenvixn--3hcrj9civilaviationth" +
- "ewifiatlassian-dev-myqnapcloudcontrolledogawarabikomaezakirunoip" +
- "irangalsaceomutashinainternationalfirearmsannanvlaanderennesoyvl" +
- "adikavkazimierz-dolnyvladimirvlogintoyonezawavmincomcastresindev" +
- "icenzaporizhzhiavologdanskoninjambylvolvolkswagentspeedpartnervo" +
- "lyngdalvoorlopervossevangenvotevotingvotoyonovps-hostrowiecivili" +
- "sationwithgoogleapiszwithyoutuberspacekitagatamayufuettertdasnet" +
- "zwiwatsukiyonosegawawixsitewloclawekonsulatrobeeldengeluidvarese" +
- "rvdwmcloudwmflabspydebergwoodsideltairavpagexlworse-thandawowind" +
- "owskrakowinnersphinxn--3e0b707ewpdevcloudwpenginepoweredwphosted" +
- "mailwpmucdnpixolinodeusercontentrentinoaltoadigewpmudeveloperaun" +
- "iterois-foundationwritesthisblogwroclawiospjelkavikomorotsukagaw" +
- "awtcirclerkstagets-itrentoyonakagyokutoyakolobrzegersundwtfastvp" +
- "s-serverisignwuozuwzmiuwajimaxn--45q11civilwarmiasadoesntexistei" +
- "ngeekaruizawaxn--4gbriminingxn--4it168dxn--4it797kooris-a-painte" +
- "ractivestfoldxn--4pvxs4allxn--54b7fta0cclanbibaidarmeniaxn--55qw" +
- "42gxn--55qx5dxn--5js045dxn--5rtp49cldmailuxuryxn--5rtq34kopervik" +
- "hersonxn--5su34j936bgsgxn--5tzm5gxn--6btw5axn--6frz82gxn--6orx2r" +
- "xn--6qq986b3xlxn--7t0a264cleverappstmnxn--80aaa0cvacationsrhtren" +
- "tinsud-tirolxn--80adxhksrlxn--80ao21axn--80aqecdr1axn--80asehdba" +
- "refootballooninglassassinationalheritagebinordre-landiscourses3-" +
- "sa-east-1xn--80aswgxn--80augustowitdkonskowolayangrouphonefossho" +
- "pwarendalenugxn--8ltr62koryokamikawanehonbetsurutaharaxn--8pvr4u" +
- "xn--8y0a063axn--90a1affinitylotterybnikeisenbahnxn--90a3academia" +
- "micable-modemoneyxn--90aeroportalaheadjudaicadaquesrvaroyxn--90a" +
- "ishobarakawagoexn--90amcdirxn--90azhytomyravendbargainstances3-u" +
- "s-east-2xn--9dbhblg6dietrevisojamisonxn--9dbq2axn--9et52uxn--9kr" +
- "t00axn--andy-iraxn--aroport-byaotsurnadalxn--asky-iraxn--aurskog" +
- "-hland-jnbarreauctioncilla-speziauthgear-stagingjesdalimanowarud" +
- "aurskog-holandinggfarmerseineatonsbergitpagefrontappalmspringsak" +
- "erevistarnbergivestbytemark12xn--avery-yuasakuragawaxn--b-5gaxn-" +
- "-b4w605ferdxn--balsan-sdtirol-nsbstorebaselectrentinsudtirolxn--" +
- "bck1b9a5dre4clicketcloudcontrolapparmatsushigexn--bdddj-mrabdxn-" +
- "-bearalvhki-y4axn--berlevg-jxaxn--bhcavuotna-s4axn--bhccavuotna-" +
- "k7axn--bidr-5nachikatsuuraxn--bievt-0qa2xn--bjarky-fyasakaiminat" +
- "oyookanazawaxn--bjddar-ptargetmyipizzaxn--blt-elabourxn--bmlo-gr" +
- "aingerxn--bod-2natalxn--bozen-sdtirol-2obanazawaxn--brnny-wuacad" +
- "emy-firewall-gatewayxn--brnnysund-m8accident-investigation-aptib" +
- "leadpagesquare7xn--brum-voagatritonxn--btsfjord-9zaxn--bulsan-sd" +
- "tirol-nsbarrel-of-knowledgeappleborkaragandauthgearappspacehoste" +
- "d-by-previderhclouddnslivegarsheiheijibigawaustevoll-o-g-i-n4t3l" +
- "3p0rtarnobrzegyptianatuurwetenschappenginebetsuikirkenes3-ap-sou" +
- "th-1xn--c1avgxn--c2br7gxn--c3s14miniserverxn--cck2b3barrell-of-k" +
- "nowledgecomputerhistoryofscience-fictionfabricafjs3-us-gov-west-" +
- "1xn--cckwcxetdxn--cesena-forl-mcbremangerxn--cesenaforl-i8axn--c" +
- "g4bkis-gonexn--ciqpnxn--clchc0ea0b2g2a9gcdxn--comunicaes-v6a2oxn" +
- "--correios-e-telecomunicaes-ghc29axn--czr694barsycenterprisesaki" +
- "joburgleezebizenakanotoddenayorovnobirauthordalanddnss3-ap-south" +
- "east-2xn--czrs0troandinosaureplantationxn--czru2dxn--czrw28barsy" +
- "onlinewhampshirebungoonord-frontierxn--d1acj3basicserversaillesj" +
- "abbottatarantours3-us-west-1xn--d1alfaromeoxn--d1atrogstadxn--d5" +
- "qv7z876clickrisinglesannohelplfinancialuzernxn--davvenjrga-y4axn" +
- "--djrs72d6uyxn--djty4kosaigawaxn--dnna-grajewolterskluwerxn--drb" +
- "ak-wuaxn--dyry-iraxn--e1a4clinichitachinakagawassamukawatarikuze" +
- "ntakatainaioiraseating-organicbcn-north-1xn--eckvdtc9dxn--efvn9s" +
- "torfjordxn--efvy88haibarakitahatakamatsukawaxn--ehqz56nxn--elqq1" +
- "6hair-surveillancexn--eveni-0qa01gaxn--f6qx53axn--fct429kosakaer" +
- "odromegallupaasdaburxn--fhbeiarnxn--finny-yuaxn--fiq228c5hstorjc" +
- "loud-ver-jpchristmasakinderoyxn--fiq64basilicataniautomotiveland" +
- "ds3-ca-central-1xn--fiqs8stpetersburgxn--fiqz9streamscompute-1xn" +
- "--fjord-lraxn--fjq720axn--fl-ziaxn--flor-jraxn--flw351exn--forl-" +
- "cesena-fcbsstudioxn--forlcesena-c8axn--fpcrj9c3dxn--frde-grandra" +
- "pidstudynamisches-dnsortlandxn--frna-woaraisaijosoyrovigotpanthe" +
- "onsitexn--frya-hraxn--fzc2c9e2cliniquedapliernewyorkshirecifedex" +
- "eterxn--fzys8d69uvgmailxn--g2xx48clintonoshoesanokarumaifarmstea" +
- "dyndns-at-homedepotenzamamidorittogakushimotoganexn--gckr3f0faus" +
- "kedsmokorsetagayaseralingenoamishirasatogitsumidatlantichofunato" +
- "riginstantcloudfrontdoorxn--gecrj9clothingdustdatadetectjmaxxxer" +
- "oxfinityxn--ggaviika-8ya47hakatanorth-kazakhstanxn--gildeskl-g0a" +
- "xn--givuotna-8yasugitlaborxn--gjvik-wuaxn--gk3at1exn--gls-elacai" +
- "xaxn--gmq050is-into-animegurownproviderxn--gmqw5axn--gnstigbeste" +
- "llen-zvbrplsbxn--3pxu8konyvelohmusashimurayamarumorimachidaxn--g" +
- "nstigliefern-wobihirosakikamijimatsunowtvestre-totennishiawakura" +
- "xn--h-2failxn--h1aeghakodatexn--h1ahnxn--h1alizxn--h2breg3evenes" +
- "tuff-4-salexn--h2brj9c8cn-northwest-1xn--h3cuzk1diherokuappkomfo" +
- "rbar1xn--hbmer-xqaxn--hcesuolo-7ya35basketballfinanzjampalacehim" +
- "ejiiyamanouchikuhokuryugasakitanakagusukumodernfshostrodawarauto" +
- "scanadaeguambulancentralus-2xn--hery-iraxn--hgebostad-g3axn--hkk" +
- "inen-5waxn--hmmrfeasta-s4accident-prevention-k3stufftoread-books" +
- "nesoruminamiuonumasoyxn--hnefoss-q1axn--hobl-iraxn--holtlen-hxax" +
- "n--hpmir-xqaxn--hxt814exn--hyanger-q1axn--hylandet-54axn--i1b6b1" +
- "a6a2exn--imr513nxn--indery-fyasuokannamiharuxn--io0a7is-into-car" +
- "shiratakahagithubpreviewsaitamatsukuris-a-llamarcheapigeelvinckd" +
- "diamondshirahamatonbetsurgeryxn--j1adplantsomnarviikamiokameokam" +
- "akurazakitashiobaraxn--j1aefbsbxn--1ck2e1banzaicloudappspotagerx" +
- "n--j1ael8batochiokinoshimaintenancempresashibetsukuin-vpncasadel" +
- "amonedancemrxn--j1amhakonexn--j6w193gxn--jlq480n2rgxn--jlq61u9w7" +
- "batsfjordiscoveryokoteu-1xn--jlster-byatominamidaitomanchesterxn" +
- "--jrpeland-54axn--jvr189minisitexn--k7yn95exn--karmy-yuaxn--kbrq" +
- "7oxn--kcrx77d1x4axn--kfjord-iuaxn--klbu-woaxn--klt787dxn--kltp7d" +
- "xn--kltx9axn--klty5xn--41axn--koluokta-7ya57hakubahcavuotnagaivu" +
- "otnagaokakyotambabydgoszczecinemagnethnologyxn--kprw13dxn--kpry5" +
- "7dxn--kput3is-into-cartoonshishikuis-a-musicianxn--krager-gyatsu" +
- "kanoyakumoldellogliastradingxn--kranghke-b0axn--krdsherad-m8axn-" +
- "-krehamn-dxaxn--krjohka-hwab49jdevcloudfunctionshisohugheshisuif" +
- "uelveruminamiminowaxn--ksnes-uuaxn--kvfjord-nxaxn--kvitsy-fyatsu" +
- "shiroxn--kvnangen-k0axn--l-1fairwindstuttgartrentinsued-tirolxn-" +
- "-l1accentureklamborghinikolaeventsurreyxn--laheadju-7yawaraxn--l" +
- "angevg-jxaxn--lcvr32dxn--ldingen-q1axn--leagaviika-52bauhauspost" +
- "man-echocolatelevisionflashdrivefsncfdishakotanhlfanhsbcasertail" +
- "scalecznagasukeu-2xn--lesund-huaxn--lgbbat1ad8jdfaststacksaxoxn-" +
- "-lgrd-poacctromsakegawaxn--lhppi-xqaxn--linds-pramericanartromso" +
- "kamogawaxn--lns-qlavagiskexn--loabt-0qaxn--lrdal-sraxn--lrenskog" +
- "-54axn--lt-liacngroks-thisayamanobeokakegawaxn--lten-granexn--lu" +
- "ry-iraxn--m3ch0j3axn--mely-iraxn--merker-kuaxn--mgb2ddesusakis-b" +
- "ytomaritimekeepingxn--mgb9awbfbx-oslodingenxn--mgba3a3ejtrusteex" +
- "n--mgba3a4f16axn--mgba3a4fra1-deportevaksdalxn--mgba7c0bbn0axn--" +
- "mgbaakc7dvfbxostrowwlkpmguidefinimamateramochizukindlegallocus-4" +
- "xn--mgbaam7a8hakuis-a-financialadvisor-aurdalxn--mgbab2bdxn--mgb" +
- "ah1a3hjkrdxn--mgbai9a5eva00bellunord-odalvdalaskanittedallasalle" +
- "angaviikadenagahamaroyerxn--mgbai9azgqp6jejuniperxn--mgbayh7gpal" +
- "ermomahachijolsterxn--mgbbh1a71exn--mgbc0a9azcgxn--mgbca7dzdoxn-" +
- "-mgbcpq6gpa1axn--mgberp4a5d4a87gxn--mgberp4a5d4arxn--mgbgu82axn-" +
- "-mgbi4ecexposedxn--mgbpl2fhskypexn--mgbqly7c0a67fbcnpyatigorskol" +
- "efrakkestadyndns-at-workisboringrondarxn--mgbqly7cvafr-1xn--mgbt" +
- "3dhdxn--mgbtf8flapymntrvestre-slidretrosnubarclays3-fips-us-gov-" +
- "west-1xn--mgbtx2beneventodayokozeu-3xn--mgbx4cd0abbvieeexn--mix0" +
- "82fedorainfraclouderaxn--mix891fedorapeoplegnicapebretonamicroli" +
- "ghtinguitarschokokekschokoladenxn--mjndalen-64axn--mk0axin-the-b" +
- "andais-into-gamessinazawaxn--mk1bu44cnsantabarbaraxn--mkru45is-l" +
- "eetrentin-sued-tirolxn--mlatvuopmi-s4axn--mli-tlavangenxn--mlsel" +
- "v-iuaxn--moreke-juaxn--mori-qsakurais-lostre-toteneis-a-nascarfa" +
- "nxn--mosjen-eyawatahamaxn--mot-tlazioxn--mre-og-romsdal-qqbusera" +
- "nishiaritakurashikis-not-certifiedxn--msy-ula0hakusanagochijiwad" +
- "egreexn--mtta-vrjjat-k7aflakstadaokagakicks-assnasaarlandxn--muo" +
- "st-0qaxn--mxtq1minnesotaketakatoris-a-techietis-a-libertarianxn-" +
- "-ngbc5azdxn--ngbe9e0axn--ngbrxn--42c2d9axn--nit225koseis-a-patsf" +
- "anxn--nmesjevuemie-tcbalsan-sudtirollagdenesnaaseinet-freaksuson" +
- "oxn--nnx388axn--nodessakyotanabellevuelosangelesuzakanagawaxn--n" +
- "qv7fs00emaxn--nry-yla5gxn--ntso0iqx3axn--ntsq17gxn--nttery-byaes" +
- "eoullensvanguardxn--nvuotna-hwaxn--nyqy26axn--o1achernihivgubsuz" +
- "ukananiikappudoxn--o3cw4haldenxn--o3cyx2axn--od0algxn--od0aq3ben" +
- "tleyolasiteu-4lima-cityeatselinogradimo-i-rana4u2-localhostrolek" +
- "aniepce12hpalmaserati234xn--ogbpf8flatangerxn--oppegrd-ixaxn--os" +
- "tery-fyaxn--osyro-wuaxn--otu796dxn--p1acfedoraprojectoyotsukaido" +
- "xn--p1ais-savedxn--pgbs0dhlx3xn--porsgu-sta26feiraquarelleaseekl" +
- "ogescholarshipschoolsztynsettsurfashionxn--pssu33lxn--pssy2uxn--" +
- "q7ce6axn--q9jyb4cntjomelhusgardenxn--qcka1pmckinseyxn--qqqt11min" +
- "tereitrentino-altoadigexn--qxa6axn--qxamsterdamnserverbaniaxn--r" +
- "ady-iraxn--rdal-poaxn--rde-ulaxn--rdy-0nabaris-slickfh-muensterx" +
- "n--rennesy-v1axn--rhkkervju-01afermockasserverrankoshigayamein-v" +
- "igorgexn--rholt-mragowoltlab-democraciaxn--rhqv96gxn--rht27zxn--" +
- "rht3dxn--rht61exn--risa-5naturalhistorymuseumcenterxn--risr-irax" +
- "n--rland-uuaxn--rlingen-mxaxn--rmskog-byaxn--rny31halsaitohmanno" +
- "rthflankaufentigerxn--rovu88beppublishproxyombolzano-altoadigeol" +
- "ogyomitanobninskarasjohkamikitayamatsurincheonikonanporobserverx" +
- "n--rros-granvindafjordxn--rskog-uuaxn--rst-0naturalsciencesnatur" +
- "ellesuzukis-certifiedxn--rsta-framercanvasvalbardunloppacificita" +
- "deliveryggeexn--rvc1e0am3exn--ryken-vuaxn--ryrvik-byaxn--s-1fait" +
- "hammarfeastafricapitalonewspaperxn--s9brj9collectionxn--sandness" +
- "jen-ogbeskidyn-ip24xn--sandy-yuaxn--sdtirol-n2axn--seral-lraxn--" +
- "ses554gxn--sgne-graphoxn--45br5cylxn--skierv-utazasvcitichiryuky" +
- "uragifuchungbukharahkkeravjuegoshikimobetsuldaluccaravantaarparl" +
- "iamentjeldsundrudupontariobranconavstackareliancexn--skjervy-v1a" +
- "xn--skjk-soaxn--sknit-yqaxn--sknland-fxaxn--slat-5naturbruksgymn" +
- "xn--slt-elabcieszynh-serveblogspotaribeiraogakibichuoxn--smla-hr" +
- "axn--smna-gratangentlentapisa-geekosherbrookegawaxn--snase-nraxn" +
- "--sndre-land-0cbestbuyshouses3-us-west-2xn--snes-poaxn--snsa-roa" +
- "xn--sr-aurdal-l8axn--sr-fron-q1axn--sr-odal-q1axn--sr-varanger-g" +
- "gbetainaboxfusejnyanagawalmartateshinanomachimkentateyamaveroyke" +
- "nebakkeshibechambagriculturealtychyattorneyagawakepnombrendlynge" +
- "nflfanpachigasakids3-eu-central-1xn--srfold-byaxn--srreisa-q1axn" +
- "--srum-gratis-a-bulls-fanxn--stfold-9xaxn--stjrdal-s1axn--stjrda" +
- "lshalsen-sqbhzcasinordeste-idcateringebuildinglitcheltenham-radi" +
- "o-opensocialimolisembokuleuvenetokigawavocatanzaroweddingjovikan" +
- "zakitchenaval-d-aosta-valleyboltarumizusawaustinnaumburgivingjem" +
- "nes3-ap-southeast-1xn--stre-toten-zcbieidskoguchikuzenvironmenta" +
- "lconservationionjukudoyamaizurugbyglandroverhallaakesvuemielecce" +
- "vje-og-hornnes3-website-ap-northeast-1xn--t60b56axn--tckwebthing" +
- "sveioxn--tiq49xqyjelasticbeanstalkhakassiaxn--tjme-hraxn--tn0agr" +
- "ocerydxn--tnsberg-q1axn--tor131oxn--trany-yuaxn--trentin-sd-tiro" +
- "l-rzbielawaltervistaikikonaikawachinaganoharamcoachampionshiphop" +
- "tobamadridnbloggerxn--trentin-sdtirol-7vbiellahppiacenzachpomors" +
- "kieninohekinannestadiskussionsbereichattanooganordkappgafaninomi" +
- "yakonojorpelandisrechtranakamagayahikobeardubaiduckdnsnillfjordi" +
- "tchyouripanamatsusakahoginankokubunjindianapolis-a-bloggerxn--tr" +
- "entino-sd-tirol-c3bieszczadygeyachiyodaejeonbukcoalwaysdatabaseb" +
- "allangenkainanaejrietisalatinabeno-ipifony-1xn--trentino-sdtirol" +
- "-szbievat-band-campaniavoues3-eu-west-1xn--trentinosd-tirol-rzbi" +
- "fukagawashingtondclk3xn--trentinosdtirol-7vbigv-infolldalivornow" +
- "ruzhgorodeoceanographics3-website-ap-southeast-1xn--trentinsd-ti" +
- "rol-6vbihorologyonagoyaxarnetbankaracoldwarszawaustraliamusement" +
- "dllpages3-ap-northeast-2ix4432-balsan-suedtirolkuszczytnord-aurd" +
- "alp16-b-datacentermezproxyzgorabruzzoologicalabamagasakishimabar" +
- "aogashimadachicagoboats3-ap-northeast-1kappchizip611xn--trentins" +
- "dtirol-nsbikedaemonmoutheworkpccwedeployonagunicloudivtasvuodnak" +
- "amurataishinomakinkobierzycextraspace-to-rentalstomakomaibarazur" +
- "ewebsiteshikagamiishibukawakkanaibetsubamericanfamilydsmynasushi" +
- "obarackmazeplayokosukanraustrheimatunduhrennebugattiffanyaarbort" +
- "eaches-yogasawaracingjerdrumcprequalifymeinforumzgorzeleccogjers" +
- "tadotsuruokakamigaharaukraanghkembuchikumagayagawakayamagentosit" +
- "ecnologiajudygarlanddnskingdyniamunemurorangecloudplatform0emmaf" +
- "ann-arboretumbriamallamaceiobbcg120001wwwbq-abogadobeaemcloud-fr" +
- "1337xn--trgstad-r1axn--trna-woaxn--troms-zuaxn--tysvr-vraxn--uc0" +
- "atvestvagoyxn--uc0ay4axn--uist22hamurakamigoris-a-geekautokeinot" +
- "iceablewismillerxn--uisz3gxn--unjrga-rtargithubusercontentryclou" +
- "dflareportrentinsuedtirolxn--unup4yxn--uuwu58axn--vads-jraxn--va" +
- "lle-aoste-ebbtrysiljanxn--valle-d-aoste-ehbodoes-itcouldbeworldx" +
- "n--valleaoste-e7axn--valledaoste-ebbvadsoccertmgrazerbaijan-maye" +
- "ngerdalcesvelvikomvuxn--32vp30hagakhanamigawaxn--vard-jraxn--veg" +
- "rshei-c0axn--vermgensberater-ctbitsvizzeraxn--vermgensberatung-p" +
- "wblogoiplatformshangrilanxessooxn--vestvgy-ixa6oxn--vg-yiabkhazi" +
- "axn--vgan-qoaxn--vgsy-qoa0jelenia-goraxn--vgu402colognexus-3xn--" +
- "vhquvevelstadxn--vler-qoaxn--vre-eiker-k8axn--vrggt-xqadxn--vry-" +
- "yla5gxn--vuq861bilbaokinawashirosatobishimagazineues3-website-ap" +
- "-southeast-2xn--w4r85el8fhu5dnraxn--w4rs40lxn--wcvs22dxn--wgbh1c" +
- "olonialwilliamsburgrongausdalvivanovoldaxn--wgbl6axn--xhq521bill" +
- "ustrationredumbrellair-traffic-controlleyoriikarasjokarasuyamarn" +
- "ardalombardiadembetsukubankaratexn--xkc2al3hye2axn--xkc2dl3a5ee0" +
- "handsonyxn--y9a3aquariumisakis-a-therapistoiaxn--yer-znaturhisto" +
- "rischesvn-reposoundcastronomy-routerxn--yfro4i67oxn--ygarden-p1a" +
- "xn--ygbi2ammxn--45brj9civilizationxn--ystre-slidre-ujbioceanogra" +
- "phiquexn--zbx025dxn--zf0ao64axn--zf0avxlxn--zfr164bipanasonicath" +
- "olicaxiaskimitsubatamibudejjuedischesapeakebayernirasakindianmar" +
- "ketingliwicexnbayxz"
+// text is the combined text of all labels.
+//
+//go:embed data/text
+var text string
// nodes is the list of nodes. Each node is represented as a 40-bit integer,
// which encodes the node's children, wildcard bit and node type (as an index
// into the children array), ICANN bit and text.
//
-// If the table was generated with the -comments flag, there is a //-comment
-// after each node's data. In it is the nodes-array indexes of the children,
-// formatted as (n0x1234-n0x1256), with * denoting the wildcard bit. The
-// nodeType is printed as + for normal, ! for exception, and o for parent-only
-// nodes that have children but don't match a domain label in their own right.
-// An I denotes an ICANN domain.
-//
// The layout within the node, from MSB to LSB, is:
//
// [ 7 bits] unused
@@ -547,9353 +44,9 @@ const text = "9guacuiababia-goracleaningroks-theatree164-balsfjordd-dnshome-we"
// [ 1 bits] ICANN bit
// [16 bits] text index
// [ 6 bits] text length
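The packed layout above can be illustrated with a small decoding sketch. This is an illustration, not code from the patch: the 10-bit width of the children index is an assumption inferred from the remaining bits (40 − 7 − 1 − 16 − 6 = 10), since the hunk boundary elides that line of the comment.

```go
package main

import "fmt"

// decodeNode unpacks one 40-bit node, stored as 5 big-endian bytes in the
// nodes table, into its fields per the documented layout (MSB to LSB):
// 7 unused bits, children index, 1 ICANN bit, 16-bit text index,
// 6-bit text length. The 10-bit children width is an assumption.
func decodeNode(b [5]byte) (children uint32, icann bool, textIdx, textLen uint32) {
	v := uint64(b[0])<<32 | uint64(b[1])<<24 | uint64(b[2])<<16 |
		uint64(b[3])<<8 | uint64(b[4])
	children = uint32(v >> 23 & 0x3ff) // bits 32..23
	icann = v>>22&1 == 1               // bit 22
	textIdx = uint32(v >> 6 & 0xffff)  // bits 21..6: offset into text
	textLen = uint32(v & 0x3f)         // bits 5..0: label length
	return
}

func main() {
	// First entry of the removed table: 0x00, 0x00, 0x53, 0x0b, 0x03.
	c, i, ti, tl := decodeNode([5]byte{0x00, 0x00, 0x53, 0x0b, 0x03})
	fmt.Println(c, i, ti, tl) // prints: 0 true 19500 3
}
```

Under this reading, the first node is an ICANN entry with no children whose label is the 3-character substring of `text` starting at offset 19500.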
-var nodes = [...]uint8{
- 0x00, 0x00, 0x53, 0x0b, 0x03,
- 0x00, 0x00, 0x5b, 0x6e, 0x44,
- 0x00, 0x00, 0x4e, 0x8c, 0x86,
- 0x00, 0x00, 0x55, 0x00, 0x03,
- 0x00, 0x00, 0x55, 0x00, 0x06,
- 0x00, 0x00, 0x59, 0x2c, 0x06,
- 0x00, 0x00, 0x5b, 0x92, 0x83,
- 0x00, 0x00, 0x41, 0xa0, 0x84,
- 0x00, 0x00, 0x5d, 0xeb, 0x07,
- 0x00, 0x00, 0x4e, 0x88, 0xc8,
- 0x00, 0x03, 0x40, 0x00, 0xc2,
- 0x00, 0x03, 0xd4, 0x2f, 0x07,
- 0x00, 0x00, 0x57, 0xf0, 0xc9,
- 0x00, 0x00, 0x4d, 0xdc, 0x4a,
- 0x00, 0x00, 0x4d, 0xdc, 0x4b,
- 0x00, 0x00, 0x43, 0x3b, 0x83,
- 0x00, 0x00, 0x43, 0x6a, 0xc5,
- 0x00, 0x04, 0x41, 0x3c, 0x82,
- 0x00, 0x00, 0x5d, 0x62, 0x04,
- 0x00, 0x00, 0x4c, 0x89, 0x83,
- 0x00, 0x00, 0x43, 0x1c, 0x05,
- 0x00, 0x04, 0xc0, 0x1a, 0xc2,
- 0x00, 0x00, 0x56, 0x74, 0x43,
- 0x00, 0x05, 0x42, 0xff, 0xc4,
- 0x00, 0x00, 0x40, 0x1a, 0xc5,
- 0x00, 0x05, 0xc0, 0x64, 0x82,
- 0x00, 0x00, 0x40, 0x64, 0x8e,
- 0x00, 0x00, 0x45, 0xb5, 0x43,
- 0x00, 0x00, 0x5b, 0x32, 0xc6,
- 0x00, 0x06, 0x40, 0x47, 0x82,
- 0x00, 0x00, 0x5e, 0x57, 0xc7,
- 0x00, 0x00, 0x43, 0xa2, 0x06,
- 0x00, 0x06, 0xc0, 0x36, 0x82,
- 0x00, 0x00, 0x49, 0x09, 0xc3,
- 0x00, 0x00, 0x42, 0xc3, 0x86,
- 0x00, 0x00, 0x46, 0x91, 0xc8,
- 0x00, 0x00, 0x49, 0x55, 0x46,
- 0x00, 0x00, 0x47, 0x6d, 0xc4,
- 0x00, 0x07, 0x40, 0x0b, 0x02,
- 0x00, 0x00, 0x55, 0x08, 0x89,
- 0x00, 0x00, 0x41, 0xa3, 0xc7,
- 0x00, 0x00, 0x4f, 0xf4, 0x86,
- 0x00, 0x00, 0x56, 0x9a, 0xc9,
- 0x00, 0x00, 0x4c, 0xa9, 0x48,
- 0x00, 0x00, 0x44, 0x60, 0x04,
- 0x00, 0x00, 0x52, 0x01, 0x46,
- 0x00, 0x00, 0x5d, 0x8b, 0x46,
- 0x00, 0x07, 0xc0, 0x1c, 0x02,
- 0x00, 0x00, 0x4f, 0xc7, 0x46,
- 0x00, 0x00, 0x41, 0x2d, 0x4f,
- 0x00, 0x00, 0x5d, 0x99, 0xce,
- 0x00, 0x00, 0x4e, 0x48, 0x04,
- 0x00, 0x00, 0x40, 0xd1, 0x05,
- 0x00, 0x00, 0x53, 0x5f, 0xc5,
- 0x00, 0x00, 0x5a, 0x89, 0x89,
- 0x00, 0x00, 0x44, 0x27, 0xc9,
- 0x00, 0x00, 0x42, 0xcb, 0x87,
- 0x00, 0x00, 0x42, 0x39, 0xc6,
- 0x00, 0x00, 0x42, 0xed, 0xc3,
- 0x00, 0x08, 0x41, 0x63, 0x02,
- 0x00, 0x00, 0x41, 0x63, 0x03,
- 0x00, 0x00, 0x4a, 0x86, 0x8a,
- 0x00, 0x08, 0xc1, 0x5c, 0x43,
- 0x00, 0x00, 0x54, 0x56, 0xc5,
- 0x00, 0x00, 0x4f, 0x45, 0xc2,
- 0x00, 0x00, 0x5a, 0x5c, 0x49,
- 0x00, 0x09, 0xc0, 0x28, 0xc2,
- 0x00, 0x00, 0x40, 0x88, 0x44,
- 0x00, 0x00, 0x5c, 0x9a, 0x86,
- 0x00, 0x00, 0x49, 0x68, 0xc5,
- 0x00, 0x00, 0x57, 0x6c, 0x04,
- 0x00, 0x0a, 0xd0, 0xfd, 0xc4,
- 0x00, 0x00, 0x40, 0x28, 0xc3,
- 0x00, 0x00, 0x43, 0x5f, 0xc4,
- 0x00, 0x0b, 0x40, 0x19, 0x42,
- 0x00, 0x00, 0x55, 0x73, 0x44,
- 0x00, 0x0b, 0xc0, 0x1a, 0x04,
- 0x00, 0x00, 0x41, 0x4f, 0x0a,
- 0x00, 0x0c, 0x40, 0x08, 0x82,
- 0x00, 0x00, 0x40, 0xbd, 0x07,
- 0x00, 0x00, 0x5b, 0xe8, 0xc8,
- 0x00, 0x0f, 0x40, 0x8b, 0x82,
- 0x00, 0x00, 0x53, 0xa3, 0x87,
- 0x00, 0x00, 0x42, 0xda, 0x04,
- 0x00, 0x00, 0x51, 0xb0, 0x47,
- 0x00, 0x00, 0x42, 0xda, 0x05,
- 0x00, 0x00, 0x58, 0x0e, 0x47,
- 0x00, 0x00, 0x54, 0xd9, 0x86,
- 0x00, 0x00, 0x55, 0x8c, 0x84,
- 0x00, 0x00, 0x56, 0xaf, 0x05,
- 0x00, 0x00, 0x47, 0x47, 0x07,
- 0x00, 0x12, 0x40, 0x59, 0x82,
- 0x00, 0x00, 0x4b, 0x04, 0x03,
- 0x00, 0x12, 0xc1, 0xf9, 0xc2,
- 0x00, 0x00, 0x5d, 0x35, 0x83,
- 0x00, 0x13, 0x40, 0x36, 0x02,
- 0x00, 0x00, 0x45, 0x48, 0x45,
- 0x00, 0x13, 0xc0, 0x02, 0x02,
- 0x00, 0x00, 0x57, 0x93, 0xc4,
- 0x00, 0x00, 0x5c, 0xcb, 0x05,
- 0x00, 0x00, 0x4e, 0x47, 0x47,
- 0x00, 0x00, 0x4b, 0x29, 0x4e,
- 0x00, 0x00, 0x4c, 0x39, 0x04,
- 0x00, 0x00, 0x43, 0x50, 0x44,
- 0x00, 0x00, 0x40, 0x78, 0x43,
- 0x00, 0x00, 0x50, 0x18, 0x89,
- 0x00, 0x00, 0x50, 0x6a, 0xcb,
- 0x00, 0x00, 0x59, 0x1a, 0x88,
- 0x00, 0x00, 0x53, 0x1f, 0x88,
- 0x00, 0x00, 0x53, 0x7b, 0xc8,
- 0x00, 0x00, 0x5c, 0xee, 0xc8,
- 0x00, 0x14, 0x56, 0x99, 0x0a,
- 0x00, 0x00, 0x58, 0x0d, 0x47,
- 0x00, 0x00, 0x5f, 0x3a, 0xc6,
- 0x00, 0x14, 0xc5, 0xa5, 0x02,
- 0x00, 0x00, 0x5d, 0xe7, 0x03,
- 0x00, 0x00, 0x5e, 0x32, 0xc3,
- 0x00, 0x00, 0x5e, 0x48, 0x84,
- 0x00, 0x00, 0x5d, 0xe7, 0x43,
- 0x00, 0x00, 0x55, 0x47, 0x83,
- 0x00, 0x02, 0xd3, 0xec, 0x82,
- 0x00, 0x15, 0x40, 0x8a, 0x42,
- 0x00, 0x00, 0x48, 0xb7, 0x85,
- 0x00, 0x00, 0x4a, 0xc7, 0x46,
- 0x00, 0x00, 0x4a, 0x29, 0xc4,
- 0x00, 0x00, 0x5a, 0x1f, 0x47,
- 0x00, 0x00, 0x43, 0x79, 0x06,
- 0x00, 0x00, 0x4d, 0x7f, 0x04,
- 0x00, 0x00, 0x5b, 0xb3, 0xc7,
- 0x00, 0x00, 0x42, 0x1b, 0xc3,
- 0x00, 0x16, 0xce, 0x20, 0x82,
- 0x00, 0x17, 0x46, 0x97, 0x82,
- 0x00, 0x17, 0xc1, 0x6d, 0x82,
- 0x00, 0x00, 0x41, 0x7b, 0x46,
- 0x00, 0x18, 0x40, 0x02, 0x82,
- 0x00, 0x00, 0x46, 0x64, 0x85,
- 0x00, 0x00, 0x54, 0x01, 0xc3,
- 0x00, 0x00, 0x5d, 0x72, 0x44,
- 0x00, 0x00, 0x50, 0x3a, 0x84,
- 0x00, 0x00, 0x50, 0x3a, 0x85,
- 0x00, 0x00, 0x5f, 0x1d, 0x43,
- 0x00, 0x18, 0xc5, 0x0b, 0x03,
- 0x00, 0x19, 0x40, 0x5a, 0x42,
- 0x00, 0x00, 0x40, 0x7f, 0xc5,
- 0x00, 0x00, 0x40, 0x7f, 0xcb,
- 0x00, 0x00, 0x51, 0x22, 0x8b,
- 0x00, 0x00, 0x40, 0x62, 0x04,
- 0x00, 0x00, 0x40, 0x89, 0x09,
- 0x00, 0x00, 0x40, 0x95, 0x44,
- 0x00, 0x19, 0xc0, 0x99, 0x02,
- 0x00, 0x00, 0x40, 0xa1, 0x43,
- 0x00, 0x00, 0x40, 0xa6, 0xc3,
- 0x00, 0x1a, 0x40, 0xb4, 0xc2,
- 0x00, 0x00, 0x41, 0x71, 0x0a,
- 0x00, 0x1a, 0xc0, 0xb7, 0x82,
- 0x00, 0x00, 0x5d, 0x64, 0x85,
- 0x00, 0x00, 0x4f, 0x25, 0x8a,
- 0x00, 0x00, 0x44, 0x5c, 0xc4,
- 0x00, 0x00, 0x40, 0xd6, 0x03,
- 0x00, 0x00, 0x40, 0xe4, 0x04,
- 0x00, 0x00, 0x41, 0x14, 0x43,
- 0x00, 0x00, 0x41, 0x14, 0x44,
- 0x00, 0x00, 0x41, 0x14, 0x47,
- 0x00, 0x00, 0x41, 0x3d, 0x45,
- 0x00, 0x00, 0x41, 0x45, 0x06,
- 0x00, 0x00, 0x41, 0x56, 0xc6,
- 0x00, 0x00, 0x41, 0x75, 0x03,
- 0x00, 0x00, 0x41, 0xb7, 0x48,
- 0x00, 0x00, 0x41, 0xe0, 0x83,
- 0x00, 0x1b, 0x40, 0x2f, 0xc2,
- 0x00, 0x00, 0x44, 0x17, 0x08,
- 0x00, 0x00, 0x49, 0x57, 0xcb,
- 0x00, 0x00, 0x42, 0x47, 0x88,
- 0x00, 0x00, 0x42, 0x51, 0x06,
- 0x00, 0x00, 0x42, 0x52, 0x87,
- 0x00, 0x00, 0x42, 0x7b, 0x48,
- 0x00, 0x1e, 0x40, 0x10, 0x02,
- 0x00, 0x1e, 0xc2, 0x03, 0x02,
- 0x00, 0x00, 0x47, 0xa7, 0x48,
- 0x00, 0x00, 0x5d, 0xab, 0x47,
- 0x00, 0x00, 0x51, 0xba, 0x45,
- 0x00, 0x1f, 0x51, 0xba, 0x48,
- 0x00, 0x1f, 0xcd, 0xf5, 0x08,
- 0x00, 0x00, 0x47, 0xd5, 0xc3,
- 0x00, 0x00, 0x42, 0xbf, 0xc4,
- 0x00, 0x00, 0x59, 0x2c, 0x82,
- 0x00, 0x20, 0x42, 0xcd, 0xc2,
- 0x00, 0x20, 0xc6, 0x81, 0x42,
- 0x00, 0x21, 0xc2, 0xd3, 0xc2,
- 0x00, 0x00, 0x42, 0xd3, 0xc3,
- 0x00, 0x22, 0x40, 0x17, 0x82,
- 0x00, 0x00, 0x51, 0x3a, 0x43,
- 0x00, 0x00, 0x44, 0xa8, 0x44,
- 0x00, 0x00, 0x40, 0x17, 0x83,
- 0x00, 0x00, 0x44, 0x5f, 0xc4,
- 0x00, 0x00, 0x43, 0x76, 0x0b,
- 0x00, 0x00, 0x40, 0x2f, 0x03,
- 0x00, 0x00, 0x4f, 0x94, 0x46,
- 0x00, 0x00, 0x41, 0x4d, 0x84,
- 0x00, 0x00, 0x4d, 0x36, 0x8e,
- 0x00, 0x00, 0x4f, 0xf9, 0x05,
- 0x00, 0x00, 0x47, 0x3c, 0x08,
- 0x00, 0x00, 0x5b, 0x33, 0xc7,
- 0x00, 0x00, 0x5b, 0x33, 0xca,
- 0x00, 0x00, 0x43, 0x15, 0x43,
- 0x00, 0x00, 0x5b, 0x6c, 0x47,
- 0x00, 0x00, 0x50, 0x6c, 0x85,
- 0x00, 0x00, 0x43, 0x15, 0x44,
- 0x00, 0x00, 0x45, 0xc0, 0x46,
- 0x00, 0x00, 0x45, 0xc0, 0x47,
- 0x00, 0x00, 0x56, 0xff, 0x44,
- 0x00, 0x22, 0xd1, 0xb4, 0x84,
- 0x00, 0x00, 0x58, 0x1d, 0xc4,
- 0x00, 0x00, 0x43, 0x89, 0x04,
- 0x00, 0x00, 0x5c, 0x13, 0x86,
- 0x00, 0x00, 0x40, 0xf5, 0x43,
- 0x00, 0x00, 0x5c, 0x17, 0x48,
- 0x00, 0x00, 0x5f, 0x2f, 0x08,
- 0x00, 0x00, 0x49, 0xdc, 0x43,
- 0x00, 0x00, 0x41, 0x70, 0xc3,
- 0x00, 0x00, 0x54, 0xa7, 0xc4,
- 0x00, 0x00, 0x55, 0xb2, 0x03,
- 0x00, 0x23, 0xc0, 0x2d, 0xc2,
- 0x00, 0x24, 0xc2, 0x19, 0x42,
- 0x00, 0x00, 0x40, 0x29, 0x86,
- 0x00, 0x00, 0x52, 0x02, 0x43,
- 0x00, 0x00, 0x43, 0xa9, 0xc4,
- 0x00, 0x25, 0x41, 0x32, 0x82,
- 0x00, 0x00, 0x41, 0x32, 0x83,
- 0x00, 0x00, 0x58, 0x18, 0xc3,
- 0x00, 0x00, 0x41, 0x84, 0x42,
- 0x00, 0x25, 0xc0, 0x34, 0x02,
- 0x00, 0x00, 0x4d, 0x95, 0xc6,
- 0x00, 0x00, 0x42, 0xb9, 0x87,
- 0x00, 0x00, 0x4f, 0xf2, 0x87,
- 0x00, 0x00, 0x4f, 0x5d, 0x45,
- 0x00, 0x00, 0x5c, 0xb8, 0xc4,
- 0x00, 0x00, 0x57, 0x0c, 0x05,
- 0x00, 0x00, 0x4c, 0x97, 0x47,
- 0x00, 0x00, 0x55, 0x82, 0xc9,
- 0x00, 0x00, 0x4d, 0xf9, 0x86,
- 0x00, 0x00, 0x4f, 0x5c, 0x46,
- 0x00, 0x27, 0xc0, 0x41, 0x02,
- 0x00, 0x00, 0x50, 0xf1, 0x88,
- 0x00, 0x00, 0x52, 0xa0, 0xc6,
- 0x00, 0x00, 0x42, 0xad, 0x85,
- 0x00, 0x00, 0x5b, 0x1f, 0x07,
- 0x00, 0x00, 0x5b, 0x5d, 0x04,
- 0x00, 0x00, 0x5b, 0x5d, 0x05,
- 0x00, 0x00, 0x5a, 0x24, 0xc4,
- 0x00, 0x00, 0x5a, 0x24, 0xc8,
- 0x00, 0x28, 0x40, 0x52, 0x02,
- 0x00, 0x28, 0xc0, 0x04, 0x82,
- 0x00, 0x00, 0x43, 0x8a, 0xc6,
- 0x00, 0x00, 0x40, 0x04, 0x88,
- 0x00, 0x00, 0x53, 0xe3, 0x05,
- 0x00, 0x00, 0x55, 0x36, 0x86,
- 0x00, 0x00, 0x55, 0xd7, 0x88,
- 0x00, 0x00, 0x56, 0x18, 0x88,
- 0x00, 0x29, 0x40, 0x2c, 0x45,
- 0x00, 0x2e, 0xc2, 0x04, 0xc4,
- 0x00, 0x00, 0x45, 0x76, 0xc7,
- 0x00, 0x2f, 0x40, 0x8f, 0xc2,
- 0x00, 0x2f, 0xd5, 0x47, 0xc2,
- 0x00, 0x32, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x5c, 0x9b, 0x85,
- 0x00, 0x33, 0xce, 0x9e, 0x05,
- 0x00, 0x00, 0x47, 0x42, 0x46,
- 0x00, 0x00, 0x4d, 0xc2, 0x47,
- 0x00, 0x00, 0x5e, 0x8c, 0x07,
- 0x00, 0x34, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x52, 0x1c, 0x47,
- 0x00, 0x00, 0x48, 0x9a, 0x48,
- 0x00, 0x50, 0x42, 0xe7, 0x09,
- 0x00, 0x00, 0x40, 0x66, 0x47,
- 0x00, 0x00, 0x42, 0xef, 0x07,
- 0x00, 0x00, 0x54, 0x92, 0x08,
- 0x00, 0x00, 0x42, 0xf7, 0x06,
- 0x00, 0x00, 0x43, 0x10, 0x46,
- 0x00, 0x00, 0x43, 0x24, 0x0c,
- 0x00, 0x00, 0x43, 0x32, 0x4a,
- 0x00, 0x00, 0x43, 0x3b, 0xc7,
- 0x00, 0x00, 0x43, 0x69, 0x8b,
- 0x00, 0x00, 0x43, 0x7c, 0x87,
- 0x00, 0x00, 0x43, 0x7c, 0x8e,
- 0x00, 0x50, 0xc3, 0x91, 0xc4,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x43, 0xb2, 0x87,
- 0x00, 0x00, 0x47, 0x1d, 0x87,
- 0x00, 0x00, 0x44, 0x00, 0x86,
- 0x00, 0x00, 0x44, 0x00, 0x87,
- 0x00, 0x00, 0x53, 0x2d, 0xc7,
- 0x00, 0x00, 0x41, 0xda, 0xc3,
- 0x00, 0x51, 0x42, 0xdd, 0x42,
- 0x00, 0x00, 0x44, 0x31, 0x06,
- 0x00, 0x00, 0x44, 0x31, 0x0a,
- 0x00, 0x00, 0x44, 0x39, 0xcb,
- 0x00, 0x00, 0x44, 0x57, 0xc7,
- 0x00, 0x00, 0x44, 0x71, 0x05,
- 0x00, 0x00, 0x44, 0x73, 0xc3,
- 0x00, 0x00, 0x44, 0x77, 0x46,
- 0x00, 0x00, 0x44, 0x77, 0x47,
- 0x00, 0x00, 0x46, 0x96, 0xc3,
- 0x00, 0x51, 0xc0, 0x01, 0x02,
- 0x00, 0x00, 0x44, 0x7e, 0x0a,
- 0x00, 0x52, 0x53, 0x0c, 0x02,
- 0x00, 0x52, 0xda, 0x15, 0x42,
- 0x00, 0x53, 0x44, 0x14, 0x02,
- 0x00, 0x53, 0xc3, 0x19, 0x82,
- 0x00, 0x00, 0x44, 0xa4, 0x85,
- 0x00, 0x00, 0x44, 0xb7, 0x04,
- 0x00, 0x55, 0x45, 0x43, 0x02,
- 0x00, 0x00, 0x55, 0x73, 0xc5,
- 0x00, 0x00, 0x43, 0x1b, 0xc3,
- 0x00, 0x00, 0x57, 0x41, 0x45,
- 0x00, 0x00, 0x56, 0x1b, 0x84,
- 0x00, 0x00, 0x42, 0x6f, 0x84,
- 0x00, 0x00, 0x4d, 0xd1, 0x86,
- 0x00, 0x00, 0x45, 0xcb, 0x86,
- 0x00, 0x00, 0x40, 0x81, 0xc3,
- 0x00, 0x00, 0x5d, 0x14, 0x04,
- 0x00, 0x00, 0x55, 0x8f, 0xc3,
- 0x00, 0x57, 0x40, 0x23, 0xc2,
- 0x00, 0x00, 0x42, 0x56, 0x04,
- 0x00, 0x00, 0x42, 0x56, 0x06,
- 0x00, 0x00, 0x44, 0xfd, 0x45,
- 0x00, 0x00, 0x59, 0x9f, 0xc6,
- 0x00, 0x00, 0x5b, 0x20, 0x08,
- 0x00, 0x00, 0x41, 0xde, 0x44,
- 0x00, 0x00, 0x45, 0x72, 0x08,
- 0x00, 0x00, 0x52, 0x67, 0xc5,
- 0x00, 0x00, 0x48, 0xe3, 0x48,
- 0x00, 0x00, 0x4d, 0x8d, 0x86,
- 0x00, 0x00, 0x4b, 0x9b, 0x07,
- 0x00, 0x00, 0x47, 0xcf, 0x44,
- 0x00, 0x5a, 0xc7, 0xcf, 0x46,
- 0x00, 0x5b, 0x41, 0xa6, 0xc3,
- 0x00, 0x00, 0x5a, 0x56, 0x03,
- 0x00, 0x00, 0x57, 0x10, 0x08,
- 0x00, 0x00, 0x53, 0x85, 0x04,
- 0x00, 0x5b, 0xc0, 0xe4, 0xc7,
- 0x00, 0x00, 0x48, 0x62, 0xc6,
- 0x00, 0x00, 0x4f, 0x01, 0x09,
- 0x00, 0x00, 0x50, 0x22, 0x08,
- 0x00, 0x00, 0x57, 0x52, 0x08,
- 0x00, 0x00, 0x58, 0x19, 0x44,
- 0x00, 0x00, 0x41, 0x80, 0xc3,
- 0x00, 0x00, 0x42, 0x8b, 0x02,
- 0x00, 0x5c, 0xc5, 0x64, 0x42,
- 0x00, 0x5d, 0x40, 0x14, 0xc2,
- 0x00, 0x00, 0x52, 0x82, 0x43,
- 0x00, 0x5d, 0xc0, 0x60, 0xc2,
- 0x00, 0x00, 0x46, 0x96, 0x44,
- 0x00, 0x00, 0x49, 0x5e, 0x46,
- 0x00, 0x00, 0x43, 0x28, 0xc3,
- 0x00, 0x00, 0x4c, 0xb1, 0xc7,
- 0x00, 0x00, 0x5d, 0xc0, 0x83,
- 0x00, 0x00, 0x4c, 0x39, 0xc8,
- 0x00, 0x00, 0x58, 0x16, 0xc5,
- 0x00, 0x00, 0x46, 0xaa, 0x03,
- 0x00, 0x00, 0x5c, 0xca, 0x85,
- 0x00, 0x00, 0x5c, 0xcb, 0xc4,
- 0x00, 0x00, 0x5b, 0x1c, 0x06,
- 0x00, 0x00, 0x5b, 0x74, 0x06,
- 0x00, 0x00, 0x4e, 0x46, 0x86,
- 0x00, 0x00, 0x4d, 0xb9, 0x44,
- 0x00, 0x00, 0x43, 0x80, 0x43,
- 0x00, 0x5e, 0x45, 0xf0, 0x42,
- 0x00, 0x5e, 0xc3, 0x71, 0x05,
- 0x00, 0x00, 0x40, 0x08, 0x43,
- 0x00, 0x5f, 0xc0, 0x2c, 0x02,
- 0x00, 0x00, 0x40, 0xf3, 0x43,
- 0x00, 0x00, 0x45, 0x8c, 0x05,
- 0x00, 0x60, 0x41, 0xf6, 0x03,
- 0x00, 0x61, 0x43, 0x60, 0x89,
- 0x00, 0x61, 0xc0, 0x09, 0x42,
- 0x00, 0x62, 0xc0, 0xb5, 0xc2,
- 0x00, 0x00, 0x49, 0x92, 0x45,
- 0x00, 0x00, 0x41, 0x93, 0xc6,
- 0x00, 0x00, 0x49, 0x24, 0xc6,
- 0x00, 0x00, 0x50, 0xd7, 0x88,
- 0x00, 0x00, 0x50, 0xd7, 0x8b,
- 0x00, 0x00, 0x54, 0xcc, 0x8b,
- 0x00, 0x00, 0x4f, 0x5f, 0x45,
- 0x00, 0x00, 0x4e, 0x26, 0x09,
- 0x00, 0x02, 0xc0, 0x10, 0x82,
- 0x00, 0x00, 0x4e, 0x8f, 0x88,
- 0x00, 0x00, 0x40, 0x3f, 0x04,
- 0x00, 0x63, 0xc0, 0x13, 0x42,
- 0x00, 0x00, 0x54, 0x41, 0xc3,
- 0x00, 0x64, 0xc7, 0x1f, 0x46,
- 0x00, 0x65, 0x40, 0x1b, 0x02,
- 0x00, 0x00, 0x5c, 0xf4, 0xc8,
- 0x00, 0x65, 0xc0, 0x4c, 0x02,
- 0x00, 0x00, 0x46, 0xc7, 0x4a,
- 0x00, 0x66, 0xc2, 0x20, 0xc3,
- 0x00, 0x67, 0xd7, 0xf7, 0x06,
- 0x00, 0x00, 0x51, 0xce, 0xc8,
- 0x00, 0x00, 0x41, 0x9d, 0x46,
- 0x00, 0x00, 0x58, 0xf2, 0x07,
- 0x00, 0x00, 0x41, 0x2f, 0x47,
- 0x00, 0x00, 0x5d, 0x86, 0xca,
- 0x00, 0x00, 0x44, 0x5d, 0x44,
- 0x00, 0x00, 0x56, 0x71, 0xc4,
- 0x00, 0x00, 0x57, 0xe7, 0x09,
- 0x00, 0x68, 0x5b, 0x2f, 0x05,
- 0x00, 0x00, 0x40, 0x64, 0xc6,
- 0x00, 0x00, 0x41, 0x32, 0xc3,
- 0x00, 0x00, 0x45, 0x5e, 0xc4,
- 0x00, 0x68, 0xce, 0x25, 0x04,
- 0x00, 0x00, 0x53, 0xb4, 0x87,
- 0x00, 0x69, 0x5a, 0x68, 0x07,
- 0x00, 0x00, 0x48, 0x09, 0x84,
- 0x00, 0x00, 0x55, 0xde, 0xc5,
- 0x00, 0x00, 0x47, 0x43, 0x08,
- 0x00, 0x00, 0x44, 0xc3, 0x87,
- 0x00, 0x00, 0x44, 0xc6, 0x07,
- 0x00, 0x69, 0xc0, 0xfd, 0x02,
- 0x00, 0x00, 0x51, 0xf0, 0xc4,
- 0x00, 0x00, 0x4a, 0x21, 0xc8,
- 0x00, 0x00, 0x44, 0xe3, 0x04,
- 0x00, 0x00, 0x45, 0x16, 0x04,
- 0x00, 0x00, 0x45, 0x19, 0xc5,
- 0x00, 0x00, 0x45, 0x1b, 0x07,
- 0x00, 0x6b, 0x55, 0x17, 0x89,
- 0x00, 0x00, 0x45, 0x31, 0x44,
- 0x00, 0x00, 0x45, 0x3e, 0x09,
- 0x00, 0x00, 0x45, 0x54, 0xc8,
- 0x00, 0x00, 0x45, 0x5c, 0x44,
- 0x00, 0x00, 0x45, 0x5c, 0x47,
- 0x00, 0x00, 0x45, 0x62, 0x43,
- 0x00, 0x00, 0x45, 0x6d, 0x47,
- 0x00, 0x6b, 0xc0, 0x0b, 0xc2,
- 0x00, 0x02, 0xcc, 0x5f, 0xc2,
- 0x00, 0x00, 0x45, 0xbb, 0x06,
- 0x00, 0x00, 0x4b, 0xdd, 0x07,
- 0x00, 0x00, 0x45, 0xc3, 0x84,
- 0x00, 0x00, 0x45, 0xde, 0x87,
- 0x00, 0x00, 0x45, 0xf6, 0x87,
- 0x00, 0x00, 0x46, 0x04, 0x83,
- 0x00, 0x6c, 0x45, 0x96, 0xc2,
- 0x00, 0x00, 0x41, 0xe1, 0x42,
- 0x00, 0x00, 0x46, 0x19, 0xc3,
- 0x00, 0x00, 0x46, 0x19, 0xc4,
- 0x00, 0x00, 0x46, 0x19, 0xcb,
- 0x00, 0x00, 0x53, 0x20, 0x88,
- 0x00, 0x00, 0x41, 0xe1, 0x44,
- 0x00, 0x00, 0x46, 0x2c, 0x05,
- 0x00, 0x00, 0x46, 0x46, 0x87,
- 0x00, 0x00, 0x4f, 0x3d, 0x05,
- 0x00, 0x00, 0x52, 0x92, 0x0a,
- 0x00, 0x00, 0x46, 0x7c, 0x83,
- 0x00, 0x6c, 0xc0, 0x81, 0x02,
- 0x00, 0x00, 0x43, 0xe6, 0x44,
- 0x00, 0x00, 0x46, 0xd2, 0x09,
- 0x00, 0x00, 0x47, 0x0c, 0x43,
- 0x00, 0x00, 0x47, 0x0d, 0x07,
- 0x00, 0x00, 0x56, 0x13, 0xc9,
- 0x00, 0x00, 0x54, 0xf6, 0xc8,
- 0x00, 0x00, 0x46, 0x4d, 0x43,
- 0x00, 0x00, 0x48, 0xa7, 0xc7,
- 0x00, 0x00, 0x49, 0x11, 0x03,
- 0x00, 0x00, 0x49, 0x26, 0x44,
- 0x00, 0x00, 0x49, 0x33, 0x49,
- 0x00, 0x00, 0x49, 0x77, 0x86,
- 0x00, 0x00, 0x4a, 0xe1, 0x03,
- 0x00, 0x00, 0x40, 0x87, 0x82,
- 0x00, 0x00, 0x4c, 0x5d, 0xc3,
- 0x00, 0x00, 0x4c, 0x5d, 0xc7,
- 0x00, 0x00, 0x58, 0x9d, 0x85,
- 0x00, 0x00, 0x55, 0x71, 0x86,
- 0x00, 0x00, 0x41, 0x28, 0x04,
- 0x00, 0x00, 0x59, 0x53, 0x05,
- 0x00, 0x00, 0x48, 0xb2, 0x43,
- 0x00, 0x00, 0x41, 0x77, 0x46,
- 0x00, 0x00, 0x47, 0x2f, 0xc3,
- 0x00, 0x00, 0x40, 0x8b, 0x02,
- 0x00, 0x00, 0x45, 0x0a, 0xc4,
- 0x00, 0x6d, 0x43, 0x43, 0x82,
- 0x00, 0x6d, 0xc3, 0x43, 0x83,
- 0x00, 0x6e, 0x40, 0x30, 0xc2,
- 0x00, 0x00, 0x40, 0xbf, 0xc3,
- 0x00, 0x00, 0x41, 0x5b, 0x44,
- 0x00, 0x00, 0x45, 0x2a, 0x07,
- 0x00, 0x00, 0x4a, 0x07, 0x86,
- 0x00, 0x00, 0x46, 0xd1, 0xc2,
- 0x00, 0x6e, 0xc6, 0xd6, 0x02,
- 0x00, 0x00, 0x5b, 0x22, 0x04,
- 0x00, 0x6f, 0xc1, 0x15, 0xc2,
- 0x00, 0x70, 0x40, 0xc7, 0x82,
- 0x00, 0x00, 0x40, 0xc7, 0x84,
- 0x00, 0x00, 0x40, 0xc7, 0x85,
- 0x00, 0x00, 0x53, 0xc3, 0x45,
- 0x00, 0x00, 0x5c, 0x3d, 0xc6,
- 0x00, 0x70, 0xc1, 0x02, 0x02,
- 0x00, 0x00, 0x4f, 0xdf, 0x45,
- 0x00, 0x00, 0x53, 0x23, 0xc5,
- 0x00, 0x00, 0x4e, 0x9d, 0x43,
- 0x00, 0x00, 0x4f, 0xc9, 0x86,
- 0x00, 0x00, 0x41, 0x02, 0x05,
- 0x00, 0x00, 0x41, 0x7a, 0xc2,
- 0x00, 0x00, 0x55, 0xe4, 0x85,
- 0x00, 0x00, 0x41, 0x7a, 0xc4,
- 0x00, 0x00, 0x41, 0xdd, 0x83,
- 0x00, 0x00, 0x41, 0xdf, 0xc3,
- 0x00, 0x71, 0x40, 0x74, 0xc2,
- 0x00, 0x00, 0x47, 0x49, 0x07,
- 0x00, 0x00, 0x45, 0x56, 0xc4,
- 0x00, 0x00, 0x45, 0x56, 0xc9,
- 0x00, 0x00, 0x45, 0x5d, 0xc4,
- 0x00, 0x00, 0x4b, 0x69, 0x43,
- 0x00, 0x00, 0x4c, 0x2c, 0x88,
- 0x00, 0x71, 0xce, 0x9c, 0x84,
- 0x00, 0x00, 0x4e, 0x9c, 0x86,
- 0x00, 0x00, 0x4b, 0x48, 0x43,
- 0x00, 0x00, 0x46, 0x36, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x72, 0x50, 0x34, 0xc2,
- 0x00, 0x00, 0x58, 0xc9, 0x02,
- 0x00, 0x72, 0xc0, 0x06, 0x42,
- 0x00, 0x00, 0x54, 0x1f, 0x88,
- 0x00, 0x00, 0x5d, 0x24, 0x08,
- 0x00, 0x00, 0x5c, 0x01, 0xc6,
- 0x00, 0x00, 0x49, 0xa7, 0xc5,
- 0x00, 0x00, 0x4b, 0xb3, 0x85,
- 0x00, 0x00, 0x5c, 0x7f, 0x87,
- 0x00, 0x73, 0x48, 0x6e, 0x45,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x73, 0xca, 0x45, 0x42,
- 0x00, 0x74, 0x40, 0x00, 0x42,
- 0x00, 0x00, 0x48, 0x7c, 0x08,
- 0x00, 0x00, 0x50, 0xf0, 0xc5,
- 0x00, 0x00, 0x50, 0x86, 0x04,
- 0x00, 0x00, 0x58, 0x96, 0x05,
- 0x00, 0x00, 0x59, 0x41, 0x47,
- 0x00, 0x00, 0x49, 0xee, 0x04,
- 0x00, 0x00, 0x45, 0x94, 0xc2,
- 0x00, 0x74, 0xc3, 0x31, 0xc2,
- 0x00, 0x00, 0x55, 0x60, 0x44,
- 0x00, 0x00, 0x50, 0xf4, 0x47,
- 0x00, 0x00, 0x49, 0x97, 0xc7,
- 0x00, 0x00, 0x58, 0x0e, 0x04,
- 0x00, 0x00, 0x5e, 0x3a, 0x43,
- 0x00, 0x00, 0x49, 0xdb, 0x84,
- 0x00, 0x00, 0x49, 0xdb, 0x88,
- 0x00, 0x75, 0x43, 0x13, 0x86,
- 0x00, 0x00, 0x45, 0xbe, 0xca,
- 0x00, 0x00, 0x55, 0x16, 0x44,
- 0x00, 0x00, 0x4a, 0x1c, 0x08,
- 0x00, 0x00, 0x43, 0x72, 0xc4,
- 0x00, 0x00, 0x42, 0x53, 0x86,
- 0x00, 0x00, 0x4a, 0x45, 0x04,
- 0x00, 0x00, 0x5c, 0x9e, 0x86,
- 0x00, 0x00, 0x45, 0x59, 0x89,
- 0x00, 0x00, 0x4b, 0x3f, 0xc7,
- 0x00, 0x00, 0x5a, 0x0d, 0xc3,
- 0x00, 0x75, 0xc1, 0x73, 0x82,
- 0x00, 0x00, 0x47, 0xe1, 0xc3,
- 0x00, 0x00, 0x40, 0x9b, 0x02,
- 0x00, 0x76, 0x40, 0xaf, 0x02,
- 0x00, 0x00, 0x45, 0x46, 0x06,
- 0x00, 0x00, 0x48, 0x5e, 0x48,
- 0x00, 0x00, 0x4b, 0x66, 0x87,
- 0x00, 0x00, 0x55, 0xf2, 0x89,
- 0x00, 0x00, 0x4b, 0x68, 0x49,
- 0x00, 0x00, 0x4b, 0x80, 0x05,
- 0x00, 0x00, 0x4b, 0x9f, 0xc9,
- 0x00, 0x00, 0x4b, 0xb4, 0xc5,
- 0x00, 0x00, 0x4b, 0xc0, 0x45,
- 0x00, 0x00, 0x4b, 0xd5, 0x08,
- 0x00, 0x76, 0xc1, 0x00, 0x84,
- 0x00, 0x77, 0x41, 0x00, 0x87,
- 0x00, 0x00, 0x42, 0xf2, 0xc3,
- 0x00, 0x00, 0x4b, 0xd7, 0x07,
- 0x00, 0x00, 0x42, 0xf2, 0xc6,
- 0x00, 0x00, 0x4b, 0xe1, 0xc7,
- 0x00, 0x00, 0x4b, 0x38, 0x05,
- 0x00, 0x00, 0x42, 0xea, 0x83,
- 0x00, 0x77, 0xc2, 0x96, 0x02,
- 0x00, 0x00, 0x58, 0x1d, 0x04,
- 0x00, 0x78, 0x41, 0xfe, 0xc2,
- 0x00, 0x78, 0xc1, 0x5f, 0xc2,
- 0x00, 0x00, 0x57, 0xcd, 0x06,
- 0x00, 0x00, 0x5b, 0xe8, 0x45,
- 0x00, 0x00, 0x4c, 0x11, 0x07,
- 0x00, 0x00, 0x4f, 0xd6, 0x03,
- 0x00, 0x00, 0x55, 0x47, 0x04,
- 0x00, 0x00, 0x40, 0x16, 0x03,
- 0x00, 0x00, 0x5b, 0xe5, 0x03,
- 0x00, 0x79, 0x40, 0x30, 0x42,
- 0x00, 0x7a, 0xc0, 0x14, 0x42,
- 0x00, 0x00, 0x59, 0x2d, 0x04,
- 0x00, 0x00, 0x45, 0x96, 0x83,
- 0x00, 0x00, 0x50, 0xd4, 0x45,
- 0x00, 0x7b, 0x40, 0x41, 0x42,
- 0x00, 0x7c, 0x40, 0x6a, 0x42,
- 0x00, 0x00, 0x58, 0x98, 0x06,
- 0x00, 0x00, 0x4f, 0xbf, 0x04,
- 0x00, 0x00, 0x50, 0xec, 0xc4,
- 0x00, 0x00, 0x50, 0xec, 0xca,
- 0x00, 0x7d, 0x40, 0x05, 0xc2,
- 0x00, 0x00, 0x45, 0x23, 0x83,
- 0x00, 0x00, 0x40, 0xce, 0x0a,
- 0x00, 0x00, 0x40, 0xfc, 0x88,
- 0x00, 0x7d, 0xc5, 0x03, 0xc4,
- 0x00, 0x00, 0x40, 0x05, 0xc3,
- 0x00, 0x00, 0x43, 0x77, 0x03,
- 0x00, 0x00, 0x4c, 0xb2, 0xc9,
- 0x00, 0x00, 0x46, 0xb2, 0x89,
- 0x00, 0x00, 0x40, 0xfe, 0x46,
- 0x00, 0x7e, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x52, 0x05, 0x4d,
- 0x00, 0x00, 0x43, 0x08, 0x86,
- 0x00, 0x00, 0x44, 0x7a, 0x4b,
- 0x00, 0x7e, 0xc0, 0x5c, 0xc2,
- 0x00, 0x00, 0x51, 0xff, 0x88,
- 0x00, 0x88, 0x41, 0xb8, 0x42,
- 0x00, 0x88, 0xc0, 0x28, 0x02,
- 0x00, 0x00, 0x4b, 0xfe, 0x45,
- 0x00, 0x89, 0x40, 0x2b, 0x82,
- 0x00, 0x00, 0x4a, 0xaa, 0xc7,
- 0x00, 0x00, 0x40, 0xad, 0xc3,
- 0x00, 0x00, 0x41, 0x03, 0xc8,
- 0x00, 0x89, 0xc0, 0x4b, 0x02,
- 0x00, 0x00, 0x4b, 0xc5, 0xc4,
- 0x00, 0x00, 0x42, 0x4b, 0x03,
- 0x00, 0x00, 0x44, 0x40, 0xc6,
- 0x00, 0x00, 0x43, 0x0a, 0x84,
- 0x00, 0x00, 0x41, 0x70, 0x83,
- 0x00, 0x8c, 0x40, 0x1d, 0x02,
- 0x00, 0x00, 0x4f, 0x5e, 0xc4,
- 0x00, 0x00, 0x4c, 0x4c, 0x45,
- 0x00, 0x00, 0x4c, 0x59, 0xc7,
- 0x00, 0x00, 0x48, 0x8e, 0x83,
- 0x00, 0x00, 0x4c, 0x70, 0x03,
- 0x00, 0x02, 0xcc, 0x76, 0xc2,
- 0x00, 0x00, 0x4c, 0x76, 0xc3,
- 0x00, 0x00, 0x4c, 0x7b, 0x43,
- 0x00, 0x8c, 0xc0, 0x0c, 0x02,
- 0x00, 0x00, 0x42, 0x1e, 0x44,
- 0x00, 0x00, 0x54, 0xd0, 0x06,
- 0x00, 0x00, 0x47, 0xd8, 0x43,
- 0x00, 0x00, 0x4c, 0x7f, 0xc3,
- 0x00, 0x8d, 0x45, 0x10, 0xc2,
- 0x00, 0x00, 0x45, 0x10, 0xc8,
- 0x00, 0x00, 0x4c, 0x8c, 0x84,
- 0x00, 0x00, 0x5b, 0x66, 0x86,
- 0x00, 0x00, 0x58, 0xca, 0x87,
- 0x00, 0x00, 0x5a, 0xe1, 0xc6,
- 0x00, 0x00, 0x57, 0x0f, 0x84,
- 0x00, 0xa9, 0xc0, 0x13, 0x02,
- 0x00, 0x00, 0x42, 0xf1, 0x8b,
- 0x00, 0x00, 0x4c, 0x65, 0x0e,
- 0x00, 0x00, 0x41, 0xb1, 0xcf,
- 0x00, 0x00, 0x5a, 0x9c, 0xc3,
- 0x00, 0xaa, 0xcd, 0x57, 0x82,
- 0x00, 0x02, 0xc4, 0x6c, 0x82,
- 0x00, 0xab, 0x40, 0x60, 0x02,
- 0x00, 0x00, 0x44, 0x24, 0x43,
- 0x00, 0x00, 0x5b, 0xf3, 0xc4,
- 0x00, 0x00, 0x48, 0x89, 0x83,
- 0x00, 0x00, 0x55, 0x85, 0x46,
- 0x00, 0x00, 0x58, 0x9c, 0x06,
- 0x00, 0x00, 0x5c, 0x30, 0x87,
- 0x00, 0x00, 0x44, 0x48, 0x04,
- 0x00, 0xab, 0xc1, 0x95, 0x02,
- 0x00, 0xac, 0x42, 0x9d, 0x02,
- 0x00, 0x00, 0x50, 0x7c, 0xc5,
- 0x00, 0x00, 0x50, 0x2d, 0x47,
- 0x00, 0x00, 0x5b, 0xa8, 0x46,
- 0x00, 0xac, 0xc7, 0x44, 0xc2,
- 0x00, 0x00, 0x58, 0x95, 0x44,
- 0x00, 0x00, 0x4c, 0xda, 0x83,
- 0x00, 0xad, 0x40, 0x69, 0x82,
- 0x00, 0xad, 0xd7, 0xbc, 0x03,
- 0x00, 0x00, 0x4c, 0xe9, 0x04,
- 0x00, 0x00, 0x4d, 0x56, 0xc9,
- 0x00, 0xae, 0x4d, 0xd4, 0xc2,
- 0x00, 0xae, 0xc3, 0x98, 0x42,
- 0x00, 0x00, 0x44, 0xe6, 0x85,
- 0x00, 0xaf, 0x4d, 0xd8, 0x02,
- 0x00, 0xb0, 0x40, 0x4f, 0xc2,
- 0x00, 0x00, 0x56, 0x3e, 0xc7,
- 0x00, 0x00, 0x57, 0xf3, 0x4b,
- 0x00, 0x00, 0x41, 0x2d, 0x05,
- 0x00, 0x00, 0x44, 0x80, 0x09,
- 0x00, 0x00, 0x46, 0x5e, 0x06,
- 0x00, 0xb0, 0xc1, 0xcd, 0x44,
- 0x00, 0x00, 0x5c, 0x58, 0xc9,
- 0x00, 0x00, 0x5e, 0x75, 0x87,
- 0x00, 0x00, 0x58, 0xbe, 0x47,
- 0x00, 0x00, 0x42, 0xd9, 0x03,
- 0x00, 0x00, 0x4f, 0x84, 0x06,
- 0x00, 0x00, 0x52, 0x5a, 0x07,
- 0x00, 0x00, 0x47, 0x21, 0xc3,
- 0x00, 0x00, 0x4c, 0x06, 0x86,
- 0x00, 0xb1, 0xc0, 0xd9, 0xc2,
- 0x00, 0xb2, 0x42, 0xa2, 0xc2,
- 0x00, 0x00, 0x5b, 0x72, 0x03,
- 0x00, 0x00, 0x5a, 0x5e, 0x05,
- 0x00, 0x00, 0x4d, 0xf8, 0x07,
- 0x00, 0x00, 0x58, 0xff, 0xc6,
- 0x00, 0x00, 0x58, 0x9d, 0x05,
- 0x00, 0x00, 0x45, 0x56, 0x44,
- 0x00, 0x00, 0x4b, 0x20, 0x85,
- 0x00, 0x00, 0x51, 0x19, 0x44,
- 0x00, 0xb2, 0xc0, 0x12, 0x82,
- 0x00, 0x00, 0x4d, 0xb5, 0x84,
- 0x00, 0x00, 0x46, 0xb1, 0x84,
- 0x00, 0x00, 0x46, 0xb1, 0x8d,
- 0x00, 0x00, 0x4d, 0x92, 0xc9,
- 0x00, 0x00, 0x59, 0x3f, 0x88,
- 0x00, 0x00, 0x40, 0x12, 0x84,
- 0x00, 0x00, 0x46, 0x79, 0x45,
- 0x00, 0x00, 0x4f, 0xf7, 0x07,
- 0x00, 0x00, 0x5c, 0x22, 0xc4,
- 0x00, 0x00, 0x4f, 0xe2, 0x47,
- 0x00, 0x00, 0x42, 0x65, 0x05,
- 0x00, 0xb3, 0x4b, 0x72, 0x84,
- 0x00, 0x00, 0x4b, 0xa6, 0x45,
- 0x00, 0xb3, 0xc6, 0xf9, 0x04,
- 0x00, 0x00, 0x51, 0x80, 0x46,
- 0x00, 0x00, 0x4d, 0xc0, 0x45,
- 0x00, 0xb4, 0x46, 0x63, 0xc2,
- 0x00, 0x00, 0x42, 0xa2, 0x83,
- 0x00, 0x00, 0x50, 0xcf, 0x03,
- 0x00, 0x00, 0x43, 0xb5, 0xc4,
- 0x00, 0x00, 0x43, 0xb5, 0xc5,
- 0x00, 0x00, 0x41, 0xc2, 0xc6,
- 0x00, 0x00, 0x58, 0x9e, 0x45,
- 0x00, 0x00, 0x46, 0x4c, 0xc4,
- 0x00, 0xb4, 0xd0, 0x0e, 0xc3,
- 0x00, 0xb5, 0x41, 0x08, 0x86,
- 0x00, 0x00, 0x40, 0xa8, 0xc5,
- 0x00, 0x00, 0x41, 0x8f, 0x45,
- 0x00, 0x00, 0x4d, 0xc1, 0x44,
- 0x00, 0x00, 0x55, 0x16, 0xc3,
- 0x00, 0x00, 0x55, 0x16, 0xcc,
- 0x00, 0xb5, 0xcc, 0x5a, 0xc2,
- 0x00, 0xb6, 0x40, 0x0b, 0x42,
- 0x00, 0xb6, 0xc0, 0x6b, 0x42,
- 0x00, 0x00, 0x40, 0xf7, 0x43,
- 0x00, 0x00, 0x40, 0xf7, 0x44,
- 0x00, 0xb7, 0x40, 0x95, 0x82,
- 0x00, 0x00, 0x4f, 0xa4, 0xc8,
- 0x00, 0x00, 0x46, 0x65, 0xc4,
- 0x00, 0x00, 0x52, 0xea, 0x06,
- 0x00, 0xb7, 0xc1, 0xa2, 0x02,
- 0x00, 0xb8, 0x40, 0x65, 0xc2,
- 0x00, 0xb8, 0xc0, 0x5e, 0x42,
- 0x00, 0x00, 0x49, 0xd5, 0xc5,
- 0x00, 0x00, 0x5c, 0xa1, 0x06,
- 0x00, 0x00, 0x55, 0xed, 0x44,
- 0x00, 0x00, 0x42, 0xc8, 0xc6,
- 0x00, 0x00, 0x40, 0xba, 0xc6,
- 0x00, 0x00, 0x42, 0x83, 0x43,
- 0x00, 0xb9, 0x49, 0x74, 0x8a,
- 0x00, 0x00, 0x4e, 0x9b, 0xc5,
- 0x00, 0x00, 0x4a, 0x86, 0x43,
- 0x00, 0x00, 0x42, 0x5a, 0xc6,
- 0x00, 0xb9, 0xdf, 0x3f, 0x49,
- 0x00, 0x00, 0x42, 0x5a, 0xc7,
- 0x00, 0x00, 0x48, 0xf8, 0x48,
- 0x00, 0x00, 0x4c, 0xa8, 0x09,
- 0x00, 0x00, 0x5a, 0x33, 0x48,
- 0x00, 0x00, 0x49, 0xca, 0x06,
- 0x00, 0x00, 0x40, 0x6a, 0x83,
- 0x00, 0xba, 0x40, 0x20, 0x42,
- 0x00, 0x00, 0x5a, 0x7a, 0xc8,
- 0x00, 0xba, 0xc4, 0xe4, 0x42,
- 0x00, 0xbb, 0x40, 0x0e, 0xc2,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x4d, 0xfa, 0x85,
- 0x00, 0x00, 0x4a, 0x7d, 0x84,
- 0x00, 0x00, 0x4b, 0xd2, 0xc9,
- 0x00, 0x00, 0x43, 0x17, 0x84,
- 0x00, 0x00, 0x43, 0x5a, 0xc8,
- 0x00, 0xbc, 0x40, 0x9b, 0x43,
- 0x00, 0xbc, 0xc5, 0xf3, 0x04,
- 0x00, 0x00, 0x41, 0x94, 0x08,
- 0x00, 0xbd, 0x4c, 0x7f, 0x42,
- 0x00, 0x00, 0x43, 0x05, 0x82,
- 0x00, 0x00, 0x53, 0x5f, 0x45,
- 0x00, 0x00, 0x43, 0x4e, 0x09,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x52, 0xc5, 0x84,
- 0x00, 0x00, 0x5a, 0x7f, 0x44,
- 0x00, 0x00, 0x45, 0x5a, 0x83,
- 0x00, 0x00, 0x48, 0xe9, 0x4a,
- 0x00, 0xbd, 0xd9, 0x4c, 0xc2,
- 0x00, 0xbe, 0x40, 0xd6, 0x82,
- 0x00, 0x00, 0x4e, 0x20, 0x03,
- 0x00, 0x00, 0x59, 0x6e, 0xc3,
- 0x00, 0x02, 0xc0, 0xf4, 0x02,
- 0x00, 0x00, 0x5b, 0x30, 0x83,
- 0x00, 0xbe, 0xc1, 0xcf, 0x02,
- 0x00, 0xbf, 0x40, 0x15, 0x02,
- 0x00, 0xbf, 0xc2, 0x8f, 0x84,
- 0x00, 0x00, 0x48, 0xf4, 0x06,
- 0x00, 0x00, 0x47, 0xc7, 0x04,
- 0x00, 0x00, 0x48, 0x7a, 0x43,
- 0x00, 0x00, 0x40, 0x84, 0x83,
- 0x00, 0xc0, 0x50, 0xb8, 0x43,
- 0x00, 0x00, 0x44, 0x3d, 0x46,
- 0x00, 0x00, 0x53, 0x63, 0x05,
- 0x00, 0x00, 0x4e, 0x69, 0x47,
- 0x00, 0x00, 0x4e, 0x68, 0x86,
- 0x00, 0x00, 0x4e, 0x75, 0x88,
- 0x00, 0x00, 0x4e, 0x77, 0x86,
- 0x00, 0x00, 0x42, 0x00, 0x84,
- 0x00, 0x00, 0x4a, 0x9c, 0xcb,
- 0x00, 0x00, 0x4e, 0xa4, 0x43,
- 0x00, 0x00, 0x4e, 0xa4, 0x45,
- 0x00, 0xc0, 0xc0, 0x66, 0xc2,
- 0x00, 0x00, 0x56, 0x41, 0xc2,
- 0x00, 0xc1, 0x44, 0xa5, 0x02,
- 0x00, 0xc1, 0xc0, 0x3c, 0x42,
- 0x00, 0x00, 0x40, 0x6e, 0x83,
- 0x00, 0xc2, 0x47, 0xd2, 0x02,
- 0x00, 0x00, 0x47, 0xd2, 0x03,
- 0x00, 0x00, 0x4e, 0xaf, 0x83,
- 0x00, 0xc3, 0x40, 0x33, 0x02,
- 0x00, 0xc3, 0xce, 0xe6, 0xc6,
- 0x00, 0x00, 0x4e, 0xea, 0xc5,
- 0x00, 0x00, 0x49, 0xac, 0xc6,
- 0x00, 0xc4, 0x47, 0x5a, 0x82,
- 0x00, 0xc4, 0xc0, 0xa7, 0x02,
- 0x00, 0xc5, 0x41, 0xe0, 0x02,
- 0x00, 0xc5, 0xc0, 0x70, 0xc2,
- 0x00, 0xc6, 0x40, 0xf8, 0xc2,
- 0x00, 0xc6, 0xc0, 0x1b, 0x82,
- 0x00, 0x00, 0x44, 0xb0, 0x83,
- 0x00, 0x00, 0x5d, 0x34, 0x46,
- 0x00, 0xc7, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x5a, 0xc6, 0x46,
- 0x00, 0x00, 0x48, 0x8d, 0x04,
- 0x00, 0x00, 0x50, 0x18, 0x43,
- 0x00, 0xc8, 0xc0, 0x24, 0xc2,
- 0x00, 0x00, 0x40, 0x18, 0xc2,
- 0x00, 0x00, 0x42, 0xe6, 0x83,
- 0x00, 0xc9, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x5d, 0x36, 0x87,
- 0x00, 0x00, 0x4d, 0xbf, 0x47,
- 0x00, 0xd5, 0x45, 0x05, 0x87,
- 0x00, 0x00, 0x51, 0x42, 0x07,
- 0x00, 0x00, 0x41, 0x23, 0x43,
- 0x00, 0xd5, 0xc7, 0x3e, 0x04,
- 0x00, 0x00, 0x4e, 0xcf, 0x44,
- 0x00, 0x00, 0x4e, 0xcf, 0x4a,
- 0x00, 0x00, 0x5e, 0x8d, 0x45,
- 0x00, 0xd6, 0x40, 0xfc, 0xc2,
- 0x00, 0x00, 0x45, 0xde, 0x43,
- 0x00, 0xd6, 0xc0, 0x06, 0x02,
- 0x00, 0x00, 0x42, 0xb6, 0x43,
- 0x00, 0x00, 0x47, 0xe1, 0x83,
- 0x00, 0xd7, 0xc0, 0x05, 0x82,
- 0x00, 0x00, 0x48, 0x99, 0xc4,
- 0x00, 0x00, 0x53, 0x59, 0x04,
- 0x00, 0x00, 0x5a, 0xfb, 0x45,
- 0x00, 0x00, 0x52, 0x26, 0xc5,
- 0x00, 0x00, 0x42, 0xd0, 0x06,
- 0x00, 0x00, 0x4b, 0x92, 0x86,
- 0x00, 0xd8, 0x41, 0x22, 0x82,
- 0x00, 0xd8, 0xc0, 0x1f, 0x42,
- 0x00, 0x00, 0x4c, 0x6d, 0x85,
- 0x00, 0x00, 0x49, 0xa9, 0xd2,
- 0x00, 0x00, 0x4a, 0xd8, 0xc6,
- 0x00, 0x00, 0x40, 0x3d, 0x43,
- 0x00, 0x00, 0x5d, 0x1f, 0x46,
- 0x00, 0x00, 0x56, 0x69, 0x05,
- 0x00, 0x02, 0xc1, 0x71, 0x42,
- 0x00, 0xe9, 0x40, 0xb5, 0x02,
- 0x00, 0x00, 0x5b, 0xae, 0xc3,
- 0x00, 0x00, 0x40, 0xb5, 0x03,
- 0x00, 0x00, 0x4a, 0xfb, 0x03,
- 0x00, 0xe9, 0xc0, 0x39, 0x02,
- 0x00, 0x00, 0x41, 0x89, 0x03,
- 0x00, 0xea, 0x41, 0x62, 0x82,
- 0x00, 0x00, 0x42, 0x8f, 0xc3,
- 0x00, 0x00, 0x5a, 0xfd, 0xc8,
- 0x00, 0x00, 0x44, 0x35, 0x03,
- 0x00, 0x00, 0x44, 0x35, 0x06,
- 0x00, 0x00, 0x5e, 0xa5, 0x07,
- 0x00, 0x00, 0x53, 0x3a, 0xc6,
- 0x00, 0x00, 0x53, 0x3a, 0xcb,
- 0x00, 0x00, 0x48, 0x8c, 0x47,
- 0x00, 0x00, 0x50, 0x0e, 0x44,
- 0x00, 0xeb, 0x40, 0x0e, 0x82,
- 0x00, 0x00, 0x55, 0x70, 0xc5,
- 0x00, 0xeb, 0xc0, 0x18, 0x83,
- 0x00, 0x00, 0x43, 0xc4, 0x83,
- 0x00, 0x00, 0x5c, 0x52, 0xc5,
- 0x00, 0x00, 0x41, 0x22, 0x43,
- 0x00, 0xec, 0xc1, 0x22, 0x46,
- 0x00, 0x00, 0x4b, 0x13, 0x43,
- 0x00, 0x00, 0x42, 0xc2, 0x84,
- 0x00, 0x00, 0x40, 0x03, 0xc6,
- 0x00, 0x00, 0x5d, 0xd9, 0xc6,
- 0x00, 0xed, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x55, 0x45, 0xc7,
- 0x00, 0x00, 0x56, 0x0f, 0xc7,
- 0x00, 0x00, 0x4a, 0xbc, 0x05,
- 0x00, 0x00, 0x52, 0x9d, 0xc6,
- 0x00, 0x00, 0x40, 0xa9, 0x03,
- 0x00, 0xf2, 0xcc, 0x88, 0xc3,
- 0x00, 0xf3, 0x40, 0x67, 0x02,
- 0x00, 0xf3, 0xc2, 0x8d, 0x44,
- 0x00, 0x00, 0x5f, 0x2d, 0x09,
- 0x00, 0x00, 0x42, 0x2b, 0x85,
- 0x00, 0x00, 0x43, 0xd9, 0xc4,
- 0x00, 0x00, 0x4f, 0xb7, 0xc8,
- 0x00, 0x00, 0x44, 0x5a, 0xc5,
- 0x00, 0xf4, 0x44, 0x72, 0x85,
- 0x00, 0x00, 0x46, 0x0f, 0xc9,
- 0x00, 0x00, 0x4f, 0xf5, 0x43,
- 0x00, 0x00, 0x5d, 0x77, 0x44,
- 0x00, 0xf4, 0xc0, 0x20, 0xc2,
- 0x00, 0x00, 0x41, 0x97, 0x43,
- 0x00, 0xf5, 0x47, 0x95, 0xc2,
- 0x00, 0x00, 0x47, 0x95, 0xc6,
- 0x00, 0x02, 0xc8, 0x6f, 0x42,
- 0x00, 0xf5, 0xc0, 0x6f, 0xc2,
- 0x00, 0x00, 0x49, 0xd4, 0xc8,
- 0x00, 0x00, 0x49, 0xdb, 0x43,
- 0x00, 0x00, 0x4b, 0xa5, 0x87,
- 0x00, 0x00, 0x53, 0x3d, 0x45,
- 0x00, 0x00, 0x4c, 0xc2, 0x85,
- 0x00, 0x00, 0x4c, 0xc2, 0x8b,
- 0x00, 0x00, 0x4f, 0x81, 0x86,
- 0x00, 0x00, 0x4c, 0xc4, 0x86,
- 0x00, 0x00, 0x44, 0x4f, 0x04,
- 0x00, 0x00, 0x41, 0x17, 0x86,
- 0x00, 0xf6, 0x4f, 0x8a, 0x08,
- 0x00, 0x00, 0x46, 0x22, 0xc3,
- 0x00, 0x00, 0x46, 0x71, 0x03,
- 0x00, 0x00, 0x46, 0x71, 0x04,
- 0x00, 0x00, 0x50, 0x2c, 0x84,
- 0x00, 0x00, 0x50, 0xe0, 0x87,
- 0x00, 0x00, 0x54, 0x18, 0x45,
- 0x00, 0xf6, 0xd6, 0x8e, 0x82,
- 0x00, 0xf7, 0x40, 0x4f, 0x82,
- 0x00, 0xf8, 0x40, 0x4f, 0x85,
- 0x00, 0x00, 0x4d, 0x23, 0xc4,
- 0x00, 0x00, 0x4e, 0x32, 0xcb,
- 0x00, 0x00, 0x50, 0x39, 0x88,
- 0x00, 0x00, 0x47, 0x1c, 0x84,
- 0x00, 0xf8, 0xc3, 0x4d, 0xc2,
- 0x00, 0xf9, 0x47, 0x1c, 0x02,
- 0x00, 0x00, 0x57, 0x3d, 0xc3,
- 0x00, 0x00, 0x50, 0x4c, 0x84,
- 0x00, 0x00, 0x50, 0x4f, 0x45,
- 0x00, 0x00, 0x50, 0x58, 0xc7,
- 0x00, 0xf9, 0xd0, 0x81, 0x44,
- 0x00, 0x00, 0x40, 0xf0, 0x04,
- 0x00, 0xfa, 0x40, 0x2b, 0x02,
- 0x00, 0x00, 0x58, 0x3b, 0x89,
- 0x00, 0x00, 0x50, 0x96, 0xc5,
- 0x00, 0x00, 0x41, 0x2f, 0xc5,
- 0x00, 0x00, 0x50, 0xa2, 0x45,
- 0x00, 0xfa, 0xc1, 0x96, 0x83,
- 0x00, 0x00, 0x43, 0xab, 0x84,
- 0x00, 0x00, 0x43, 0xab, 0x8b,
- 0x00, 0x00, 0x50, 0xaf, 0x04,
- 0x00, 0x00, 0x50, 0xb1, 0xcb,
- 0x00, 0x00, 0x50, 0xb7, 0x85,
- 0x00, 0x00, 0x41, 0xb3, 0x0a,
- 0x00, 0x00, 0x50, 0xbe, 0xc8,
- 0x00, 0x00, 0x50, 0xc0, 0xca,
- 0x00, 0x00, 0x50, 0xc9, 0x43,
- 0x00, 0x00, 0x50, 0xc9, 0x4a,
- 0x00, 0xfb, 0xc1, 0x5c, 0xc2,
- 0x00, 0xfc, 0x41, 0xa0, 0x02,
- 0x00, 0xfc, 0xc2, 0x02, 0x83,
- 0x00, 0xfd, 0x50, 0xe9, 0xc2,
- 0x00, 0x00, 0x50, 0xe9, 0xc3,
- 0x00, 0xfd, 0xd1, 0x04, 0xc2,
- 0x00, 0xfe, 0x54, 0x09, 0x42,
- 0x00, 0x00, 0x51, 0x15, 0xc4,
- 0x00, 0x00, 0x41, 0xb8, 0x86,
- 0x00, 0x00, 0x42, 0xc6, 0x05,
- 0x00, 0x00, 0x5d, 0xb3, 0xc6,
- 0x00, 0x00, 0x5c, 0x1f, 0x05,
- 0x00, 0x00, 0x50, 0xf7, 0x84,
- 0x00, 0xfe, 0xc0, 0x09, 0x02,
- 0x00, 0x00, 0x46, 0x94, 0x84,
- 0x00, 0x00, 0x4e, 0x22, 0x8a,
- 0x00, 0x00, 0x4c, 0x45, 0x87,
- 0x00, 0x00, 0x5b, 0xe6, 0x86,
- 0x00, 0x00, 0x43, 0x73, 0x47,
- 0x00, 0x00, 0x44, 0x31, 0x43,
- 0x00, 0x00, 0x4c, 0xe9, 0x48,
- 0x00, 0x00, 0x5e, 0xd2, 0x4b,
- 0x00, 0x00, 0x4d, 0x61, 0xc5,
- 0x00, 0x00, 0x41, 0xd5, 0x05,
- 0x00, 0x00, 0x41, 0xd5, 0x06,
- 0x00, 0x00, 0x5a, 0x80, 0x84,
- 0x00, 0x00, 0x5b, 0x7a, 0x48,
- 0x00, 0x00, 0x41, 0x41, 0x43,
- 0x00, 0x00, 0x4a, 0x7e, 0x84,
- 0x00, 0x00, 0x5d, 0x8a, 0x47,
- 0x00, 0x00, 0x50, 0x0a, 0x86,
- 0x00, 0x00, 0x5e, 0x21, 0x06,
- 0x00, 0x00, 0x4d, 0x34, 0xca,
- 0x00, 0x00, 0x43, 0xd7, 0x04,
- 0x00, 0x00, 0x43, 0xd7, 0x0a,
- 0x00, 0xff, 0x57, 0x04, 0x86,
- 0x00, 0x00, 0x57, 0x04, 0x87,
- 0x00, 0x00, 0x46, 0x2c, 0x87,
- 0x00, 0x00, 0x46, 0x77, 0x84,
- 0x00, 0x00, 0x46, 0x77, 0x89,
- 0x00, 0x00, 0x42, 0x94, 0x05,
- 0x00, 0x00, 0x5e, 0x75, 0x03,
- 0x00, 0x00, 0x40, 0xc4, 0xc3,
- 0x00, 0xff, 0xc2, 0x2b, 0x03,
- 0x01, 0x00, 0x40, 0x06, 0x82,
- 0x00, 0x00, 0x43, 0x9a, 0xc6,
- 0x01, 0x00, 0xcd, 0x71, 0x05,
- 0x00, 0x00, 0x5d, 0x21, 0x85,
- 0x00, 0x00, 0x43, 0x67, 0x46,
- 0x00, 0x00, 0x4c, 0x7e, 0x84,
- 0x01, 0x01, 0x41, 0x24, 0x82,
- 0x00, 0x00, 0x43, 0x68, 0x44,
- 0x01, 0x02, 0x41, 0x00, 0x02,
- 0x00, 0x00, 0x5c, 0x57, 0x45,
- 0x00, 0x00, 0x42, 0x95, 0x84,
- 0x01, 0x04, 0xc2, 0x71, 0x03,
- 0x01, 0x05, 0x40, 0xb5, 0x42,
- 0x00, 0x00, 0x40, 0xb5, 0x43,
- 0x00, 0x00, 0x5b, 0x5e, 0xc6,
- 0x01, 0x05, 0xc0, 0x48, 0x42,
- 0x00, 0x00, 0x59, 0xac, 0x48,
- 0x00, 0x00, 0x42, 0x59, 0x44,
- 0x00, 0x00, 0x42, 0x59, 0x46,
- 0x00, 0x00, 0x53, 0xca, 0x86,
- 0x01, 0x06, 0x46, 0x47, 0x44,
- 0x00, 0x00, 0x40, 0xe9, 0x05,
- 0x00, 0x00, 0x42, 0x03, 0xc8,
- 0x00, 0x00, 0x42, 0x5c, 0x47,
- 0x00, 0x00, 0x42, 0x80, 0x87,
- 0x00, 0x00, 0x42, 0x80, 0x8f,
- 0x00, 0x00, 0x4a, 0x20, 0xc6,
- 0x00, 0x00, 0x43, 0xae, 0x03,
- 0x00, 0x00, 0x43, 0xf0, 0x44,
- 0x00, 0x00, 0x42, 0x75, 0x43,
- 0x00, 0x00, 0x42, 0x54, 0xc4,
- 0x00, 0x00, 0x58, 0x2e, 0x44,
- 0x01, 0x06, 0xc3, 0xf6, 0x02,
- 0x00, 0x00, 0x4a, 0x0f, 0x03,
- 0x00, 0x00, 0x53, 0xd7, 0xc3,
- 0x01, 0x07, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x40, 0x2e, 0xc3,
- 0x00, 0x00, 0x46, 0x97, 0x03,
- 0x00, 0x00, 0x41, 0x3d, 0xca,
- 0x00, 0x00, 0x51, 0xbc, 0x07,
- 0x00, 0x00, 0x5a, 0x60, 0xcc,
- 0x00, 0x00, 0x5a, 0x63, 0x86,
- 0x00, 0x00, 0x45, 0x1e, 0x86,
- 0x00, 0x00, 0x45, 0x93, 0x07,
- 0x01, 0x07, 0xc5, 0xd4, 0x47,
- 0x00, 0x00, 0x46, 0x37, 0x89,
- 0x01, 0x08, 0x44, 0x18, 0x44,
- 0x01, 0x09, 0x40, 0x6e, 0xc2,
- 0x01, 0x09, 0xc0, 0x10, 0x42,
- 0x00, 0x00, 0x4d, 0x38, 0x86,
- 0x00, 0x00, 0x55, 0x43, 0xc4,
- 0x00, 0x00, 0x4d, 0x47, 0x46,
- 0x00, 0x00, 0x46, 0xab, 0xc8,
- 0x00, 0x00, 0x5a, 0x5e, 0xc4,
- 0x00, 0x00, 0x53, 0xda, 0x06,
- 0x00, 0x00, 0x49, 0x24, 0x85,
- 0x01, 0x0a, 0xc7, 0xe6, 0x08,
- 0x00, 0x00, 0x44, 0x78, 0x43,
- 0x00, 0x00, 0x48, 0x22, 0x45,
- 0x00, 0x00, 0x48, 0x5c, 0x83,
- 0x00, 0x00, 0x41, 0x30, 0xc3,
- 0x00, 0x00, 0x41, 0x30, 0xc4,
- 0x00, 0x00, 0x46, 0xb6, 0x83,
- 0x01, 0x0b, 0x45, 0x15, 0x02,
- 0x01, 0x0b, 0xc0, 0x0e, 0x02,
- 0x00, 0x00, 0x5e, 0x73, 0xc9,
- 0x00, 0x00, 0x48, 0xcb, 0x45,
- 0x00, 0x00, 0x48, 0xce, 0xc4,
- 0x00, 0x00, 0x49, 0x8a, 0xc5,
- 0x00, 0x00, 0x40, 0x35, 0x44,
- 0x00, 0x00, 0x4e, 0x6f, 0x07,
- 0x00, 0x00, 0x55, 0xea, 0x45,
- 0x01, 0x0c, 0xc1, 0xbc, 0x04,
- 0x00, 0x00, 0x4f, 0x9f, 0x48,
- 0x00, 0x00, 0x4c, 0x9b, 0xc6,
- 0x00, 0x00, 0x4c, 0xf1, 0x04,
- 0x00, 0x00, 0x4c, 0xff, 0x48,
- 0x01, 0x0d, 0x40, 0x1a, 0x42,
- 0x00, 0x00, 0x4e, 0x31, 0x84,
- 0x00, 0x00, 0x51, 0xc3, 0x44,
- 0x00, 0x00, 0x55, 0x13, 0x87,
- 0x01, 0x0d, 0xc0, 0x4a, 0xc4,
- 0x00, 0x00, 0x40, 0x1c, 0xc2,
- 0x01, 0x0e, 0x41, 0x0a, 0x82,
- 0x00, 0x00, 0x44, 0xe5, 0x83,
- 0x00, 0x00, 0x44, 0xe5, 0x84,
- 0x00, 0x00, 0x43, 0x98, 0x03,
- 0x00, 0x00, 0x58, 0xf6, 0xc5,
- 0x01, 0x0e, 0xc5, 0x51, 0x82,
- 0x00, 0x00, 0x4f, 0x4a, 0x85,
- 0x00, 0x00, 0x47, 0xcc, 0xc2,
- 0x00, 0x00, 0x51, 0x75, 0x85,
- 0x00, 0x00, 0x4e, 0x10, 0x85,
- 0x01, 0x0f, 0x40, 0x3d, 0x02,
- 0x00, 0x00, 0x58, 0x18, 0x44,
- 0x01, 0x0f, 0xc0, 0x3c, 0x82,
- 0x00, 0x00, 0x5e, 0x49, 0xc6,
- 0x00, 0x00, 0x4d, 0x7c, 0x06,
- 0x00, 0x00, 0x43, 0x4f, 0x48,
- 0x00, 0x00, 0x49, 0x60, 0x48,
- 0x00, 0x00, 0x57, 0xcc, 0x84,
- 0x00, 0x00, 0x4f, 0x8b, 0xc5,
- 0x01, 0x10, 0x42, 0xa9, 0xc9,
- 0x00, 0x00, 0x4e, 0x90, 0xc4,
- 0x00, 0x00, 0x5e, 0xf1, 0x04,
- 0x00, 0x00, 0x47, 0x76, 0xc3,
- 0x00, 0x00, 0x40, 0xe7, 0xc3,
- 0x01, 0x10, 0xc0, 0xe7, 0xc5,
- 0x00, 0x00, 0x47, 0x54, 0x85,
- 0x00, 0x00, 0x4e, 0x9f, 0x04,
- 0x00, 0x00, 0x4b, 0x26, 0xc2,
- 0x00, 0x00, 0x53, 0x15, 0xc3,
- 0x01, 0x11, 0x40, 0x2e, 0x82,
- 0x01, 0x11, 0xc0, 0x19, 0x82,
- 0x00, 0x00, 0x59, 0xa7, 0x05,
- 0x00, 0x00, 0x48, 0x5b, 0x07,
- 0x00, 0x00, 0x48, 0x3d, 0x44,
- 0x00, 0x00, 0x4c, 0xaa, 0x09,
- 0x00, 0x00, 0x4e, 0x23, 0xc9,
- 0x00, 0x00, 0x40, 0x21, 0x83,
- 0x00, 0x00, 0x48, 0x6d, 0x88,
- 0x00, 0x00, 0x4a, 0x8c, 0x49,
- 0x00, 0x00, 0x42, 0x26, 0x07,
- 0x01, 0x12, 0x53, 0xd8, 0x45,
- 0x00, 0x00, 0x55, 0x9b, 0x86,
- 0x00, 0x00, 0x55, 0xb2, 0xc6,
- 0x00, 0x00, 0x55, 0xc0, 0xc5,
- 0x00, 0x00, 0x4d, 0x93, 0xc5,
- 0x01, 0x12, 0xc0, 0x56, 0x82,
- 0x00, 0x00, 0x45, 0x92, 0x05,
- 0x00, 0x00, 0x4d, 0x8f, 0x88,
- 0x00, 0x00, 0x4d, 0x5f, 0xc6,
- 0x01, 0x13, 0x50, 0xb9, 0xc7,
- 0x00, 0x00, 0x5a, 0x67, 0x44,
- 0x00, 0x00, 0x57, 0x15, 0x87,
- 0x00, 0x00, 0x5b, 0x11, 0x06,
- 0x01, 0x13, 0xc0, 0xde, 0x02,
- 0x00, 0x00, 0x41, 0xbf, 0xc6,
- 0x00, 0x00, 0x51, 0x74, 0x85,
- 0x01, 0x14, 0x44, 0x29, 0xc2,
- 0x01, 0x14, 0xc1, 0x8b, 0x82,
- 0x00, 0x00, 0x47, 0xae, 0xc6,
- 0x01, 0x15, 0x49, 0x99, 0x87,
- 0x01, 0x15, 0xc3, 0x87, 0x42,
- 0x00, 0x00, 0x41, 0xa0, 0x43,
- 0x00, 0x00, 0x43, 0xe1, 0x86,
- 0x00, 0x00, 0x4d, 0x8e, 0x44,
- 0x00, 0x00, 0x46, 0x9c, 0x46,
- 0x00, 0x00, 0x54, 0x16, 0x06,
- 0x00, 0x00, 0x4f, 0xdb, 0x0a,
- 0x00, 0x00, 0x55, 0x01, 0x45,
- 0x00, 0x00, 0x41, 0xef, 0x46,
- 0x00, 0x00, 0x41, 0xf9, 0x83,
- 0x00, 0x00, 0x41, 0xf9, 0x84,
- 0x01, 0x16, 0x40, 0x21, 0xc2,
- 0x00, 0x00, 0x52, 0xa0, 0x83,
- 0x01, 0x16, 0xc0, 0xf7, 0x82,
- 0x00, 0x00, 0x53, 0x38, 0x83,
- 0x01, 0x17, 0x40, 0xd0, 0x84,
- 0x00, 0x00, 0x4d, 0xfb, 0xc4,
- 0x01, 0x17, 0xcd, 0xfb, 0xca,
- 0x00, 0x00, 0x40, 0x63, 0x83,
- 0x00, 0x00, 0x40, 0x96, 0xc7,
- 0x00, 0x00, 0x56, 0x6c, 0x46,
- 0x00, 0x00, 0x58, 0x88, 0xc4,
- 0x00, 0x00, 0x42, 0xce, 0xc2,
- 0x00, 0x00, 0x42, 0x98, 0xc2,
- 0x01, 0x18, 0x40, 0x07, 0xc2,
- 0x00, 0x00, 0x50, 0xfc, 0x43,
- 0x00, 0x00, 0x46, 0x2a, 0x47,
- 0x00, 0x00, 0x40, 0x07, 0xc7,
- 0x00, 0x00, 0x49, 0x52, 0x84,
- 0x00, 0x00, 0x43, 0x01, 0x47,
- 0x00, 0x00, 0x50, 0x59, 0xc6,
- 0x00, 0x00, 0x5d, 0xac, 0x87,
- 0x00, 0x00, 0x41, 0x7c, 0x44,
- 0x00, 0x00, 0x41, 0xc5, 0x05,
- 0x00, 0x00, 0x41, 0x07, 0x85,
- 0x01, 0x18, 0xc0, 0xae, 0x42,
- 0x00, 0x00, 0x56, 0x1d, 0xc6,
- 0x00, 0x00, 0x43, 0x09, 0xc3,
- 0x00, 0x00, 0x43, 0x1d, 0x02,
- 0x00, 0x00, 0x43, 0x1d, 0x06,
- 0x01, 0x19, 0x42, 0x03, 0x42,
- 0x01, 0x19, 0xc3, 0xd9, 0x42,
- 0x00, 0x00, 0x44, 0xa6, 0x85,
- 0x01, 0x1a, 0x40, 0x1b, 0x42,
- 0x01, 0x1a, 0xc0, 0xc6, 0x42,
- 0x01, 0x1b, 0xd9, 0x25, 0xc5,
- 0x00, 0x00, 0x4e, 0x3e, 0x85,
- 0x00, 0x00, 0x51, 0x13, 0x05,
- 0x01, 0x1c, 0x46, 0xbf, 0xc3,
- 0x00, 0x00, 0x4d, 0x9e, 0x05,
- 0x00, 0x00, 0x4f, 0x82, 0x47,
- 0x00, 0x00, 0x4b, 0x6c, 0xc5,
- 0x00, 0x00, 0x55, 0x03, 0x05,
- 0x00, 0x00, 0x47, 0x3d, 0x04,
- 0x00, 0x00, 0x44, 0x59, 0x46,
- 0x00, 0x00, 0x45, 0x4f, 0x84,
- 0x01, 0x1c, 0xc0, 0x08, 0xc2,
- 0x01, 0x1e, 0x4b, 0x55, 0x85,
- 0x00, 0x00, 0x57, 0xb5, 0x47,
- 0x00, 0x00, 0x4f, 0x87, 0x88,
- 0x00, 0x00, 0x48, 0xe5, 0x06,
- 0x00, 0x00, 0x48, 0xe5, 0x0d,
- 0x00, 0x00, 0x48, 0xfa, 0x09,
- 0x00, 0x00, 0x48, 0xfa, 0x12,
- 0x00, 0x00, 0x58, 0x7e, 0x05,
- 0x00, 0x00, 0x59, 0x15, 0x43,
- 0x01, 0x1e, 0xc0, 0x9a, 0x02,
- 0x00, 0x00, 0x52, 0x47, 0x04,
- 0x00, 0x00, 0x43, 0x09, 0x03,
- 0x00, 0x00, 0x51, 0x87, 0x85,
- 0x00, 0x00, 0x51, 0x93, 0x45,
- 0x01, 0x1f, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x46, 0xaa, 0x43,
- 0x01, 0x1f, 0xc5, 0x06, 0x02,
- 0x01, 0x20, 0xc2, 0x43, 0x02,
- 0x01, 0x21, 0x40, 0x00, 0x82,
- 0x00, 0x00, 0x5e, 0xe5, 0x85,
- 0x00, 0x00, 0x5a, 0x0e, 0xc3,
- 0x01, 0x21, 0xc0, 0x74, 0x82,
- 0x01, 0x22, 0x40, 0x5f, 0xc2,
- 0x00, 0x00, 0x48, 0x99, 0x86,
- 0x00, 0x00, 0x47, 0x7a, 0x0a,
- 0x00, 0x00, 0x40, 0x56, 0xc3,
- 0x00, 0x00, 0x43, 0xb5, 0x43,
- 0x00, 0x00, 0x4f, 0x0a, 0xc3,
- 0x01, 0x25, 0xc0, 0x26, 0x42,
- 0x01, 0x42, 0xc4, 0x1d, 0x82,
- 0x01, 0x43, 0xc1, 0x81, 0x82,
- 0x00, 0x00, 0x40, 0x46, 0xc2,
- 0x00, 0x00, 0x53, 0x0c, 0x49,
- 0x00, 0x00, 0x4d, 0xc8, 0xc4,
- 0x00, 0x00, 0x5a, 0x02, 0x08,
- 0x01, 0x44, 0x42, 0x19, 0x02,
- 0x01, 0x45, 0x40, 0x11, 0x02,
- 0x00, 0x00, 0x48, 0x21, 0x45,
- 0x00, 0x00, 0x43, 0x6d, 0xc8,
- 0x00, 0x00, 0x52, 0xb1, 0x48,
- 0x00, 0x00, 0x4f, 0x0d, 0x4c,
- 0x00, 0x00, 0x43, 0xba, 0x43,
- 0x01, 0x45, 0xc6, 0xf2, 0xc2,
- 0x01, 0x46, 0x40, 0xc3, 0x02,
- 0x00, 0x00, 0x4d, 0x41, 0x46,
- 0x00, 0x00, 0x51, 0xa6, 0x05,
- 0x00, 0x00, 0x4e, 0xf9, 0x43,
- 0x00, 0x00, 0x47, 0x37, 0x06,
- 0x00, 0x00, 0x51, 0xa7, 0x46,
- 0x00, 0x00, 0x43, 0x76, 0xc3,
- 0x00, 0x00, 0x51, 0xc2, 0x83,
- 0x00, 0x00, 0x51, 0xc9, 0x46,
- 0x00, 0x00, 0x51, 0xde, 0x04,
- 0x00, 0x00, 0x40, 0xc3, 0x06,
- 0x00, 0x00, 0x5e, 0xc7, 0x44,
- 0x00, 0x00, 0x51, 0xe5, 0xc4,
- 0x00, 0x00, 0x52, 0x0b, 0xca,
- 0x01, 0x46, 0xc4, 0xc5, 0x42,
- 0x00, 0x00, 0x45, 0x63, 0xc5,
- 0x00, 0x00, 0x52, 0x29, 0xca,
- 0x00, 0x00, 0x52, 0x29, 0x05,
- 0x00, 0x00, 0x52, 0x36, 0xc4,
- 0x00, 0x00, 0x52, 0x37, 0xc6,
- 0x00, 0x00, 0x52, 0x39, 0x44,
- 0x00, 0x00, 0x41, 0x9a, 0x06,
- 0x01, 0x47, 0x40, 0x1d, 0x82,
- 0x00, 0x00, 0x59, 0xe8, 0xc6,
- 0x00, 0x00, 0x50, 0x20, 0x45,
- 0x00, 0x00, 0x5b, 0xd5, 0xc7,
- 0x00, 0x00, 0x5c, 0x92, 0x46,
- 0x00, 0x00, 0x45, 0x95, 0x04,
- 0x00, 0x00, 0x4e, 0xfc, 0x47,
- 0x00, 0x00, 0x41, 0xc0, 0x05,
- 0x00, 0x00, 0x45, 0xd2, 0xc7,
- 0x00, 0x00, 0x42, 0xb7, 0xc7,
- 0x00, 0x00, 0x42, 0xb7, 0xce,
- 0x00, 0x00, 0x48, 0x86, 0x46,
- 0x00, 0x00, 0x44, 0x38, 0x85,
- 0x00, 0x00, 0x40, 0x4a, 0x07,
- 0x00, 0x00, 0x5c, 0x2c, 0x87,
- 0x00, 0x00, 0x40, 0xb6, 0xc5,
- 0x00, 0x00, 0x41, 0x44, 0x04,
- 0x00, 0x00, 0x44, 0x4b, 0x82,
- 0x00, 0x00, 0x48, 0x5d, 0x07,
- 0x00, 0x00, 0x49, 0x32, 0x44,
- 0x00, 0x00, 0x44, 0xcf, 0x44,
- 0x00, 0x00, 0x4e, 0x78, 0xcb,
- 0x01, 0x47, 0xc2, 0x0b, 0x83,
- 0x00, 0x00, 0x52, 0x6f, 0x07,
- 0x00, 0x00, 0x42, 0x0b, 0x84,
- 0x00, 0x00, 0x52, 0x72, 0x07,
- 0x00, 0x00, 0x41, 0xc9, 0x03,
- 0x00, 0x00, 0x55, 0x2b, 0x0d,
- 0x00, 0x00, 0x52, 0x66, 0x48,
- 0x01, 0x48, 0x44, 0xd4, 0x04,
- 0x00, 0x00, 0x44, 0xd4, 0x05,
- 0x00, 0x00, 0x5e, 0x3e, 0x85,
- 0x00, 0x00, 0x52, 0x6e, 0x83,
- 0x01, 0x48, 0xc2, 0x58, 0x42,
- 0x00, 0x00, 0x52, 0xa0, 0x43,
- 0x00, 0x00, 0x52, 0xae, 0x03,
- 0x00, 0x00, 0x41, 0xe0, 0x44,
- 0x00, 0x00, 0x56, 0x1f, 0x45,
- 0x00, 0x00, 0x56, 0x20, 0x47,
- 0x00, 0x00, 0x41, 0xfa, 0x06,
- 0x00, 0x00, 0x59, 0x4d, 0xc3,
- 0x00, 0x00, 0x43, 0x3e, 0x8b,
- 0x00, 0x00, 0x57, 0x27, 0xcb,
- 0x00, 0x00, 0x4a, 0xec, 0xcb,
- 0x00, 0x00, 0x4b, 0xad, 0xcb,
- 0x00, 0x00, 0x4c, 0x78, 0xca,
- 0x00, 0x00, 0x4d, 0x59, 0x4b,
- 0x00, 0x00, 0x4f, 0x8f, 0x0b,
- 0x00, 0x00, 0x52, 0x74, 0xcc,
- 0x00, 0x00, 0x51, 0xe9, 0xcb,
- 0x00, 0x00, 0x56, 0x53, 0x4a,
- 0x00, 0x00, 0x59, 0xc7, 0x4b,
- 0x00, 0x00, 0x5b, 0x55, 0x8c,
- 0x00, 0x00, 0x5f, 0x13, 0x0b,
- 0x00, 0x00, 0x52, 0xb7, 0x4a,
- 0x00, 0x00, 0x52, 0xc3, 0x4a,
- 0x00, 0x00, 0x52, 0xd6, 0x8e,
- 0x00, 0x00, 0x52, 0xde, 0x0b,
- 0x00, 0x00, 0x52, 0xe0, 0xca,
- 0x00, 0x00, 0x52, 0xf1, 0x91,
- 0x00, 0x00, 0x52, 0xf5, 0xca,
- 0x00, 0x00, 0x52, 0xfa, 0xcb,
- 0x00, 0x00, 0x53, 0x00, 0x0e,
- 0x00, 0x00, 0x53, 0x13, 0x0c,
- 0x00, 0x00, 0x53, 0x16, 0x8b,
- 0x00, 0x00, 0x53, 0x19, 0x4e,
- 0x00, 0x00, 0x53, 0x1c, 0xcc,
- 0x00, 0x00, 0x53, 0x32, 0x4a,
- 0x00, 0x00, 0x53, 0x50, 0x0c,
- 0x01, 0x49, 0x53, 0x5c, 0x0a,
- 0x00, 0x00, 0x53, 0x64, 0x48,
- 0x00, 0x00, 0x53, 0x6e, 0x49,
- 0x00, 0x00, 0x53, 0x89, 0x4a,
- 0x00, 0x00, 0x53, 0x8b, 0xca,
- 0x00, 0x00, 0x53, 0x8e, 0x4b,
- 0x00, 0x00, 0x53, 0xcf, 0x4e,
- 0x00, 0x00, 0x53, 0xdf, 0x11,
- 0x00, 0x00, 0x54, 0x81, 0x09,
- 0x00, 0x00, 0x54, 0x83, 0x4a,
- 0x00, 0x00, 0x54, 0x8a, 0x8b,
- 0x00, 0x00, 0x54, 0xa0, 0x4d,
- 0x00, 0x00, 0x54, 0xae, 0xca,
- 0x00, 0x00, 0x54, 0xb5, 0x16,
- 0x00, 0x00, 0x54, 0xc8, 0x8b,
- 0x00, 0x00, 0x54, 0xe1, 0x8a,
- 0x00, 0x00, 0x54, 0xe9, 0xca,
- 0x00, 0x00, 0x54, 0xf8, 0xcb,
- 0x00, 0x00, 0x55, 0x07, 0x09,
- 0x00, 0x00, 0x55, 0x34, 0x89,
- 0x00, 0x00, 0x55, 0x4a, 0x4d,
- 0x00, 0x00, 0x55, 0x52, 0x0b,
- 0x00, 0x00, 0x55, 0x6b, 0x8b,
- 0x00, 0x00, 0x55, 0x75, 0x09,
- 0x00, 0x00, 0x55, 0x7b, 0x4e,
- 0x00, 0x00, 0x55, 0x87, 0x4a,
- 0x00, 0x00, 0x55, 0x94, 0x0a,
- 0x00, 0x00, 0x55, 0x99, 0x4a,
- 0x00, 0x00, 0x55, 0xa2, 0xcb,
- 0x00, 0x00, 0x55, 0xab, 0x0b,
- 0x00, 0x00, 0x55, 0xb8, 0xcd,
- 0x00, 0x00, 0x55, 0xd4, 0x8d,
- 0x00, 0x00, 0x55, 0xe1, 0x10,
- 0x00, 0x00, 0x55, 0xe5, 0xcb,
- 0x00, 0x00, 0x55, 0xfc, 0x4c,
- 0x00, 0x00, 0x56, 0x16, 0x0b,
- 0x00, 0x00, 0x56, 0x39, 0xcb,
- 0x00, 0x00, 0x56, 0x7b, 0xce,
- 0x00, 0x00, 0x56, 0x82, 0xcb,
- 0x00, 0x00, 0x56, 0x82, 0xcd,
- 0x00, 0x00, 0x56, 0xe3, 0x0b,
- 0x00, 0x00, 0x56, 0xed, 0x8f,
- 0x00, 0x00, 0x56, 0xf1, 0x4b,
- 0x00, 0x00, 0x56, 0xfb, 0x0a,
- 0x00, 0x00, 0x57, 0x24, 0xc9,
- 0x00, 0x00, 0x57, 0x43, 0x09,
- 0x01, 0x49, 0xd7, 0x46, 0x8b,
- 0x00, 0x00, 0x57, 0x49, 0x4e,
- 0x00, 0x00, 0x57, 0x4c, 0xce,
- 0x00, 0x00, 0x57, 0x63, 0x8b,
- 0x00, 0x00, 0x57, 0x70, 0x8f,
- 0x00, 0x00, 0x57, 0x9b, 0x0b,
- 0x00, 0x00, 0x57, 0x9d, 0xcb,
- 0x00, 0x00, 0x57, 0xa0, 0x8a,
- 0x00, 0x00, 0x57, 0xef, 0x49,
- 0x00, 0x00, 0x58, 0x28, 0x0f,
- 0x00, 0x00, 0x58, 0x6b, 0x0c,
- 0x00, 0x00, 0x58, 0x74, 0x8c,
- 0x00, 0x00, 0x58, 0x7a, 0xce,
- 0x00, 0x00, 0x58, 0x7f, 0xcf,
- 0x00, 0x00, 0x58, 0x83, 0x8e,
- 0x00, 0x00, 0x58, 0x8b, 0x10,
- 0x00, 0x00, 0x58, 0x8f, 0x0f,
- 0x00, 0x00, 0x58, 0xa0, 0x0e,
- 0x00, 0x00, 0x58, 0xab, 0x4c,
- 0x00, 0x00, 0x58, 0xae, 0x51,
- 0x00, 0x00, 0x58, 0xb2, 0x92,
- 0x00, 0x00, 0x58, 0xc6, 0x11,
- 0x00, 0x00, 0x58, 0xcc, 0x4e,
- 0x00, 0x00, 0x58, 0xd4, 0x8b,
- 0x00, 0x00, 0x58, 0xd4, 0x8e,
- 0x00, 0x00, 0x58, 0xd8, 0x0f,
- 0x00, 0x00, 0x58, 0xdb, 0xce,
- 0x00, 0x00, 0x58, 0xdf, 0x50,
- 0x00, 0x00, 0x58, 0xe3, 0x53,
- 0x00, 0x00, 0x58, 0xe8, 0x11,
- 0x00, 0x00, 0x58, 0xec, 0x4c,
- 0x00, 0x00, 0x58, 0xef, 0x4e,
- 0x00, 0x00, 0x58, 0xf3, 0xcc,
- 0x00, 0x00, 0x58, 0xf8, 0x13,
- 0x00, 0x00, 0x59, 0x09, 0x90,
- 0x00, 0x00, 0x59, 0x0e, 0x0c,
- 0x00, 0x00, 0x59, 0x11, 0x0c,
- 0x00, 0x00, 0x59, 0x21, 0x8b,
- 0x00, 0x00, 0x59, 0x29, 0x0e,
- 0x00, 0x00, 0x59, 0x2e, 0x0b,
- 0x00, 0x00, 0x59, 0x35, 0x4b,
- 0x00, 0x00, 0x59, 0x56, 0x4c,
- 0x00, 0x00, 0x59, 0xb1, 0x8a,
- 0x00, 0x00, 0x59, 0xbf, 0x4c,
- 0x00, 0x00, 0x59, 0xc2, 0x4c,
- 0x00, 0x00, 0x59, 0xc5, 0x49,
- 0x00, 0x00, 0x59, 0xe0, 0x4b,
- 0x00, 0x00, 0x59, 0xe3, 0x08,
- 0x00, 0x00, 0x59, 0xee, 0xc9,
- 0x00, 0x00, 0x59, 0xee, 0xcf,
- 0x00, 0x00, 0x5a, 0x07, 0xcb,
- 0x01, 0x4a, 0x5a, 0x13, 0xca,
- 0x00, 0x00, 0x5a, 0x36, 0x0c,
- 0x00, 0x00, 0x5a, 0x45, 0x4b,
- 0x01, 0x4a, 0xda, 0x48, 0x09,
- 0x00, 0x00, 0x5a, 0x50, 0x08,
- 0x00, 0x00, 0x5a, 0x53, 0xcb,
- 0x00, 0x00, 0x5a, 0x6c, 0x8a,
- 0x00, 0x00, 0x5a, 0x6f, 0x0a,
- 0x00, 0x00, 0x5a, 0x71, 0x8b,
- 0x00, 0x00, 0x5a, 0x78, 0x4c,
- 0x00, 0x00, 0x5a, 0x85, 0xc9,
- 0x00, 0x00, 0x5a, 0x88, 0x08,
- 0x00, 0x00, 0x5a, 0xb9, 0xcb,
- 0x00, 0x00, 0x5a, 0xe4, 0x8b,
- 0x00, 0x00, 0x5b, 0x23, 0x0e,
- 0x00, 0x00, 0x5b, 0x38, 0x0b,
- 0x00, 0x00, 0x5b, 0x4f, 0x0b,
- 0x00, 0x00, 0x5c, 0x69, 0x8b,
- 0x00, 0x00, 0x5c, 0x6c, 0x49,
- 0x00, 0x00, 0x5c, 0x71, 0x4d,
- 0x00, 0x00, 0x5e, 0x26, 0x4a,
- 0x00, 0x00, 0x5e, 0x62, 0x57,
- 0x00, 0x00, 0x5e, 0x6a, 0x98,
- 0x00, 0x00, 0x5e, 0x8f, 0x09,
- 0x00, 0x00, 0x5e, 0xa1, 0x4b,
- 0x00, 0x00, 0x5e, 0xb3, 0x14,
- 0x00, 0x00, 0x5e, 0xb8, 0x0b,
- 0x00, 0x00, 0x5e, 0xbd, 0x8a,
- 0x00, 0x00, 0x5e, 0xca, 0x0a,
- 0x00, 0x00, 0x5e, 0xcc, 0x8b,
- 0x00, 0x00, 0x5e, 0xe8, 0x10,
- 0x00, 0x00, 0x5e, 0xec, 0x11,
- 0x00, 0x00, 0x5e, 0xf2, 0x0a,
- 0x00, 0x00, 0x5f, 0x09, 0x0d,
- 0x00, 0x00, 0x5f, 0x10, 0x0d,
- 0x00, 0x00, 0x5f, 0x2a, 0x0b,
- 0x00, 0x00, 0x56, 0x1e, 0xc3,
- 0x01, 0x4b, 0x5d, 0x56, 0x03,
- 0x00, 0x00, 0x47, 0xd6, 0x46,
- 0x00, 0x00, 0x48, 0x68, 0x45,
- 0x00, 0x00, 0x4e, 0xb9, 0x07,
- 0x00, 0x00, 0x4d, 0xe5, 0x06,
- 0x01, 0x4b, 0xc3, 0xc4, 0x02,
- 0x00, 0x00, 0x4b, 0x8a, 0x09,
- 0x00, 0x00, 0x5d, 0xb1, 0xc4,
- 0x00, 0x00, 0x4f, 0x64, 0xc8,
- 0x00, 0x00, 0x42, 0x2a, 0x43,
- 0x00, 0x00, 0x52, 0x46, 0x47,
- 0x01, 0x4c, 0x44, 0x28, 0xc2,
- 0x00, 0x00, 0x4c, 0x11, 0x43,
- 0x01, 0x4c, 0xc0, 0x36, 0x42,
- 0x00, 0x00, 0x4e, 0x2e, 0xc6,
- 0x00, 0x00, 0x4e, 0x51, 0x84,
- 0x00, 0x00, 0x42, 0x91, 0x04,
- 0x00, 0x00, 0x5d, 0x6b, 0x83,
- 0x01, 0x4d, 0xcd, 0xd8, 0x42,
- 0x01, 0x4e, 0x40, 0x18, 0x44,
- 0x00, 0x00, 0x46, 0x76, 0xc7,
- 0x01, 0x4e, 0xc2, 0xc0, 0x82,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x02, 0xcd, 0x03,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x11, 0xc7, 0x48,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x54, 0x39, 0x16,
- 0x00, 0x00, 0x56, 0xc6, 0x53,
- 0x00, 0x00, 0x42, 0xff, 0xc9,
- 0x00, 0x00, 0x45, 0x75, 0xc8,
- 0x00, 0x00, 0x55, 0x6f, 0x49,
- 0x00, 0x00, 0x52, 0x2b, 0x46,
- 0x00, 0x00, 0x55, 0x60, 0x90,
- 0x00, 0x00, 0x5e, 0xd4, 0xd3,
- 0x00, 0x00, 0x50, 0x0b, 0x48,
- 0x00, 0x00, 0x48, 0x96, 0x47,
- 0x00, 0x00, 0x49, 0x3c, 0x47,
- 0x00, 0x00, 0x4b, 0x1d, 0xca,
- 0x00, 0x00, 0x56, 0xb2, 0x89,
- 0x00, 0x00, 0x5d, 0x3d, 0xc9,
- 0x00, 0x00, 0x45, 0x36, 0x4b,
- 0x00, 0x00, 0x54, 0xd9, 0x86,
- 0x00, 0x00, 0x53, 0x21, 0x8a,
- 0x00, 0x00, 0x42, 0x51, 0x06,
- 0x00, 0x00, 0x42, 0xf8, 0x43,
- 0x00, 0x00, 0x47, 0x48, 0x45,
- 0x00, 0x00, 0x5c, 0x17, 0x48,
- 0x00, 0x00, 0x48, 0xda, 0xcd,
- 0x00, 0x00, 0x5c, 0x9c, 0x4c,
- 0x00, 0x00, 0x50, 0x1d, 0x07,
- 0x00, 0x00, 0x51, 0xec, 0x4d,
- 0x00, 0x00, 0x42, 0x04, 0xc4,
- 0x00, 0x00, 0x43, 0x21, 0x8a,
- 0x00, 0x00, 0x43, 0x2d, 0x8a,
- 0x00, 0x00, 0x43, 0x32, 0x4a,
- 0x00, 0x00, 0x51, 0xf7, 0x87,
- 0x00, 0x00, 0x43, 0xfe, 0xc7,
- 0x00, 0x00, 0x44, 0x4a, 0xc4,
- 0x00, 0x00, 0x47, 0xcf, 0x46,
- 0x00, 0x00, 0x4f, 0xf8, 0x84,
- 0x00, 0x00, 0x41, 0xf6, 0x08,
- 0x00, 0x00, 0x43, 0x17, 0xc9,
- 0x00, 0x00, 0x50, 0xd7, 0x86,
- 0x00, 0x00, 0x50, 0xd7, 0x88,
- 0x00, 0x00, 0x44, 0x84, 0x8d,
- 0x00, 0x00, 0x4e, 0x26, 0x09,
- 0x00, 0x00, 0x51, 0xce, 0xc8,
- 0x00, 0x00, 0x41, 0x2f, 0x47,
- 0x00, 0x00, 0x44, 0xa8, 0xca,
- 0x00, 0x00, 0x4b, 0xdd, 0x06,
- 0x00, 0x00, 0x57, 0xcf, 0xc4,
- 0x00, 0x00, 0x41, 0xdc, 0x07,
- 0x00, 0x00, 0x43, 0x9c, 0xca,
- 0x00, 0x00, 0x43, 0xf7, 0x0e,
- 0x00, 0x00, 0x48, 0x6e, 0x45,
- 0x00, 0x00, 0x49, 0x95, 0x0b,
- 0x00, 0x00, 0x50, 0xf9, 0x89,
- 0x00, 0x00, 0x46, 0xb2, 0x89,
- 0x00, 0x00, 0x40, 0xac, 0x07,
- 0x00, 0x00, 0x40, 0xac, 0x0a,
- 0x00, 0x00, 0x51, 0xb1, 0x87,
- 0x00, 0x00, 0x4c, 0x66, 0x49,
- 0x00, 0x00, 0x5e, 0xaa, 0x48,
- 0x00, 0x00, 0x57, 0x36, 0x0b,
- 0x00, 0x00, 0x4d, 0xfa, 0x85,
- 0x00, 0x00, 0x59, 0x3e, 0x4a,
- 0x00, 0x00, 0x41, 0xdd, 0xc9,
- 0x00, 0x00, 0x4f, 0xe3, 0xca,
- 0x00, 0x00, 0x41, 0x5e, 0x8b,
- 0x00, 0x00, 0x41, 0xdb, 0x0b,
- 0x00, 0x00, 0x45, 0x33, 0xd5,
- 0x00, 0x00, 0x4f, 0x69, 0x85,
- 0x00, 0x00, 0x41, 0x2f, 0xc5,
- 0x00, 0x00, 0x43, 0xab, 0x8a,
- 0x00, 0x00, 0x47, 0x22, 0xca,
- 0x00, 0x00, 0x51, 0x07, 0xc7,
- 0x00, 0x00, 0x41, 0x30, 0x03,
- 0x00, 0x00, 0x4d, 0x38, 0x08,
- 0x00, 0x00, 0x4e, 0xd6, 0xca,
- 0x00, 0x00, 0x42, 0x59, 0x46,
- 0x00, 0x00, 0x45, 0xf8, 0x09,
- 0x00, 0x00, 0x47, 0xe6, 0x08,
- 0x00, 0x00, 0x4c, 0xf1, 0x04,
- 0x00, 0x00, 0x48, 0x6b, 0x09,
- 0x00, 0x00, 0x49, 0x60, 0x48,
- 0x00, 0x00, 0x4d, 0x8c, 0xc7,
- 0x00, 0x00, 0x4b, 0x55, 0x86,
- 0x00, 0x00, 0x57, 0xb5, 0x47,
- 0x00, 0x00, 0x4c, 0xa0, 0x47,
- 0x00, 0x00, 0x44, 0x3b, 0x45,
- 0x00, 0x00, 0x4a, 0x17, 0x4c,
- 0x00, 0x00, 0x44, 0xd4, 0x05,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x44, 0x35, 0x03,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1e, 0x2a, 0x03,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x42, 0xe5, 0x47,
- 0x00, 0x00, 0x02, 0xb0, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x1d, 0xad, 0xc4,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1c, 0x1f, 0x05,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x14, 0x82,
- 0x00, 0x00, 0x50, 0x0d, 0xc2,
- 0x00, 0x00, 0x40, 0x4b, 0x02,
- 0x00, 0x00, 0x40, 0x62, 0x82,
- 0x00, 0x00, 0x40, 0x61, 0xc2,
- 0x00, 0x00, 0x01, 0x21, 0x4a,
- 0x00, 0x00, 0x12, 0xa1, 0x85,
- 0x00, 0x00, 0x12, 0xa1, 0x8a,
- 0x00, 0x02, 0x92, 0x8d, 0x09,
- 0x00, 0x00, 0x14, 0x91, 0x0b,
- 0x00, 0x00, 0x05, 0x40, 0x47,
- 0x00, 0x00, 0x1b, 0x17, 0x86,
- 0x00, 0x00, 0x09, 0xd2, 0x86,
- 0x00, 0x00, 0x05, 0xc4, 0xc9,
- 0x00, 0x00, 0x0a, 0xdf, 0xc7,
- 0x00, 0x00, 0x0f, 0x85, 0x04,
- 0x00, 0x02, 0x9a, 0xdf, 0x8a,
- 0x00, 0x00, 0x00, 0xe4, 0x4e,
- 0x00, 0x00, 0x18, 0x15, 0x0c,
- 0x00, 0x00, 0x1d, 0xdc, 0x89,
- 0x00, 0x09, 0x02, 0x71, 0x03,
- 0x00, 0x00, 0x09, 0x56, 0x07,
- 0x00, 0x00, 0x00, 0x11, 0x06,
- 0x00, 0x00, 0x00, 0x0f, 0x83,
- 0x00, 0x00, 0x0e, 0xcf, 0x05,
- 0x00, 0x00, 0x00, 0x00, 0xc1,
- 0x00, 0x00, 0x42, 0x1b, 0xc3,
- 0x00, 0x0a, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x4e, 0xea, 0xc6,
- 0x00, 0x00, 0x49, 0xac, 0xc6,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x4b, 0x60, 0x06,
- 0x00, 0x00, 0x43, 0x6e, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x09, 0x84,
- 0x00, 0x00, 0x45, 0xf0, 0xc7,
- 0x00, 0x00, 0x5d, 0x6b, 0xc3,
- 0x00, 0x00, 0x49, 0x19, 0x04,
- 0x00, 0x00, 0x40, 0xaa, 0x83,
- 0x00, 0x00, 0x40, 0xac, 0x83,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x1a, 0x31, 0xc4,
- 0x00, 0x00, 0x1d, 0x45, 0xc3,
- 0x00, 0x00, 0x1a, 0x23, 0x45,
- 0x00, 0x0c, 0xc0, 0x00, 0xc2,
- 0x00, 0x00, 0x05, 0x0b, 0x03,
- 0x00, 0x0d, 0x40, 0x22, 0x02,
- 0x00, 0x0d, 0xc9, 0x28, 0x49,
- 0x00, 0x0e, 0x09, 0x88, 0xc9,
- 0x00, 0x00, 0x09, 0x8d, 0xcd,
- 0x00, 0x00, 0x09, 0x91, 0x0d,
- 0x00, 0x00, 0x50, 0x0d, 0xc2,
- 0x00, 0x00, 0x05, 0x03, 0xc4,
- 0x00, 0x00, 0x1a, 0x23, 0x89,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x0e, 0xc5, 0x02, 0xc8,
- 0x00, 0x00, 0x10, 0xa9, 0x04,
- 0x00, 0x00, 0x52, 0x95, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x09, 0x32, 0x44,
- 0x00, 0x02, 0x81, 0x2f, 0x42,
- 0x00, 0x02, 0x80, 0x05, 0xc2,
- 0x00, 0x02, 0x81, 0x2f, 0x42,
- 0x00, 0x02, 0x91, 0xe7, 0xc6,
- 0x00, 0x00, 0x43, 0x3c, 0xc3,
- 0x00, 0x00, 0x47, 0x68, 0x03,
- 0x00, 0x0f, 0xc0, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x21, 0x84,
- 0x00, 0x10, 0xc1, 0xf6, 0x03,
- 0x00, 0x11, 0xc0, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x30, 0x42,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xbf, 0x83,
- 0x00, 0x00, 0x40, 0x15, 0x82,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x91, 0x42,
- 0x00, 0x00, 0x51, 0x0f, 0x03,
- 0x00, 0x00, 0x40, 0x48, 0x42,
- 0x00, 0x00, 0x40, 0x19, 0xc3,
- 0x00, 0x00, 0x41, 0xa7, 0x43,
- 0x00, 0x00, 0x40, 0x59, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x10, 0x49, 0xb1, 0xc9,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x02, 0x24, 0x03,
- 0x00, 0x00, 0x43, 0x3c, 0xc3,
- 0x00, 0x00, 0x5f, 0x2f, 0x08,
- 0x00, 0x11, 0x41, 0xbf, 0x83,
- 0x00, 0x00, 0x40, 0x15, 0x82,
- 0x00, 0x00, 0x51, 0x0f, 0x03,
- 0x00, 0x00, 0x40, 0x48, 0x42,
- 0x00, 0x00, 0x40, 0x19, 0xc3,
- 0x00, 0x00, 0x41, 0xa7, 0x43,
- 0x00, 0x00, 0x40, 0x59, 0xc2,
- 0x00, 0x00, 0x5a, 0x63, 0x87,
- 0x00, 0x00, 0x51, 0x0f, 0x03,
- 0x00, 0x00, 0x40, 0x48, 0x42,
- 0x00, 0x00, 0x40, 0x19, 0xc3,
- 0x00, 0x00, 0x41, 0xa7, 0x43,
- 0x00, 0x00, 0x40, 0x59, 0xc2,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x00, 0x8a, 0x42,
- 0x00, 0x00, 0x00, 0xf5, 0x43,
- 0x00, 0x00, 0x00, 0x13, 0x42,
- 0x00, 0x00, 0x00, 0x4c, 0x02,
- 0x00, 0x00, 0x06, 0xd6, 0x02,
- 0x00, 0x00, 0x00, 0x20, 0x42,
- 0x00, 0x00, 0x00, 0x26, 0x42,
- 0x00, 0x00, 0x01, 0x31, 0x42,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x5c, 0x82,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x01, 0x24, 0x82,
- 0x00, 0x00, 0x0a, 0xb6, 0x43,
- 0x00, 0x00, 0x01, 0x62, 0x82,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x53, 0xd8, 0x45,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x15, 0xc2, 0xa7, 0x92,
- 0x00, 0x16, 0x5c, 0x25, 0x88,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x02, 0x87, 0xe2, 0x48,
- 0x00, 0x00, 0x01, 0x6d, 0x0a,
- 0x00, 0x00, 0x00, 0x2c, 0x45,
- 0x00, 0x00, 0x1d, 0x54, 0xc7,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x27, 0x01,
- 0x00, 0x00, 0x40, 0x09, 0xc1,
- 0x00, 0x00, 0x40, 0x26, 0xc1,
- 0x00, 0x00, 0x40, 0x27, 0x41,
- 0x00, 0x00, 0x40, 0x0a, 0x41,
- 0x00, 0x00, 0x42, 0x61, 0x81,
- 0x00, 0x00, 0x40, 0x0a, 0x01,
- 0x00, 0x00, 0x43, 0x20, 0x41,
- 0x00, 0x00, 0x40, 0x27, 0x81,
- 0x00, 0x00, 0x40, 0x00, 0x01,
- 0x00, 0x00, 0x40, 0x00, 0xc1,
- 0x00, 0x00, 0x40, 0x02, 0x01,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x01, 0x01,
- 0x00, 0x00, 0x40, 0x0c, 0xc1,
- 0x00, 0x00, 0x40, 0x05, 0x01,
- 0x00, 0x00, 0x40, 0x0b, 0xc1,
- 0x00, 0x00, 0x40, 0x00, 0x41,
- 0x00, 0x00, 0x40, 0x08, 0x01,
- 0x00, 0x00, 0x40, 0x01, 0x81,
- 0x00, 0x00, 0x40, 0x0c, 0x01,
- 0x00, 0x00, 0x40, 0x07, 0x01,
- 0x00, 0x00, 0x40, 0x04, 0xc1,
- 0x00, 0x00, 0x40, 0x0e, 0xc1,
- 0x00, 0x00, 0x40, 0x05, 0x81,
- 0x00, 0x00, 0x40, 0x03, 0xc1,
- 0x00, 0x00, 0x40, 0x14, 0x01,
- 0x00, 0x00, 0x40, 0x71, 0x41,
- 0x00, 0x00, 0x40, 0x04, 0x01,
- 0x00, 0x00, 0x40, 0x07, 0x41,
- 0x00, 0x00, 0x40, 0x07, 0xc1,
- 0x00, 0x00, 0x40, 0x00, 0x81,
- 0x00, 0x00, 0x40, 0x11, 0x01,
- 0x00, 0x00, 0x40, 0x0f, 0x81,
- 0x00, 0x00, 0x40, 0x8f, 0x81,
- 0x00, 0x00, 0x40, 0x53, 0x81,
- 0x00, 0x00, 0x40, 0x18, 0x41,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x08, 0x2b, 0x87,
- 0x00, 0x00, 0x03, 0x41, 0x06,
- 0x00, 0x00, 0x03, 0xc8, 0x0a,
- 0x00, 0x00, 0x09, 0x7d, 0x08,
- 0x00, 0x00, 0x06, 0x1f, 0x08,
- 0x00, 0x00, 0x06, 0x29, 0x47,
- 0x00, 0x00, 0x0c, 0x1e, 0x04,
- 0x00, 0x00, 0x1d, 0xdf, 0x06,
- 0x00, 0x00, 0x0f, 0x42, 0x45,
- 0x00, 0x00, 0x1c, 0xf8, 0x05,
- 0x00, 0x00, 0x0a, 0xec, 0x43,
- 0x00, 0x00, 0x01, 0x5d, 0x46,
- 0x00, 0x00, 0x05, 0x41, 0x46,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x53, 0xa2, 0x47,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x4e, 0x40, 0x84,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x53, 0x5e, 0x08,
- 0x00, 0x00, 0x5c, 0x7f, 0x44,
- 0x00, 0x00, 0x43, 0x5f, 0xc4,
- 0x00, 0x00, 0x40, 0x62, 0x04,
- 0x00, 0x00, 0x4d, 0x40, 0x47,
- 0x00, 0x00, 0x4e, 0xc5, 0x87,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xcb,
- 0x00, 0x00, 0x5a, 0xb7, 0x4a,
- 0x00, 0x00, 0x58, 0x87, 0x87,
- 0x00, 0x00, 0x51, 0x5b, 0x08,
- 0x00, 0x00, 0x4a, 0xfe, 0xc8,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x57, 0x38, 0x87,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x3a, 0x48,
- 0x00, 0x00, 0x40, 0xb2, 0x89,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x45, 0xce, 0x88,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x4e, 0xa5, 0x8a,
- 0x00, 0x00, 0x4e, 0xea, 0xc6,
- 0x00, 0x00, 0x5a, 0xc6, 0x47,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x5b, 0xec, 0xc6,
- 0x00, 0x00, 0x4b, 0xea, 0x08,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x46, 0x45, 0x06,
- 0x00, 0x00, 0x50, 0x3b, 0xcd,
- 0x00, 0x00, 0x50, 0x56, 0x08,
- 0x00, 0x00, 0x50, 0xaf, 0x0b,
- 0x00, 0x00, 0x51, 0x21, 0xc6,
- 0x00, 0x00, 0x53, 0xc0, 0x47,
- 0x00, 0x00, 0x41, 0x7f, 0xc5,
- 0x00, 0x00, 0x5d, 0xcf, 0xca,
- 0x00, 0x00, 0x43, 0x3d, 0x45,
- 0x00, 0x00, 0x47, 0x53, 0x8a,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x44, 0xcf, 0x44,
- 0x00, 0x00, 0x40, 0x00, 0x06,
- 0x00, 0x00, 0x5b, 0x92, 0x83,
- 0x00, 0x00, 0x4b, 0x4c, 0x83,
- 0x00, 0x00, 0x58, 0xa7, 0xc3,
- 0x00, 0x00, 0x43, 0xc0, 0xc3,
- 0x00, 0x00, 0x5d, 0xd2, 0x03,
- 0x00, 0x00, 0x40, 0x1c, 0x02,
- 0x00, 0x00, 0x5a, 0x10, 0x85,
- 0x00, 0x00, 0x4b, 0x83, 0xc9,
- 0x00, 0x00, 0x41, 0x5c, 0x43,
- 0x00, 0x00, 0x44, 0x41, 0xc3,
- 0x00, 0x00, 0x40, 0x28, 0xc3,
- 0x00, 0x00, 0x41, 0x37, 0x43,
- 0x00, 0x00, 0x40, 0x02, 0x01,
- 0x00, 0x00, 0x4e, 0x8e, 0x87,
- 0x00, 0x00, 0x4d, 0x9c, 0x45,
- 0x00, 0x00, 0x5c, 0x12, 0xc3,
- 0x00, 0x00, 0x46, 0x64, 0x83,
- 0x00, 0x00, 0x5f, 0x1d, 0x43,
- 0x00, 0x00, 0x40, 0x62, 0x04,
- 0x00, 0x00, 0x4f, 0xd6, 0x43,
- 0x00, 0x00, 0x41, 0x02, 0xc8,
- 0x00, 0x00, 0x57, 0x27, 0x03,
- 0x00, 0x00, 0x51, 0xb7, 0x0d,
- 0x00, 0x00, 0x48, 0x87, 0x08,
- 0x00, 0x00, 0x5f, 0x30, 0xc6,
- 0x00, 0x00, 0x4f, 0xbe, 0xc3,
- 0x00, 0x00, 0x58, 0x53, 0x83,
- 0x00, 0x00, 0x5a, 0x74, 0x03,
- 0x00, 0x1b, 0xc0, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x47, 0x88,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x44, 0x0c, 0x03,
- 0x00, 0x00, 0x44, 0x57, 0xc3,
- 0x00, 0x00, 0x40, 0x01, 0x06,
- 0x00, 0x00, 0x44, 0x92, 0x48,
- 0x00, 0x00, 0x40, 0x23, 0xc3,
- 0x00, 0x00, 0x41, 0xce, 0x03,
- 0x00, 0x00, 0x4b, 0xe7, 0x03,
- 0x00, 0x00, 0x41, 0xa6, 0xc3,
- 0x00, 0x00, 0x5d, 0xd0, 0x03,
- 0x00, 0x00, 0x40, 0xf3, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0x65, 0x03,
- 0x00, 0x00, 0x44, 0xe0, 0xc3,
- 0x00, 0x00, 0x45, 0x34, 0xc3,
- 0x00, 0x00, 0x42, 0x8e, 0x03,
- 0x00, 0x00, 0x53, 0xe4, 0x43,
- 0x00, 0x00, 0x59, 0xba, 0xc3,
- 0x00, 0x00, 0x44, 0x4b, 0xc3,
- 0x00, 0x00, 0x5a, 0x5b, 0x45,
- 0x00, 0x00, 0x45, 0xc4, 0x84,
- 0x00, 0x00, 0x45, 0xdb, 0x07,
- 0x00, 0x00, 0x45, 0x96, 0xc2,
- 0x00, 0x00, 0x46, 0x09, 0x83,
- 0x00, 0x00, 0x46, 0x55, 0x06,
- 0x00, 0x00, 0x46, 0x7a, 0x83,
- 0x00, 0x00, 0x46, 0x7d, 0x83,
- 0x00, 0x00, 0x48, 0x6d, 0x43,
- 0x00, 0x00, 0x5d, 0xe7, 0xc3,
- 0x00, 0x00, 0x41, 0x9b, 0x03,
- 0x00, 0x00, 0x53, 0xb4, 0x03,
- 0x00, 0x00, 0x4a, 0x4e, 0x07,
- 0x00, 0x1d, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0xff, 0xc3,
- 0x00, 0x00, 0x40, 0x6a, 0x43,
- 0x00, 0x00, 0x40, 0x36, 0xc3,
- 0x00, 0x00, 0x40, 0xfc, 0x83,
- 0x00, 0x00, 0x54, 0xff, 0xc3,
- 0x00, 0x00, 0x56, 0x9d, 0x05,
- 0x00, 0x00, 0x58, 0x2b, 0x83,
- 0x00, 0x00, 0x44, 0xdd, 0x09,
- 0x00, 0x00, 0x40, 0x0c, 0x03,
- 0x00, 0x00, 0x51, 0x96, 0x43,
- 0x00, 0x1d, 0xc4, 0xf7, 0x03,
- 0x00, 0x00, 0x46, 0x65, 0x43,
- 0x00, 0x00, 0x40, 0x62, 0x43,
- 0x00, 0x00, 0x41, 0x1a, 0x08,
- 0x00, 0x00, 0x4b, 0x83, 0x06,
- 0x00, 0x00, 0x5d, 0xe5, 0x86,
- 0x00, 0x00, 0x4c, 0x3d, 0x46,
- 0x00, 0x00, 0x46, 0x8d, 0x87,
- 0x00, 0x00, 0x41, 0x3a, 0xc3,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x7e, 0x06,
- 0x00, 0x00, 0x40, 0x66, 0xc2,
- 0x00, 0x00, 0x4e, 0xdd, 0xc3,
- 0x00, 0x00, 0x54, 0x23, 0xc5,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x52, 0x8f, 0xc7,
- 0x00, 0x02, 0xc1, 0xd7, 0x83,
- 0x00, 0x00, 0x43, 0xd5, 0xc3,
- 0x00, 0x00, 0x43, 0x6a, 0x03,
- 0x00, 0x00, 0x43, 0x64, 0xc3,
- 0x00, 0x00, 0x43, 0xc4, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x43, 0xe8, 0x06,
- 0x00, 0x00, 0x5a, 0x32, 0x86,
- 0x00, 0x00, 0x58, 0x31, 0x83,
- 0x00, 0x00, 0x5c, 0xbd, 0x03,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x41, 0x7c, 0xc3,
- 0x00, 0x00, 0x51, 0xc3, 0x03,
- 0x00, 0x00, 0x50, 0xe2, 0x43,
- 0x00, 0x00, 0x51, 0x18, 0xc3,
- 0x00, 0x00, 0x5c, 0x1f, 0x05,
- 0x00, 0x00, 0x43, 0x73, 0x43,
- 0x00, 0x00, 0x55, 0xdd, 0xc6,
- 0x00, 0x00, 0x40, 0xef, 0x83,
- 0x00, 0x00, 0x5b, 0x98, 0xc8,
- 0x00, 0x00, 0x40, 0xc4, 0xc3,
- 0x00, 0x00, 0x5b, 0x75, 0xc9,
- 0x00, 0x00, 0x40, 0xc4, 0xc8,
- 0x00, 0x00, 0x41, 0xa1, 0x88,
- 0x00, 0x00, 0x41, 0xe6, 0x05,
- 0x00, 0x00, 0x42, 0xf4, 0x0a,
- 0x00, 0x00, 0x43, 0x02, 0xca,
- 0x00, 0x00, 0x43, 0x2a, 0xcb,
- 0x00, 0x00, 0x43, 0x44, 0x48,
- 0x00, 0x00, 0x52, 0x53, 0x83,
- 0x00, 0x00, 0x41, 0x70, 0x43,
- 0x00, 0x00, 0x51, 0x19, 0x03,
- 0x00, 0x00, 0x4f, 0x29, 0x83,
- 0x00, 0x00, 0x51, 0x39, 0x48,
- 0x00, 0x00, 0x53, 0x6c, 0x83,
- 0x00, 0x00, 0x41, 0xf9, 0x84,
- 0x00, 0x00, 0x40, 0x21, 0xc2,
- 0x00, 0x00, 0x44, 0x0b, 0x83,
- 0x00, 0x00, 0x46, 0x0e, 0x43,
- 0x00, 0x00, 0x40, 0x07, 0xc3,
- 0x00, 0x00, 0x43, 0xd9, 0x43,
- 0x00, 0x00, 0x49, 0x76, 0xc3,
- 0x00, 0x00, 0x43, 0x6e, 0x83,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x41, 0x80, 0x83,
- 0x00, 0x00, 0x43, 0xba, 0x43,
- 0x00, 0x00, 0x51, 0xe9, 0x43,
- 0x00, 0x00, 0x52, 0x17, 0x44,
- 0x00, 0x00, 0x44, 0xcf, 0x44,
- 0x00, 0x00, 0x42, 0x43, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x1c, 0x51, 0xd0, 0xcc,
- 0x00, 0x1c, 0xc5, 0x8b, 0x05,
- 0x00, 0x00, 0x0d, 0xe3, 0x05,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x0b, 0x02,
- 0x00, 0x00, 0x40, 0x1c, 0x02,
- 0x00, 0x00, 0x40, 0x61, 0x82,
- 0x00, 0x00, 0x40, 0x02, 0x02,
- 0x00, 0x00, 0x40, 0x11, 0xc2,
- 0x00, 0x00, 0x47, 0x8d, 0x02,
- 0x00, 0x00, 0x40, 0x13, 0x42,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x5e, 0x42,
- 0x00, 0x00, 0x4c, 0x7f, 0x42,
- 0x00, 0x00, 0x40, 0x3c, 0x42,
- 0x00, 0x00, 0x47, 0xd2, 0x02,
- 0x00, 0x00, 0x40, 0x67, 0x02,
- 0x00, 0x00, 0x40, 0x61, 0xc2,
- 0x00, 0x00, 0x40, 0x20, 0xc2,
- 0x00, 0x00, 0x40, 0x14, 0x02,
- 0x00, 0x00, 0x40, 0x2b, 0x02,
- 0x00, 0x00, 0x44, 0x53, 0x42,
- 0x00, 0x00, 0x40, 0x3f, 0x42,
- 0x00, 0x00, 0x40, 0x06, 0x82,
- 0x00, 0x00, 0x40, 0x39, 0xc2,
- 0x00, 0x00, 0x41, 0x24, 0x82,
- 0x00, 0x00, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x40, 0x10, 0x42,
- 0x00, 0x00, 0x40, 0xe7, 0xc2,
- 0x00, 0x00, 0x40, 0xc6, 0x42,
- 0x00, 0x00, 0x00, 0x00, 0xc2,
- 0x00, 0x00, 0x00, 0x0b, 0x02,
- 0x00, 0x00, 0x00, 0x1c, 0x02,
- 0x00, 0x00, 0x00, 0x61, 0x82,
- 0x00, 0x00, 0x00, 0x02, 0x02,
- 0x00, 0x00, 0x00, 0x11, 0xc2,
- 0x00, 0x00, 0x07, 0x8d, 0x02,
- 0x00, 0x00, 0x00, 0x13, 0x42,
- 0x00, 0x00, 0x00, 0x03, 0x82,
- 0x00, 0x00, 0x00, 0x5e, 0x42,
- 0x00, 0x00, 0x0c, 0x7f, 0x42,
- 0x00, 0x00, 0x00, 0x3c, 0x42,
- 0x00, 0x00, 0x07, 0xd2, 0x02,
- 0x00, 0x00, 0x00, 0x67, 0x02,
- 0x00, 0x00, 0x00, 0x61, 0xc2,
- 0x00, 0x00, 0x00, 0x20, 0xc2,
- 0x00, 0x00, 0x00, 0x14, 0x02,
- 0x00, 0x00, 0x00, 0x2b, 0x02,
- 0x00, 0x00, 0x04, 0x53, 0x42,
- 0x00, 0x00, 0x00, 0x3f, 0x42,
- 0x00, 0x00, 0x00, 0x06, 0x82,
- 0x00, 0x00, 0x00, 0x39, 0xc2,
- 0x00, 0x00, 0x01, 0x24, 0x82,
- 0x00, 0x00, 0x00, 0x2e, 0xc2,
- 0x00, 0x00, 0x00, 0x10, 0x42,
- 0x00, 0x00, 0x00, 0xe7, 0xc2,
- 0x00, 0x00, 0x00, 0xc6, 0x42,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x0f, 0x82,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x09, 0x9b, 0x49,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x21, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x0d, 0xfa, 0x89,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x0f, 0x08, 0x47,
- 0x00, 0x00, 0x42, 0x32, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x01, 0x70, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x36, 0x42,
- 0x00, 0x00, 0x40, 0x01, 0xc2,
- 0x00, 0x02, 0x82, 0x18, 0x05,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x48, 0x86, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x43, 0x7e, 0x02,
- 0x00, 0x00, 0x40, 0x2f, 0x82,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x41, 0x04, 0xc2,
- 0x00, 0x00, 0x40, 0xfc, 0xc2,
- 0x00, 0x00, 0x41, 0x22, 0x82,
- 0x00, 0x00, 0x1c, 0xf8, 0x05,
- 0x00, 0x00, 0x40, 0x0d, 0xc2,
- 0x00, 0x00, 0x40, 0x15, 0x82,
- 0x00, 0x00, 0x40, 0x39, 0x02,
- 0x00, 0x00, 0x40, 0x15, 0x42,
- 0x00, 0x00, 0x40, 0x20, 0xc2,
- 0x00, 0x00, 0x44, 0x13, 0xc2,
- 0x00, 0x00, 0x41, 0x0a, 0x82,
- 0x00, 0x00, 0x44, 0x24, 0x02,
- 0x00, 0x23, 0x47, 0xa6, 0xc4,
- 0x00, 0x00, 0x00, 0x01, 0x42,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x04, 0x29, 0x83,
- 0x00, 0x00, 0x0d, 0xc4, 0x0d,
- 0x00, 0x00, 0x0f, 0x42, 0xc9,
- 0x00, 0x00, 0x01, 0x28, 0x0b,
- 0x00, 0x00, 0x0f, 0x81, 0x08,
- 0x00, 0x00, 0x06, 0x6f, 0xc9,
- 0x00, 0x24, 0x4e, 0xce, 0x45,
- 0x00, 0x00, 0x11, 0x9e, 0x46,
- 0x00, 0x00, 0x13, 0x7d, 0x09,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x1a, 0x31, 0xc4,
- 0x00, 0x00, 0x1d, 0x45, 0xc3,
- 0x00, 0x00, 0x1a, 0x23, 0x45,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x1d, 0xd5, 0x07,
- 0x00, 0x26, 0x05, 0x39, 0x07,
- 0x00, 0x26, 0xc5, 0xf6, 0x84,
- 0x00, 0x00, 0x06, 0x36, 0x46,
- 0x00, 0x00, 0x1a, 0x23, 0x89,
- 0x00, 0x00, 0x0b, 0x72, 0x8e,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x14, 0x42, 0x07,
- 0x00, 0x02, 0x9b, 0x5c, 0x83,
- 0x00, 0x27, 0x40, 0x1a, 0xc2,
- 0x00, 0x00, 0x14, 0x78, 0x49,
- 0x00, 0x00, 0x1d, 0x50, 0x04,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x40, 0x14, 0x82,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x00, 0xfd, 0x03,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x4e, 0x40, 0x84,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x44, 0xe4, 0x42,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x01, 0x22, 0x82,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x2f, 0xc6,
- 0x00, 0x00, 0x53, 0x94, 0x0f,
- 0x00, 0x00, 0xdd, 0xe9, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x29, 0xdd, 0x35, 0x87,
- 0x00, 0x02, 0x97, 0x2a, 0x46,
- 0x00, 0x00, 0x1e, 0xe2, 0x86,
- 0x00, 0x00, 0x0d, 0x9c, 0x89,
- 0x00, 0x2a, 0x5c, 0x74, 0x48,
- 0x00, 0x00, 0x1e, 0x86, 0x84,
- 0x00, 0x2a, 0xcc, 0x3f, 0xca,
- 0x00, 0x00, 0x07, 0xd8, 0x48,
- 0x00, 0x2c, 0x01, 0x6c, 0x07,
- 0x00, 0x00, 0x1c, 0x25, 0x88,
- 0x00, 0x00, 0x0b, 0x72, 0x88,
- 0x00, 0x02, 0x9d, 0xcd, 0x8b,
- 0x00, 0x02, 0x87, 0xab, 0xca,
- 0x00, 0x2c, 0x86, 0x7c, 0xc3,
- 0x00, 0x00, 0x0f, 0xac, 0x49,
- 0x00, 0x2d, 0x10, 0xa2, 0x48,
- 0x00, 0x2d, 0xc3, 0x8a, 0x47,
- 0x00, 0x02, 0x8e, 0xb4, 0x4a,
- 0x00, 0x02, 0x90, 0x61, 0x47,
- 0x00, 0x00, 0x0b, 0x1e, 0x8b,
- 0x00, 0x2e, 0x49, 0xe3, 0x8c,
- 0x00, 0x00, 0x16, 0x46, 0x85,
- 0x00, 0x00, 0x0e, 0x04, 0x05,
- 0x00, 0x00, 0x12, 0x31, 0xc9,
- 0x00, 0x00, 0x10, 0x29, 0xc4,
- 0x00, 0x00, 0x11, 0xc2, 0x83,
- 0x00, 0x2b, 0x4c, 0x41, 0x05,
- 0x00, 0x00, 0x12, 0xc8, 0x43,
- 0x00, 0x2b, 0xc2, 0xc1, 0xc3,
- 0x00, 0x00, 0x12, 0xc8, 0x43,
- 0x00, 0x00, 0x04, 0x29, 0x82,
- 0x00, 0x00, 0x00, 0x1b, 0x42,
- 0x00, 0x00, 0x00, 0x5f, 0xc2,
- 0x00, 0x00, 0x00, 0x5f, 0xc2,
- 0x00, 0x00, 0x00, 0x17, 0x82,
- 0x00, 0x00, 0x00, 0x5f, 0xc2,
- 0x00, 0x00, 0x00, 0x26, 0x42,
- 0x00, 0x00, 0x00, 0x34, 0x02,
- 0x00, 0x00, 0x00, 0x23, 0xc2,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x1e, 0x86, 0x84,
- 0x00, 0x00, 0x10, 0x7e, 0x04,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x87, 0xc2,
- 0x00, 0x00, 0x40, 0x5a, 0x42,
- 0x00, 0x30, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x44, 0x13, 0x82,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x0b, 0xc2,
- 0x00, 0x00, 0x43, 0x43, 0x82,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x46, 0x97, 0xc2,
- 0x00, 0x00, 0x42, 0x08, 0xc2,
- 0x00, 0x00, 0x40, 0x70, 0x02,
- 0x00, 0x00, 0x49, 0xcf, 0xc2,
- 0x00, 0x00, 0x40, 0x08, 0x02,
- 0x00, 0x00, 0x40, 0x35, 0x82,
- 0x00, 0x00, 0x41, 0x73, 0x82,
- 0x00, 0x00, 0x40, 0xbd, 0x82,
- 0x00, 0x00, 0x40, 0xaf, 0x02,
- 0x00, 0x00, 0x16, 0x10, 0xcc,
- 0x00, 0x00, 0x4c, 0x70, 0x02,
- 0x00, 0x00, 0x47, 0xe5, 0xc2,
- 0x00, 0x00, 0x43, 0x0a, 0x02,
- 0x00, 0x00, 0x40, 0x1f, 0x02,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x40, 0x15, 0x02,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x43, 0x9e, 0x02,
- 0x00, 0x00, 0x44, 0x61, 0x02,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x44, 0x42, 0x42,
- 0x00, 0x00, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x40, 0x6e, 0xc2,
- 0x00, 0x00, 0x40, 0x0e, 0x02,
- 0x00, 0x00, 0x40, 0x3d, 0x02,
- 0x00, 0x00, 0x44, 0x29, 0xc2,
- 0x00, 0x00, 0x40, 0xae, 0x42,
- 0x00, 0x00, 0x45, 0x06, 0x02,
- 0x00, 0x00, 0x43, 0x0a, 0x42,
- 0x00, 0x00, 0x52, 0xe0, 0xca,
- 0x00, 0x00, 0x56, 0xfb, 0x0a,
- 0x00, 0x00, 0x5a, 0x1a, 0x4a,
- 0x00, 0x00, 0x5f, 0x44, 0x42,
- 0x00, 0x00, 0x40, 0x53, 0x82,
- 0x00, 0x00, 0x56, 0x9c, 0xc2,
- 0x00, 0x30, 0xcf, 0xc7, 0x49,
- 0x00, 0x31, 0x55, 0x47, 0xca,
- 0x00, 0x02, 0x94, 0x92, 0x07,
- 0x00, 0x31, 0xc0, 0x0f, 0xc2,
- 0x00, 0x02, 0x83, 0xbf, 0xc3,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x15, 0x47, 0xca,
- 0x00, 0x00, 0x16, 0x85, 0xce,
- 0x00, 0x00, 0x40, 0x48, 0x84,
- 0x00, 0x00, 0x10, 0x57, 0x85,
- 0x00, 0x32, 0xc0, 0x66, 0x43,
- 0x00, 0x00, 0x04, 0x2d, 0xc3,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x45, 0x54, 0xc4,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x14, 0x40, 0x09,
- 0x00, 0x00, 0x1d, 0x40, 0x86,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x0f, 0x89, 0x84,
- 0x00, 0x00, 0x14, 0x6e, 0xc3,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x00, 0x1f, 0x45,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x02, 0x84, 0x5a, 0x04,
- 0x00, 0x00, 0x43, 0x73, 0x43,
- 0x00, 0x33, 0x14, 0xe6, 0xc4,
- 0x00, 0x00, 0x0c, 0xbd, 0x48,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x00, 0x30, 0x42,
- 0x00, 0x02, 0x93, 0x3a, 0x43,
- 0x00, 0x00, 0x1d, 0xe8, 0xc6,
- 0x00, 0x02, 0x9d, 0xde, 0x84,
- 0x00, 0x00, 0x1d, 0x69, 0x85,
- 0x00, 0x00, 0x10, 0x27, 0xca,
- 0x00, 0x00, 0x13, 0x4f, 0x82,
- 0x00, 0x34, 0x9d, 0xec, 0x0d,
- 0x00, 0x00, 0x1b, 0x32, 0xc6,
- 0x00, 0x00, 0x00, 0x6f, 0x51,
- 0x00, 0x35, 0x4f, 0xc7, 0x49,
- 0x00, 0x00, 0x15, 0x9c, 0x8a,
- 0x00, 0x00, 0x1d, 0x6a, 0x08,
- 0x00, 0x00, 0x08, 0xc1, 0xc8,
- 0x00, 0x00, 0x14, 0x5c, 0xce,
- 0x00, 0x00, 0x05, 0x4b, 0x13,
- 0x00, 0x42, 0x97, 0x2d, 0x07,
- 0x00, 0x00, 0x00, 0x28, 0xc2,
- 0x00, 0x00, 0x13, 0xa8, 0x10,
- 0x00, 0x00, 0x14, 0x5a, 0xcc,
- 0x00, 0x00, 0x0f, 0xc8, 0xd4,
- 0x00, 0x00, 0x0b, 0x04, 0x07,
- 0x00, 0x00, 0x01, 0xa5, 0x0e,
- 0x00, 0x00, 0x14, 0xcb, 0x0b,
- 0x00, 0x00, 0x14, 0xee, 0xcb,
- 0x00, 0x00, 0x1b, 0xd0, 0x4a,
- 0x00, 0x00, 0x03, 0x42, 0xc7,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x0b, 0x4d, 0x88,
- 0x00, 0x00, 0x00, 0x8e, 0xc7,
- 0x00, 0x43, 0x01, 0xae, 0x0b,
- 0x00, 0x00, 0x01, 0xc4, 0x46,
- 0x00, 0x00, 0x01, 0xf4, 0xc7,
- 0x00, 0x00, 0x00, 0x2f, 0xc2,
- 0x00, 0x00, 0x10, 0xfa, 0x8d,
- 0x00, 0x00, 0x14, 0x9b, 0x45,
- 0x00, 0x00, 0x06, 0x93, 0x47,
- 0x00, 0x00, 0x02, 0xad, 0x8a,
- 0x00, 0x00, 0x13, 0xe3, 0x0c,
- 0x00, 0x00, 0x13, 0xe4, 0xcf,
- 0x00, 0x00, 0x11, 0xf6, 0x4f,
- 0x00, 0x00, 0x15, 0x47, 0xc2,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x0e, 0x9e, 0x08,
- 0x00, 0x43, 0x8f, 0xbc, 0x4c,
- 0x00, 0x00, 0x1a, 0x8b, 0x0a,
- 0x00, 0x44, 0x56, 0x1b, 0x8a,
- 0x00, 0x00, 0x0f, 0x10, 0xca,
- 0x00, 0x00, 0x08, 0x00, 0xca,
- 0x00, 0x00, 0x08, 0x85, 0x08,
- 0x00, 0x00, 0x02, 0x60, 0x85,
- 0x00, 0x00, 0x06, 0xb5, 0xc8,
- 0x00, 0x00, 0x0f, 0x15, 0x88,
- 0x00, 0x00, 0x1d, 0xd4, 0xc8,
- 0x00, 0x00, 0x14, 0x64, 0x88,
- 0x00, 0x00, 0x00, 0x23, 0xc2,
- 0x00, 0x00, 0x11, 0xf3, 0xcf,
- 0x00, 0x02, 0x82, 0x18, 0x8d,
- 0x00, 0x02, 0x80, 0xe4, 0xd2,
- 0x00, 0x00, 0x1c, 0xcf, 0x8b,
- 0x00, 0x00, 0x0c, 0x9a, 0x08,
- 0x00, 0x00, 0x03, 0x81, 0x07,
- 0x00, 0x00, 0x04, 0xe4, 0x8a,
- 0x00, 0x00, 0x12, 0xbc, 0xcb,
- 0x00, 0x00, 0x0a, 0x24, 0xc9,
- 0x00, 0x00, 0x04, 0xe3, 0x87,
- 0x00, 0x00, 0x07, 0x67, 0x06,
- 0x00, 0x00, 0x02, 0x5f, 0x88,
- 0x00, 0x00, 0x03, 0x04, 0x8c,
- 0x00, 0x00, 0x1d, 0x9d, 0x47,
- 0x00, 0x00, 0x01, 0xca, 0xca,
- 0x00, 0x00, 0x00, 0x79, 0x08,
- 0x00, 0x00, 0x15, 0xf0, 0x0e,
- 0x00, 0x00, 0x19, 0x02, 0x8e,
- 0x00, 0x00, 0x03, 0x41, 0x0b,
- 0x00, 0x00, 0x03, 0xe4, 0x8b,
- 0x00, 0x00, 0x03, 0xed, 0x0b,
- 0x00, 0x00, 0x04, 0x1a, 0x09,
- 0x00, 0x00, 0x04, 0x2e, 0x4b,
- 0x00, 0x00, 0x04, 0x33, 0x4d,
- 0x00, 0x00, 0x04, 0x4d, 0x4b,
- 0x00, 0x00, 0x04, 0x97, 0x8d,
- 0x00, 0x00, 0x04, 0x9b, 0x0d,
- 0x00, 0x00, 0x05, 0x25, 0x0a,
- 0x00, 0x00, 0x04, 0xcd, 0x8b,
- 0x00, 0x00, 0x04, 0xd2, 0x4b,
- 0x00, 0x00, 0x05, 0x21, 0x85,
- 0x00, 0x44, 0x9c, 0x74, 0x90,
- 0x00, 0x00, 0x02, 0xc6, 0x8f,
- 0x00, 0x00, 0x07, 0xa8, 0x8f,
- 0x00, 0x00, 0x10, 0xff, 0x4d,
- 0x00, 0x00, 0x05, 0x7f, 0x50,
- 0x00, 0x00, 0x00, 0x4c, 0x02,
- 0x00, 0x45, 0x42, 0xfd, 0x08,
- 0x00, 0x00, 0x1d, 0x9b, 0xc8,
- 0x00, 0x00, 0x08, 0x09, 0x90,
- 0x00, 0x00, 0x12, 0xae, 0x8e,
- 0x00, 0x45, 0xd7, 0x26, 0xc5,
- 0x00, 0x00, 0x05, 0x31, 0x4b,
- 0x00, 0x00, 0x14, 0x31, 0x10,
- 0x00, 0x00, 0x05, 0x9b, 0xc5,
- 0x00, 0x00, 0x0a, 0x38, 0x0b,
- 0x00, 0x00, 0x1b, 0x17, 0x8c,
- 0x00, 0x00, 0x06, 0xb6, 0xca,
- 0x00, 0x00, 0x03, 0xe6, 0x49,
- 0x00, 0x00, 0x06, 0xc4, 0x48,
- 0x00, 0x00, 0x07, 0x25, 0x47,
- 0x00, 0x00, 0x07, 0x28, 0x87,
- 0x00, 0x00, 0x07, 0x2a, 0x47,
- 0x00, 0x00, 0x07, 0x3a, 0xc7,
- 0x00, 0x00, 0x07, 0x52, 0x07,
- 0x00, 0x00, 0x07, 0x56, 0x07,
- 0x00, 0x00, 0x07, 0x77, 0x87,
- 0x00, 0x00, 0x07, 0x7c, 0x87,
- 0x00, 0x00, 0x07, 0x81, 0x87,
- 0x00, 0x00, 0x07, 0x85, 0x07,
- 0x00, 0x00, 0x07, 0x89, 0xc7,
- 0x00, 0x00, 0x07, 0x8b, 0x87,
- 0x00, 0x00, 0x07, 0x8d, 0x47,
- 0x00, 0x00, 0x07, 0x8f, 0x07,
- 0x00, 0x00, 0x07, 0x92, 0x87,
- 0x00, 0x00, 0x07, 0x97, 0x47,
- 0x00, 0x00, 0x07, 0xb0, 0x47,
- 0x00, 0x00, 0x07, 0xb5, 0x07,
- 0x00, 0x00, 0x07, 0xc1, 0x07,
- 0x00, 0x00, 0x07, 0xc4, 0x07,
- 0x00, 0x00, 0x07, 0xc5, 0xc7,
- 0x00, 0x00, 0x07, 0xc8, 0xc7,
- 0x00, 0x00, 0x07, 0xd0, 0xc7,
- 0x00, 0x00, 0x07, 0xd2, 0xc7,
- 0x00, 0x00, 0x07, 0xdc, 0xc7,
- 0x00, 0x00, 0x07, 0xde, 0x87,
- 0x00, 0x00, 0x07, 0xe0, 0x47,
- 0x00, 0x00, 0x07, 0xe4, 0x47,
- 0x00, 0x00, 0x07, 0xea, 0x87,
- 0x00, 0x00, 0x07, 0xf4, 0x47,
- 0x00, 0x00, 0x07, 0xff, 0x07,
- 0x00, 0x00, 0x08, 0x03, 0x47,
- 0x00, 0x00, 0x08, 0x10, 0x87,
- 0x00, 0x00, 0x08, 0x12, 0x47,
- 0x00, 0x00, 0x08, 0x18, 0x87,
- 0x00, 0x00, 0x08, 0x1c, 0x07,
- 0x00, 0x00, 0x08, 0x26, 0x07,
- 0x00, 0x00, 0x08, 0x2a, 0x07,
- 0x00, 0x00, 0x08, 0x2d, 0x47,
- 0x00, 0x00, 0x08, 0x2f, 0x07,
- 0x00, 0x00, 0x08, 0x33, 0x47,
- 0x00, 0x00, 0x08, 0x3a, 0x47,
- 0x00, 0x00, 0x08, 0x42, 0xc7,
- 0x00, 0x00, 0x08, 0x46, 0xc7,
- 0x00, 0x00, 0x08, 0x48, 0x87,
- 0x00, 0x00, 0x08, 0x4d, 0x07,
- 0x00, 0x00, 0x08, 0x56, 0x47,
- 0x00, 0x00, 0x0f, 0x16, 0x8a,
- 0x00, 0x00, 0x01, 0x5b, 0x48,
- 0x00, 0x00, 0x1b, 0xa4, 0x0c,
- 0x00, 0x00, 0x14, 0x16, 0xc7,
- 0x00, 0x00, 0x09, 0x83, 0x85,
- 0x00, 0x00, 0x1e, 0x1a, 0x51,
- 0x00, 0x00, 0x14, 0xd1, 0x46,
- 0x00, 0x00, 0x12, 0x42, 0x8a,
- 0x00, 0x00, 0x0e, 0x9c, 0x8a,
- 0x00, 0x00, 0x06, 0x36, 0x46,
- 0x00, 0x00, 0x15, 0xcd, 0xcb,
- 0x00, 0x00, 0x00, 0x06, 0x42,
- 0x00, 0x00, 0x03, 0x13, 0x91,
- 0x00, 0x00, 0x16, 0x8c, 0x89,
- 0x00, 0x00, 0x0d, 0x1e, 0x49,
- 0x00, 0x00, 0x0a, 0x48, 0xc6,
- 0x00, 0x00, 0x01, 0x73, 0x82,
- 0x00, 0x00, 0x06, 0x80, 0x8a,
- 0x00, 0x00, 0x0b, 0x78, 0xc9,
- 0x00, 0x00, 0x0b, 0x80, 0x0f,
- 0x00, 0x00, 0x0b, 0x86, 0x0e,
- 0x00, 0x00, 0x0b, 0xac, 0x08,
- 0x00, 0x46, 0x54, 0x5e, 0xd2,
- 0x00, 0x00, 0x01, 0x16, 0x08,
- 0x00, 0x46, 0xc6, 0xc6, 0x47,
- 0x00, 0x00, 0x0b, 0xda, 0xcf,
- 0x00, 0x00, 0x01, 0x5f, 0xc2,
- 0x00, 0x00, 0x1d, 0xe3, 0xc9,
- 0x00, 0x00, 0x1c, 0xa2, 0x0a,
- 0x00, 0x47, 0x41, 0x46, 0x09,
- 0x00, 0x00, 0x0d, 0x43, 0x89,
- 0x00, 0x00, 0x0d, 0x43, 0x8c,
- 0x00, 0x00, 0x00, 0x60, 0x4b,
- 0x00, 0x00, 0x09, 0x67, 0x0e,
- 0x00, 0x00, 0x1c, 0xdb, 0x8c,
- 0x00, 0x00, 0x0f, 0xa9, 0x4f,
- 0x00, 0x00, 0x1c, 0x02, 0xce,
- 0x00, 0x00, 0x05, 0x6a, 0x4c,
- 0x00, 0x00, 0x08, 0x07, 0x89,
- 0x00, 0x00, 0x08, 0x1d, 0x91,
- 0x00, 0x00, 0x08, 0xb9, 0x88,
- 0x00, 0x00, 0x08, 0xc3, 0x92,
- 0x00, 0x00, 0x08, 0xe2, 0x0d,
- 0x00, 0x00, 0x09, 0x19, 0x8d,
- 0x00, 0x00, 0x09, 0x5d, 0x0b,
- 0x00, 0x00, 0x18, 0xa4, 0x55,
- 0x00, 0x00, 0x1e, 0x0b, 0x49,
- 0x00, 0x00, 0x09, 0xa6, 0x8a,
- 0x00, 0x00, 0x09, 0xec, 0xc9,
- 0x00, 0x00, 0x0a, 0x3d, 0x50,
- 0x00, 0x00, 0x0a, 0xe1, 0x8b,
- 0x00, 0x00, 0x0b, 0x0a, 0x0f,
- 0x00, 0x00, 0x0c, 0x05, 0x4b,
- 0x00, 0x00, 0x0c, 0x0b, 0xcc,
- 0x00, 0x00, 0x19, 0xbb, 0x50,
- 0x00, 0x00, 0x17, 0x09, 0x4a,
- 0x00, 0x00, 0x17, 0xa8, 0x8d,
- 0x00, 0x00, 0x19, 0x7c, 0xce,
- 0x00, 0x00, 0x0c, 0x1e, 0xca,
- 0x00, 0x00, 0x12, 0xcd, 0x4c,
- 0x00, 0x00, 0x0c, 0x9d, 0x14,
- 0x00, 0x00, 0x0d, 0x1a, 0xd1,
- 0x00, 0x00, 0x0d, 0x22, 0x8b,
- 0x00, 0x00, 0x0d, 0x33, 0x8f,
- 0x00, 0x00, 0x0d, 0x6f, 0xcd,
- 0x00, 0x00, 0x0d, 0x7a, 0xce,
- 0x00, 0x00, 0x0d, 0x8b, 0x8c,
- 0x00, 0x00, 0x0d, 0xa1, 0x0c,
- 0x00, 0x00, 0x19, 0xb8, 0x4b,
- 0x00, 0x00, 0x1e, 0xf7, 0x0e,
- 0x00, 0x00, 0x0d, 0xda, 0xd0,
- 0x00, 0x00, 0x0f, 0x21, 0x8b,
- 0x00, 0x00, 0x0f, 0x72, 0x8d,
- 0x00, 0x00, 0x11, 0x29, 0x0f,
- 0x00, 0x00, 0x10, 0x90, 0xcc,
- 0x00, 0x00, 0x10, 0xd6, 0x0e,
- 0x00, 0x00, 0x11, 0x51, 0x11,
- 0x00, 0x00, 0x1b, 0x12, 0x4c,
- 0x00, 0x00, 0x14, 0xb1, 0x07,
- 0x00, 0x00, 0x16, 0x43, 0x0d,
- 0x00, 0x00, 0x16, 0xfd, 0x4c,
- 0x00, 0x00, 0x17, 0xa2, 0xd0,
- 0x00, 0x00, 0x19, 0x51, 0x0d,
- 0x00, 0x00, 0x19, 0x5f, 0x07,
- 0x00, 0x00, 0x19, 0x94, 0x90,
- 0x00, 0x00, 0x1a, 0x9b, 0x08,
- 0x00, 0x00, 0x0c, 0x14, 0x4b,
- 0x00, 0x00, 0x0c, 0x36, 0x4f,
- 0x00, 0x00, 0x1b, 0xa6, 0x88,
- 0x00, 0x00, 0x05, 0x45, 0x0d,
- 0x00, 0x00, 0x11, 0x75, 0x10,
- 0x00, 0x00, 0x17, 0xc7, 0x89,
- 0x00, 0x47, 0xdc, 0x74, 0x48,
- 0x00, 0x48, 0x4c, 0x7f, 0xc6,
- 0x00, 0x00, 0x0c, 0x8b, 0xc3,
- 0x00, 0x00, 0x1a, 0xa9, 0x49,
- 0x00, 0x00, 0x0a, 0x59, 0x09,
- 0x00, 0x00, 0x0c, 0xd6, 0xc5,
- 0x00, 0x00, 0x00, 0x69, 0x82,
- 0x00, 0x00, 0x00, 0x12, 0x89,
- 0x00, 0x00, 0x04, 0xe9, 0x0a,
- 0x00, 0x48, 0xc8, 0xc8, 0x46,
- 0x00, 0x02, 0x88, 0xc8, 0x4d,
- 0x00, 0x49, 0x52, 0x83, 0xd1,
- 0x00, 0x49, 0xd0, 0x49, 0x84,
- 0x00, 0x00, 0x1e, 0x70, 0x86,
- 0x00, 0x00, 0x02, 0x29, 0x4a,
- 0x00, 0x00, 0x01, 0xec, 0x4d,
- 0x00, 0x4a, 0x4e, 0x09, 0x8b,
- 0x00, 0x00, 0x1d, 0xa1, 0xc8,
- 0x00, 0x4a, 0x86, 0x0d, 0xc9,
- 0x00, 0x00, 0x01, 0xc9, 0x43,
- 0x00, 0x00, 0x14, 0x88, 0x0a,
- 0x00, 0x00, 0x0e, 0xff, 0x11,
- 0x00, 0x00, 0x0f, 0x03, 0x49,
- 0x00, 0x00, 0x0f, 0x10, 0x47,
- 0x00, 0x00, 0x0f, 0x1e, 0xc8,
- 0x00, 0x00, 0x0f, 0x24, 0x47,
- 0x00, 0x00, 0x06, 0xc5, 0x48,
- 0x00, 0x00, 0x00, 0x70, 0xcb,
- 0x00, 0x00, 0x13, 0x79, 0xc9,
- 0x00, 0x00, 0x0f, 0x91, 0xd0,
- 0x00, 0x00, 0x0f, 0x96, 0x8c,
- 0x00, 0x00, 0x0f, 0x9b, 0x09,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x4b, 0x4f, 0xa1, 0x4d,
- 0x00, 0x00, 0x0f, 0xb5, 0x88,
- 0x00, 0x00, 0x0f, 0xba, 0x85,
- 0x00, 0x00, 0x08, 0x80, 0x88,
- 0x00, 0x00, 0x19, 0xdc, 0x8a,
- 0x00, 0x00, 0x16, 0xab, 0x87,
- 0x00, 0x00, 0x00, 0x1f, 0x42,
- 0x00, 0x4b, 0xc2, 0x1e, 0x95,
- 0x00, 0x00, 0x14, 0x3e, 0x0a,
- 0x00, 0x00, 0x14, 0x99, 0x89,
- 0x00, 0x00, 0x0a, 0x5a, 0xc8,
- 0x00, 0x00, 0x11, 0xef, 0x09,
- 0x00, 0x00, 0x08, 0x69, 0x05,
- 0x00, 0x00, 0x12, 0x8e, 0x4a,
- 0x00, 0x00, 0x0f, 0xdc, 0xc7,
- 0x00, 0x00, 0x09, 0x98, 0xcf,
- 0x00, 0x00, 0x16, 0x47, 0x0b,
- 0x00, 0x00, 0x13, 0xba, 0x0c,
- 0x00, 0x00, 0x02, 0x8d, 0x52,
- 0x00, 0x00, 0x12, 0x6a, 0x06,
- 0x00, 0x02, 0x8f, 0xf5, 0x48,
- 0x00, 0x00, 0x08, 0x6f, 0x45,
- 0x00, 0x00, 0x12, 0x82, 0xc8,
- 0x00, 0x00, 0x10, 0x15, 0x4b,
- 0x00, 0x00, 0x0e, 0x32, 0xd1,
- 0x00, 0x00, 0x10, 0x05, 0x07,
- 0x00, 0x00, 0x05, 0x57, 0xca,
- 0x00, 0x00, 0x18, 0x0f, 0x0c,
- 0x00, 0x4c, 0x50, 0xa1, 0x05,
- 0x00, 0x00, 0x1a, 0xe7, 0xcc,
- 0x00, 0x4c, 0x91, 0x04, 0xce,
- 0x00, 0x00, 0x14, 0x09, 0x43,
- 0x00, 0x00, 0x19, 0x8e, 0x46,
- 0x00, 0x00, 0x04, 0x13, 0xc2,
- 0x00, 0x00, 0x11, 0x1e, 0x8b,
- 0x00, 0x00, 0x11, 0x37, 0x0a,
- 0x00, 0x02, 0x91, 0x44, 0xcc,
- 0x00, 0x00, 0x1d, 0xa0, 0xc8,
- 0x00, 0x00, 0x04, 0x99, 0x48,
- 0x00, 0x4d, 0x4a, 0x5b, 0x46,
- 0x00, 0x00, 0x12, 0x5f, 0x07,
- 0x00, 0x00, 0x01, 0xc5, 0x8e,
- 0x00, 0x00, 0x14, 0x63, 0x07,
- 0x00, 0x00, 0x01, 0x00, 0x02,
- 0x00, 0x00, 0x00, 0x48, 0x42,
- 0x00, 0x00, 0x05, 0xa5, 0x90,
- 0x00, 0x00, 0x06, 0xaa, 0xc7,
- 0x00, 0x00, 0x06, 0xab, 0xcf,
- 0x00, 0x00, 0x01, 0x5d, 0x46,
- 0x00, 0x00, 0x0a, 0xa4, 0xce,
- 0x00, 0x00, 0x0b, 0xc1, 0x0b,
- 0x00, 0x00, 0x05, 0xa3, 0xc8,
- 0x00, 0x00, 0x0a, 0x28, 0x89,
- 0x00, 0x00, 0x01, 0x52, 0x52,
- 0x00, 0x00, 0x11, 0xcd, 0x8d,
- 0x00, 0x00, 0x11, 0xd9, 0x08,
- 0x00, 0x00, 0x01, 0x26, 0xc9,
- 0x00, 0x00, 0x06, 0xaf, 0x4d,
- 0x00, 0x00, 0x06, 0xb9, 0x09,
- 0x00, 0x00, 0x06, 0xcd, 0x4b,
- 0x00, 0x00, 0x07, 0x0e, 0x88,
- 0x00, 0x00, 0x07, 0x7f, 0x88,
- 0x00, 0x00, 0x07, 0x94, 0x08,
- 0x00, 0x00, 0x07, 0xbc, 0x89,
- 0x00, 0x00, 0x07, 0xbe, 0x8a,
- 0x00, 0x00, 0x07, 0xca, 0x4c,
- 0x00, 0x00, 0x01, 0xbc, 0x0a,
- 0x00, 0x00, 0x0e, 0x30, 0x07,
- 0x00, 0x00, 0x0e, 0x82, 0x4a,
- 0x00, 0x00, 0x11, 0xc3, 0x47,
- 0x00, 0x00, 0x03, 0x98, 0x0a,
- 0x00, 0x00, 0x0f, 0x47, 0x88,
- 0x00, 0x00, 0x1d, 0x88, 0x0d,
- 0x00, 0x00, 0x0a, 0x14, 0x11,
- 0x00, 0x4d, 0xcd, 0x7d, 0xc6,
- 0x00, 0x00, 0x16, 0xcb, 0xcb,
- 0x00, 0x00, 0x1d, 0xaf, 0xcc,
- 0x00, 0x00, 0x01, 0xbe, 0x08,
- 0x00, 0x00, 0x1d, 0x75, 0x89,
- 0x00, 0x00, 0x16, 0x19, 0x4d,
- 0x00, 0x00, 0x07, 0x3d, 0x10,
- 0x00, 0x00, 0x06, 0xa2, 0x8c,
- 0x00, 0x00, 0x1e, 0x1e, 0x4d,
- 0x00, 0x00, 0x0f, 0xb6, 0x0f,
- 0x00, 0x00, 0x00, 0x5f, 0xc2,
- 0x00, 0x00, 0x09, 0xee, 0xcd,
- 0x00, 0x00, 0x00, 0x26, 0x42,
- 0x00, 0x00, 0x04, 0x1d, 0x82,
- 0x00, 0x00, 0x11, 0xc2, 0x8a,
- 0x00, 0x4e, 0x49, 0x48, 0xca,
- 0x00, 0x00, 0x02, 0xa0, 0x8a,
- 0x00, 0x4e, 0xc8, 0x49, 0xc8,
- 0x00, 0x00, 0x12, 0x41, 0x8a,
- 0x00, 0x00, 0x12, 0x45, 0x4b,
- 0x00, 0x00, 0x12, 0x55, 0x07,
- 0x00, 0x00, 0x1a, 0xb5, 0x4c,
- 0x00, 0x00, 0x19, 0x05, 0x0c,
- 0x00, 0x00, 0x12, 0x77, 0xca,
- 0x00, 0x4f, 0x12, 0x7a, 0x4f,
- 0x00, 0x00, 0x12, 0x7e, 0x0c,
- 0x00, 0x00, 0x12, 0x81, 0x07,
- 0x00, 0x00, 0x12, 0x94, 0x8e,
- 0x00, 0x4f, 0x9f, 0x43, 0x05,
- 0x00, 0x00, 0x1a, 0x20, 0xc8,
- 0x00, 0x00, 0x00, 0x36, 0x42,
- 0x00, 0x02, 0x81, 0xa6, 0xc3,
- 0x00, 0x35, 0xdc, 0x66, 0x0e,
- 0x00, 0x36, 0xdd, 0x42, 0x8e,
- 0x00, 0x37, 0xd4, 0x7e, 0x8a,
- 0x00, 0x38, 0xdc, 0x41, 0x4e,
- 0x00, 0x39, 0xd4, 0xde, 0x0e,
- 0x00, 0x3a, 0xd5, 0x91, 0x0c,
- 0x00, 0x02, 0x94, 0x92, 0x07,
- 0x00, 0x02, 0x95, 0x9d, 0x49,
- 0x00, 0x02, 0x83, 0xbf, 0xc3,
- 0x00, 0x3b, 0xdb, 0xf5, 0x4c,
- 0x00, 0x3c, 0xc0, 0x4c, 0x09,
- 0x00, 0x3d, 0xd0, 0x07, 0x49,
- 0x00, 0x3e, 0xd0, 0x25, 0xc9,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x1d, 0x65, 0x91,
- 0x00, 0x00, 0x1d, 0x41, 0xd1,
- 0x00, 0x00, 0x14, 0x7d, 0xcd,
- 0x00, 0x00, 0x1c, 0x40, 0x91,
- 0x00, 0x00, 0x14, 0xdd, 0x51,
- 0x00, 0x00, 0x15, 0x90, 0x4f,
- 0x00, 0x00, 0x1b, 0xf4, 0x8f,
- 0x00, 0x00, 0x1d, 0x06, 0xcc,
- 0x00, 0x00, 0x10, 0x06, 0x8c,
- 0x00, 0x00, 0x10, 0x25, 0x0c,
- 0x00, 0x00, 0x10, 0x6d, 0x8d,
- 0x00, 0x00, 0x19, 0x1c, 0x55,
- 0x00, 0x00, 0x13, 0x2f, 0x4c,
- 0x00, 0x00, 0x13, 0x7f, 0x0c,
- 0x00, 0x00, 0x14, 0x9c, 0x50,
- 0x00, 0x00, 0x15, 0x04, 0x0c,
- 0x00, 0x00, 0x1b, 0xb7, 0x0c,
- 0x00, 0x00, 0x1c, 0x63, 0x59,
- 0x00, 0x00, 0x1d, 0x25, 0xd9,
- 0x00, 0x00, 0x1e, 0xac, 0xd9,
- 0x00, 0x00, 0x00, 0x49, 0x54,
- 0x00, 0x00, 0x00, 0x7a, 0xd4,
- 0x00, 0x00, 0x00, 0x90, 0x54,
- 0x00, 0x00, 0x00, 0x9c, 0x54,
- 0x00, 0x00, 0x00, 0xa1, 0xd4,
- 0x00, 0x3f, 0xc0, 0x7d, 0x89,
- 0x00, 0x40, 0x80, 0x93, 0x09,
- 0x00, 0x41, 0xd3, 0x7f, 0xc9,
- 0x00, 0x36, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x37, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x00, 0x49, 0x4a,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x38, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x00, 0x49, 0x4a,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x39, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x3a, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x3b, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x00, 0x49, 0x4a,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x3c, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x00, 0x49, 0x4a,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x3d, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x3e, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x00, 0x49, 0x4a,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x3f, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x00, 0x49, 0x4a,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x40, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x41, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x42, 0x48, 0xbb, 0x89,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x00, 0x00, 0x49, 0x4a,
- 0x00, 0x00, 0x00, 0x49, 0x42,
- 0x00, 0x02, 0x80, 0x04, 0x01,
- 0x00, 0x00, 0x00, 0x6f, 0x45,
- 0x00, 0x00, 0x1b, 0xd0, 0x44,
- 0x00, 0x02, 0x81, 0x4f, 0xc3,
- 0x00, 0x02, 0x81, 0xda, 0x83,
- 0x00, 0x02, 0x86, 0x96, 0x83,
- 0x00, 0x00, 0x08, 0xe3, 0x44,
- 0x00, 0x00, 0x13, 0x7d, 0x08,
- 0x00, 0x00, 0x1c, 0x66, 0x0e,
- 0x00, 0x00, 0x1d, 0x42, 0x8e,
- 0x00, 0x00, 0x08, 0xb2, 0x8e,
- 0x00, 0x00, 0x14, 0x7e, 0x8a,
- 0x00, 0x00, 0x1c, 0x41, 0x4e,
- 0x00, 0x00, 0x14, 0xde, 0x0e,
- 0x00, 0x00, 0x15, 0x91, 0x0c,
- 0x00, 0x00, 0x1b, 0xf5, 0x4c,
- 0x00, 0x00, 0x00, 0x4c, 0x09,
- 0x00, 0x00, 0x10, 0x07, 0x49,
- 0x00, 0x00, 0x10, 0x25, 0xc9,
- 0x00, 0x00, 0x00, 0x7d, 0x89,
- 0x00, 0x00, 0x00, 0x93, 0x09,
- 0x00, 0x00, 0x13, 0x7f, 0xc9,
- 0x00, 0x00, 0x14, 0x9d, 0x0d,
- 0x00, 0x00, 0x00, 0x9f, 0x09,
- 0x00, 0x00, 0x00, 0xa4, 0x89,
- 0x00, 0x00, 0x17, 0x55, 0x44,
- 0x00, 0x00, 0x18, 0x23, 0x84,
- 0x00, 0x00, 0x19, 0x28, 0x04,
- 0x00, 0x00, 0x1a, 0x22, 0x84,
- 0x00, 0x00, 0x0b, 0x21, 0x44,
- 0x00, 0x00, 0x16, 0xb6, 0x04,
- 0x00, 0x00, 0x1e, 0x8e, 0x04,
- 0x00, 0x00, 0x18, 0x9f, 0x04,
- 0x00, 0x00, 0x01, 0x5c, 0x44,
- 0x00, 0x00, 0x04, 0xac, 0x44,
- 0x00, 0x00, 0x0f, 0xf0, 0x09,
- 0x00, 0x00, 0x0f, 0xf0, 0x0c,
- 0x00, 0x00, 0x15, 0x7f, 0x86,
- 0x00, 0x00, 0x15, 0x7f, 0x8e,
- 0x00, 0x00, 0x08, 0xe3, 0x44,
- 0x00, 0x02, 0x99, 0x59, 0x03,
- 0x00, 0x00, 0x02, 0xb4, 0x47,
- 0x00, 0x02, 0x88, 0xd8, 0x8c,
- 0x00, 0x00, 0x01, 0x5e, 0x42,
- 0x00, 0x00, 0x01, 0x5c, 0x43,
- 0x00, 0x00, 0x04, 0xac, 0x44,
- 0x00, 0x00, 0x00, 0x4c, 0x02,
- 0x00, 0x00, 0x03, 0x75, 0x07,
- 0x00, 0x00, 0x0f, 0xbc, 0x48,
- 0x00, 0x00, 0x1a, 0xe2, 0x88,
- 0x00, 0x00, 0x04, 0x60, 0x84,
- 0x00, 0x00, 0x00, 0x57, 0x46,
- 0x00, 0x00, 0x13, 0xa4, 0xc7,
- 0x00, 0x00, 0x0e, 0x2c, 0x44,
- 0x00, 0x00, 0x12, 0x73, 0x86,
- 0x00, 0x00, 0x01, 0x98, 0x82,
- 0x00, 0x00, 0x00, 0x8f, 0x81,
- 0x00, 0x00, 0x02, 0x25, 0x04,
- 0x00, 0x00, 0x05, 0x49, 0x86,
- 0x00, 0x00, 0x02, 0x73, 0x03,
- 0x00, 0x00, 0x00, 0x4c, 0x02,
- 0x00, 0x00, 0x01, 0x5c, 0x43,
- 0x00, 0x00, 0x12, 0x44, 0x03,
- 0x00, 0x00, 0x02, 0x8b, 0x43,
- 0x00, 0x00, 0x00, 0xe9, 0x83,
- 0x00, 0x00, 0x1c, 0x80, 0xc3,
- 0x00, 0x00, 0x02, 0x8d, 0x45,
- 0x00, 0x00, 0x07, 0xe5, 0xc2,
- 0x00, 0x00, 0x14, 0xeb, 0x82,
- 0x00, 0x00, 0x1a, 0x2b, 0xc8,
- 0x00, 0x00, 0x0f, 0x3c, 0x87,
- 0x00, 0x00, 0x05, 0x66, 0x03,
- 0x00, 0x00, 0x13, 0x7a, 0x47,
- 0x00, 0x00, 0x00, 0x23, 0xc2,
- 0x00, 0x00, 0x0d, 0x9c, 0x89,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x14, 0x82,
- 0x00, 0x00, 0x40, 0xfd, 0x02,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x40, 0x48, 0x42,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0xfc, 0x83,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x46, 0x27, 0x84,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0xbd, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x05, 0x03, 0xc4,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x54, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x5a, 0x5f, 0x47,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0xf7, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x5b, 0x6f, 0x0a,
- 0x00, 0x00, 0x41, 0x2f, 0xc5,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x54, 0xce, 0x5b, 0x4a,
- 0x00, 0x00, 0x00, 0x0c, 0x01,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x13, 0xe2, 0x82,
- 0x00, 0x55, 0xc6, 0x0b, 0x4b,
- 0x00, 0x56, 0x40, 0xf4, 0x44,
- 0x00, 0x00, 0x10, 0x48, 0xc5,
- 0x00, 0x02, 0x80, 0x2c, 0x45,
- 0x00, 0x00, 0x0f, 0xbc, 0x46,
- 0x00, 0x56, 0xc0, 0x2c, 0x45,
- 0x00, 0x00, 0x05, 0xfb, 0x83,
- 0x00, 0x00, 0x0b, 0x26, 0x83,
- 0x00, 0x00, 0x1a, 0x31, 0xc4,
- 0x00, 0x00, 0x1d, 0x45, 0xc3,
- 0x00, 0x00, 0x1a, 0x23, 0x45,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x01, 0xf4, 0xc7,
- 0x00, 0x00, 0x00, 0x66, 0x43,
- 0x00, 0x00, 0x02, 0xe7, 0x0d,
- 0x00, 0x57, 0xc3, 0xc6, 0x47,
- 0x00, 0x00, 0x00, 0x0c, 0xc6,
- 0x00, 0x58, 0x14, 0xdc, 0x45,
- 0x00, 0x00, 0x1c, 0xb4, 0xd2,
- 0x00, 0x00, 0x00, 0x0d, 0x87,
- 0x00, 0x00, 0x02, 0x6e, 0x4a,
- 0x00, 0x00, 0x02, 0x4d, 0xc8,
- 0x00, 0x00, 0x02, 0x6d, 0x47,
- 0x00, 0x00, 0x0f, 0xe0, 0x8a,
- 0x00, 0x00, 0x1b, 0x42, 0xc8,
- 0x00, 0x00, 0x07, 0x4a, 0x47,
- 0x00, 0x00, 0x15, 0xc1, 0x8f,
- 0x00, 0x00, 0x04, 0xed, 0x47,
- 0x00, 0x00, 0x07, 0x1b, 0x06,
- 0x00, 0x00, 0x14, 0x31, 0x10,
- 0x00, 0x02, 0x88, 0x6a, 0x46,
- 0x00, 0x00, 0x12, 0x4c, 0x8f,
- 0x00, 0x00, 0x00, 0xee, 0x89,
- 0x00, 0x00, 0x1e, 0x71, 0x04,
- 0x00, 0x58, 0x80, 0x0e, 0x4e,
- 0x00, 0x59, 0x40, 0xd8, 0x4c,
- 0x00, 0x00, 0x03, 0x78, 0x49,
- 0x00, 0x00, 0x07, 0x90, 0x46,
- 0x00, 0x00, 0x06, 0xbb, 0x89,
- 0x00, 0x00, 0x11, 0x6a, 0x86,
- 0x00, 0x00, 0x17, 0x3c, 0xc6,
- 0x00, 0x00, 0x0b, 0xc9, 0x8c,
- 0x00, 0x00, 0x12, 0xbe, 0xca,
- 0x00, 0x00, 0x0a, 0x26, 0x47,
- 0x00, 0x00, 0x11, 0x40, 0x0a,
- 0x00, 0x00, 0x14, 0x6c, 0xc9,
- 0x00, 0x00, 0x10, 0x36, 0x8c,
- 0x00, 0x00, 0x02, 0x41, 0x0a,
- 0x00, 0x00, 0x04, 0xde, 0xca,
- 0x00, 0x00, 0x1a, 0x23, 0x89,
- 0x00, 0x00, 0x1e, 0x70, 0x86,
- 0x00, 0x00, 0x0a, 0x27, 0x0a,
- 0x00, 0x00, 0x1a, 0xae, 0x8a,
- 0x00, 0x00, 0x0a, 0xd3, 0xca,
- 0x00, 0x00, 0x1f, 0x06, 0xc9,
- 0x00, 0x00, 0x0e, 0xf8, 0x88,
- 0x00, 0x00, 0x0e, 0xfb, 0x86,
- 0x00, 0x00, 0x0f, 0x4b, 0xcd,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x05, 0x5f, 0x8b,
- 0x00, 0x00, 0x0d, 0xd5, 0x85,
- 0x00, 0x5a, 0x52, 0x2c, 0x8c,
- 0x00, 0x00, 0x14, 0x42, 0x07,
- 0x00, 0x00, 0x1f, 0x01, 0x89,
- 0x00, 0x00, 0x0d, 0xa4, 0xc7,
- 0x00, 0x00, 0x0b, 0xa7, 0x14,
- 0x00, 0x00, 0x11, 0x7a, 0x8b,
- 0x00, 0x00, 0x0c, 0x98, 0x4a,
- 0x00, 0x00, 0x01, 0x50, 0xca,
- 0x00, 0x00, 0x0b, 0x50, 0x0d,
- 0x00, 0x02, 0x92, 0x47, 0x49,
- 0x00, 0x00, 0x11, 0xcb, 0x4c,
- 0x00, 0x00, 0x11, 0xd7, 0x0b,
- 0x00, 0x00, 0x16, 0x4c, 0x57,
- 0x00, 0x00, 0x16, 0x5e, 0xd5,
- 0x00, 0x00, 0x00, 0x79, 0x03,
- 0x00, 0x00, 0x00, 0x79, 0x03,
- 0x00, 0x00, 0x03, 0x41, 0x06,
- 0x00, 0x00, 0x00, 0x79, 0x03,
- 0x00, 0x59, 0xc0, 0x4b, 0x02,
- 0x00, 0x00, 0x02, 0x8d, 0x45,
- 0x00, 0x00, 0x0f, 0xbc, 0x48,
- 0x00, 0x00, 0x15, 0xb2, 0x43,
- 0x00, 0x00, 0x04, 0x9f, 0x04,
- 0x00, 0x00, 0x01, 0x78, 0x04,
- 0x00, 0x00, 0x01, 0x78, 0x0c,
- 0x00, 0x00, 0x06, 0x04, 0x83,
- 0x00, 0x02, 0x8a, 0xd4, 0x87,
- 0x00, 0x00, 0x17, 0x02, 0xcd,
- 0x00, 0x00, 0x01, 0x52, 0x05,
- 0x00, 0x02, 0x82, 0xa2, 0xc3,
- 0x00, 0x02, 0x82, 0xa2, 0xc8,
- 0x00, 0x00, 0x05, 0xc4, 0xc9,
- 0x00, 0x00, 0x0d, 0xfa, 0x89,
- 0x00, 0x00, 0x02, 0x8d, 0x45,
- 0x00, 0x00, 0x10, 0x15, 0x4b,
- 0x00, 0x00, 0x0d, 0x25, 0x4b,
- 0x00, 0x02, 0x90, 0x93, 0x43,
- 0x00, 0x02, 0x90, 0x93, 0x48,
- 0x00, 0x00, 0x00, 0x11, 0x06,
- 0x00, 0x02, 0x85, 0x26, 0xc7,
- 0x00, 0x00, 0x0a, 0x3f, 0xc7,
- 0x00, 0x5c, 0x17, 0x2b, 0xc9,
- 0x00, 0x00, 0x01, 0x08, 0x86,
- 0x00, 0x00, 0x05, 0x0b, 0x03,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x05, 0x54, 0xc4,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x00, 0xff, 0x43,
- 0x00, 0x00, 0x13, 0xd8, 0x45,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x28, 0xc3,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x4d, 0x20, 0x03,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x40, 0x28, 0xc3,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x43, 0x30, 0xc3,
- 0x00, 0x00, 0x41, 0x4f, 0x83,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x5f, 0x17, 0x2c, 0xc5,
- 0x00, 0x02, 0x82, 0xd6, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0xfd, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x60, 0xc2, 0x34, 0x43,
- 0x00, 0x00, 0x02, 0x8c, 0x49,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x43, 0xe1, 0xc3,
- 0x00, 0x62, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x44, 0xec, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x2b, 0x03,
- 0x00, 0x00, 0x5e, 0xda, 0x84,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x63, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x4b, 0xac, 0xc3,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x42, 0x10, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x64, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x02, 0x94, 0x92, 0x07,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x0b, 0xa9, 0x4b,
- 0x00, 0x66, 0x43, 0x28, 0xc6,
- 0x00, 0x00, 0x0f, 0x07, 0x44,
- 0x00, 0x00, 0x0d, 0xd5, 0x85,
- 0x00, 0x02, 0x87, 0xe2, 0x48,
- 0x00, 0x00, 0x02, 0x06, 0xcd,
- 0x00, 0x00, 0x1c, 0x74, 0x48,
- 0x00, 0x67, 0x44, 0x72, 0x85,
- 0x00, 0x00, 0x01, 0xec, 0xc4,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x1c, 0x36, 0xc3,
- 0x00, 0x00, 0x15, 0x7e, 0x85,
- 0x00, 0x00, 0x02, 0x32, 0xc2,
- 0x00, 0x00, 0x54, 0xdb, 0x45,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x6a, 0x5d, 0xdb, 0x4d,
- 0x00, 0x6a, 0xdd, 0x52, 0x0a,
- 0x00, 0x00, 0x00, 0x79, 0x02,
- 0x00, 0x00, 0x02, 0x14, 0x83,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x16, 0xbe, 0x4f,
- 0x00, 0x00, 0x00, 0xfd, 0x02,
- 0x00, 0x00, 0x08, 0xe3, 0x44,
- 0x00, 0x00, 0x04, 0xac, 0x44,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1c, 0x1f, 0x05,
- 0x00, 0x00, 0x53, 0x65, 0xc8,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x5c, 0x29, 0x86,
- 0x00, 0x00, 0x5d, 0x05, 0x86,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x50, 0xf1, 0x43,
- 0x00, 0x00, 0x5b, 0xe3, 0x09,
- 0x00, 0x00, 0x4b, 0x98, 0x15,
- 0x00, 0x00, 0x0b, 0x98, 0x1f,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x09, 0x5a, 0x87,
- 0x00, 0x00, 0x41, 0x9d, 0x52,
- 0x00, 0x00, 0x18, 0x77, 0x46,
- 0x00, 0x00, 0x18, 0x92, 0x85,
- 0x00, 0x00, 0x06, 0xb6, 0xca,
- 0x00, 0x00, 0x03, 0xe6, 0x49,
- 0x00, 0x00, 0x41, 0x9b, 0x0f,
- 0x00, 0x00, 0x0e, 0xfd, 0x47,
- 0x00, 0x00, 0x4e, 0x40, 0x84,
- 0x00, 0x00, 0x42, 0x43, 0xc5,
- 0x00, 0x00, 0x51, 0x94, 0x10,
- 0x00, 0x00, 0x45, 0x77, 0xc7,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x43, 0xd5, 0xc8,
- 0x00, 0x00, 0x04, 0xaa, 0xc6,
- 0x00, 0x00, 0x48, 0xcc, 0x4a,
- 0x00, 0x00, 0x42, 0x98, 0x04,
- 0x00, 0x00, 0x50, 0x9b, 0x43,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x50, 0x46, 0x8b,
- 0x00, 0x00, 0x1b, 0x94, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x18, 0xfc, 0x84,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x50, 0xe5, 0x43,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x03, 0x19, 0x83,
- 0x00, 0x00, 0x05, 0x8c, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x6f, 0x03, 0xbf, 0x05,
- 0x00, 0x00, 0x1d, 0x73, 0x46,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0xf7, 0x43,
- 0x00, 0x00, 0x41, 0xdd, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x05, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x03, 0x1e, 0x02,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x2c, 0x45,
- 0x00, 0x00, 0x07, 0x4f, 0xc9,
- 0x00, 0x02, 0x81, 0xec, 0xcb,
- 0x00, 0x00, 0x01, 0x5c, 0x43,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x42, 0x8f, 0x84,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x13, 0x2d, 0x09,
- 0x00, 0x00, 0x00, 0x62, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x00, 0x23, 0xc2,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x36, 0xc3,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0xc6, 0x42,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x56, 0xb2, 0x04,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x00, 0x8a, 0x42,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x15, 0xd3, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x52, 0x5e, 0x03,
- 0x00, 0x00, 0x05, 0x23, 0x83,
- 0x00, 0x00, 0x00, 0xf7, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x02, 0xf7, 0xc6,
- 0x00, 0x00, 0x52, 0xe0, 0xca,
- 0x00, 0x00, 0x54, 0xb2, 0xc9,
- 0x00, 0x00, 0x56, 0x40, 0x8b,
- 0x00, 0x00, 0x56, 0x49, 0xca,
- 0x00, 0x00, 0x56, 0xfb, 0x0a,
- 0x00, 0x00, 0x58, 0x02, 0x8b,
- 0x00, 0x00, 0x59, 0x4b, 0xca,
- 0x00, 0x00, 0x59, 0xb1, 0x8a,
- 0x00, 0x00, 0x5a, 0x1a, 0x4a,
- 0x00, 0x00, 0x5a, 0x1c, 0xcb,
- 0x00, 0x00, 0x5c, 0x7d, 0x89,
- 0x00, 0x00, 0x5d, 0xfe, 0x4a,
- 0x00, 0x00, 0x5e, 0x02, 0xcb,
- 0x00, 0x00, 0x5e, 0xba, 0xcb,
- 0x00, 0x00, 0x5f, 0x27, 0x4a,
- 0x00, 0x00, 0x00, 0x2d, 0xc2,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x2c, 0x4b,
- 0x00, 0x00, 0x12, 0xae, 0x87,
- 0x00, 0x00, 0x06, 0xc9, 0xc8,
- 0x00, 0x00, 0x06, 0xb0, 0x84,
- 0x00, 0x00, 0x1e, 0x86, 0x84,
- 0x00, 0x00, 0x09, 0xb1, 0x08,
- 0x00, 0x00, 0x0f, 0x2f, 0x86,
- 0x00, 0x00, 0x00, 0x72, 0x06,
- 0x00, 0x00, 0x03, 0xbb, 0x07,
- 0x00, 0x00, 0x12, 0x8c, 0x07,
- 0x00, 0x00, 0x0f, 0x85, 0x89,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x03, 0xe6, 0x44,
- 0x00, 0x00, 0x47, 0x25, 0x44,
- 0x00, 0x00, 0x40, 0xb0, 0x82,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x43, 0x1c, 0x05,
- 0x00, 0x00, 0x40, 0x28, 0xc3,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x45, 0x54, 0xc4,
- 0x00, 0x00, 0x4e, 0x40, 0x84,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x49, 0xf0, 0x85,
- 0x00, 0x00, 0x43, 0x30, 0xc3,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x47, 0xb4, 0x03,
- 0x00, 0x00, 0x41, 0xbc, 0x04,
- 0x00, 0x00, 0x5d, 0xe8, 0x44,
- 0x00, 0x00, 0x43, 0xc0, 0xc5,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x5c, 0x37, 0x04,
- 0x00, 0x00, 0x5c, 0x13, 0x86,
- 0x00, 0x00, 0x5a, 0x24, 0xc4,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x42, 0x91, 0x47,
- 0x00, 0x00, 0x44, 0xc7, 0x07,
- 0x00, 0x00, 0x45, 0x16, 0x04,
- 0x00, 0x00, 0x4f, 0x3d, 0x05,
- 0x00, 0x00, 0x59, 0x53, 0x05,
- 0x00, 0x00, 0x42, 0xf2, 0xc5,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x46, 0x8e, 0x48,
- 0x00, 0x00, 0x43, 0x86, 0x46,
- 0x00, 0x00, 0x56, 0x55, 0x88,
- 0x00, 0x00, 0x47, 0x64, 0x45,
- 0x00, 0x00, 0x4d, 0xfa, 0x85,
- 0x00, 0x00, 0x47, 0x3e, 0x04,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x50, 0xa9, 0x04,
- 0x00, 0x00, 0x57, 0xf2, 0x86,
- 0x00, 0x00, 0x41, 0x30, 0xc3,
- 0x00, 0x00, 0x41, 0xbc, 0x04,
- 0x00, 0x00, 0x47, 0x54, 0x85,
- 0x00, 0x00, 0x44, 0xe8, 0x84,
- 0x00, 0x00, 0x4a, 0xde, 0xc4,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x45, 0x71, 0x06,
- 0x00, 0x00, 0x5b, 0x5b, 0x46,
- 0x00, 0x00, 0x51, 0xa6, 0x05,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x0f, 0x31, 0x06,
- 0x00, 0x79, 0xc0, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xce, 0x04,
- 0x00, 0x00, 0x19, 0x13, 0x84,
- 0x00, 0x00, 0x06, 0x56, 0x85,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x7a, 0x40, 0x70, 0xc2,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x50, 0x70, 0xc6,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x1e, 0x70, 0x05,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x02, 0x8f, 0x8c, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x7b, 0xc0, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x47, 0x8c, 0xc3,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x40, 0xf4, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x12, 0x69, 0x47,
- 0x00, 0x00, 0x05, 0x86, 0x0a,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x7c, 0xc0, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x06, 0x82,
- 0x00, 0x00, 0x40, 0x9a, 0x02,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x40, 0xf7, 0x43,
- 0x00, 0x00, 0x50, 0x36, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x45, 0x54, 0xc4,
- 0x00, 0x00, 0x40, 0x3b, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x36, 0xc3,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x40, 0xf4, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x30, 0x03,
- 0x00, 0x00, 0x0c, 0xbd, 0x48,
- 0x00, 0x00, 0x00, 0x0f, 0x83,
- 0x00, 0x00, 0x14, 0x52, 0x13,
- 0x00, 0x00, 0x14, 0x8d, 0x14,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x02, 0x6e, 0x49,
- 0x00, 0x00, 0x11, 0xb6, 0x46,
- 0x00, 0x00, 0x11, 0x09, 0x0b,
- 0x00, 0x00, 0x03, 0x41, 0x06,
- 0x00, 0x00, 0x06, 0x1d, 0x47,
- 0x00, 0x00, 0x1d, 0xba, 0xc6,
- 0x00, 0x00, 0x00, 0x06, 0x49,
- 0x00, 0x00, 0x18, 0x54, 0x0a,
- 0x00, 0x00, 0x09, 0x7b, 0xcd,
- 0x00, 0x00, 0x0d, 0xc1, 0x0c,
- 0x00, 0x00, 0x11, 0xe3, 0x4a,
- 0x00, 0x00, 0x0a, 0x8a, 0xc8,
- 0x00, 0x00, 0x1c, 0xf8, 0x05,
- 0x00, 0x00, 0x02, 0x6e, 0x88,
- 0x00, 0x00, 0x01, 0x5d, 0x46,
- 0x00, 0x00, 0x1d, 0x1b, 0x86,
- 0x00, 0x00, 0x05, 0x41, 0x46,
- 0x00, 0x00, 0x40, 0x4c, 0x02,
- 0x00, 0x00, 0x00, 0x26, 0xc4,
- 0x00, 0x00, 0x17, 0x0c, 0xc6,
- 0x00, 0x02, 0x8e, 0x16, 0x0e,
- 0x00, 0x00, 0x1d, 0x51, 0x86,
- 0x00, 0x00, 0x07, 0x44, 0x0c,
- 0x00, 0x7f, 0x57, 0x2a, 0x4b,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x14, 0xfb, 0x4b,
- 0x00, 0x7f, 0xc8, 0xc0, 0x07,
- 0x00, 0x80, 0x48, 0xc0, 0x0a,
- 0x00, 0x80, 0xdd, 0x1a, 0xc4,
- 0x00, 0x00, 0x00, 0x50, 0xc9,
- 0x00, 0x00, 0x00, 0x95, 0x48,
- 0x00, 0x00, 0x1b, 0xd2, 0x07,
- 0x00, 0x00, 0x02, 0x57, 0x91,
- 0x00, 0x00, 0x13, 0x06, 0x4a,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x81, 0x48, 0xd7, 0x88,
- 0x00, 0x00, 0x0f, 0xe0, 0x05,
- 0x00, 0x00, 0x18, 0x96, 0xc8,
- 0x00, 0x00, 0x1b, 0x73, 0x44,
- 0x00, 0x00, 0x04, 0xeb, 0x05,
- 0x00, 0x00, 0x0a, 0xeb, 0x47,
- 0x00, 0x00, 0x1a, 0x9d, 0x0b,
- 0x00, 0x81, 0xc1, 0xf1, 0x09,
- 0x00, 0x00, 0x01, 0x15, 0xc5,
- 0x00, 0x00, 0x17, 0x02, 0xc6,
- 0x00, 0x00, 0x16, 0x34, 0x86,
- 0x00, 0x00, 0x09, 0xd2, 0x8a,
- 0x00, 0x00, 0x10, 0x32, 0x0c,
- 0x00, 0x00, 0x1c, 0x13, 0x03,
- 0x00, 0x00, 0x1e, 0x86, 0x84,
- 0x00, 0x82, 0x5e, 0xd4, 0x84,
- 0x00, 0x00, 0x05, 0xc4, 0xc9,
- 0x00, 0x00, 0x10, 0x0e, 0xc7,
- 0x00, 0x00, 0x05, 0x88, 0xca,
- 0x00, 0x02, 0x8e, 0x5a, 0x49,
- 0x00, 0x00, 0x00, 0x06, 0x05,
- 0x00, 0x00, 0x10, 0xf3, 0x03,
- 0x00, 0x82, 0xc3, 0x70, 0x47,
- 0x00, 0x00, 0x00, 0x1f, 0x45,
- 0x00, 0x02, 0x96, 0xca, 0x86,
- 0x00, 0x02, 0x80, 0xc4, 0x06,
- 0x00, 0x00, 0x15, 0xce, 0x8c,
- 0x00, 0x00, 0x10, 0xc3, 0x48,
- 0x00, 0x83, 0x13, 0x08, 0x45,
- 0x00, 0x83, 0x84, 0x13, 0xc3,
- 0x00, 0x00, 0x11, 0x0f, 0xc4,
- 0x00, 0x00, 0x06, 0x94, 0x8b,
- 0x00, 0x00, 0x12, 0x1e, 0x0b,
- 0x00, 0x84, 0x44, 0xf0, 0x4c,
- 0x00, 0x02, 0x82, 0x61, 0x43,
- 0x00, 0x00, 0x0c, 0xef, 0x48,
- 0x00, 0x00, 0x0d, 0x25, 0x4b,
- 0x00, 0x00, 0x0a, 0xea, 0x09,
- 0x00, 0x00, 0x0d, 0x91, 0x43,
- 0x00, 0x00, 0x12, 0x48, 0x48,
- 0x00, 0x02, 0x82, 0x28, 0x86,
- 0x00, 0x00, 0x09, 0x56, 0x07,
- 0x00, 0x84, 0xd6, 0x19, 0x49,
- 0x00, 0x00, 0x03, 0x01, 0x47,
- 0x00, 0x87, 0x4e, 0xba, 0x48,
- 0x00, 0x00, 0x0a, 0x19, 0xc4,
- 0x00, 0x00, 0x11, 0x78, 0xc7,
- 0x00, 0x00, 0x0e, 0x04, 0x0a,
- 0x00, 0x87, 0xd6, 0x51, 0x88,
- 0x00, 0x00, 0x11, 0xd3, 0xcd,
- 0x00, 0x00, 0x1c, 0x6e, 0x09,
- 0x00, 0x00, 0x1d, 0x78, 0x08,
- 0x00, 0x00, 0x01, 0x5c, 0x43,
- 0x00, 0x02, 0x84, 0x93, 0xc9,
- 0x00, 0x00, 0x04, 0xac, 0x44,
- 0x00, 0x00, 0x00, 0x97, 0xc5,
- 0x00, 0x00, 0x03, 0xc5, 0x83,
- 0x00, 0x00, 0x03, 0x41, 0x06,
- 0x00, 0x00, 0x00, 0x30, 0x42,
- 0x00, 0x00, 0x01, 0x5c, 0x44,
- 0x00, 0x00, 0x02, 0xa3, 0x85,
- 0x00, 0x00, 0x1a, 0xa8, 0x84,
- 0x00, 0x02, 0x82, 0xdb, 0x83,
- 0x00, 0x00, 0x01, 0xa6, 0xc7,
- 0x00, 0x85, 0x41, 0xa6, 0xc3,
- 0x00, 0x85, 0xdc, 0xc3, 0x86,
- 0x00, 0x86, 0x43, 0xab, 0x84,
- 0x00, 0x86, 0xcf, 0xdd, 0xc7,
- 0x00, 0x00, 0x0f, 0xbc, 0x44,
- 0x00, 0x00, 0x12, 0x5f, 0x07,
- 0x00, 0x00, 0x0f, 0xbc, 0x44,
- 0x00, 0x00, 0x12, 0x5f, 0x07,
- 0x00, 0x00, 0x0f, 0xbc, 0x44,
- 0x00, 0x00, 0x0f, 0xbc, 0x44,
- 0x00, 0x00, 0x12, 0x5f, 0x07,
- 0x00, 0x00, 0x1d, 0xee, 0x09,
- 0x00, 0x00, 0x00, 0x00, 0x41,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x30, 0x42,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x58, 0x7f, 0xcf,
- 0x00, 0x00, 0x58, 0x83, 0x8e,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x04, 0x90, 0x87,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1d, 0x50, 0xc4,
- 0x00, 0x00, 0x1d, 0x47, 0x04,
- 0x00, 0x00, 0x00, 0x0a, 0x04,
- 0x00, 0x00, 0x42, 0x0f, 0xc3,
- 0x00, 0x00, 0x5d, 0x57, 0x87,
- 0x00, 0x00, 0x40, 0x47, 0x82,
- 0x00, 0x00, 0x47, 0x9f, 0x49,
- 0x00, 0x00, 0x40, 0x0b, 0x02,
- 0x00, 0x00, 0x45, 0xa8, 0x8b,
- 0x00, 0x00, 0x4e, 0xd3, 0xca,
- 0x00, 0x00, 0x52, 0x89, 0xc9,
- 0x00, 0x00, 0x40, 0x05, 0x42,
- 0x00, 0x00, 0x5b, 0x77, 0x06,
- 0x00, 0x00, 0x45, 0xfd, 0x95,
- 0x00, 0x00, 0x45, 0xa9, 0xd5,
- 0x00, 0x00, 0x46, 0x4e, 0xd3,
- 0x00, 0x00, 0x45, 0xaf, 0x53,
- 0x00, 0x00, 0x41, 0x63, 0x02,
- 0x00, 0x00, 0x42, 0x86, 0x45,
- 0x00, 0x00, 0x5c, 0x34, 0x0c,
- 0x00, 0x00, 0x48, 0x15, 0xcb,
- 0x00, 0x00, 0x45, 0x9e, 0x85,
- 0x00, 0x00, 0x40, 0x61, 0x82,
- 0x00, 0x00, 0x4f, 0x45, 0xc2,
- 0x00, 0x00, 0x4f, 0x45, 0xc6,
- 0x00, 0x00, 0x40, 0x28, 0xc2,
- 0x00, 0x00, 0x49, 0x61, 0x86,
- 0x00, 0x00, 0x43, 0xe8, 0x8d,
- 0x00, 0x00, 0x45, 0x8d, 0x4c,
- 0x00, 0x00, 0x5c, 0x75, 0x04,
- 0x00, 0x00, 0x40, 0x08, 0x82,
- 0x00, 0x00, 0x41, 0x03, 0x42,
- 0x00, 0x00, 0x48, 0xc7, 0x48,
- 0x00, 0x00, 0x40, 0x02, 0x02,
- 0x00, 0x00, 0x53, 0xd3, 0xc6,
- 0x00, 0x00, 0x59, 0xd5, 0x4f,
- 0x00, 0x00, 0x5d, 0x47, 0xd0,
- 0x00, 0x00, 0x4f, 0xd0, 0x44,
- 0x00, 0x00, 0x45, 0xff, 0x55,
- 0x00, 0x00, 0x46, 0x50, 0x53,
- 0x00, 0x00, 0x41, 0x4e, 0xc3,
- 0x00, 0x00, 0x55, 0x89, 0x8a,
- 0x00, 0x00, 0x58, 0xb6, 0xc7,
- 0x00, 0x00, 0x59, 0x24, 0x09,
- 0x00, 0x00, 0x51, 0x3d, 0x87,
- 0x00, 0x00, 0x46, 0x97, 0x82,
- 0x00, 0x00, 0x40, 0x02, 0x82,
- 0x00, 0x00, 0x5c, 0xaa, 0x06,
- 0x00, 0x00, 0x40, 0x62, 0x02,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0xb4, 0xc2,
- 0x00, 0x00, 0x40, 0xb7, 0x82,
- 0x00, 0x00, 0x40, 0xb7, 0x87,
- 0x00, 0x00, 0x5a, 0xeb, 0x47,
- 0x00, 0x00, 0x5a, 0xeb, 0x51,
- 0x00, 0x00, 0x41, 0xd0, 0xc5,
- 0x00, 0x00, 0x41, 0xd0, 0xce,
- 0x00, 0x00, 0x41, 0xe8, 0xcf,
- 0x00, 0x00, 0x40, 0x2f, 0xc2,
- 0x00, 0x00, 0x42, 0x10, 0x07,
- 0x00, 0x00, 0x42, 0x11, 0xc8,
- 0x00, 0x00, 0x40, 0x10, 0x02,
- 0x00, 0x00, 0x42, 0x03, 0x02,
- 0x00, 0x00, 0x41, 0x05, 0x06,
- 0x00, 0x00, 0x41, 0x05, 0x0f,
- 0x00, 0x00, 0x47, 0x11, 0xd0,
- 0x00, 0x00, 0x42, 0xd3, 0xc2,
- 0x00, 0x00, 0x40, 0x17, 0x82,
- 0x00, 0x00, 0x42, 0x92, 0x88,
- 0x00, 0x00, 0x40, 0x17, 0x83,
- 0x00, 0x00, 0x46, 0xd7, 0x48,
- 0x00, 0x00, 0x43, 0xde, 0xcd,
- 0x00, 0x00, 0x40, 0x2f, 0x03,
- 0x00, 0x00, 0x5d, 0x03, 0xc8,
- 0x00, 0x00, 0x46, 0x22, 0x0f,
- 0x00, 0x00, 0x46, 0x25, 0xce,
- 0x00, 0x00, 0x41, 0x4d, 0x8a,
- 0x00, 0x00, 0x4e, 0xeb, 0xd1,
- 0x00, 0x00, 0x4e, 0xf0, 0x50,
- 0x00, 0x00, 0x4d, 0xee, 0xcd,
- 0x00, 0x00, 0x4d, 0xf2, 0x0c,
- 0x00, 0x00, 0x58, 0x1d, 0xc7,
- 0x00, 0x00, 0x55, 0x8b, 0x07,
- 0x00, 0x00, 0x5c, 0x2a, 0x49,
- 0x00, 0x00, 0x41, 0x70, 0xc2,
- 0x00, 0x00, 0x40, 0x11, 0xc2,
- 0x00, 0x00, 0x46, 0x3b, 0x4c,
- 0x00, 0x00, 0x46, 0x3e, 0x4b,
- 0x00, 0x00, 0x40, 0x34, 0x02,
- 0x00, 0x00, 0x59, 0xba, 0x06,
- 0x00, 0x00, 0x40, 0x41, 0x02,
- 0x00, 0x00, 0x40, 0x04, 0x82,
- 0x00, 0x00, 0x55, 0x47, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x42, 0xec, 0x04,
- 0x00, 0x00, 0x43, 0xbc, 0xc7,
- 0x00, 0x00, 0x42, 0xdd, 0x42,
- 0x00, 0x00, 0x44, 0x3c, 0x87,
- 0x00, 0x00, 0x44, 0x6f, 0x47,
- 0x00, 0x00, 0x44, 0x29, 0x82,
- 0x00, 0x00, 0x41, 0x18, 0x02,
- 0x00, 0x00, 0x44, 0x8f, 0x45,
- 0x00, 0x00, 0x45, 0x43, 0x02,
- 0x00, 0x00, 0x4b, 0x52, 0xce,
- 0x00, 0x00, 0x57, 0xb2, 0xcd,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x49, 0x36, 0xce,
- 0x00, 0x00, 0x4d, 0x49, 0xcd,
- 0x00, 0x00, 0x57, 0x42, 0x43,
- 0x00, 0x00, 0x40, 0x24, 0x82,
- 0x00, 0x00, 0x44, 0x0d, 0xc4,
- 0x00, 0x00, 0x41, 0x32, 0x42,
- 0x00, 0x00, 0x40, 0x24, 0x42,
- 0x00, 0x00, 0x5a, 0xa7, 0xc5,
- 0x00, 0x00, 0x44, 0xc1, 0xc7,
- 0x00, 0x00, 0x44, 0xdc, 0x42,
- 0x00, 0x00, 0x40, 0xfd, 0x02,
- 0x00, 0x00, 0x45, 0x3c, 0x47,
- 0x00, 0x00, 0x45, 0xc8, 0x48,
- 0x00, 0x00, 0x45, 0x96, 0xc2,
- 0x00, 0x00, 0x48, 0x6f, 0xc6,
- 0x00, 0x00, 0x46, 0x39, 0xcc,
- 0x00, 0x00, 0x46, 0x3d, 0x0b,
- 0x00, 0x00, 0x40, 0x81, 0x02,
- 0x00, 0x00, 0x46, 0xdb, 0x4f,
- 0x00, 0x00, 0x46, 0xdf, 0x10,
- 0x00, 0x00, 0x46, 0xe3, 0x0f,
- 0x00, 0x00, 0x46, 0xe6, 0xd5,
- 0x00, 0x00, 0x46, 0xec, 0x14,
- 0x00, 0x00, 0x46, 0xf1, 0x0e,
- 0x00, 0x00, 0x46, 0xf4, 0x8e,
- 0x00, 0x00, 0x46, 0xf8, 0x0f,
- 0x00, 0x00, 0x46, 0xfb, 0xce,
- 0x00, 0x00, 0x46, 0xff, 0x54,
- 0x00, 0x00, 0x47, 0x04, 0x53,
- 0x00, 0x00, 0x47, 0x09, 0x0d,
- 0x00, 0x00, 0x48, 0x58, 0x09,
- 0x00, 0x00, 0x49, 0x94, 0x43,
- 0x00, 0x00, 0x40, 0x30, 0xc2,
- 0x00, 0x00, 0x56, 0x06, 0x05,
- 0x00, 0x00, 0x40, 0x45, 0xc6,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x4f, 0x7c, 0x47,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x06, 0x42,
- 0x00, 0x00, 0x43, 0x34, 0x88,
- 0x00, 0x00, 0x4e, 0xee, 0x11,
- 0x00, 0x00, 0x4e, 0xf2, 0x50,
- 0x00, 0x00, 0x40, 0x6a, 0x42,
- 0x00, 0x00, 0x49, 0x87, 0x07,
- 0x00, 0x00, 0x40, 0x2b, 0x82,
- 0x00, 0x00, 0x45, 0x98, 0x87,
- 0x00, 0x00, 0x40, 0x69, 0x82,
- 0x00, 0x00, 0x53, 0xa6, 0x09,
- 0x00, 0x00, 0x4f, 0x45, 0x87,
- 0x00, 0x00, 0x49, 0x8b, 0xc8,
- 0x00, 0x00, 0x5c, 0xf6, 0x06,
- 0x00, 0x00, 0x50, 0x35, 0x43,
- 0x00, 0x00, 0x59, 0x88, 0x05,
- 0x00, 0x00, 0x42, 0xa2, 0xc2,
- 0x00, 0x00, 0x40, 0x04, 0xc2,
- 0x00, 0x00, 0x5c, 0x5e, 0xc5,
- 0x00, 0x00, 0x5d, 0xbd, 0x85,
- 0x00, 0x00, 0x40, 0x12, 0x82,
- 0x00, 0x00, 0x41, 0x1f, 0x83,
- 0x00, 0x00, 0x4a, 0x1a, 0x47,
- 0x00, 0x00, 0x5d, 0x1e, 0x07,
- 0x00, 0x00, 0x40, 0x17, 0x02,
- 0x00, 0x00, 0x58, 0x78, 0xc4,
- 0x00, 0x00, 0x40, 0x77, 0x83,
- 0x00, 0x00, 0x5e, 0xe1, 0x09,
- 0x00, 0x00, 0x40, 0x77, 0x88,
- 0x00, 0x00, 0x40, 0x6b, 0x42,
- 0x00, 0x00, 0x40, 0x95, 0x82,
- 0x00, 0x00, 0x42, 0xbb, 0x07,
- 0x00, 0x00, 0x5b, 0x6b, 0x85,
- 0x00, 0x00, 0x45, 0x84, 0x88,
- 0x00, 0x00, 0x42, 0x83, 0x47,
- 0x00, 0x00, 0x42, 0x5a, 0xc3,
- 0x00, 0x00, 0x57, 0x0b, 0x46,
- 0x00, 0x00, 0x4d, 0xed, 0x4d,
- 0x00, 0x00, 0x4d, 0xf0, 0xcc,
- 0x00, 0x00, 0x58, 0x98, 0xc6,
- 0x00, 0x00, 0x40, 0x2f, 0x82,
- 0x00, 0x00, 0x40, 0x20, 0x42,
- 0x00, 0x00, 0x40, 0x0e, 0xc2,
- 0x00, 0x00, 0x46, 0x20, 0x8f,
- 0x00, 0x00, 0x46, 0x24, 0x8e,
- 0x00, 0x00, 0x59, 0x53, 0x87,
- 0x00, 0x00, 0x40, 0x1e, 0x42,
- 0x00, 0x00, 0x43, 0xd4, 0xc5,
- 0x00, 0x00, 0x43, 0xd4, 0xc6,
- 0x00, 0x00, 0x41, 0xcf, 0x02,
- 0x00, 0x00, 0x40, 0x15, 0x02,
- 0x00, 0x00, 0x49, 0xa0, 0xc6,
- 0x00, 0x00, 0x44, 0x5e, 0x43,
- 0x00, 0x00, 0x5c, 0x23, 0x46,
- 0x00, 0x00, 0x4e, 0x2c, 0x05,
- 0x00, 0x00, 0x4e, 0x2c, 0x0d,
- 0x00, 0x00, 0x4e, 0x39, 0x15,
- 0x00, 0x00, 0x4e, 0x4f, 0x0c,
- 0x00, 0x00, 0x4e, 0x52, 0x8d,
- 0x00, 0x00, 0x4e, 0x55, 0xd2,
- 0x00, 0x00, 0x40, 0x3c, 0x42,
- 0x00, 0x00, 0x47, 0xd2, 0x02,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x40, 0x1b, 0x82,
- 0x00, 0x00, 0x50, 0x04, 0x86,
- 0x00, 0x00, 0x5c, 0xda, 0x46,
- 0x00, 0x8a, 0x49, 0x5a, 0x04,
- 0x00, 0x00, 0x40, 0x1f, 0x42,
- 0x00, 0x00, 0x40, 0x46, 0x46,
- 0x00, 0x00, 0x40, 0x39, 0x02,
- 0x00, 0x00, 0x45, 0xf3, 0x85,
- 0x00, 0x00, 0x40, 0x62, 0x82,
- 0x00, 0x00, 0x4b, 0x54, 0x09,
- 0x00, 0x00, 0x41, 0x63, 0x4c,
- 0x00, 0x00, 0x41, 0x66, 0x8b,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x45, 0xdf, 0x08,
- 0x00, 0x00, 0x40, 0x58, 0x42,
- 0x00, 0x00, 0x40, 0x67, 0x02,
- 0x00, 0x00, 0x48, 0x13, 0x86,
- 0x00, 0x00, 0x48, 0xbb, 0x05,
- 0x00, 0x00, 0x58, 0xcf, 0x47,
- 0x00, 0x00, 0x53, 0xe8, 0x45,
- 0x00, 0x00, 0x47, 0x46, 0xc5,
- 0x00, 0x00, 0x40, 0x72, 0x02,
- 0x00, 0x00, 0x42, 0x6f, 0x42,
- 0x00, 0x00, 0x40, 0x20, 0xc2,
- 0x00, 0x00, 0x4b, 0x0d, 0x07,
- 0x00, 0x00, 0x50, 0x71, 0x8d,
- 0x00, 0x00, 0x50, 0x75, 0x0c,
- 0x00, 0x00, 0x43, 0x74, 0x47,
- 0x00, 0x00, 0x48, 0x6f, 0x42,
- 0x00, 0x00, 0x40, 0x14, 0x02,
- 0x00, 0x00, 0x5c, 0xac, 0x08,
- 0x00, 0x00, 0x40, 0x14, 0x08,
- 0x00, 0x00, 0x4e, 0xc9, 0xc8,
- 0x00, 0x00, 0x5b, 0xa6, 0x44,
- 0x00, 0x00, 0x5e, 0xf9, 0x87,
- 0x00, 0x00, 0x50, 0x4a, 0x03,
- 0x00, 0x00, 0x47, 0x1c, 0x02,
- 0x00, 0x00, 0x40, 0xbc, 0x02,
- 0x00, 0x00, 0x50, 0x7f, 0x09,
- 0x00, 0x00, 0x55, 0xf4, 0x07,
- 0x00, 0x00, 0x40, 0x2b, 0x02,
- 0x00, 0x00, 0x48, 0x19, 0xc5,
- 0x00, 0x00, 0x41, 0xa0, 0x02,
- 0x00, 0x00, 0x41, 0xec, 0x82,
- 0x00, 0x00, 0x50, 0xdd, 0x03,
- 0x00, 0x00, 0x50, 0xdd, 0x06,
- 0x00, 0x00, 0x50, 0xe2, 0x42,
- 0x00, 0x00, 0x51, 0x0e, 0x82,
- 0x00, 0x00, 0x40, 0x04, 0x02,
- 0x00, 0x00, 0x40, 0x47, 0x46,
- 0x00, 0x00, 0x44, 0x0d, 0x07,
- 0x00, 0x00, 0x40, 0x11, 0x82,
- 0x00, 0x00, 0x40, 0x09, 0x02,
- 0x00, 0x00, 0x46, 0xd5, 0x8f,
- 0x00, 0x00, 0x49, 0x35, 0x0d,
- 0x00, 0x00, 0x49, 0x6b, 0x0e,
- 0x00, 0x00, 0x4d, 0x48, 0x4c,
- 0x00, 0x00, 0x40, 0x8a, 0xc2,
- 0x00, 0x00, 0x40, 0x2b, 0x42,
- 0x00, 0x00, 0x5c, 0xf4, 0x45,
- 0x00, 0x00, 0x52, 0xc5, 0x06,
- 0x00, 0x00, 0x42, 0x54, 0x02,
- 0x00, 0x00, 0x40, 0x3f, 0x42,
- 0x00, 0x00, 0x40, 0x06, 0x82,
- 0x00, 0x00, 0x42, 0x82, 0xc4,
- 0x00, 0x00, 0x4d, 0xfa, 0x04,
- 0x00, 0x00, 0x55, 0xcc, 0xc6,
- 0x00, 0x00, 0x40, 0x48, 0x42,
- 0x00, 0x00, 0x49, 0x3f, 0xc7,
- 0x00, 0x00, 0x40, 0x48, 0x43,
- 0x00, 0x00, 0x43, 0xb6, 0xc8,
- 0x00, 0x00, 0x43, 0xda, 0x88,
- 0x00, 0x00, 0x44, 0x78, 0xc7,
- 0x00, 0x00, 0x46, 0x11, 0xc6,
- 0x00, 0x00, 0x40, 0x1a, 0x42,
- 0x00, 0x00, 0x40, 0x40, 0x83,
- 0x00, 0x00, 0x40, 0x40, 0x87,
- 0x00, 0x00, 0x48, 0x24, 0x86,
- 0x00, 0x00, 0x4d, 0x99, 0x85,
- 0x00, 0x00, 0x48, 0x35, 0x08,
- 0x00, 0x00, 0x40, 0x3c, 0x82,
- 0x00, 0x00, 0x4f, 0x5f, 0xc7,
- 0x00, 0x00, 0x40, 0xe7, 0xc2,
- 0x00, 0x00, 0x4b, 0x26, 0xc2,
- 0x00, 0x00, 0x40, 0x2e, 0x82,
- 0x00, 0x00, 0x41, 0xea, 0x49,
- 0x00, 0x00, 0x40, 0xde, 0x02,
- 0x00, 0x00, 0x01, 0x8b, 0x88,
- 0x00, 0x00, 0x40, 0x19, 0xc2,
- 0x00, 0x00, 0x45, 0xb3, 0x83,
- 0x00, 0x00, 0x55, 0x01, 0xc7,
- 0x00, 0x00, 0x40, 0x2a, 0x42,
- 0x00, 0x00, 0x41, 0x64, 0xcc,
- 0x00, 0x00, 0x41, 0x67, 0xcb,
- 0x00, 0x00, 0x58, 0x99, 0x46,
- 0x00, 0x00, 0x51, 0x24, 0x85,
- 0x00, 0x8a, 0xc0, 0x6b, 0x83,
- 0x00, 0x00, 0x40, 0x1b, 0x42,
- 0x00, 0x00, 0x40, 0xc6, 0x42,
- 0x00, 0x00, 0x4d, 0x0a, 0x06,
- 0x00, 0x00, 0x42, 0x74, 0xc3,
- 0x00, 0x00, 0x56, 0xaf, 0x87,
- 0x00, 0x00, 0x46, 0xba, 0xc2,
- 0x00, 0x00, 0x40, 0x08, 0xc2,
- 0x00, 0x00, 0x45, 0xfc, 0x15,
- 0x00, 0x00, 0x45, 0xab, 0x95,
- 0x00, 0x00, 0x46, 0x4d, 0x93,
- 0x00, 0x00, 0x45, 0xb0, 0xd3,
- 0x00, 0x00, 0x48, 0x44, 0x47,
- 0x00, 0x00, 0x4a, 0x7e, 0x51,
- 0x00, 0x00, 0x4c, 0x22, 0x90,
- 0x00, 0x00, 0x59, 0x60, 0x92,
- 0x00, 0x00, 0x4c, 0x16, 0xd1,
- 0x00, 0x00, 0x4c, 0x4d, 0x48,
- 0x00, 0x00, 0x4c, 0x4d, 0x50,
- 0x00, 0x00, 0x4c, 0x81, 0x0f,
- 0x00, 0x00, 0x4e, 0xd1, 0x93,
- 0x00, 0x00, 0x5a, 0x81, 0x52,
- 0x00, 0x00, 0x4e, 0x28, 0x10,
- 0x00, 0x00, 0x4e, 0x71, 0xcf,
- 0x00, 0x00, 0x4e, 0x9f, 0xd2,
- 0x00, 0x00, 0x4e, 0xab, 0x51,
- 0x00, 0x00, 0x4e, 0xd9, 0x13,
- 0x00, 0x00, 0x4e, 0xe2, 0x52,
- 0x00, 0x00, 0x4f, 0x1b, 0x0f,
- 0x00, 0x00, 0x4f, 0x2c, 0x0e,
- 0x00, 0x00, 0x4f, 0x53, 0xd2,
- 0x00, 0x00, 0x52, 0x87, 0xd1,
- 0x00, 0x00, 0x50, 0x63, 0xcf,
- 0x00, 0x00, 0x50, 0xbb, 0x4e,
- 0x00, 0x00, 0x50, 0xc5, 0x11,
- 0x00, 0x00, 0x50, 0xe5, 0xd0,
- 0x00, 0x00, 0x51, 0x1a, 0x12,
- 0x00, 0x00, 0x51, 0x32, 0xd1,
- 0x00, 0x00, 0x53, 0x0f, 0x10,
- 0x00, 0x00, 0x53, 0xdb, 0x4f,
- 0x00, 0x00, 0x57, 0xeb, 0x11,
- 0x00, 0x00, 0x5e, 0x22, 0x50,
- 0x00, 0x00, 0x52, 0xa5, 0x06,
- 0x00, 0x00, 0x53, 0x85, 0xc7,
- 0x00, 0x00, 0x40, 0xcf, 0x47,
- 0x00, 0x00, 0x40, 0x40, 0x42,
- 0x00, 0x00, 0x49, 0x02, 0x85,
- 0x00, 0x00, 0x51, 0x91, 0x87,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x40, 0x2d, 0x02,
- 0x00, 0x00, 0x49, 0x4a, 0xc5,
- 0x00, 0x00, 0x42, 0x36, 0x83,
- 0x00, 0x00, 0x5d, 0xe2, 0xc6,
- 0x00, 0x00, 0x50, 0x73, 0x4d,
- 0x00, 0x00, 0x50, 0x76, 0x8c,
- 0x00, 0x00, 0x40, 0x46, 0xc2,
- 0x00, 0x00, 0x5c, 0x32, 0x8b,
- 0x00, 0x00, 0x48, 0x14, 0x8a,
- 0x00, 0x00, 0x48, 0x3b, 0x8a,
- 0x00, 0x00, 0x43, 0x0b, 0x09,
- 0x00, 0x00, 0x4c, 0xfa, 0x4b,
- 0x00, 0x00, 0x50, 0x5e, 0xcd,
- 0x00, 0x00, 0x42, 0x84, 0x8c,
- 0x00, 0x00, 0x51, 0x98, 0x8a,
- 0x00, 0x00, 0x44, 0x64, 0xcc,
- 0x00, 0x00, 0x44, 0x9f, 0xcb,
- 0x00, 0x00, 0x45, 0x9c, 0xcc,
- 0x00, 0x00, 0x47, 0xf5, 0xce,
- 0x00, 0x00, 0x48, 0x4e, 0x8b,
- 0x00, 0x00, 0x4a, 0x45, 0xcc,
- 0x00, 0x00, 0x4c, 0xad, 0x83,
- 0x00, 0x00, 0x52, 0x5e, 0x86,
- 0x00, 0x00, 0x56, 0x51, 0x82,
- 0x00, 0x00, 0x42, 0x19, 0x02,
- 0x00, 0x00, 0x45, 0xeb, 0x03,
- 0x00, 0x00, 0x40, 0x11, 0x02,
- 0x00, 0x00, 0x42, 0xfd, 0x43,
- 0x00, 0x00, 0x5c, 0x26, 0xc6,
- 0x00, 0x00, 0x46, 0xe8, 0x87,
- 0x00, 0x00, 0x4d, 0x69, 0xc6,
- 0x00, 0x00, 0x5a, 0x8c, 0xc8,
- 0x00, 0x00, 0x40, 0x11, 0x08,
- 0x00, 0x00, 0x41, 0x47, 0x86,
- 0x00, 0x00, 0x40, 0xc3, 0x02,
- 0x00, 0x00, 0x51, 0x9f, 0xcd,
- 0x00, 0x00, 0x51, 0xa3, 0x0c,
- 0x00, 0x00, 0x52, 0x1f, 0xc7,
- 0x00, 0x00, 0x51, 0xe1, 0x87,
- 0x00, 0x00, 0x41, 0x12, 0x02,
- 0x00, 0x00, 0x41, 0x98, 0x82,
- 0x00, 0x00, 0x40, 0x40, 0x02,
- 0x00, 0x00, 0x48, 0xb0, 0x82,
- 0x00, 0x00, 0x53, 0xd2, 0xd6,
- 0x00, 0x00, 0x54, 0x25, 0x15,
- 0x00, 0x00, 0x54, 0x4c, 0xd6,
- 0x00, 0x00, 0x54, 0xa3, 0x93,
- 0x00, 0x00, 0x54, 0xaa, 0x52,
- 0x00, 0x00, 0x55, 0xad, 0xd3,
- 0x00, 0x00, 0x55, 0xb4, 0x52,
- 0x00, 0x00, 0x5b, 0x48, 0x0f,
- 0x00, 0x00, 0x5c, 0x8b, 0x58,
- 0x00, 0x00, 0x5c, 0xa4, 0x97,
- 0x00, 0x00, 0x5c, 0xde, 0x99,
- 0x00, 0x00, 0x5c, 0xfb, 0x18,
- 0x00, 0x00, 0x5d, 0x09, 0xd8,
- 0x00, 0x00, 0x5d, 0x15, 0x57,
- 0x00, 0x00, 0x5d, 0x2c, 0x17,
- 0x00, 0x00, 0x5d, 0x6d, 0x16,
- 0x00, 0x00, 0x5e, 0x2e, 0x93,
- 0x00, 0x00, 0x5e, 0x35, 0x95,
- 0x00, 0x00, 0x5e, 0x3f, 0xd2,
- 0x00, 0x00, 0x5e, 0x44, 0x53,
- 0x00, 0x00, 0x01, 0x70, 0x82,
- 0x00, 0x8b, 0x42, 0xc2, 0x84,
- 0x00, 0x8b, 0xdc, 0x74, 0x48,
- 0x00, 0x00, 0x00, 0x2c, 0x45,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x02, 0x32, 0xc2,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x41, 0x3c, 0x82,
- 0x00, 0x8d, 0xc9, 0xf3, 0x85,
- 0x00, 0x8e, 0x44, 0xbf, 0x85,
- 0x00, 0x8e, 0xc6, 0x9d, 0x46,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x8f, 0x4c, 0x86, 0x85,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x14, 0x82,
- 0x00, 0x8f, 0xd6, 0x9f, 0x05,
- 0x00, 0x90, 0x48, 0xe1, 0x05,
- 0x00, 0x90, 0xc8, 0xef, 0x07,
- 0x00, 0x91, 0x47, 0x60, 0xc9,
- 0x00, 0x91, 0xdb, 0x60, 0xc4,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x06, 0x42,
- 0x00, 0x92, 0x45, 0xb7, 0x45,
- 0x00, 0x92, 0xc9, 0xea, 0x09,
- 0x00, 0x93, 0x41, 0x10, 0x08,
- 0x00, 0x93, 0xcc, 0x0a, 0x45,
- 0x00, 0x94, 0x55, 0x55, 0x07,
- 0x00, 0x94, 0xc1, 0x33, 0x88,
- 0x00, 0x95, 0x51, 0xdb, 0x05,
- 0x00, 0x95, 0xca, 0x6d, 0x46,
- 0x00, 0x96, 0x44, 0xc8, 0x49,
- 0x00, 0x96, 0xd9, 0xec, 0xc8,
- 0x00, 0x97, 0x4d, 0x82, 0x88,
- 0x00, 0x97, 0xca, 0x77, 0x8a,
- 0x00, 0x98, 0x5c, 0xc9, 0x04,
- 0x00, 0x98, 0xcb, 0x5c, 0xc5,
- 0x00, 0x99, 0x4b, 0x3a, 0xc8,
- 0x00, 0x99, 0xc4, 0xe6, 0x85,
- 0x00, 0x00, 0x41, 0x77, 0x02,
- 0x00, 0x9a, 0x40, 0x1e, 0x43,
- 0x00, 0x9a, 0xcb, 0x41, 0xc6,
- 0x00, 0x9b, 0x44, 0x89, 0x08,
- 0x00, 0x9b, 0xdc, 0x98, 0xc6,
- 0x00, 0x9c, 0x41, 0x48, 0x88,
- 0x00, 0x9c, 0xdd, 0x34, 0x46,
- 0x00, 0x9d, 0x45, 0x2d, 0x84,
- 0x00, 0x9d, 0xc0, 0x18, 0xc2,
- 0x00, 0x9e, 0xce, 0x99, 0x07,
- 0x00, 0x9f, 0x4b, 0xbb, 0x04,
- 0x00, 0x9f, 0xc8, 0x8f, 0x07,
- 0x00, 0xa0, 0x5e, 0xa5, 0x07,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0xa0, 0xca, 0xbc, 0x05,
- 0x00, 0xa1, 0x45, 0x75, 0x04,
- 0x00, 0xa1, 0xd7, 0x06, 0x07,
- 0x00, 0xa2, 0x43, 0xae, 0x07,
- 0x00, 0xa2, 0xc9, 0x23, 0x06,
- 0x00, 0xa3, 0x48, 0xeb, 0x05,
- 0x00, 0xa3, 0xca, 0x5f, 0x87,
- 0x00, 0xa4, 0x4c, 0x8d, 0x48,
- 0x00, 0xa4, 0xc0, 0xc6, 0x47,
- 0x00, 0xa5, 0x4b, 0xdf, 0x89,
- 0x00, 0xa5, 0xce, 0x3e, 0x85,
- 0x00, 0xa6, 0x51, 0x4f, 0x87,
- 0x00, 0xa6, 0xc9, 0xe7, 0x06,
- 0x00, 0x00, 0x01, 0xec, 0xcb,
- 0x00, 0xa7, 0x5d, 0xcc, 0x08,
- 0x00, 0x00, 0x43, 0x1e, 0x4d,
- 0x00, 0x00, 0x46, 0x98, 0x09,
- 0x00, 0x00, 0x48, 0x30, 0x8b,
- 0x00, 0x00, 0x49, 0xc1, 0x0b,
- 0x00, 0x00, 0x4b, 0x7a, 0xcb,
- 0x00, 0x00, 0x5e, 0x55, 0x0b,
- 0x00, 0x00, 0x52, 0xc7, 0x0b,
- 0x00, 0x00, 0x52, 0xc9, 0xcb,
- 0x00, 0x00, 0x52, 0xd3, 0x49,
- 0x00, 0x00, 0x52, 0xe3, 0x4b,
- 0x00, 0x00, 0x52, 0xe6, 0x0b,
- 0x00, 0x00, 0x52, 0xeb, 0x8b,
- 0x00, 0x00, 0x52, 0xf8, 0x4a,
- 0x00, 0x00, 0x52, 0xfd, 0x8a,
- 0x00, 0x00, 0x53, 0x03, 0x8c,
- 0x00, 0x00, 0x53, 0x43, 0x4b,
- 0x00, 0x00, 0x53, 0x4d, 0x8a,
- 0x00, 0x00, 0x54, 0x85, 0xca,
- 0x00, 0x00, 0x55, 0x0e, 0x8e,
- 0x00, 0x00, 0x55, 0x1f, 0x8e,
- 0x00, 0x00, 0x55, 0x23, 0x0a,
- 0x00, 0x00, 0x55, 0x4d, 0x8a,
- 0x00, 0x00, 0x55, 0x5b, 0x0b,
- 0x00, 0x00, 0x55, 0x5d, 0xcb,
- 0x00, 0x00, 0x55, 0x68, 0xcb,
- 0x00, 0x00, 0x57, 0x68, 0x0b,
- 0x00, 0x00, 0x57, 0x6e, 0x0a,
- 0x00, 0x00, 0x57, 0x7a, 0xcb,
- 0x00, 0x00, 0x57, 0x7d, 0x8a,
- 0x00, 0x00, 0x57, 0x80, 0x0a,
- 0x00, 0x00, 0x57, 0x82, 0x8a,
- 0x00, 0x00, 0x59, 0x5c, 0x8b,
- 0x00, 0x00, 0x59, 0xca, 0x0b,
- 0x00, 0x00, 0x59, 0xf5, 0xce,
- 0x00, 0x00, 0x59, 0xf9, 0x4b,
- 0x00, 0x00, 0x5a, 0x69, 0xcb,
- 0x00, 0x00, 0x5a, 0x7c, 0xcb,
- 0x00, 0x00, 0x5a, 0xbc, 0x8a,
- 0x00, 0x00, 0x5a, 0xbf, 0x09,
- 0x00, 0x00, 0x5a, 0xc1, 0x4a,
- 0x00, 0x00, 0x5a, 0xda, 0x4a,
- 0x00, 0x00, 0x5c, 0x85, 0x4b,
- 0x00, 0x00, 0x5e, 0x05, 0x8b,
- 0x00, 0x00, 0x5e, 0x14, 0x0a,
- 0x00, 0x00, 0x5e, 0x28, 0xcb,
- 0x00, 0x00, 0x5e, 0x89, 0x8b,
- 0x00, 0x00, 0x5f, 0x21, 0x8b,
- 0x00, 0xa7, 0xc9, 0x0e, 0x08,
- 0x00, 0xa8, 0x49, 0x70, 0x09,
- 0x00, 0xa8, 0xca, 0xe8, 0x89,
- 0x00, 0xa9, 0x4f, 0x64, 0xc8,
- 0x00, 0x00, 0x55, 0xca, 0x45,
- 0x00, 0x00, 0x40, 0xba, 0x83,
- 0x00, 0x00, 0x45, 0x68, 0x84,
- 0x00, 0x00, 0x58, 0x22, 0x85,
- 0x00, 0x00, 0x5b, 0x5e, 0x06,
- 0x00, 0x00, 0x55, 0x37, 0xc5,
- 0x00, 0x00, 0x49, 0x5f, 0xc4,
- 0x00, 0x00, 0x4f, 0x7b, 0x48,
- 0x00, 0x00, 0x52, 0x6d, 0x45,
- 0x00, 0x00, 0x4a, 0x0a, 0x44,
- 0x00, 0x00, 0x5c, 0xb1, 0xc7,
- 0x00, 0x00, 0x4a, 0xda, 0x4a,
- 0x00, 0x00, 0x4f, 0xfa, 0x8a,
- 0x00, 0x00, 0x59, 0x54, 0x87,
- 0x00, 0x00, 0x44, 0x1d, 0x07,
- 0x00, 0x00, 0x4f, 0x20, 0x07,
- 0x00, 0x00, 0x45, 0xb8, 0xc7,
- 0x00, 0x00, 0x5a, 0x2f, 0xc5,
- 0x00, 0x00, 0x47, 0x5b, 0x46,
- 0x00, 0x00, 0x4d, 0x6e, 0x47,
- 0x00, 0x00, 0x44, 0xcf, 0xc4,
- 0x00, 0x00, 0x4c, 0xd4, 0x06,
- 0x00, 0x00, 0x4f, 0xb2, 0x06,
- 0x00, 0x00, 0x5a, 0xfb, 0xc5,
- 0x00, 0x00, 0x58, 0x1a, 0x84,
- 0x00, 0x00, 0x4a, 0xb8, 0x46,
- 0x00, 0x00, 0x4a, 0xcc, 0x87,
- 0x00, 0x00, 0x42, 0x69, 0xc6,
- 0x00, 0x00, 0x50, 0xef, 0x07,
- 0x00, 0x00, 0x4a, 0x03, 0x03,
- 0x00, 0x00, 0x5c, 0x4a, 0xc6,
- 0x00, 0x00, 0x42, 0x94, 0xc5,
- 0x00, 0x00, 0x48, 0xf0, 0x07,
- 0x00, 0x00, 0x47, 0x99, 0x0a,
- 0x00, 0x00, 0x43, 0x35, 0x84,
- 0x00, 0x00, 0x40, 0xec, 0x08,
- 0x00, 0x00, 0x4b, 0xd8, 0x89,
- 0x00, 0x00, 0x4c, 0x57, 0x87,
- 0x00, 0x00, 0x53, 0x4c, 0x06,
- 0x00, 0x00, 0x4e, 0x9a, 0x48,
- 0x00, 0x00, 0x5e, 0xa8, 0x89,
- 0x00, 0x00, 0x43, 0xa7, 0x04,
- 0x00, 0x00, 0x48, 0x05, 0x44,
- 0x00, 0x00, 0x41, 0x37, 0x85,
- 0x00, 0x00, 0x4d, 0x15, 0x08,
- 0x00, 0x00, 0x4e, 0x12, 0x47,
- 0x00, 0x00, 0x51, 0x10, 0x49,
- 0x00, 0x00, 0x44, 0x55, 0xc8,
- 0x00, 0x00, 0x52, 0x18, 0xc6,
- 0x00, 0x00, 0x44, 0x59, 0x46,
- 0x00, 0x00, 0x4a, 0x84, 0xc8,
- 0x00, 0x00, 0x57, 0x59, 0x46,
- 0x00, 0x00, 0x44, 0xbf, 0x85,
- 0x00, 0x00, 0x49, 0x23, 0xc6,
- 0x00, 0x00, 0x48, 0x93, 0x48,
- 0x00, 0x00, 0x45, 0xcd, 0x86,
- 0x00, 0x00, 0x46, 0x2e, 0x0b,
- 0x00, 0x00, 0x49, 0xb6, 0x06,
- 0x00, 0x00, 0x4a, 0xa2, 0x0d,
- 0x00, 0x00, 0x40, 0xab, 0x45,
- 0x00, 0x00, 0x4b, 0xb9, 0xc6,
- 0x00, 0x00, 0x40, 0xe0, 0xc5,
- 0x00, 0x00, 0x43, 0x52, 0x49,
- 0x00, 0x00, 0x57, 0x1c, 0xc7,
- 0x00, 0x00, 0x41, 0x0e, 0x08,
- 0x00, 0x00, 0x4b, 0xe9, 0x06,
- 0x00, 0x00, 0x4a, 0x94, 0x49,
- 0x00, 0x00, 0x55, 0x6e, 0x06,
- 0x00, 0x00, 0x47, 0x98, 0x85,
- 0x00, 0x00, 0x41, 0x36, 0x86,
- 0x00, 0x00, 0x4f, 0x3f, 0x06,
- 0x00, 0x00, 0x4e, 0x62, 0xc9,
- 0x00, 0x00, 0x4c, 0xde, 0x86,
- 0x00, 0x00, 0x4c, 0x7d, 0x47,
- 0x00, 0x00, 0x4b, 0x17, 0x05,
- 0x00, 0x00, 0x41, 0x1d, 0x03,
- 0x00, 0x00, 0x46, 0x2f, 0x85,
- 0x00, 0x00, 0x5c, 0x24, 0x47,
- 0x00, 0x00, 0x57, 0x54, 0x06,
- 0x00, 0x00, 0x40, 0xaa, 0x49,
- 0x00, 0x00, 0x46, 0x9d, 0x46,
- 0x00, 0x00, 0x48, 0x05, 0xc6,
- 0x00, 0x00, 0x44, 0x42, 0x89,
- 0x00, 0x00, 0x49, 0x1d, 0xc9,
- 0x00, 0x00, 0x4b, 0x1c, 0x87,
- 0x00, 0x00, 0x56, 0x64, 0x48,
- 0x00, 0x00, 0x49, 0xd8, 0x49,
- 0x00, 0x00, 0x48, 0xff, 0x08,
- 0x00, 0x00, 0x4d, 0xe7, 0xc6,
- 0x00, 0x00, 0x4e, 0xf6, 0x45,
- 0x00, 0x00, 0x48, 0x7e, 0x0a,
- 0x00, 0x00, 0x48, 0x06, 0x46,
- 0x00, 0x00, 0x55, 0x42, 0xc6,
- 0x00, 0x00, 0x4e, 0x98, 0x45,
- 0x00, 0x00, 0x45, 0xbd, 0x48,
- 0x00, 0x00, 0x55, 0x15, 0x07,
- 0x00, 0x00, 0x43, 0x11, 0x8a,
- 0x00, 0x00, 0x45, 0x5c, 0xc6,
- 0x00, 0x00, 0x44, 0xa2, 0x05,
- 0x00, 0x00, 0x51, 0x17, 0x46,
- 0x00, 0x00, 0x49, 0xfa, 0x47,
- 0x00, 0x00, 0x53, 0x4a, 0xc7,
- 0x00, 0x00, 0x51, 0xfb, 0x85,
- 0x00, 0x00, 0x47, 0x9a, 0x45,
- 0x00, 0x00, 0x47, 0x10, 0x46,
- 0x00, 0x00, 0x47, 0x57, 0x86,
- 0x00, 0x00, 0x47, 0xd4, 0x46,
- 0x00, 0x00, 0x43, 0x90, 0xc4,
- 0x00, 0x00, 0x49, 0x12, 0x09,
- 0x00, 0x00, 0x49, 0x84, 0xc6,
- 0x00, 0x00, 0x5e, 0x58, 0xca,
- 0x00, 0x00, 0x41, 0xac, 0x08,
- 0x00, 0x00, 0x51, 0x4c, 0x88,
- 0x00, 0x00, 0x4f, 0xfa, 0x8a,
- 0x00, 0x00, 0x44, 0x8a, 0xc5,
- 0x00, 0x00, 0x4a, 0xcb, 0xc5,
- 0x00, 0x00, 0x5c, 0xc6, 0x88,
- 0x00, 0x00, 0x58, 0x40, 0x08,
- 0x00, 0x00, 0x44, 0x11, 0x47,
- 0x00, 0x00, 0x59, 0xb7, 0x06,
- 0x00, 0x00, 0x54, 0x04, 0x88,
- 0x00, 0x00, 0x5f, 0x32, 0x47,
- 0x00, 0x00, 0x48, 0xf1, 0x48,
- 0x00, 0x00, 0x4c, 0xd3, 0x06,
- 0x00, 0x00, 0x49, 0x2b, 0xc8,
- 0x00, 0x00, 0x4a, 0x53, 0xc6,
- 0x00, 0x00, 0x47, 0x65, 0xc7,
- 0x00, 0x00, 0x55, 0xf6, 0x46,
- 0x00, 0x00, 0x4a, 0xb8, 0x46,
- 0x00, 0x00, 0x42, 0x78, 0x0a,
- 0x00, 0x00, 0x42, 0xec, 0x86,
- 0x00, 0x00, 0x4e, 0xf6, 0x49,
- 0x00, 0x00, 0x4f, 0xef, 0x06,
- 0x00, 0x00, 0x40, 0xb8, 0xca,
- 0x00, 0x00, 0x45, 0x2d, 0x89,
- 0x00, 0x00, 0x50, 0x7b, 0x46,
- 0x00, 0x00, 0x4c, 0xe7, 0xc4,
- 0x00, 0x00, 0x56, 0x06, 0xcd,
- 0x00, 0x00, 0x48, 0xdf, 0x47,
- 0x00, 0x00, 0x5c, 0x53, 0x86,
- 0x00, 0x00, 0x4d, 0x81, 0x45,
- 0x00, 0x00, 0x55, 0x6e, 0x85,
- 0x00, 0x00, 0x53, 0xca, 0x86,
- 0x00, 0x00, 0x4c, 0x6a, 0x49,
- 0x00, 0x00, 0x4d, 0xa6, 0x47,
- 0x00, 0x00, 0x48, 0xa1, 0x06,
- 0x00, 0x00, 0x57, 0xce, 0x46,
- 0x00, 0x00, 0x47, 0x87, 0x89,
- 0x00, 0x00, 0x44, 0xbe, 0xc4,
- 0x00, 0x00, 0x50, 0x7c, 0x44,
- 0x00, 0x00, 0x5b, 0xdc, 0x48,
- 0x00, 0x00, 0x55, 0xfa, 0xc6,
- 0x00, 0x00, 0x4b, 0x11, 0x88,
- 0x00, 0x00, 0x4f, 0x6d, 0xc8,
- 0x00, 0x00, 0x46, 0x17, 0x87,
- 0x00, 0x00, 0x4f, 0xd3, 0x49,
- 0x00, 0x00, 0x5c, 0xe7, 0x87,
- 0x00, 0x00, 0x4c, 0x85, 0x4a,
- 0x00, 0x00, 0x50, 0x89, 0xcf,
- 0x00, 0x00, 0x5a, 0x4d, 0x8a,
- 0x00, 0x00, 0x5c, 0xf2, 0x45,
- 0x00, 0x00, 0x48, 0x95, 0x85,
- 0x00, 0x00, 0x40, 0xcb, 0x45,
- 0x00, 0x00, 0x4f, 0xcf, 0x87,
- 0x00, 0x00, 0x48, 0xae, 0x83,
- 0x00, 0x00, 0x56, 0x66, 0x48,
- 0x00, 0x00, 0x47, 0x31, 0x46,
- 0x00, 0x00, 0x47, 0x32, 0x49,
- 0x00, 0x00, 0x55, 0x84, 0x46,
- 0x00, 0x00, 0x56, 0x67, 0xc7,
- 0x00, 0x00, 0x4a, 0x92, 0x09,
- 0x00, 0x00, 0x41, 0x0d, 0x08,
- 0x00, 0x00, 0x43, 0x54, 0x07,
- 0x00, 0x00, 0x52, 0xb3, 0xc3,
- 0x00, 0x00, 0x55, 0xca, 0xc5,
- 0x00, 0x00, 0x49, 0xf5, 0x85,
- 0x00, 0x00, 0x43, 0x8f, 0x0b,
- 0x00, 0x00, 0x44, 0xe7, 0x44,
- 0x00, 0x00, 0x51, 0x59, 0x04,
- 0x00, 0x00, 0x48, 0x75, 0xc6,
- 0x00, 0x00, 0x52, 0xb5, 0x87,
- 0x00, 0x00, 0x59, 0x83, 0x4a,
- 0x00, 0x00, 0x48, 0x0e, 0xc7,
- 0x00, 0x00, 0x54, 0x67, 0xc7,
- 0x00, 0x00, 0x48, 0xe1, 0x05,
- 0x00, 0x00, 0x5d, 0x72, 0x85,
- 0x00, 0x00, 0x48, 0xf6, 0x89,
- 0x00, 0x00, 0x4a, 0xb8, 0x46,
- 0x00, 0x00, 0x48, 0x0d, 0x4d,
- 0x00, 0x00, 0x5c, 0xa0, 0x45,
- 0x00, 0x00, 0x4c, 0xb0, 0x83,
- 0x00, 0x00, 0x41, 0xda, 0x03,
- 0x00, 0x00, 0x42, 0xff, 0x05,
- 0x00, 0x00, 0x54, 0x0d, 0x85,
- 0x00, 0x00, 0x4e, 0x9a, 0x48,
- 0x00, 0x00, 0x48, 0xa8, 0xc7,
- 0x00, 0x00, 0x44, 0x05, 0xc6,
- 0x00, 0x00, 0x4a, 0xe5, 0x06,
- 0x00, 0x00, 0x42, 0xbc, 0xc5,
- 0x00, 0x00, 0x43, 0x63, 0x47,
- 0x00, 0x00, 0x56, 0xa2, 0xc7,
- 0x00, 0x00, 0x43, 0x85, 0x07,
- 0x00, 0x00, 0x4b, 0x5d, 0x4a,
- 0x00, 0x00, 0x5c, 0x4b, 0x88,
- 0x00, 0x00, 0x43, 0x90, 0xc4,
- 0x00, 0x00, 0x49, 0x07, 0xc7,
- 0x00, 0x00, 0x48, 0xd2, 0xc7,
- 0x00, 0x00, 0x56, 0x26, 0x86,
- 0x00, 0x00, 0x4a, 0x4a, 0x47,
- 0x00, 0x00, 0x51, 0xad, 0x08,
- 0x00, 0x00, 0x58, 0x55, 0xc8,
- 0x00, 0x00, 0x47, 0xed, 0x46,
- 0x00, 0x00, 0x44, 0x1f, 0x48,
- 0x00, 0x00, 0x4c, 0xdf, 0x04,
- 0x00, 0x00, 0x4d, 0x6e, 0x46,
- 0x00, 0x00, 0x46, 0x74, 0x86,
- 0x00, 0x00, 0x4f, 0xb4, 0x06,
- 0x00, 0x00, 0x53, 0x59, 0x86,
- 0x00, 0x00, 0x41, 0x1c, 0xc4,
- 0x00, 0x00, 0x45, 0xb9, 0x86,
- 0x00, 0x00, 0x4d, 0x6b, 0x46,
- 0x00, 0x00, 0x4a, 0x7c, 0x86,
- 0x00, 0x00, 0x42, 0x78, 0x06,
- 0x00, 0x00, 0x5c, 0xbf, 0x86,
- 0x00, 0x00, 0x4f, 0xc6, 0x86,
- 0x00, 0x00, 0x44, 0x04, 0xc8,
- 0x00, 0x00, 0x4c, 0x92, 0xc8,
- 0x00, 0x00, 0x4e, 0xc2, 0x48,
- 0x00, 0x00, 0x55, 0x39, 0xc8,
- 0x00, 0x00, 0x5c, 0xc6, 0x06,
- 0x00, 0x00, 0x40, 0x34, 0xc5,
- 0x00, 0x00, 0x49, 0x41, 0x06,
- 0x00, 0x00, 0x4c, 0x0a, 0xc5,
- 0x00, 0x00, 0x59, 0xa8, 0x47,
- 0x00, 0x00, 0x44, 0x56, 0x85,
- 0x00, 0x00, 0x41, 0x14, 0xc3,
- 0x00, 0x00, 0x52, 0x02, 0xc5,
- 0x00, 0x00, 0x5e, 0xc5, 0x04,
- 0x00, 0x00, 0x5c, 0xc0, 0xc5,
- 0x00, 0x00, 0x42, 0x00, 0xc3,
- 0x00, 0x00, 0x55, 0x25, 0xc7,
- 0x00, 0x00, 0x4d, 0xd9, 0x08,
- 0x00, 0x00, 0x50, 0xef, 0xc6,
- 0x00, 0x00, 0x4b, 0xf2, 0x0d,
- 0x00, 0x00, 0x48, 0x95, 0x46,
- 0x00, 0x00, 0x4a, 0x72, 0x05,
- 0x00, 0x00, 0x41, 0xea, 0x43,
- 0x00, 0x00, 0x4d, 0x2c, 0x89,
- 0x00, 0x00, 0x44, 0xc0, 0x46,
- 0x00, 0x00, 0x4a, 0x65, 0x46,
- 0x00, 0x00, 0x4a, 0xfb, 0x84,
- 0x00, 0x00, 0x5a, 0x4d, 0x07,
- 0x00, 0x00, 0x54, 0x7b, 0x06,
- 0x00, 0x00, 0x4d, 0xa9, 0x05,
- 0x00, 0x00, 0x46, 0x16, 0x43,
- 0x00, 0x00, 0x41, 0x25, 0x84,
- 0x00, 0x00, 0x48, 0xd4, 0x86,
- 0x00, 0x00, 0x5a, 0xae, 0x04,
- 0x00, 0x00, 0x5b, 0xd4, 0x48,
- 0x00, 0x00, 0x40, 0x82, 0x49,
- 0x00, 0x00, 0x47, 0xf2, 0x09,
- 0x00, 0x00, 0x4b, 0x0f, 0x8a,
- 0x00, 0x00, 0x4b, 0x2e, 0x8d,
- 0x00, 0x00, 0x43, 0x39, 0x07,
- 0x00, 0x00, 0x59, 0x4f, 0xc6,
- 0x00, 0x00, 0x42, 0x6f, 0x84,
- 0x00, 0x00, 0x47, 0x60, 0xc9,
- 0x00, 0x00, 0x49, 0x50, 0x48,
- 0x00, 0x00, 0x49, 0x6e, 0x86,
- 0x00, 0x00, 0x43, 0xcb, 0x06,
- 0x00, 0x00, 0x4a, 0x4a, 0x47,
- 0x00, 0x00, 0x57, 0x00, 0x46,
- 0x00, 0x00, 0x41, 0xd9, 0x46,
- 0x00, 0x00, 0x4f, 0xe9, 0x06,
- 0x00, 0x00, 0x5e, 0xa5, 0x8a,
- 0x00, 0x00, 0x41, 0x33, 0x88,
- 0x00, 0x00, 0x42, 0xa6, 0x85,
- 0x00, 0x00, 0x4f, 0xe7, 0x09,
- 0x00, 0x00, 0x5d, 0x92, 0xca,
- 0x00, 0x00, 0x51, 0x8e, 0xc8,
- 0x00, 0x00, 0x4a, 0xc4, 0x48,
- 0x00, 0x00, 0x4a, 0x64, 0xc8,
- 0x00, 0x00, 0x57, 0x21, 0xcc,
- 0x00, 0x00, 0x52, 0xcc, 0x45,
- 0x00, 0x00, 0x4a, 0xe7, 0x88,
- 0x00, 0x00, 0x4c, 0xe3, 0xc6,
- 0x00, 0x00, 0x56, 0xf9, 0x86,
- 0x00, 0x00, 0x4d, 0xec, 0x07,
- 0x00, 0x00, 0x48, 0x0d, 0xc5,
- 0x00, 0x00, 0x45, 0xcc, 0xc5,
- 0x00, 0x00, 0x47, 0xf0, 0xc9,
- 0x00, 0x00, 0x40, 0x53, 0xc7,
- 0x00, 0x00, 0x47, 0x32, 0x05,
- 0x00, 0x00, 0x41, 0xfd, 0x07,
- 0x00, 0x00, 0x41, 0xda, 0x03,
- 0x00, 0x00, 0x4e, 0x1c, 0x05,
- 0x00, 0x00, 0x41, 0xb5, 0x48,
- 0x00, 0x00, 0x46, 0x33, 0x07,
- 0x00, 0x00, 0x4a, 0xc3, 0x09,
- 0x00, 0x00, 0x4c, 0xf1, 0x05,
- 0x00, 0x00, 0x4f, 0xb9, 0x84,
- 0x00, 0x00, 0x52, 0x9e, 0x48,
- 0x00, 0x00, 0x5d, 0x81, 0x47,
- 0x00, 0x00, 0x43, 0x55, 0xc8,
- 0x00, 0x00, 0x40, 0xe2, 0x08,
- 0x00, 0x00, 0x59, 0x7b, 0xc5,
- 0x00, 0x00, 0x47, 0x30, 0x46,
- 0x00, 0x00, 0x41, 0x3a, 0x46,
- 0x00, 0x00, 0x5a, 0xf7, 0xc9,
- 0x00, 0x00, 0x46, 0x75, 0x87,
- 0x00, 0x00, 0x4c, 0x0f, 0x06,
- 0x00, 0x00, 0x5b, 0xf1, 0x07,
- 0x00, 0x00, 0x40, 0x3b, 0x83,
- 0x00, 0x00, 0x5b, 0x60, 0xc4,
- 0x00, 0x00, 0x5c, 0xd5, 0xc5,
- 0x00, 0x00, 0x43, 0x64, 0x84,
- 0x00, 0x00, 0x44, 0xdc, 0x84,
- 0x00, 0x00, 0x44, 0xbc, 0x07,
- 0x00, 0x00, 0x47, 0x86, 0x47,
- 0x00, 0x00, 0x48, 0xa2, 0xc4,
- 0x00, 0x00, 0x4a, 0xc1, 0x50,
- 0x00, 0x00, 0x54, 0x69, 0x47,
- 0x00, 0x00, 0x5d, 0x72, 0x85,
- 0x00, 0x00, 0x5d, 0xc2, 0x8c,
- 0x00, 0x00, 0x40, 0xdf, 0xc4,
- 0x00, 0x00, 0x4b, 0xed, 0xc8,
- 0x00, 0x00, 0x47, 0x64, 0xc9,
- 0x00, 0x00, 0x4c, 0x4a, 0xc6,
- 0x00, 0x00, 0x52, 0x49, 0x88,
- 0x00, 0x00, 0x45, 0x65, 0x84,
- 0x00, 0x00, 0x48, 0x78, 0xc8,
- 0x00, 0x00, 0x4e, 0x0c, 0x46,
- 0x00, 0x00, 0x42, 0x76, 0x88,
- 0x00, 0x00, 0x4a, 0xd2, 0x46,
- 0x00, 0x00, 0x4e, 0x93, 0xcb,
- 0x00, 0x00, 0x5b, 0x9a, 0x85,
- 0x00, 0x00, 0x5c, 0xd4, 0x48,
- 0x00, 0x00, 0x40, 0x86, 0x84,
- 0x00, 0x00, 0x40, 0x86, 0x8a,
- 0x00, 0x00, 0x4a, 0xc3, 0x09,
- 0x00, 0x00, 0x49, 0x0f, 0x86,
- 0x00, 0x00, 0x50, 0x11, 0xc8,
- 0x00, 0x00, 0x49, 0x5c, 0x05,
- 0x00, 0x00, 0x5c, 0x27, 0xc4,
- 0x00, 0x00, 0x4b, 0xec, 0xc6,
- 0x00, 0x00, 0x43, 0x83, 0xc8,
- 0x00, 0x00, 0x49, 0x0e, 0x08,
- 0x00, 0x00, 0x53, 0x97, 0x46,
- 0x00, 0x00, 0x54, 0xd7, 0x04,
- 0x00, 0x00, 0x48, 0x7d, 0x86,
- 0x00, 0x00, 0x5c, 0xe8, 0x07,
- 0x00, 0x00, 0x48, 0x8e, 0x07,
- 0x00, 0x00, 0x4a, 0x4a, 0x4f,
- 0x00, 0x00, 0x54, 0xce, 0x47,
- 0x00, 0x00, 0x59, 0x91, 0xc7,
- 0x00, 0x00, 0x56, 0xf8, 0x45,
- 0x00, 0x00, 0x5d, 0xa5, 0x05,
- 0x00, 0x00, 0x4b, 0x19, 0x49,
- 0x00, 0x00, 0x4e, 0x8b, 0xc6,
- 0x00, 0x00, 0x48, 0xec, 0x45,
- 0x00, 0x00, 0x49, 0x20, 0xc7,
- 0x00, 0x00, 0x4e, 0x6d, 0xc8,
- 0x00, 0x00, 0x43, 0x8d, 0x45,
- 0x00, 0x00, 0x55, 0xf6, 0x46,
- 0x00, 0x00, 0x41, 0xaa, 0x48,
- 0x00, 0x00, 0x5c, 0x98, 0xca,
- 0x00, 0x00, 0x44, 0xd6, 0xc8,
- 0x00, 0x00, 0x49, 0x9e, 0x47,
- 0x00, 0x00, 0x50, 0x8e, 0x06,
- 0x00, 0x00, 0x4f, 0xe6, 0xc6,
- 0x00, 0x00, 0x40, 0x03, 0xc3,
- 0x00, 0x00, 0x41, 0x3b, 0x83,
- 0x00, 0x00, 0x5d, 0x94, 0x89,
- 0x00, 0x00, 0x49, 0xd6, 0xc9,
- 0x00, 0x00, 0x4b, 0xde, 0x86,
- 0x00, 0x00, 0x4c, 0xf1, 0x05,
- 0x00, 0x00, 0x43, 0xd0, 0xc8,
- 0x00, 0x00, 0x50, 0x11, 0xc8,
- 0x00, 0x00, 0x4a, 0xb4, 0x88,
- 0x00, 0x00, 0x4f, 0xe9, 0x8b,
- 0x00, 0x00, 0x4b, 0xf4, 0x47,
- 0x00, 0x00, 0x52, 0x51, 0x49,
- 0x00, 0x00, 0x4a, 0x4c, 0xc8,
- 0x00, 0x00, 0x47, 0x38, 0xc4,
- 0x00, 0x00, 0x40, 0x5b, 0xc8,
- 0x00, 0x00, 0x49, 0xc4, 0x09,
- 0x00, 0x00, 0x4c, 0x12, 0x05,
- 0x00, 0x00, 0x40, 0xbe, 0x07,
- 0x00, 0x00, 0x5b, 0x61, 0x45,
- 0x00, 0x00, 0x49, 0x0d, 0x08,
- 0x00, 0x00, 0x49, 0xf2, 0x0b,
- 0x00, 0x00, 0x4a, 0x5c, 0xd0,
- 0x00, 0x00, 0x4b, 0xb6, 0x05,
- 0x00, 0x00, 0x41, 0x59, 0x0c,
- 0x00, 0x00, 0x44, 0x07, 0x85,
- 0x00, 0x00, 0x48, 0xe1, 0x83,
- 0x00, 0x00, 0x4d, 0x06, 0x06,
- 0x00, 0x00, 0x4d, 0x61, 0x44,
- 0x00, 0x00, 0x47, 0xfd, 0x86,
- 0x00, 0x00, 0x4a, 0xcc, 0x87,
- 0x00, 0x00, 0x40, 0xdf, 0x44,
- 0x00, 0x00, 0x4d, 0x27, 0xc8,
- 0x00, 0x00, 0x56, 0x65, 0x0d,
- 0x00, 0x00, 0x59, 0xb5, 0x45,
- 0x00, 0x00, 0x43, 0x39, 0x44,
- 0x00, 0x00, 0x56, 0x04, 0x44,
- 0x00, 0x00, 0x59, 0x8f, 0x09,
- 0x00, 0x00, 0x4a, 0x9f, 0x48,
- 0x00, 0x00, 0x53, 0x70, 0x47,
- 0x00, 0x00, 0x4e, 0x0c, 0xc8,
- 0x00, 0x00, 0x49, 0x12, 0xc8,
- 0x00, 0x00, 0x48, 0xa4, 0x05,
- 0x00, 0x00, 0x5d, 0x97, 0xc7,
- 0x00, 0x00, 0x48, 0xa3, 0x87,
- 0x00, 0x00, 0x5b, 0xe0, 0xc7,
- 0x00, 0x00, 0x47, 0x9a, 0x49,
- 0x00, 0x00, 0x57, 0x3a, 0x09,
- 0x00, 0x00, 0x44, 0xac, 0xc6,
- 0x00, 0x00, 0x4d, 0xf4, 0x06,
- 0x00, 0x00, 0x49, 0x21, 0x86,
- 0x00, 0x00, 0x52, 0xda, 0xc5,
- 0x00, 0x00, 0x5c, 0x49, 0x04,
- 0x00, 0x00, 0x5d, 0x0f, 0x86,
- 0x00, 0x00, 0x5d, 0x31, 0x86,
- 0x00, 0x00, 0x48, 0xa4, 0x48,
- 0x00, 0x00, 0x49, 0xf7, 0x0b,
- 0x00, 0x00, 0x43, 0xa5, 0x47,
- 0x00, 0x00, 0x42, 0x6f, 0x84,
- 0x00, 0x00, 0x54, 0x7a, 0x46,
- 0x00, 0x00, 0x5e, 0xe3, 0x87,
- 0x00, 0x00, 0x4f, 0xc2, 0x85,
- 0x00, 0x00, 0x45, 0x88, 0x05,
- 0x00, 0x00, 0x42, 0x66, 0xc4,
- 0x00, 0x00, 0x57, 0x39, 0x86,
- 0x00, 0x00, 0x5d, 0x10, 0x08,
- 0x00, 0x00, 0x47, 0x60, 0xc9,
- 0x00, 0x00, 0x45, 0x18, 0x46,
- 0x00, 0x00, 0x49, 0x4e, 0x48,
- 0x00, 0x00, 0x4d, 0xa9, 0xc6,
- 0x00, 0x00, 0x56, 0x75, 0x08,
- 0x00, 0x00, 0x57, 0x13, 0x0c,
- 0x00, 0x00, 0x48, 0xa2, 0xc6,
- 0x00, 0x00, 0x4a, 0x6e, 0xcd,
- 0x00, 0x00, 0x4a, 0x73, 0x4b,
- 0x00, 0x00, 0x4c, 0x7e, 0x05,
- 0x00, 0x00, 0x56, 0xa4, 0x07,
- 0x00, 0x00, 0x4c, 0xdf, 0x86,
- 0x00, 0x00, 0x53, 0x49, 0x88,
- 0x00, 0x00, 0x44, 0xad, 0x49,
- 0x00, 0x00, 0x4b, 0xf7, 0x48,
- 0x00, 0x00, 0x5d, 0x72, 0x85,
- 0x00, 0x00, 0x4a, 0x56, 0x47,
- 0x00, 0x00, 0x49, 0x00, 0x08,
- 0x00, 0x00, 0x45, 0xd1, 0x09,
- 0x00, 0x00, 0x46, 0x90, 0xc6,
- 0x00, 0x00, 0x46, 0x69, 0x4a,
- 0x00, 0x00, 0x53, 0x47, 0x08,
- 0x00, 0x00, 0x4b, 0xf5, 0x8b,
- 0x00, 0x00, 0x4d, 0xad, 0x8c,
- 0x00, 0x00, 0x48, 0x79, 0xc8,
- 0x00, 0x00, 0x48, 0xb6, 0x86,
- 0x00, 0x00, 0x5d, 0xc7, 0x88,
- 0x00, 0x00, 0x5c, 0x95, 0x47,
- 0x00, 0x00, 0x5b, 0x68, 0x89,
- 0x00, 0x00, 0x49, 0xe9, 0x0d,
- 0x00, 0x00, 0x4a, 0xb7, 0x46,
- 0x00, 0x00, 0x4c, 0x94, 0x48,
- 0x00, 0x00, 0x4c, 0x91, 0x89,
- 0x00, 0x00, 0x4d, 0x3e, 0x48,
- 0x00, 0x00, 0x49, 0x2c, 0xc8,
- 0x00, 0x00, 0x4d, 0x78, 0x0c,
- 0x00, 0x00, 0x4d, 0x87, 0x07,
- 0x00, 0x00, 0x4d, 0x95, 0x07,
- 0x00, 0x00, 0x47, 0x98, 0x85,
- 0x00, 0x00, 0x4d, 0x11, 0x87,
- 0x00, 0x00, 0x4e, 0x6c, 0x88,
- 0x00, 0x00, 0x4b, 0xed, 0x46,
- 0x00, 0x00, 0x45, 0x16, 0xcc,
- 0x00, 0x00, 0x50, 0xb8, 0x48,
- 0x00, 0x00, 0x4e, 0x7b, 0x88,
- 0x00, 0x00, 0x55, 0x3c, 0x86,
- 0x00, 0x00, 0x5d, 0xd9, 0x07,
- 0x00, 0x00, 0x44, 0xae, 0xc4,
- 0x00, 0x00, 0x55, 0x39, 0xc8,
- 0x00, 0x00, 0x49, 0x39, 0xcc,
- 0x00, 0x00, 0x49, 0x7f, 0x0c,
- 0x00, 0x00, 0x5c, 0xf2, 0xc5,
- 0x00, 0x00, 0x5a, 0xfc, 0x47,
- 0x00, 0x00, 0x54, 0xd6, 0x86,
- 0x00, 0x00, 0x5d, 0xd8, 0x86,
- 0x00, 0x00, 0x5a, 0x11, 0x48,
- 0x00, 0x00, 0x42, 0x14, 0x44,
- 0x00, 0x00, 0x42, 0x69, 0xcb,
- 0x00, 0x00, 0x45, 0x8f, 0xcb,
- 0x00, 0x00, 0x50, 0x8e, 0x06,
- 0x00, 0x00, 0x56, 0x63, 0x87,
- 0x00, 0x00, 0x54, 0xd8, 0x85,
- 0x00, 0x00, 0x48, 0x04, 0x85,
- 0x00, 0x00, 0x42, 0x6b, 0x06,
- 0x00, 0x00, 0x49, 0x5b, 0xc5,
- 0x00, 0x00, 0x44, 0xe7, 0x05,
- 0x00, 0x00, 0x42, 0x32, 0x47,
- 0x00, 0x00, 0x42, 0x00, 0xc9,
- 0x00, 0x00, 0x40, 0x32, 0x84,
- 0x00, 0x00, 0x44, 0x0c, 0x45,
- 0x00, 0x00, 0x51, 0x21, 0x05,
- 0x00, 0x00, 0x5a, 0xab, 0x88,
- 0x00, 0x00, 0x54, 0xf3, 0x85,
- 0x00, 0x00, 0x4d, 0x5e, 0x09,
- 0x00, 0x00, 0x4b, 0xfe, 0x87,
- 0x00, 0x00, 0x4b, 0xfe, 0x8b,
- 0x00, 0x00, 0x50, 0x78, 0x86,
- 0x00, 0x00, 0x44, 0x02, 0x09,
- 0x00, 0x00, 0x58, 0x19, 0xc8,
- 0x00, 0x00, 0x48, 0xb8, 0x85,
- 0x00, 0x00, 0x5b, 0xe1, 0xc8,
- 0x00, 0x00, 0x57, 0x3a, 0x48,
- 0x00, 0x00, 0x48, 0xbe, 0xc7,
- 0x00, 0x00, 0x42, 0xb2, 0x47,
- 0x00, 0x00, 0x44, 0xbc, 0x89,
- 0x00, 0x00, 0x42, 0x75, 0xc7,
- 0x00, 0x00, 0x49, 0xb3, 0xc9,
- 0x00, 0x00, 0x4b, 0xcc, 0x4c,
- 0x00, 0x00, 0x4b, 0xde, 0x88,
- 0x00, 0x00, 0x4d, 0xab, 0xc9,
- 0x00, 0x00, 0x4d, 0xde, 0xc7,
- 0x00, 0x00, 0x49, 0x13, 0x89,
- 0x00, 0x00, 0x42, 0x1b, 0x47,
- 0x00, 0x00, 0x4d, 0xae, 0x88,
- 0x00, 0x00, 0x5c, 0x94, 0x85,
- 0x00, 0x00, 0x4d, 0x6d, 0xc6,
- 0x00, 0x00, 0x4d, 0x81, 0x88,
- 0x00, 0x00, 0x44, 0x67, 0x48,
- 0x00, 0x00, 0x5d, 0x91, 0x89,
- 0x00, 0x00, 0x44, 0xe7, 0x47,
- 0x00, 0x00, 0x5a, 0xde, 0x05,
- 0x00, 0x00, 0x5d, 0x8c, 0x09,
- 0x00, 0x00, 0x58, 0x70, 0x46,
- 0x00, 0x00, 0x49, 0xe7, 0x04,
- 0x00, 0x00, 0x52, 0xa9, 0x46,
- 0x00, 0x00, 0x44, 0x87, 0x88,
- 0x00, 0x00, 0x45, 0x3a, 0x87,
- 0x00, 0x00, 0x49, 0xf9, 0x08,
- 0x00, 0x00, 0x44, 0x20, 0x09,
- 0x00, 0x00, 0x4b, 0x64, 0xc7,
- 0x00, 0x00, 0x4a, 0xb5, 0xc6,
- 0x00, 0x00, 0x40, 0x5f, 0x04,
- 0x00, 0x00, 0x52, 0x03, 0x49,
- 0x00, 0x00, 0x5d, 0x96, 0x48,
- 0x00, 0x00, 0x55, 0x3b, 0x47,
- 0x00, 0x00, 0x57, 0xb1, 0x06,
- 0x00, 0x00, 0x49, 0xf6, 0x46,
- 0x00, 0x00, 0x55, 0x42, 0x44,
- 0x00, 0x00, 0x4d, 0x17, 0xc6,
- 0x00, 0x00, 0x43, 0xbe, 0x43,
- 0x00, 0x00, 0x5d, 0x5b, 0xc9,
- 0x00, 0x00, 0x5b, 0x9a, 0x46,
- 0x00, 0x00, 0x47, 0x5d, 0xc5,
- 0x00, 0x00, 0x4a, 0xe5, 0x06,
- 0x00, 0x00, 0x43, 0x57, 0x05,
- 0x00, 0x00, 0x49, 0x04, 0x88,
- 0x00, 0x00, 0x5b, 0x67, 0x47,
- 0x00, 0x00, 0x44, 0x0e, 0xc6,
- 0x00, 0x00, 0x56, 0x9f, 0x46,
- 0x00, 0x00, 0x51, 0x4c, 0x88,
- 0x00, 0x00, 0x4b, 0x1a, 0xc7,
- 0x00, 0x00, 0x4a, 0xb7, 0x85,
- 0x00, 0x00, 0x4a, 0xbf, 0x48,
- 0x00, 0x00, 0x5e, 0x09, 0x88,
- 0x00, 0x00, 0x53, 0x47, 0x08,
- 0x00, 0x00, 0x44, 0x06, 0x45,
- 0x00, 0x00, 0x4d, 0x6e, 0x46,
- 0x00, 0x00, 0x47, 0xef, 0xc9,
- 0x00, 0x00, 0x5a, 0xf6, 0x44,
- 0x00, 0x00, 0x51, 0x26, 0x0b,
- 0x00, 0x00, 0x41, 0xd6, 0x4b,
- 0x00, 0x00, 0x42, 0xa5, 0x89,
- 0x00, 0x00, 0x41, 0xda, 0x03,
- 0x00, 0x00, 0x46, 0x4a, 0xc5,
- 0x00, 0x00, 0x52, 0x0a, 0x46,
- 0x00, 0x00, 0x44, 0x4f, 0xc8,
- 0x00, 0x00, 0x4b, 0x93, 0x44,
- 0x00, 0x00, 0x50, 0xef, 0xc6,
- 0x00, 0x00, 0x4b, 0x5e, 0x89,
- 0x00, 0x00, 0x57, 0xcb, 0xc5,
- 0x00, 0x00, 0x42, 0x31, 0x86,
- 0x00, 0x00, 0x5d, 0x81, 0x46,
- 0x00, 0x00, 0x40, 0xcc, 0x04,
- 0x00, 0x00, 0x4f, 0xae, 0x0a,
- 0x00, 0x00, 0x47, 0x5d, 0x08,
- 0x00, 0x00, 0x44, 0x67, 0x46,
- 0x00, 0x00, 0x57, 0x93, 0x05,
- 0x00, 0x00, 0x40, 0x55, 0xc7,
- 0x00, 0x00, 0x54, 0x0f, 0xc7,
- 0x00, 0x00, 0x47, 0x30, 0x44,
- 0x00, 0x00, 0x41, 0xd8, 0x87,
- 0x00, 0x00, 0x44, 0x56, 0x44,
- 0x00, 0x00, 0x44, 0x56, 0x46,
- 0x00, 0x00, 0x40, 0x65, 0x03,
- 0x00, 0x00, 0x47, 0x9a, 0x45,
- 0x00, 0x00, 0x4c, 0x39, 0x05,
- 0x00, 0x00, 0x41, 0x4b, 0x08,
- 0x00, 0x00, 0x49, 0x09, 0x85,
- 0x00, 0x00, 0x48, 0xa0, 0x09,
- 0x00, 0x00, 0x4b, 0x3f, 0xc7,
- 0x00, 0x00, 0x55, 0x38, 0x0b,
- 0x00, 0x00, 0x4b, 0x3f, 0xcc,
- 0x00, 0x00, 0x4b, 0x45, 0xca,
- 0x00, 0x00, 0x55, 0x55, 0x07,
- 0x00, 0x00, 0x41, 0x0c, 0xc3,
- 0x00, 0x00, 0x48, 0x88, 0x08,
- 0x00, 0x00, 0x50, 0x7c, 0x05,
- 0x00, 0x00, 0x43, 0x8d, 0xc5,
- 0x00, 0x00, 0x55, 0xcb, 0x84,
- 0x00, 0x00, 0x4d, 0xad, 0x86,
- 0x00, 0x00, 0x47, 0x64, 0xc6,
- 0x00, 0x00, 0x4d, 0x18, 0x07,
- 0x00, 0x00, 0x46, 0x15, 0x8b,
- 0x00, 0x00, 0x41, 0x1c, 0xc4,
- 0x00, 0x00, 0x41, 0x0f, 0x84,
- 0x00, 0x00, 0x4e, 0x0e, 0xc4,
- 0x00, 0x00, 0x4e, 0x5f, 0xc6,
- 0x00, 0x00, 0x40, 0xdf, 0x44,
- 0x00, 0x00, 0x4d, 0x16, 0x08,
- 0x00, 0x00, 0x55, 0xc9, 0x85,
- 0x00, 0x00, 0x48, 0xac, 0x85,
- 0x00, 0x00, 0x4a, 0xb3, 0xc7,
- 0x00, 0x00, 0x56, 0xa5, 0x09,
- 0x00, 0x00, 0x54, 0x0d, 0x85,
- 0x00, 0x00, 0x53, 0xca, 0x8a,
- 0x00, 0x00, 0x4b, 0x16, 0x09,
- 0x00, 0x00, 0x4a, 0x88, 0xca,
- 0x00, 0x00, 0x5e, 0xa6, 0xc9,
- 0x00, 0x00, 0x51, 0x88, 0x84,
- 0x00, 0x00, 0x57, 0xcf, 0x05,
- 0x00, 0x00, 0x57, 0x01, 0x48,
- 0x00, 0x00, 0x57, 0x06, 0xcb,
- 0x00, 0x00, 0x41, 0x37, 0x85,
- 0x00, 0x00, 0x4d, 0x9b, 0x46,
- 0x00, 0x00, 0x44, 0x70, 0x04,
- 0x00, 0x00, 0x48, 0xa5, 0x46,
- 0x00, 0x00, 0x4b, 0x63, 0x49,
- 0x00, 0x00, 0x5e, 0xe4, 0x87,
- 0x00, 0x00, 0x46, 0x9f, 0x08,
- 0x00, 0x00, 0x4b, 0x32, 0x06,
- 0x00, 0x00, 0x5c, 0xe7, 0x87,
- 0x00, 0x00, 0x49, 0x0e, 0x08,
- 0x00, 0x00, 0x57, 0xfd, 0x46,
- 0x00, 0x00, 0x40, 0x5f, 0x84,
- 0x00, 0x00, 0x46, 0x7e, 0xc7,
- 0x00, 0x00, 0x58, 0xa3, 0x45,
- 0x00, 0x00, 0x59, 0x9b, 0x87,
- 0x00, 0x00, 0x45, 0x64, 0x84,
- 0x00, 0x00, 0x4c, 0xdf, 0x06,
- 0x00, 0x00, 0x54, 0x11, 0x08,
- 0x00, 0x00, 0x4a, 0x75, 0x08,
- 0x00, 0x00, 0x50, 0x2d, 0x47,
- 0x00, 0x00, 0x54, 0xd3, 0xc8,
- 0x00, 0x00, 0x4a, 0x54, 0x85,
- 0x00, 0x00, 0x41, 0xd7, 0x84,
- 0x00, 0x00, 0x4f, 0xf9, 0x88,
- 0x00, 0x00, 0x51, 0x66, 0x84,
- 0x00, 0x00, 0x40, 0xca, 0xc5,
- 0x00, 0x00, 0x5a, 0x0f, 0x44,
- 0x00, 0x00, 0x5f, 0x33, 0x47,
- 0x00, 0x00, 0x49, 0x85, 0x87,
- 0x00, 0x00, 0x49, 0x14, 0xc8,
- 0x00, 0x00, 0x43, 0x57, 0x46,
- 0x00, 0x00, 0x49, 0x09, 0x05,
- 0x00, 0x00, 0x48, 0x9e, 0x08,
- 0x00, 0x00, 0x44, 0xd8, 0xc8,
- 0x00, 0x00, 0x4b, 0x0e, 0xc9,
- 0x00, 0x00, 0x41, 0xd9, 0x46,
- 0x00, 0x00, 0x43, 0x12, 0x08,
- 0x00, 0x00, 0x40, 0x85, 0x0a,
- 0x00, 0x00, 0x4f, 0xc3, 0x08,
- 0x00, 0x00, 0x51, 0xdb, 0x05,
- 0x00, 0x00, 0x45, 0x69, 0x06,
- 0x00, 0x00, 0x4b, 0x14, 0xc8,
- 0x00, 0x00, 0x4a, 0x57, 0x0a,
- 0x00, 0x00, 0x55, 0xec, 0x47,
- 0x00, 0x00, 0x49, 0x54, 0x85,
- 0x00, 0x00, 0x4a, 0x1d, 0x88,
- 0x00, 0x00, 0x4b, 0x8e, 0x44,
- 0x00, 0x00, 0x45, 0xbd, 0xc6,
- 0x00, 0x00, 0x4d, 0x9f, 0x48,
- 0x00, 0x00, 0x5c, 0xbf, 0x86,
- 0x00, 0x00, 0x5c, 0x38, 0xc8,
- 0x00, 0x00, 0x4a, 0x67, 0x47,
- 0x00, 0x00, 0x5c, 0xb0, 0xc6,
- 0x00, 0x00, 0x4c, 0xe7, 0xc4,
- 0x00, 0x00, 0x56, 0x0e, 0x47,
- 0x00, 0x00, 0x4c, 0xa2, 0x44,
- 0x00, 0x00, 0x4b, 0x63, 0x07,
- 0x00, 0x00, 0x55, 0x3e, 0x8d,
- 0x00, 0x00, 0x42, 0xa6, 0x05,
- 0x00, 0x00, 0x4c, 0x68, 0x4b,
- 0x00, 0x00, 0x49, 0x81, 0x86,
- 0x00, 0x00, 0x45, 0xe0, 0x08,
- 0x00, 0x00, 0x4d, 0x27, 0x84,
- 0x00, 0x00, 0x43, 0x19, 0xc6,
- 0x00, 0x00, 0x48, 0xd4, 0x86,
- 0x00, 0x00, 0x5d, 0xca, 0xc7,
- 0x00, 0x00, 0x4a, 0x6b, 0x8d,
- 0x00, 0x00, 0x50, 0xd4, 0x87,
- 0x00, 0x00, 0x4c, 0xaf, 0xc8,
- 0x00, 0x00, 0x5a, 0xde, 0xc5,
- 0x00, 0x00, 0x50, 0x2f, 0xc8,
- 0x00, 0x00, 0x4e, 0x11, 0xc6,
- 0x00, 0x00, 0x4a, 0x55, 0x08,
- 0x00, 0x00, 0x43, 0x07, 0x46,
- 0x00, 0x00, 0x5d, 0xc0, 0x07,
- 0x00, 0x00, 0x55, 0x57, 0xc9,
- 0x00, 0x00, 0x55, 0xe9, 0x47,
- 0x00, 0x00, 0x49, 0x71, 0x48,
- 0x00, 0x00, 0x45, 0x47, 0xc5,
- 0x00, 0x00, 0x42, 0xbd, 0x48,
- 0x00, 0x00, 0x42, 0xaf, 0x85,
- 0x00, 0x00, 0x55, 0xf5, 0x85,
- 0x00, 0x00, 0x57, 0x33, 0x85,
- 0x00, 0x00, 0x41, 0x36, 0xc3,
- 0x00, 0x00, 0x41, 0xd5, 0x84,
- 0x00, 0x00, 0x4a, 0x1f, 0x85,
- 0x00, 0x00, 0x44, 0xc8, 0x49,
- 0x00, 0x00, 0x57, 0xb0, 0x06,
- 0x00, 0x00, 0x51, 0xae, 0x08,
- 0x00, 0x00, 0x5d, 0x83, 0xc5,
- 0x00, 0x00, 0x4c, 0xc6, 0x07,
- 0x00, 0x00, 0x57, 0x1f, 0xca,
- 0x00, 0x00, 0x42, 0x30, 0xc9,
- 0x00, 0x00, 0x4f, 0x3e, 0x0a,
- 0x00, 0x00, 0x4e, 0xc2, 0xc8,
- 0x00, 0x00, 0x41, 0xfb, 0x4c,
- 0x00, 0x00, 0x49, 0x21, 0x4d,
- 0x00, 0x00, 0x5e, 0x77, 0x03,
- 0x00, 0x00, 0x5c, 0x37, 0xc8,
- 0x00, 0x00, 0x41, 0x25, 0x45,
- 0x00, 0x00, 0x5c, 0x96, 0x86,
- 0x00, 0x00, 0x41, 0x0b, 0x86,
- 0x00, 0x00, 0x56, 0x03, 0x85,
- 0x00, 0x00, 0x5b, 0xf2, 0x09,
- 0x00, 0x00, 0x4f, 0xfd, 0x45,
- 0x00, 0x00, 0x48, 0x9e, 0x08,
- 0x00, 0x00, 0x46, 0x61, 0x86,
- 0x00, 0x00, 0x57, 0x45, 0x06,
- 0x00, 0x00, 0x4b, 0x23, 0xc9,
- 0x00, 0x00, 0x46, 0x88, 0x87,
- 0x00, 0x00, 0x49, 0xf4, 0xc6,
- 0x00, 0x00, 0x57, 0x1f, 0x48,
- 0x00, 0x00, 0x4f, 0xb3, 0x08,
- 0x00, 0x00, 0x4f, 0x66, 0xc7,
- 0x00, 0x00, 0x4e, 0x65, 0x0e,
- 0x00, 0x00, 0x4e, 0x14, 0x05,
- 0x00, 0x00, 0x45, 0xd0, 0x05,
- 0x00, 0x00, 0x5c, 0xbe, 0x88,
- 0x00, 0x00, 0x56, 0xad, 0x87,
- 0x00, 0x00, 0x40, 0x84, 0xc2,
- 0x00, 0x00, 0x4d, 0x75, 0xc4,
- 0x00, 0x00, 0x47, 0xfc, 0x8a,
- 0x00, 0x00, 0x55, 0x3c, 0x08,
- 0x00, 0x00, 0x57, 0x3b, 0x86,
- 0x00, 0x00, 0x4a, 0x93, 0x48,
- 0x00, 0x00, 0x41, 0x3a, 0x46,
- 0x00, 0x00, 0x5d, 0xa3, 0x88,
- 0x00, 0x00, 0x4c, 0x0f, 0x08,
- 0x00, 0x00, 0x55, 0xf5, 0x44,
- 0x00, 0x00, 0x4c, 0xca, 0x05,
- 0x00, 0x00, 0xda, 0x24, 0xc4,
- 0x00, 0x00, 0xda, 0x24, 0xc4,
- 0x00, 0x00, 0xda, 0x24, 0xc4,
- 0x00, 0x00, 0x41, 0x49, 0xc3,
- 0x00, 0x00, 0x40, 0x30, 0xc6,
- 0x00, 0x00, 0x48, 0xa2, 0xc6,
- 0x00, 0x00, 0x4a, 0xd6, 0x4c,
- 0x00, 0x00, 0x40, 0x85, 0xc3,
- 0x00, 0x00, 0x45, 0x64, 0x86,
- 0x00, 0x00, 0x41, 0x36, 0x04,
- 0x00, 0x00, 0x44, 0xbf, 0xc8,
- 0x00, 0x00, 0x4b, 0x5c, 0xc5,
- 0x00, 0x00, 0x47, 0xfd, 0x86,
- 0x00, 0x00, 0x4b, 0x3b, 0xc8,
- 0x00, 0x00, 0x4e, 0xd6, 0x46,
- 0x00, 0x00, 0x44, 0x0e, 0x46,
- 0x00, 0x00, 0x5d, 0x7f, 0x48,
- 0x00, 0x00, 0x5c, 0xd6, 0x47,
- 0x00, 0x00, 0x42, 0x73, 0x89,
- 0x00, 0x00, 0x51, 0x02, 0x4a,
- 0x00, 0x00, 0x40, 0xc6, 0x84,
- 0x00, 0x00, 0x44, 0x56, 0x85,
- 0x00, 0x00, 0x51, 0x10, 0x05,
- 0x00, 0x00, 0x47, 0x5e, 0xc6,
- 0x00, 0x00, 0x43, 0x39, 0x46,
- 0x00, 0x00, 0x56, 0xa9, 0x46,
- 0x00, 0x00, 0x58, 0x6d, 0xc6,
- 0x00, 0x00, 0x42, 0x74, 0xc4,
- 0x00, 0x00, 0x42, 0x74, 0xcb,
- 0x00, 0x00, 0x44, 0x54, 0x44,
- 0x00, 0x00, 0x40, 0x56, 0x45,
- 0x00, 0x00, 0x4c, 0x04, 0x45,
- 0x00, 0x00, 0x46, 0x18, 0x46,
- 0x00, 0x00, 0x40, 0xd4, 0x08,
- 0x00, 0x00, 0x49, 0x20, 0x07,
- 0x00, 0x00, 0x5d, 0x5f, 0x84,
- 0x00, 0x00, 0x46, 0xc2, 0xc3,
- 0x00, 0x00, 0x4b, 0x89, 0x45,
- 0x00, 0x00, 0x52, 0xa8, 0x07,
- 0x00, 0x00, 0x49, 0x1f, 0x0b,
- 0x00, 0x00, 0x41, 0x4a, 0x07,
- 0x00, 0x00, 0x4b, 0x3a, 0xc8,
- 0x00, 0x00, 0x4d, 0x19, 0x47,
- 0x00, 0x00, 0x49, 0x10, 0x86,
- 0x00, 0x00, 0x46, 0x9a, 0xc8,
- 0x00, 0x00, 0x4d, 0x03, 0x0b,
- 0x00, 0x00, 0x58, 0x21, 0xc6,
- 0x00, 0x00, 0x40, 0x8c, 0x89,
- 0x00, 0x00, 0x4d, 0x04, 0x85,
- 0x00, 0x00, 0x52, 0xb3, 0xc3,
- 0x00, 0x00, 0x42, 0x31, 0x86,
- 0x00, 0x00, 0x4a, 0x66, 0x48,
- 0x00, 0x00, 0x40, 0x5f, 0xc3,
- 0x00, 0x00, 0x4c, 0xe0, 0x43,
- 0x00, 0x00, 0x49, 0x0e, 0x06,
- 0x00, 0x00, 0x41, 0x3a, 0x46,
- 0x00, 0x00, 0x57, 0xde, 0xca,
- 0x00, 0x00, 0x48, 0xb6, 0xc5,
- 0x00, 0x00, 0x48, 0xd2, 0xcb,
- 0x00, 0x00, 0x4a, 0xe4, 0x4b,
- 0x00, 0x00, 0x48, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x23, 0x03,
- 0x00, 0x00, 0x4c, 0x84, 0xc4,
- 0x00, 0x00, 0x57, 0x1e, 0x07,
- 0x00, 0x00, 0x48, 0x79, 0xc4,
- 0x00, 0x00, 0x44, 0xbf, 0xc4,
- 0x00, 0x00, 0x4c, 0xe2, 0x44,
- 0x00, 0x00, 0x4f, 0xc6, 0x08,
- 0x00, 0x00, 0x57, 0x92, 0x48,
- 0x00, 0x00, 0x59, 0xe5, 0xc9,
- 0x00, 0x00, 0x4e, 0x3f, 0x08,
- 0x00, 0x00, 0x5c, 0x56, 0x07,
- 0x00, 0x00, 0x42, 0x78, 0x06,
- 0x00, 0x00, 0x51, 0xaa, 0x4f,
- 0x00, 0x00, 0x4e, 0x15, 0x46,
- 0x00, 0x00, 0x4e, 0xbc, 0x44,
- 0x00, 0x00, 0x57, 0x90, 0x8a,
- 0x00, 0x00, 0x52, 0xa7, 0x07,
- 0x00, 0x00, 0x4c, 0xa3, 0x46,
- 0x00, 0x00, 0x49, 0xe7, 0x49,
- 0x00, 0x00, 0x59, 0xe5, 0x45,
- 0x00, 0x00, 0x47, 0x58, 0x45,
- 0x00, 0x00, 0x59, 0xe6, 0x86,
- 0x00, 0x00, 0x42, 0xbe, 0x83,
- 0x00, 0x00, 0x4b, 0x8e, 0x89,
- 0x00, 0x00, 0x41, 0x35, 0x06,
- 0x00, 0x00, 0x44, 0x1d, 0xc9,
- 0x00, 0x00, 0x59, 0x83, 0x46,
- 0x00, 0x00, 0x47, 0x9a, 0x45,
- 0x00, 0x00, 0x5c, 0xf6, 0xc5,
- 0x00, 0x00, 0x40, 0x2b, 0x83,
- 0x00, 0x00, 0x41, 0x39, 0x48,
- 0x00, 0x00, 0x53, 0x72, 0x07,
- 0x00, 0x00, 0x47, 0x31, 0x44,
- 0x00, 0x00, 0x44, 0xbe, 0x48,
- 0x00, 0x00, 0x43, 0xd2, 0x44,
- 0x00, 0x00, 0x51, 0x65, 0x86,
- 0x00, 0x00, 0x4d, 0x06, 0x06,
- 0x00, 0x00, 0x44, 0x27, 0x06,
- 0x00, 0x00, 0x5c, 0xd3, 0x09,
- 0x00, 0x00, 0x43, 0x8d, 0x45,
- 0x00, 0x00, 0x4a, 0xb8, 0x46,
- 0x00, 0x00, 0x45, 0x97, 0x09,
- 0x00, 0x00, 0x4e, 0x06, 0x86,
- 0x00, 0x00, 0x4f, 0xc6, 0x86,
- 0x00, 0x00, 0x5a, 0x99, 0xc6,
- 0x00, 0x00, 0x42, 0x30, 0x05,
- 0x00, 0x00, 0x5a, 0x0f, 0x46,
- 0x00, 0x00, 0x5d, 0xc0, 0x04,
- 0x00, 0x00, 0x5c, 0x94, 0x85,
- 0x00, 0x00, 0x44, 0x67, 0x44,
- 0x00, 0x00, 0x4c, 0xb8, 0x86,
- 0x00, 0x00, 0x5c, 0xa0, 0x04,
- 0x00, 0x00, 0x40, 0x56, 0x43,
- 0x00, 0x00, 0x49, 0x51, 0x05,
- 0x00, 0x00, 0x43, 0x6e, 0xc8,
- 0x00, 0x00, 0x50, 0xf6, 0x87,
- 0x00, 0x00, 0x4b, 0x93, 0xc9,
- 0x00, 0x00, 0x49, 0x53, 0x88,
- 0x00, 0x00, 0x4a, 0x82, 0x91,
- 0x00, 0x00, 0x5d, 0x81, 0xca,
- 0x00, 0x00, 0x50, 0x8d, 0x47,
- 0x00, 0x00, 0x4a, 0x68, 0x86,
- 0x00, 0x00, 0x41, 0x36, 0x04,
- 0x00, 0x00, 0x4d, 0x82, 0x88,
- 0x00, 0x00, 0x56, 0x5b, 0x08,
- 0x00, 0x00, 0x4a, 0x84, 0x4a,
- 0x00, 0x00, 0x4d, 0x5b, 0xcd,
- 0x00, 0x00, 0x41, 0x36, 0x86,
- 0x00, 0x00, 0x5d, 0x80, 0x46,
- 0x00, 0x00, 0x56, 0x0f, 0x06,
- 0x00, 0x00, 0x51, 0xfa, 0x07,
- 0x00, 0x00, 0x4c, 0xb0, 0x85,
- 0x00, 0x00, 0x50, 0x1a, 0x07,
- 0x00, 0x00, 0x44, 0xbf, 0x05,
- 0x00, 0x00, 0x4b, 0xff, 0xc4,
- 0x00, 0x00, 0x4e, 0xc0, 0xc6,
- 0x00, 0x00, 0x43, 0xcf, 0x87,
- 0x00, 0x00, 0x4b, 0x8b, 0x8d,
- 0x00, 0x00, 0x4b, 0x14, 0x07,
- 0x00, 0x00, 0x4f, 0x7a, 0x48,
- 0x00, 0x00, 0x48, 0xa1, 0x09,
- 0x00, 0x00, 0x42, 0x1d, 0x46,
- 0x00, 0x00, 0x46, 0x90, 0x45,
- 0x00, 0x00, 0x43, 0x8c, 0xc4,
- 0x00, 0x00, 0x44, 0x88, 0x86,
- 0x00, 0x00, 0x4f, 0xd2, 0x46,
- 0x00, 0x00, 0x55, 0x3d, 0x86,
- 0x00, 0x00, 0x4a, 0x9b, 0xc8,
- 0x00, 0x00, 0x41, 0x62, 0xc3,
- 0x00, 0x00, 0x42, 0xb1, 0x83,
- 0x00, 0x00, 0x53, 0x29, 0xc5,
- 0x00, 0x00, 0x49, 0x0a, 0x46,
- 0x00, 0x00, 0x4c, 0x0e, 0xc5,
- 0x00, 0x00, 0x4b, 0x34, 0x08,
- 0x00, 0x00, 0x4a, 0xce, 0x4a,
- 0x00, 0x00, 0x54, 0xcf, 0x84,
- 0x00, 0x00, 0x44, 0xbf, 0xc8,
- 0x00, 0x00, 0x4a, 0x64, 0xc8,
- 0x00, 0x00, 0x4b, 0xc5, 0x87,
- 0x00, 0x00, 0x42, 0xb0, 0xc9,
- 0x00, 0x00, 0x4d, 0x2f, 0xc8,
- 0x00, 0x00, 0x47, 0x61, 0x47,
- 0x00, 0x00, 0x4d, 0x6c, 0xc6,
- 0x00, 0x00, 0x5c, 0xbf, 0x8a,
- 0x00, 0x00, 0x44, 0x89, 0x08,
- 0x00, 0x00, 0x52, 0x9d, 0x09,
- 0x00, 0x00, 0x4a, 0xa0, 0x08,
- 0x00, 0x00, 0x41, 0x85, 0xc9,
- 0x00, 0x00, 0x58, 0x57, 0xc7,
- 0x00, 0x00, 0x5a, 0xdd, 0x45,
- 0x00, 0x00, 0x4f, 0xeb, 0x86,
- 0x00, 0x00, 0x4b, 0xeb, 0xc8,
- 0x00, 0x00, 0x42, 0x44, 0x88,
- 0x00, 0x00, 0x4a, 0xc5, 0xc8,
- 0x00, 0x00, 0x50, 0x8f, 0x08,
- 0x00, 0x00, 0x40, 0x56, 0x45,
- 0x00, 0x00, 0x42, 0x9f, 0x04,
- 0x00, 0x00, 0x43, 0x48, 0x88,
- 0x00, 0x00, 0x44, 0x6d, 0x84,
- 0x00, 0x00, 0x5e, 0xa4, 0xc4,
- 0x00, 0x00, 0x47, 0x9a, 0x45,
- 0x00, 0x00, 0x4a, 0x0a, 0x87,
- 0x00, 0x00, 0x56, 0xa2, 0xc9,
- 0x00, 0x00, 0x5d, 0xc8, 0xc7,
- 0x00, 0x00, 0x40, 0x85, 0x45,
- 0x00, 0x00, 0x48, 0x77, 0xc6,
- 0x00, 0x00, 0x57, 0x8b, 0x06,
- 0x00, 0x00, 0x40, 0x88, 0x04,
- 0x00, 0x00, 0x4b, 0x2d, 0x46,
- 0x00, 0x00, 0x48, 0xd7, 0x04,
- 0x00, 0x00, 0x48, 0xd0, 0x06,
- 0x00, 0x00, 0x56, 0xa0, 0x86,
- 0x00, 0x00, 0x40, 0xb0, 0x46,
- 0x00, 0x00, 0x5d, 0x72, 0x85,
- 0x00, 0x00, 0x4b, 0x32, 0xc7,
- 0x00, 0x00, 0x41, 0x0c, 0xc3,
- 0x00, 0x00, 0x52, 0xc1, 0x09,
- 0x00, 0x00, 0x51, 0x4a, 0x88,
- 0x00, 0x00, 0x44, 0xbe, 0x44,
- 0x00, 0x00, 0x47, 0x5f, 0xcd,
- 0x00, 0x00, 0x4a, 0x76, 0x08,
- 0x00, 0x00, 0x4f, 0xa6, 0x88,
- 0x00, 0x00, 0x52, 0x9c, 0x86,
- 0x00, 0x00, 0x55, 0x58, 0xc9,
- 0x00, 0x00, 0x42, 0x30, 0xc9,
- 0x00, 0x00, 0x52, 0xbb, 0x85,
- 0x00, 0x00, 0x4a, 0xcf, 0x4a,
- 0x00, 0x00, 0x4b, 0x07, 0xca,
- 0x00, 0x00, 0x57, 0xd1, 0xcc,
- 0x00, 0x00, 0x57, 0xd3, 0x46,
- 0x00, 0x00, 0x48, 0x84, 0x06,
- 0x00, 0x00, 0x4e, 0x1b, 0x46,
- 0x00, 0x00, 0x58, 0x9a, 0x49,
- 0x00, 0x00, 0x5c, 0x98, 0xc6,
- 0x00, 0x00, 0x4b, 0x1b, 0x06,
- 0x00, 0x00, 0x4f, 0xfe, 0x06,
- 0x00, 0x00, 0x55, 0x39, 0xc8,
- 0x00, 0x00, 0x44, 0xd6, 0xc6,
- 0x00, 0x00, 0x4e, 0xb2, 0x4b,
- 0x00, 0x00, 0x4a, 0x0c, 0x05,
- 0x00, 0x00, 0x48, 0xac, 0x85,
- 0x00, 0x00, 0x48, 0x8f, 0x05,
- 0x00, 0x00, 0x5b, 0xd9, 0xc6,
- 0x00, 0x00, 0x41, 0xd7, 0xc3,
- 0x00, 0x00, 0x44, 0x26, 0x86,
- 0x00, 0x00, 0x4b, 0x13, 0x87,
- 0x00, 0x00, 0x4d, 0x81, 0x45,
- 0x00, 0x00, 0x5b, 0x98, 0x05,
- 0x00, 0x00, 0x55, 0x6e, 0x85,
- 0x00, 0x00, 0x4c, 0x6c, 0x06,
- 0x00, 0x00, 0x4b, 0x61, 0x04,
- 0x00, 0x00, 0x51, 0xde, 0x86,
- 0x00, 0x00, 0x4a, 0x3a, 0x89,
- 0x00, 0x00, 0x5b, 0xd8, 0x4c,
- 0x00, 0x00, 0x4b, 0xfd, 0x08,
- 0x00, 0x00, 0x43, 0x83, 0x44,
- 0x00, 0x00, 0x59, 0xeb, 0xc6,
- 0x00, 0x00, 0x49, 0x82, 0x86,
- 0x00, 0x00, 0x4a, 0x66, 0x48,
- 0x00, 0x00, 0x50, 0x11, 0xc8,
- 0x00, 0x00, 0x5b, 0xd7, 0x49,
- 0x00, 0x00, 0x40, 0x55, 0xc7,
- 0x00, 0x00, 0x55, 0xf8, 0x09,
- 0x00, 0x00, 0x48, 0x1a, 0x86,
- 0x00, 0x00, 0x42, 0xd4, 0xc4,
- 0x00, 0x00, 0x55, 0xd9, 0x04,
- 0x00, 0x00, 0x49, 0x07, 0x44,
- 0x00, 0x00, 0x49, 0x0e, 0x08,
- 0x00, 0x00, 0x56, 0xa1, 0x0a,
- 0x00, 0x00, 0x54, 0x0d, 0x06,
- 0x00, 0x00, 0x56, 0xf7, 0x07,
- 0x00, 0x00, 0x59, 0x9e, 0x07,
- 0x00, 0x00, 0x44, 0x03, 0x05,
- 0x00, 0x00, 0x4b, 0x6c, 0x44,
- 0x00, 0x00, 0x49, 0xc3, 0xc6,
- 0x00, 0x00, 0x4c, 0xb0, 0xc6,
- 0x00, 0x00, 0x42, 0x14, 0x83,
- 0x00, 0x00, 0x51, 0x48, 0xc7,
- 0x00, 0x00, 0x40, 0xe1, 0x08,
- 0x00, 0x00, 0x4b, 0x61, 0x8a,
- 0x00, 0x00, 0x50, 0x13, 0x48,
- 0x00, 0x00, 0x41, 0x48, 0x88,
- 0x00, 0x00, 0x5c, 0xa0, 0x45,
- 0x00, 0x00, 0x42, 0xd2, 0x85,
- 0x00, 0x00, 0x43, 0xa6, 0x45,
- 0x00, 0x00, 0x44, 0x06, 0xc6,
- 0x00, 0x00, 0x44, 0x5b, 0xc6,
- 0x00, 0x00, 0x41, 0x4c, 0x45,
- 0x00, 0x00, 0x5d, 0x5e, 0x09,
- 0x00, 0x00, 0x4b, 0x6a, 0x4c,
- 0x00, 0x00, 0x56, 0x09, 0x87,
- 0x00, 0x00, 0x4a, 0x84, 0xc8,
- 0x00, 0x00, 0x49, 0xfb, 0xc5,
- 0x00, 0x00, 0xda, 0x24, 0xc4,
- 0x00, 0x00, 0x42, 0x7d, 0x44,
- 0x00, 0x00, 0x46, 0x34, 0x44,
- 0x00, 0x00, 0x40, 0xfb, 0x46,
- 0x00, 0x00, 0x4a, 0xf3, 0x4e,
- 0x00, 0x00, 0x47, 0x58, 0xc7,
- 0x00, 0x00, 0x51, 0xfc, 0x05,
- 0x00, 0x00, 0x5a, 0xf5, 0xcc,
- 0x00, 0x00, 0x4b, 0xc4, 0x47,
- 0x00, 0x00, 0x43, 0xcf, 0x07,
- 0x00, 0x00, 0x43, 0xfc, 0x89,
- 0x00, 0x00, 0x40, 0xec, 0xc9,
- 0x00, 0x00, 0x49, 0x54, 0x85,
- 0x00, 0x00, 0x51, 0x4a, 0x88,
- 0x00, 0x00, 0x47, 0xef, 0xc9,
- 0x00, 0x00, 0x53, 0x45, 0xc5,
- 0x00, 0x00, 0x4d, 0x80, 0x88,
- 0x00, 0x00, 0x4c, 0x8f, 0x46,
- 0x00, 0x00, 0x4f, 0xfc, 0x06,
- 0x00, 0x00, 0x45, 0x2d, 0x84,
- 0x00, 0x00, 0x52, 0x53, 0x08,
- 0x00, 0x00, 0x44, 0xb9, 0x83,
- 0x00, 0x00, 0x40, 0x2c, 0xc4,
- 0x00, 0x00, 0x4b, 0x89, 0xc5,
- 0x00, 0x00, 0x59, 0x77, 0x47,
- 0x00, 0x00, 0x42, 0xb5, 0xc5,
- 0x00, 0x00, 0x40, 0x83, 0xc9,
- 0x00, 0x00, 0x49, 0xc7, 0x4d,
- 0x00, 0x00, 0x4a, 0xa8, 0x06,
- 0x00, 0x00, 0x5f, 0x2e, 0x04,
- 0x00, 0x00, 0x59, 0xb6, 0x88,
- 0x00, 0x00, 0x41, 0xff, 0x0a,
- 0x00, 0x00, 0x40, 0x58, 0x87,
- 0x00, 0x00, 0x45, 0x73, 0x85,
- 0x00, 0x00, 0x40, 0x2d, 0x03,
- 0x00, 0x00, 0x4a, 0xe6, 0x0e,
- 0x00, 0x00, 0x41, 0x3a, 0x4c,
- 0x00, 0x00, 0x51, 0x8f, 0xc7,
- 0x00, 0x00, 0x4a, 0xf5, 0x07,
- 0x00, 0x9e, 0x59, 0xdb, 0x87,
- 0x00, 0x00, 0x02, 0xa4, 0x46,
- 0x00, 0x00, 0x01, 0xec, 0xc4,
- 0x00, 0x00, 0x40, 0xb9, 0x03,
- 0x00, 0x00, 0x5c, 0x99, 0x05,
- 0x00, 0x00, 0x46, 0x34, 0x45,
- 0x00, 0x00, 0x4a, 0x97, 0x08,
- 0x00, 0x00, 0x4a, 0x63, 0x09,
- 0x00, 0x00, 0x43, 0x82, 0x46,
- 0x00, 0x00, 0x48, 0x79, 0xc4,
- 0x00, 0x00, 0x50, 0x8c, 0x86,
- 0x00, 0x00, 0x44, 0x45, 0xcb,
- 0x00, 0x00, 0x56, 0xd1, 0x8c,
- 0x00, 0x00, 0x45, 0x5a, 0x87,
- 0x00, 0x00, 0x4e, 0xb6, 0xc5,
- 0x00, 0x00, 0x5e, 0x08, 0x88,
- 0x00, 0x00, 0x4f, 0x64, 0x85,
- 0x00, 0x00, 0x57, 0x90, 0x87,
- 0x00, 0x00, 0x4e, 0x99, 0x07,
- 0x00, 0x00, 0x44, 0xb9, 0x85,
- 0x00, 0x00, 0x41, 0xd7, 0xc3,
- 0x00, 0x00, 0x52, 0xbc, 0x44,
- 0x00, 0x00, 0x47, 0xd7, 0x85,
- 0x00, 0x00, 0x40, 0x31, 0x85,
- 0x00, 0x00, 0x40, 0x31, 0x86,
- 0x00, 0x00, 0x4a, 0x33, 0x48,
- 0x00, 0x00, 0x43, 0xcf, 0x87,
- 0x00, 0x00, 0x41, 0x0e, 0x86,
- 0x00, 0x00, 0x55, 0x41, 0x46,
- 0x00, 0x00, 0x57, 0x32, 0xc6,
- 0x00, 0x00, 0x4c, 0x95, 0xc9,
- 0x00, 0x00, 0x5d, 0x98, 0xc7,
- 0x00, 0x00, 0x45, 0xcc, 0x06,
- 0x00, 0x00, 0x56, 0xd3, 0x06,
- 0x00, 0x00, 0x5c, 0xc8, 0x06,
- 0x00, 0x00, 0x4b, 0xba, 0xc5,
- 0x00, 0x00, 0x41, 0x81, 0x46,
- 0x00, 0x00, 0x5a, 0xe7, 0x05,
- 0x00, 0x00, 0x54, 0xf4, 0x08,
- 0x00, 0x00, 0x4a, 0x03, 0x4b,
- 0x00, 0x00, 0x49, 0xbf, 0x46,
- 0x00, 0x00, 0x59, 0x9e, 0x44,
- 0x00, 0x00, 0x4f, 0xce, 0x09,
- 0x00, 0x00, 0x4b, 0x3f, 0xc4,
- 0x00, 0x00, 0x4c, 0x8e, 0xc8,
- 0x00, 0x00, 0x44, 0xa2, 0xc7,
- 0x00, 0x00, 0x49, 0x2b, 0xc4,
- 0x00, 0x00, 0x4d, 0x20, 0xc8,
- 0x00, 0x00, 0x4d, 0x8f, 0x04,
- 0x00, 0x00, 0x4b, 0xbb, 0x04,
- 0x00, 0x00, 0x48, 0x36, 0x85,
- 0x00, 0x00, 0x59, 0xb5, 0x86,
- 0x00, 0x00, 0x4f, 0xc5, 0x47,
- 0x00, 0x00, 0x40, 0x48, 0x03,
- 0x00, 0x00, 0x4a, 0xb6, 0x85,
- 0x00, 0x00, 0x52, 0x4c, 0x04,
- 0x00, 0x00, 0x45, 0xd0, 0x46,
- 0x00, 0x00, 0x4b, 0x60, 0xc8,
- 0x00, 0x00, 0x54, 0xd2, 0xc5,
- 0x00, 0x00, 0x4a, 0x00, 0x09,
- 0x00, 0x00, 0x55, 0x55, 0x05,
- 0x00, 0x00, 0x45, 0x64, 0x88,
- 0x00, 0x00, 0x43, 0x51, 0x07,
- 0x00, 0x00, 0x5b, 0x9b, 0x48,
- 0x00, 0x00, 0x4d, 0x13, 0x47,
- 0x00, 0x00, 0x59, 0x92, 0x89,
- 0x00, 0x00, 0x45, 0xb8, 0x06,
- 0x00, 0x00, 0x5e, 0xf5, 0xc6,
- 0x00, 0x00, 0x49, 0xd9, 0x84,
- 0x00, 0x00, 0x51, 0x25, 0x45,
- 0x00, 0x00, 0x56, 0x6d, 0x0c,
- 0x00, 0x00, 0x48, 0x8f, 0x07,
- 0x00, 0x00, 0x48, 0x94, 0x47,
- 0x00, 0x00, 0x43, 0x35, 0x88,
- 0x00, 0x00, 0x4a, 0xa8, 0x06,
- 0x00, 0x00, 0x4b, 0x12, 0xc4,
- 0x00, 0x00, 0x53, 0x87, 0x04,
- 0x00, 0x00, 0x44, 0xbb, 0x09,
- 0x00, 0x00, 0x4e, 0x1c, 0x46,
- 0x00, 0x00, 0x48, 0xf7, 0x07,
- 0x00, 0x00, 0x5b, 0xeb, 0x84,
- 0x00, 0x00, 0x4c, 0xd9, 0x06,
- 0x00, 0x00, 0x5d, 0x7a, 0x85,
- 0x00, 0x00, 0x4e, 0x97, 0x87,
- 0x00, 0x00, 0x4e, 0xb1, 0xc6,
- 0x00, 0x00, 0x46, 0x68, 0x09,
- 0x00, 0x00, 0x4e, 0x8d, 0xc7,
- 0x00, 0x00, 0x4a, 0x4a, 0x47,
- 0x00, 0x00, 0x4b, 0x22, 0x46,
- 0x00, 0x00, 0x4c, 0xd8, 0x45,
- 0x00, 0x00, 0x48, 0xea, 0xc8,
- 0x00, 0x00, 0x41, 0x33, 0x88,
- 0x00, 0x00, 0x57, 0x5c, 0x46,
- 0x00, 0x00, 0x54, 0xd3, 0x05,
- 0x00, 0x00, 0x58, 0xc1, 0xc6,
- 0x00, 0x00, 0x40, 0x60, 0x03,
- 0x00, 0x00, 0x4a, 0x95, 0x89,
- 0x00, 0x00, 0x56, 0xa6, 0xce,
- 0x00, 0x00, 0x4d, 0x10, 0x08,
- 0x00, 0x00, 0x43, 0xd3, 0x48,
- 0x00, 0x00, 0x57, 0x5a, 0x4b,
- 0x00, 0x00, 0x4a, 0x02, 0x46,
- 0x00, 0x00, 0x59, 0x9d, 0x04,
- 0x00, 0x00, 0x44, 0x0e, 0x44,
- 0x00, 0x00, 0x56, 0xa7, 0xca,
- 0x00, 0x00, 0x41, 0x58, 0x07,
- 0x00, 0x00, 0x45, 0x68, 0x45,
- 0x00, 0x00, 0x40, 0x8c, 0x89,
- 0x00, 0x00, 0x4d, 0x6c, 0x05,
- 0x00, 0x00, 0x5e, 0xa5, 0x07,
- 0x00, 0x00, 0x43, 0x65, 0x44,
- 0x00, 0x00, 0x49, 0xa5, 0x07,
- 0x00, 0x00, 0x4f, 0x6c, 0xc8,
- 0x00, 0x00, 0x4c, 0x58, 0x46,
- 0x00, 0x00, 0x4c, 0xe0, 0x89,
- 0x00, 0x00, 0x4d, 0x30, 0xca,
- 0x00, 0x00, 0x41, 0x57, 0x86,
- 0x00, 0x00, 0x4a, 0x71, 0x46,
- 0x00, 0x00, 0x4c, 0x03, 0xc5,
- 0x00, 0x00, 0x59, 0xff, 0x05,
- 0x00, 0x00, 0x5a, 0xf0, 0x87,
- 0x00, 0x00, 0x44, 0xb0, 0x08,
- 0x00, 0x00, 0x5d, 0x79, 0xc8,
- 0x00, 0x00, 0x55, 0xf5, 0x46,
- 0x00, 0x00, 0x5c, 0xf7, 0x45,
- 0x00, 0x00, 0x43, 0x36, 0xce,
- 0x00, 0x00, 0x43, 0x90, 0xc4,
- 0x00, 0x00, 0x4a, 0x96, 0x85,
- 0x00, 0x00, 0x48, 0x71, 0x49,
- 0x00, 0x00, 0x4e, 0x89, 0xc8,
- 0x00, 0x00, 0x49, 0x9d, 0x86,
- 0x00, 0x00, 0x4a, 0xba, 0x4c,
- 0x00, 0x00, 0x4a, 0xca, 0x50,
- 0x00, 0x00, 0x4a, 0xef, 0x8f,
- 0x00, 0x00, 0x4b, 0x18, 0x48,
- 0x00, 0x00, 0x55, 0x55, 0x07,
- 0x00, 0x00, 0x5d, 0x72, 0x85,
- 0x00, 0x00, 0x4a, 0x1f, 0x85,
- 0x00, 0x00, 0x4f, 0xc3, 0xc9,
- 0x00, 0x00, 0x4a, 0x1f, 0x89,
- 0x00, 0x00, 0x48, 0x7e, 0x86,
- 0x00, 0x00, 0x41, 0x38, 0x07,
- 0x00, 0x00, 0x5a, 0x10, 0x45,
- 0x00, 0x00, 0x44, 0x11, 0x49,
- 0x00, 0x00, 0x56, 0x27, 0x06,
- 0x00, 0x00, 0x5c, 0x97, 0x0d,
- 0x00, 0x00, 0x49, 0x06, 0x09,
- 0x00, 0x00, 0x44, 0xbf, 0xc4,
- 0x00, 0x00, 0x4d, 0x09, 0x08,
- 0x00, 0x00, 0x43, 0x49, 0x49,
- 0x00, 0x00, 0x54, 0x0e, 0xc6,
- 0x00, 0x00, 0x48, 0x8a, 0x05,
- 0x00, 0x00, 0x5e, 0xf5, 0xc6,
- 0x00, 0x00, 0x46, 0x9d, 0xc9,
- 0x00, 0x00, 0x5b, 0xea, 0x08,
- 0x00, 0x00, 0x40, 0x34, 0xc5,
- 0x00, 0x00, 0x40, 0x86, 0x04,
- 0x00, 0x00, 0x4a, 0xbc, 0x0b,
- 0x00, 0x00, 0x54, 0x0d, 0x85,
- 0x00, 0x00, 0x44, 0x50, 0x46,
- 0x00, 0x00, 0x45, 0x67, 0x86,
- 0x00, 0x00, 0x5a, 0x66, 0x46,
- 0x00, 0x00, 0x44, 0x09, 0x4b,
- 0x00, 0x00, 0x4a, 0x01, 0x09,
- 0x00, 0x00, 0x42, 0x1c, 0x85,
- 0x00, 0x00, 0x59, 0xa7, 0x47,
- 0x00, 0x00, 0x5d, 0x81, 0x46,
- 0x00, 0x00, 0x49, 0x18, 0x46,
- 0x00, 0x00, 0x46, 0x31, 0xc8,
- 0x00, 0x00, 0x40, 0xcc, 0x09,
- 0x00, 0x00, 0x4f, 0x78, 0x0c,
- 0x00, 0x00, 0x52, 0xa6, 0x08,
- 0x00, 0x00, 0x52, 0x39, 0xc6,
- 0x00, 0x00, 0x53, 0x97, 0x43,
- 0x00, 0x00, 0x42, 0xf5, 0x86,
- 0x00, 0x00, 0x50, 0x7b, 0x85,
- 0x00, 0x00, 0x48, 0xde, 0x08,
- 0x00, 0x00, 0x5c, 0xf1, 0x46,
- 0x00, 0x00, 0x43, 0x54, 0xc8,
- 0x00, 0x00, 0x48, 0x0f, 0x45,
- 0x00, 0x00, 0x43, 0x58, 0x05,
- 0x00, 0x00, 0x51, 0x5f, 0x48,
- 0x00, 0x00, 0x5c, 0x2f, 0x47,
- 0x00, 0x00, 0x41, 0x0a, 0xc7,
- 0x00, 0x00, 0x4d, 0x18, 0x07,
- 0x00, 0x00, 0x52, 0x49, 0x88,
- 0x00, 0x00, 0x55, 0x56, 0x48,
- 0x00, 0x00, 0x4c, 0xa7, 0x06,
- 0x00, 0x00, 0x4c, 0xb6, 0xc7,
- 0x00, 0x00, 0x5b, 0x5f, 0x87,
- 0x00, 0x00, 0x59, 0x90, 0x0a,
- 0x00, 0x00, 0x44, 0x5f, 0x03,
- 0x00, 0x00, 0x5b, 0xd9, 0xc6,
- 0x00, 0x00, 0x43, 0x36, 0x45,
- 0x00, 0x00, 0x45, 0x75, 0x04,
- 0x00, 0x00, 0x48, 0xa1, 0x09,
- 0x00, 0x00, 0x59, 0x92, 0x04,
- 0x00, 0x00, 0x4c, 0x58, 0x44,
- 0x00, 0x00, 0x4a, 0xd2, 0xc4,
- 0x00, 0x00, 0x4a, 0xf5, 0x0b,
- 0x00, 0x00, 0x53, 0x71, 0x47,
- 0x00, 0x00, 0x43, 0x39, 0x05,
- 0x00, 0x00, 0x4a, 0x51, 0x88,
- 0x00, 0x00, 0x48, 0x77, 0xc6,
- 0x00, 0x00, 0x48, 0x77, 0xc8,
- 0x00, 0x00, 0x48, 0xb6, 0x06,
- 0x00, 0x00, 0x49, 0xa9, 0x05,
- 0x00, 0x00, 0x49, 0xae, 0x45,
- 0x00, 0x00, 0x49, 0xd1, 0x06,
- 0x00, 0x00, 0x47, 0x2e, 0xc8,
- 0x00, 0x00, 0x49, 0xe6, 0x88,
- 0x00, 0x00, 0x48, 0xa2, 0xc6,
- 0x00, 0x00, 0x4a, 0x4f, 0xcf,
- 0x00, 0x00, 0x4a, 0x90, 0x50,
- 0x00, 0x00, 0x40, 0xab, 0x45,
- 0x00, 0x00, 0x41, 0x0c, 0xc3,
- 0x00, 0x00, 0x45, 0x83, 0xc5,
- 0x00, 0x00, 0x52, 0x50, 0x88,
- 0x00, 0x00, 0x4a, 0x1e, 0x89,
- 0x00, 0x00, 0x53, 0x47, 0x08,
- 0x00, 0x00, 0x41, 0x36, 0x08,
- 0x00, 0x00, 0x45, 0xee, 0x48,
- 0x00, 0x00, 0x53, 0x72, 0x07,
- 0x00, 0x00, 0x48, 0x74, 0x89,
- 0x00, 0x00, 0x43, 0x56, 0xc8,
- 0x00, 0x00, 0x49, 0xdd, 0x44,
- 0x00, 0x00, 0x4a, 0xd1, 0x48,
- 0x00, 0x00, 0x5a, 0xac, 0x49,
- 0x00, 0x00, 0x4c, 0xbc, 0x07,
- 0x00, 0x00, 0x4d, 0x32, 0xc4,
- 0x00, 0x00, 0x5d, 0xc9, 0x88,
- 0x00, 0x00, 0x4b, 0x30, 0x8a,
- 0x00, 0x00, 0x51, 0x6d, 0x06,
- 0x00, 0x00, 0x41, 0x36, 0x86,
- 0x00, 0x00, 0x41, 0xd8, 0x09,
- 0x00, 0x00, 0x4a, 0xcc, 0x87,
- 0x00, 0x00, 0x4e, 0x6b, 0x08,
- 0x00, 0x00, 0x43, 0x65, 0xc8,
- 0x00, 0x00, 0x49, 0x47, 0x48,
- 0x00, 0x00, 0x48, 0x45, 0x85,
- 0x00, 0x00, 0x5c, 0xba, 0xc5,
- 0x00, 0x00, 0x48, 0xac, 0x85,
- 0x00, 0x00, 0x46, 0x34, 0x05,
- 0x00, 0x00, 0x5b, 0xaa, 0x87,
- 0x00, 0x00, 0x41, 0xd7, 0xc5,
- 0x00, 0x00, 0x4d, 0x81, 0x45,
- 0x00, 0x00, 0x42, 0xed, 0x86,
- 0x00, 0x00, 0x53, 0x46, 0x47,
- 0x00, 0x00, 0x57, 0x06, 0x07,
- 0x00, 0x00, 0x4b, 0x33, 0x86,
- 0x00, 0x00, 0x4e, 0xc8, 0x05,
- 0x00, 0x00, 0x44, 0x50, 0x46,
- 0x00, 0x00, 0x48, 0x88, 0xc5,
- 0x00, 0x00, 0x4f, 0x75, 0x88,
- 0x00, 0x00, 0x58, 0x3f, 0x84,
- 0x00, 0x00, 0x4e, 0x07, 0x06,
- 0x00, 0x00, 0x59, 0x25, 0xc4,
- 0x00, 0x00, 0x5c, 0x27, 0xc8,
- 0x00, 0x00, 0x41, 0x90, 0x0a,
- 0x00, 0x00, 0x48, 0xa8, 0xcc,
- 0x00, 0x00, 0x4a, 0xdc, 0x05,
- 0x00, 0x00, 0x51, 0xfa, 0xc6,
- 0x00, 0x00, 0x4f, 0x79, 0xc6,
- 0x00, 0x00, 0x59, 0x26, 0xc6,
- 0x00, 0x00, 0x52, 0x3a, 0x44,
- 0x00, 0x00, 0x5e, 0xda, 0x05,
- 0x00, 0x00, 0x48, 0xae, 0xc7,
- 0x00, 0x00, 0x4a, 0xcd, 0x09,
- 0x00, 0x00, 0x4e, 0x63, 0xc7,
- 0x00, 0x00, 0xda, 0x24, 0xc4,
- 0x00, 0x00, 0xda, 0x24, 0xc4,
- 0x00, 0x00, 0x53, 0x6f, 0xc5,
- 0x00, 0x00, 0x4e, 0xa8, 0x44,
- 0x00, 0x00, 0x4a, 0xb2, 0x0a,
- 0x00, 0x00, 0x48, 0x76, 0x46,
- 0x00, 0x00, 0x51, 0x5e, 0xc4,
- 0x00, 0x00, 0x5a, 0xfb, 0xc5,
- 0x00, 0x00, 0x59, 0xcc, 0x85,
- 0x00, 0x00, 0x4c, 0xaf, 0xc4,
- 0x00, 0x00, 0x49, 0x20, 0xc7,
- 0x00, 0x00, 0x5d, 0x8d, 0x87,
- 0x00, 0x00, 0x4e, 0x5f, 0xc8,
- 0x00, 0x00, 0x58, 0xc2, 0xc8,
- 0x00, 0x00, 0x40, 0x34, 0xc9,
- 0x00, 0x00, 0x51, 0x66, 0x88,
- 0x00, 0x00, 0x49, 0x72, 0x8b,
- 0x00, 0x00, 0x47, 0x5e, 0xc4,
- 0x00, 0x00, 0x55, 0xf7, 0x45,
- 0x00, 0x00, 0x48, 0xec, 0xc5,
- 0x00, 0x00, 0x4d, 0x17, 0x89,
- 0x00, 0x00, 0x40, 0xcc, 0x09,
- 0x00, 0x00, 0x4f, 0xcd, 0x08,
- 0x00, 0x00, 0x44, 0x54, 0x48,
- 0x00, 0x00, 0x46, 0x18, 0x44,
- 0x00, 0x00, 0x49, 0x82, 0xc5,
- 0x00, 0x00, 0x40, 0xba, 0x83,
- 0x00, 0x00, 0x47, 0x5e, 0x85,
- 0x00, 0x00, 0x4a, 0xb8, 0xc6,
- 0x00, 0x00, 0x4a, 0x61, 0x4c,
- 0x00, 0x00, 0x41, 0x34, 0x06,
- 0x00, 0x00, 0x48, 0x89, 0x06,
- 0x00, 0x00, 0x49, 0xa0, 0x05,
- 0x00, 0x00, 0x4c, 0x6c, 0x88,
- 0x00, 0x00, 0x4e, 0x61, 0x46,
- 0x00, 0x00, 0x4a, 0x6a, 0x06,
- 0x00, 0x00, 0x41, 0x36, 0x86,
- 0x00, 0x00, 0x42, 0x2e, 0x4c,
- 0x00, 0x00, 0x48, 0x00, 0x44,
- 0x00, 0x00, 0x57, 0x34, 0x0a,
- 0x00, 0x00, 0x49, 0x9f, 0x48,
- 0x00, 0x00, 0x4a, 0x5f, 0x87,
- 0x00, 0x00, 0x52, 0x4b, 0x06,
- 0x00, 0x00, 0x43, 0x83, 0x07,
- 0x00, 0x00, 0x50, 0x88, 0x85,
- 0x00, 0x00, 0x57, 0xb1, 0x06,
- 0x00, 0x00, 0x56, 0x33, 0x86,
- 0x00, 0x00, 0x57, 0xae, 0xc7,
- 0x00, 0x00, 0x4d, 0x2d, 0xc4,
- 0x00, 0x00, 0x5f, 0x34, 0x45,
- 0x00, 0x00, 0x48, 0x71, 0x44,
- 0x00, 0x00, 0x4c, 0x00, 0x47,
- 0x00, 0x00, 0x48, 0x73, 0x88,
- 0x00, 0x00, 0x48, 0x82, 0x8a,
- 0x00, 0x00, 0x48, 0xfe, 0x87,
- 0x00, 0x00, 0x4b, 0xb6, 0xc7,
- 0x00, 0x00, 0x55, 0x54, 0x87,
- 0x00, 0x00, 0x4f, 0x65, 0xc9,
- 0x00, 0x00, 0x4a, 0x61, 0x4a,
- 0x00, 0x00, 0x42, 0x74, 0x83,
- 0x00, 0x00, 0x50, 0xf6, 0x45,
- 0x00, 0x00, 0x40, 0xb0, 0x83,
- 0x00, 0x00, 0x4c, 0xe2, 0x89,
- 0x00, 0x00, 0x58, 0x59, 0x08,
- 0x00, 0x00, 0x56, 0xf8, 0x47,
- 0x00, 0x00, 0x53, 0x48, 0x09,
- 0x00, 0x00, 0x41, 0x34, 0x86,
- 0x00, 0x00, 0x42, 0x14, 0xc8,
- 0x00, 0x00, 0x55, 0x25, 0x45,
- 0x00, 0x00, 0x44, 0xd9, 0xca,
- 0x00, 0x00, 0x4f, 0x70, 0x89,
- 0x00, 0x00, 0x47, 0xec, 0x09,
- 0x00, 0x00, 0x4d, 0xec, 0x07,
- 0x00, 0x00, 0x56, 0x5c, 0x09,
- 0x00, 0x00, 0x40, 0xaf, 0x48,
- 0x00, 0x00, 0x40, 0x5d, 0xc6,
- 0x00, 0x00, 0x51, 0xfc, 0x88,
- 0x00, 0x00, 0x5d, 0x61, 0x07,
- 0x00, 0x00, 0x42, 0x75, 0xc7,
- 0x00, 0x00, 0x4b, 0x16, 0x07,
- 0x00, 0x00, 0x4c, 0x8d, 0x48,
- 0x00, 0x00, 0x59, 0xde, 0xc6,
- 0x00, 0x00, 0x4b, 0x2e, 0x45,
- 0x00, 0x00, 0x48, 0xae, 0xc7,
- 0x00, 0x00, 0x4a, 0x6c, 0x48,
- 0x00, 0x00, 0x57, 0x32, 0x44,
- 0x00, 0x00, 0x5e, 0x57, 0x84,
- 0x00, 0x00, 0x49, 0xf3, 0xc7,
- 0x00, 0x00, 0x4c, 0x12, 0x87,
- 0x00, 0x00, 0x47, 0xee, 0x4a,
- 0x00, 0x00, 0x40, 0x5d, 0x46,
- 0x00, 0x00, 0x5e, 0xdd, 0x0a,
- 0x00, 0x00, 0x4d, 0x75, 0x07,
- 0x00, 0x00, 0x43, 0x8e, 0x87,
- 0x00, 0x00, 0x5f, 0x35, 0x04,
- 0x00, 0x00, 0x49, 0xb4, 0x84,
- 0x00, 0x00, 0x4e, 0x96, 0x86,
- 0x00, 0x00, 0x5d, 0x9e, 0x44,
- 0x00, 0x00, 0x5d, 0x9e, 0x4c,
- 0x00, 0x00, 0x51, 0x5e, 0x05,
- 0x00, 0x00, 0x40, 0xca, 0x49,
- 0x00, 0x00, 0x45, 0x66, 0x04,
- 0x00, 0x00, 0x4c, 0xb0, 0x85,
- 0x00, 0x00, 0x41, 0xfe, 0x88,
- 0x00, 0x00, 0x49, 0xe7, 0x45,
- 0x00, 0x00, 0x53, 0xca, 0x86,
- 0x00, 0x00, 0x4a, 0x20, 0xc4,
- 0x00, 0x00, 0x4a, 0x41, 0x4a,
- 0x00, 0x00, 0x4d, 0xe4, 0x06,
- 0x00, 0x00, 0x4b, 0xbb, 0x8a,
- 0x00, 0x00, 0x40, 0xc6, 0x47,
- 0x00, 0x00, 0x49, 0xfa, 0x45,
- 0x00, 0x00, 0x42, 0xbe, 0x85,
- 0x00, 0x00, 0x44, 0x03, 0x4a,
- 0x00, 0x00, 0x44, 0xba, 0x45,
- 0x00, 0x00, 0x4b, 0x0f, 0x86,
- 0x00, 0x00, 0x44, 0x6d, 0x84,
- 0x00, 0x00, 0x4c, 0x86, 0x46,
- 0x00, 0x00, 0x5a, 0xf1, 0x45,
- 0x00, 0x00, 0x5c, 0xf2, 0x06,
- 0x00, 0x00, 0x50, 0x2d, 0x4c,
- 0x00, 0x00, 0x53, 0xe9, 0x0a,
- 0x00, 0x00, 0x4b, 0x08, 0xc4,
- 0x00, 0x00, 0x42, 0x78, 0x06,
- 0x00, 0x00, 0x4a, 0xcc, 0x87,
- 0x00, 0x00, 0x4e, 0xb1, 0x44,
- 0x00, 0x00, 0x55, 0x39, 0xc8,
- 0x00, 0x00, 0x4d, 0x9a, 0x46,
- 0x00, 0x00, 0x59, 0x9c, 0x89,
- 0x00, 0x00, 0x57, 0xa6, 0x89,
- 0x00, 0x00, 0x4b, 0xdf, 0x89,
- 0x00, 0x00, 0x51, 0x27, 0xc6,
- 0x00, 0x00, 0x5d, 0x62, 0x06,
- 0x00, 0x00, 0x51, 0xfd, 0xc7,
- 0x00, 0x00, 0x5d, 0x5d, 0x48,
- 0x00, 0x00, 0x5d, 0x60, 0x09,
- 0x00, 0x00, 0x53, 0x71, 0x47,
- 0x00, 0x00, 0x4a, 0x53, 0x06,
- 0x00, 0x00, 0x5c, 0xe8, 0x07,
- 0x00, 0x00, 0x56, 0x0d, 0xc5,
- 0x00, 0x00, 0x43, 0x90, 0xc4,
- 0x00, 0x00, 0x51, 0xf9, 0x87,
- 0x00, 0x00, 0x5b, 0x61, 0x45,
- 0x00, 0x00, 0x49, 0x64, 0x45,
- 0x00, 0x00, 0x58, 0xd1, 0x87,
- 0x00, 0x00, 0x44, 0xb8, 0x48,
- 0x00, 0x00, 0x5e, 0x08, 0x06,
- 0x00, 0x00, 0x4a, 0x7a, 0xcd,
- 0x00, 0x00, 0x4a, 0x99, 0x0f,
- 0x00, 0x00, 0x4a, 0xe4, 0x4d,
- 0x00, 0x00, 0x40, 0x85, 0x84,
- 0x00, 0x00, 0x43, 0x6f, 0xc6,
- 0x00, 0x00, 0x4e, 0xdf, 0x08,
- 0x00, 0x00, 0x4f, 0xfd, 0xc5,
- 0x00, 0x00, 0x44, 0x08, 0x08,
- 0x00, 0x00, 0x48, 0xbd, 0x8a,
- 0x00, 0x00, 0x44, 0xbf, 0xc4,
- 0x00, 0x00, 0x4c, 0xb9, 0x86,
- 0x00, 0x00, 0x4d, 0x4e, 0x47,
- 0x00, 0x00, 0x41, 0x1c, 0xc7,
- 0x00, 0x00, 0x5c, 0xd7, 0x09,
- 0x00, 0x00, 0x51, 0xfc, 0x45,
- 0x00, 0x00, 0x4c, 0xaf, 0xc4,
- 0x00, 0x00, 0x4c, 0xc9, 0x4a,
- 0x00, 0x00, 0x4d, 0x2b, 0x89,
- 0x00, 0x00, 0x56, 0x5d, 0x07,
- 0x00, 0x00, 0x56, 0x45, 0xc6,
- 0x00, 0x00, 0x54, 0x0e, 0xc6,
- 0x00, 0x00, 0x49, 0x82, 0x06,
- 0x00, 0x00, 0x46, 0x7f, 0x86,
- 0x00, 0x00, 0x56, 0x57, 0xcf,
- 0x00, 0x00, 0x4e, 0xdd, 0xc9,
- 0x00, 0x00, 0x44, 0xd6, 0xc6,
- 0x00, 0x00, 0x46, 0x8c, 0x86,
- 0x00, 0x00, 0x5d, 0xb2, 0x09,
- 0x00, 0x00, 0x4c, 0xb7, 0xc7,
- 0x00, 0x00, 0x40, 0x0e, 0x83,
- 0x00, 0x00, 0x42, 0x2f, 0xc6,
- 0x00, 0x00, 0x41, 0x3b, 0x83,
- 0x00, 0x00, 0x56, 0x02, 0x48,
- 0x00, 0x00, 0x47, 0xd5, 0x07,
- 0x00, 0x00, 0x4b, 0x1a, 0x49,
- 0x00, 0x00, 0x4b, 0x39, 0x48,
- 0x00, 0x00, 0x41, 0x0c, 0x08,
- 0x00, 0x00, 0x56, 0x0a, 0xc6,
- 0x00, 0x00, 0x42, 0x9e, 0xc9,
- 0x00, 0x00, 0x45, 0xe1, 0x85,
- 0x00, 0x00, 0x42, 0xf4, 0xc4,
- 0x00, 0x00, 0x4f, 0x6f, 0x47,
- 0x00, 0x00, 0x58, 0x9a, 0xc5,
- 0x00, 0x00, 0x40, 0x85, 0x84,
- 0x00, 0x00, 0x43, 0x39, 0xc8,
- 0x00, 0x00, 0x41, 0x5a, 0xc4,
- 0x00, 0x00, 0x4c, 0xb5, 0x07,
- 0x00, 0x00, 0x4d, 0xd8, 0x86,
- 0x00, 0x00, 0x47, 0x11, 0x05,
- 0x00, 0x00, 0x4a, 0xa0, 0x08,
- 0x00, 0x00, 0x54, 0x0d, 0x8b,
- 0x00, 0x00, 0x51, 0x4f, 0x87,
- 0x00, 0x00, 0x44, 0x05, 0xc6,
- 0x00, 0x00, 0x4e, 0x15, 0xc4,
- 0x00, 0x00, 0x5d, 0x33, 0xc6,
- 0x00, 0x00, 0x47, 0x9a, 0x45,
- 0x00, 0x00, 0x5b, 0x61, 0x45,
- 0x00, 0x00, 0x48, 0xe8, 0x49,
- 0x00, 0x00, 0x49, 0x1c, 0xc9,
- 0x00, 0x00, 0x42, 0x76, 0x04,
- 0x00, 0x00, 0x42, 0x76, 0x45,
- 0x00, 0x00, 0x41, 0x0c, 0xc5,
- 0x00, 0x00, 0x44, 0xd8, 0x46,
- 0x00, 0x00, 0x51, 0x4b, 0x88,
- 0x00, 0x00, 0x4d, 0x65, 0x46,
- 0x00, 0x00, 0x40, 0xdf, 0x4b,
- 0x00, 0x00, 0x4c, 0x49, 0x4a,
- 0x00, 0x00, 0x4d, 0x14, 0x45,
- 0x00, 0x00, 0x49, 0xae, 0xc6,
- 0x00, 0x00, 0x42, 0xf9, 0x85,
- 0x00, 0x00, 0x52, 0x68, 0x85,
- 0x00, 0x00, 0x44, 0x21, 0xc7,
- 0x00, 0x00, 0x5b, 0xdc, 0x48,
- 0x00, 0x00, 0x47, 0x2f, 0x44,
- 0x00, 0x00, 0x59, 0x3d, 0x46,
- 0x00, 0x00, 0x49, 0xe7, 0x06,
- 0x00, 0x00, 0x40, 0xb1, 0x07,
- 0x00, 0x00, 0x52, 0xb3, 0x84,
- 0x00, 0x00, 0x48, 0xd4, 0x86,
- 0x00, 0x00, 0x4f, 0xd0, 0x85,
- 0x00, 0x00, 0x4f, 0xd0, 0x89,
- 0x00, 0x00, 0x5d, 0x64, 0x04,
- 0x00, 0x00, 0x51, 0x11, 0x49,
- 0x00, 0x00, 0x48, 0xa2, 0xc6,
- 0x00, 0x00, 0x4d, 0x87, 0xc8,
- 0x00, 0x00, 0x41, 0x0c, 0xc5,
- 0x00, 0x00, 0x59, 0x9f, 0x05,
- 0x00, 0x00, 0x5c, 0xf2, 0x06,
- 0x00, 0x00, 0x4f, 0x77, 0x09,
- 0x00, 0x00, 0x40, 0xec, 0xc9,
- 0x00, 0x00, 0x48, 0x89, 0x86,
- 0x00, 0x00, 0x4e, 0x8a, 0xc8,
- 0x00, 0x00, 0x49, 0xc8, 0x88,
- 0x00, 0x00, 0x42, 0xf9, 0x44,
- 0x00, 0x00, 0x4c, 0xd1, 0x04,
- 0x00, 0x00, 0x4c, 0xd1, 0x08,
- 0x00, 0x00, 0x5c, 0x54, 0x88,
- 0x00, 0x00, 0x55, 0xf9, 0x09,
- 0x00, 0x00, 0x4a, 0xb8, 0x46,
- 0x00, 0x00, 0x41, 0x36, 0x86,
- 0x00, 0x00, 0x54, 0x03, 0x4d,
- 0x00, 0x00, 0x50, 0xef, 0xc6,
- 0x00, 0x00, 0x57, 0x11, 0xc9,
- 0x00, 0x00, 0x4f, 0xd5, 0x05,
- 0x00, 0x00, 0x59, 0xe6, 0x86,
- 0x00, 0x00, 0x5d, 0xcc, 0x08,
- 0x00, 0x00, 0x53, 0xc9, 0xc5,
- 0x00, 0x00, 0x5b, 0x5f, 0xc4,
- 0x00, 0x00, 0x47, 0x9a, 0x45,
- 0x00, 0x00, 0x49, 0x16, 0xc8,
- 0x00, 0x00, 0x4a, 0xaf, 0xc9,
- 0x00, 0x00, 0x48, 0x72, 0x04,
- 0x00, 0x00, 0x4c, 0xdf, 0x06,
- 0x00, 0x00, 0x4e, 0xb7, 0x8a,
- 0x00, 0x00, 0x51, 0x8e, 0xc8,
- 0x00, 0x00, 0x47, 0xef, 0xc9,
- 0x00, 0x00, 0x48, 0xab, 0x4a,
- 0x00, 0x00, 0x53, 0x47, 0x86,
- 0x00, 0x00, 0x4a, 0x9a, 0xc8,
- 0x00, 0x00, 0x57, 0x8e, 0x45,
- 0x00, 0x00, 0x49, 0xa1, 0xc8,
- 0x00, 0x00, 0x50, 0x89, 0x05,
- 0x00, 0x00, 0x41, 0x33, 0x49,
- 0x00, 0x00, 0x54, 0x29, 0xc9,
- 0x00, 0x00, 0x42, 0x69, 0x42,
- 0x00, 0x00, 0x4d, 0x04, 0x85,
- 0x00, 0x00, 0x48, 0xed, 0x86,
- 0x00, 0x00, 0x48, 0xa2, 0x07,
- 0x00, 0x00, 0x45, 0x75, 0x05,
- 0x00, 0x00, 0x4f, 0x6b, 0xc6,
- 0x00, 0x00, 0x58, 0x33, 0xc8,
- 0x00, 0x00, 0x4a, 0xa8, 0x06,
- 0x00, 0x00, 0x57, 0x00, 0x09,
- 0x00, 0x00, 0x48, 0x95, 0x46,
- 0x00, 0x00, 0x46, 0x30, 0x48,
- 0x00, 0x00, 0x4b, 0xc8, 0x05,
- 0x00, 0x00, 0x45, 0x53, 0x46,
- 0x00, 0x00, 0x5d, 0xc1, 0x08,
- 0x00, 0x00, 0x49, 0x0e, 0x08,
- 0x00, 0x00, 0x58, 0x56, 0xc8,
- 0x00, 0x00, 0x52, 0x19, 0x48,
- 0x00, 0x00, 0x41, 0x81, 0x44,
- 0x00, 0x00, 0x42, 0xbc, 0x83,
- 0x00, 0x00, 0x57, 0x02, 0x44,
- 0x00, 0x00, 0x49, 0x00, 0x86,
- 0x00, 0x00, 0x56, 0x0e, 0x04,
- 0x00, 0x00, 0x43, 0xd2, 0x87,
- 0x00, 0x00, 0x4a, 0x69, 0x09,
- 0x00, 0x00, 0x4e, 0x0e, 0xc5,
- 0x00, 0x00, 0x43, 0x65, 0xc6,
- 0x00, 0x00, 0x42, 0x2f, 0xc6,
- 0x00, 0x00, 0x4a, 0x31, 0x8b,
- 0x00, 0x00, 0x4c, 0xa2, 0x86,
- 0x00, 0x00, 0x49, 0x42, 0x06,
- 0x00, 0x00, 0x4e, 0x08, 0x08,
- 0x00, 0x00, 0x44, 0x59, 0x46,
- 0x00, 0x00, 0x49, 0xf8, 0x43,
- 0x00, 0x00, 0x41, 0x31, 0x83,
- 0x00, 0x00, 0x43, 0x90, 0xc4,
- 0x00, 0x00, 0x43, 0x11, 0x05,
- 0x00, 0x00, 0x4d, 0xa8, 0x07,
- 0x00, 0x00, 0x48, 0x73, 0x88,
- 0x00, 0x00, 0x48, 0x73, 0x8f,
- 0x00, 0x00, 0x48, 0xad, 0xcb,
- 0x00, 0x00, 0x51, 0x49, 0x88,
- 0x00, 0x00, 0x4c, 0xdf, 0x86,
- 0x00, 0x00, 0x51, 0x4c, 0x8e,
- 0x00, 0x00, 0x41, 0x10, 0xc3,
- 0x00, 0x00, 0x4d, 0xa7, 0x84,
- 0x00, 0x00, 0x4c, 0xa2, 0x05,
- 0x00, 0x00, 0x4c, 0xae, 0x46,
- 0x00, 0x00, 0x49, 0xc4, 0xcb,
- 0x00, 0x00, 0x4a, 0x0b, 0x46,
- 0x00, 0x00, 0x41, 0xaa, 0xc9,
- 0x00, 0x00, 0x47, 0x11, 0x05,
- 0x00, 0x00, 0x46, 0x12, 0xc8,
- 0x00, 0x00, 0x5f, 0x3c, 0x08,
- 0x00, 0x00, 0x40, 0xeb, 0x8c,
- 0x00, 0x00, 0x4a, 0xf5, 0x46,
- 0x00, 0x00, 0x47, 0x5e, 0xc6,
- 0x00, 0x00, 0x4c, 0xf1, 0x05,
- 0x00, 0x00, 0x49, 0x6f, 0x08,
- 0x00, 0x00, 0x48, 0xa8, 0xc5,
- 0x00, 0x00, 0x47, 0x38, 0xc8,
- 0x00, 0x00, 0x4a, 0xbd, 0xca,
- 0x00, 0x00, 0x4a, 0xe8, 0x89,
- 0x00, 0x00, 0xda, 0x24, 0xc4,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0xaa, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x40, 0x0e, 0xc2,
- 0x00, 0x00, 0x42, 0x8f, 0x84,
- 0x00, 0x00, 0x40, 0x18, 0xc2,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x00, 0x62, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x00, 0x5c, 0xc2,
- 0x00, 0x00, 0x05, 0x10, 0xc2,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x02, 0x4b, 0x42,
- 0x00, 0x00, 0x00, 0x5f, 0xc2,
- 0x00, 0x00, 0x00, 0x26, 0x42,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x4f, 0x03,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x4e, 0x40, 0x84,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x77, 0xc7,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x43, 0xd5, 0xc8,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x48, 0xcc, 0x4b,
- 0x00, 0x00, 0x50, 0x9b, 0x43,
- 0x00, 0x00, 0x41, 0x2f, 0xc6,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x50, 0x46, 0x8b,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0xef, 0x83,
- 0x00, 0x00, 0x42, 0x4c, 0xc3,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x42, 0xc4, 0x45,
- 0x00, 0x00, 0x5b, 0x61, 0xc8,
- 0x00, 0x00, 0x4e, 0x41, 0xc8,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x56, 0xb1, 0x45,
- 0x00, 0x00, 0x5c, 0xe9, 0x47,
- 0x00, 0x00, 0x40, 0x13, 0x42,
- 0x00, 0x00, 0x4d, 0x29, 0xc7,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x45, 0x94, 0xc7,
- 0x00, 0x00, 0x43, 0xc3, 0xc9,
- 0x00, 0x00, 0x47, 0xa2, 0x88,
- 0x00, 0x00, 0x49, 0x45, 0xc9,
- 0x00, 0x00, 0x40, 0xd8, 0x42,
- 0x00, 0x00, 0x5a, 0xf9, 0xc7,
- 0x00, 0x00, 0x58, 0xca, 0x04,
- 0x00, 0x00, 0x5c, 0xea, 0x07,
- 0x00, 0x00, 0x4c, 0x48, 0x47,
- 0x00, 0x00, 0x4d, 0x57, 0x82,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x40, 0x3c, 0x42,
- 0x00, 0x00, 0x40, 0x18, 0xc2,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x40, 0x20, 0xc2,
- 0x00, 0x00, 0x40, 0x09, 0x02,
- 0x00, 0x00, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x59, 0xff, 0xc5,
- 0x00, 0x00, 0x41, 0x05, 0x45,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x01, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x42, 0xb4, 0x83,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x36, 0xc3,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x15, 0x7f, 0x86,
- 0x00, 0xaf, 0xc9, 0xdf, 0x8b,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x15, 0x72, 0x85,
- 0x00, 0x00, 0x00, 0xb4, 0xc3,
- 0x00, 0x00, 0x00, 0x01, 0x01,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x9d, 0xc3,
- 0x00, 0xb1, 0x05, 0x49, 0x86,
- 0x00, 0x00, 0x01, 0xa6, 0xc3,
- 0x00, 0x00, 0x0f, 0xdd, 0x45,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x5b, 0x82,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x02, 0xf8, 0x43,
- 0x00, 0x00, 0x04, 0xaf, 0xc4,
- 0x00, 0x02, 0x88, 0x4a, 0xc4,
- 0x00, 0x00, 0x0f, 0x68, 0x85,
- 0x00, 0x00, 0x1a, 0x56, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x59, 0xab, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x2b, 0x83,
- 0x00, 0x00, 0x42, 0xf2, 0xc5,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x40, 0xf7, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x42, 0xb6, 0x43,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x41, 0x4f, 0x83,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x0c, 0x7f, 0x03,
- 0x00, 0x00, 0x00, 0x05, 0xc2,
- 0x00, 0x00, 0x02, 0x32, 0xc2,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x00, 0x23, 0xc2,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x47, 0x68, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x21, 0x84,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x30, 0x42,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x30, 0x42,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x50, 0x36, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x2f, 0xc5,
- 0x00, 0x00, 0x1f, 0x07, 0x86,
- 0x00, 0x00, 0x07, 0x25, 0x44,
- 0x00, 0x00, 0x0b, 0xdc, 0x04,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x00, 0x08, 0x82,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x00, 0x23, 0xc2,
- 0x00, 0x00, 0x05, 0x10, 0xc2,
- 0x00, 0x00, 0x00, 0xc6, 0x42,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x02, 0x0e, 0x08,
- 0x00, 0x00, 0x0b, 0x2c, 0x83,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x03, 0xfb, 0xc4,
- 0x00, 0xbb, 0x95, 0xd9, 0x86,
- 0x00, 0x00, 0x02, 0x60, 0x84,
- 0x00, 0x00, 0x0b, 0xa9, 0x4b,
- 0x00, 0x00, 0x03, 0xc7, 0x46,
- 0x00, 0x00, 0x08, 0x2b, 0x87,
- 0x00, 0x00, 0x0a, 0x13, 0x09,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x04, 0xf6, 0x88,
- 0x00, 0x00, 0x04, 0xf6, 0x8b,
- 0x00, 0x00, 0x04, 0xfb, 0x0b,
- 0x00, 0x00, 0x05, 0x08, 0x8b,
- 0x00, 0x00, 0x05, 0x0b, 0xcb,
- 0x00, 0x00, 0x05, 0x0e, 0x8b,
- 0x00, 0x00, 0x05, 0x12, 0xcb,
- 0x00, 0x00, 0x1c, 0x1b, 0x46,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x1c, 0x9f, 0x45,
- 0x00, 0x00, 0x1a, 0x35, 0x04,
- 0x00, 0x00, 0x41, 0xbd, 0x03,
- 0x00, 0x00, 0x12, 0x17, 0x87,
- 0x00, 0x00, 0x16, 0x57, 0x06,
- 0x00, 0x00, 0x13, 0x75, 0x85,
- 0x00, 0x00, 0x00, 0x20, 0x44,
- 0x00, 0x00, 0x0f, 0x28, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x08, 0x8a, 0x86,
- 0x00, 0x00, 0x11, 0xff, 0x04,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x50, 0xa9, 0x04,
- 0x00, 0x00, 0x13, 0x7a, 0x47,
- 0x00, 0x00, 0x1f, 0x03, 0x89,
- 0x00, 0x00, 0x0b, 0xa7, 0x08,
- 0x00, 0x00, 0x1e, 0x67, 0x85,
- 0x00, 0x00, 0x02, 0x3d, 0xc4,
- 0x00, 0x00, 0x1c, 0xeb, 0x44,
- 0x00, 0x00, 0x03, 0x68, 0xc3,
- 0x00, 0x00, 0x1d, 0xea, 0x03,
- 0x00, 0x00, 0x05, 0x41, 0x46,
- 0x00, 0x00, 0x1d, 0x78, 0x08,
- 0x00, 0x00, 0x1a, 0xea, 0x85,
- 0x00, 0x00, 0x1a, 0x2c, 0x89,
- 0x00, 0x00, 0x01, 0xe1, 0x43,
- 0x00, 0x00, 0x10, 0x0a, 0x86,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x50, 0x9b, 0x43,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0xfc, 0x83,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x4e, 0x40, 0x84,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x2f, 0xc6,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x01, 0x89, 0x03,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x08, 0x2b, 0x87,
- 0x00, 0x00, 0x00, 0xc0, 0x43,
- 0x00, 0x00, 0x01, 0xe1, 0x43,
- 0x00, 0x00, 0x00, 0x74, 0x42,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x06, 0xd7, 0xc3,
- 0x00, 0x00, 0x17, 0x66, 0x08,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0xc2, 0xc0, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x54, 0x2f, 0x07,
- 0x00, 0x00, 0x5b, 0xe4, 0x4b,
- 0x00, 0x00, 0x42, 0xc3, 0x83,
- 0x00, 0x00, 0x48, 0x7b, 0x48,
- 0x00, 0x00, 0x5d, 0x5a, 0xc7,
- 0x00, 0x00, 0x58, 0xba, 0xc6,
- 0x00, 0x00, 0x40, 0xd1, 0xc5,
- 0x00, 0x00, 0x56, 0xb2, 0x89,
- 0x00, 0x00, 0x41, 0x2d, 0x48,
- 0x00, 0x00, 0x45, 0x7b, 0xc9,
- 0x00, 0x00, 0x45, 0x7b, 0xd0,
- 0x00, 0x00, 0x58, 0x3c, 0x0b,
- 0x00, 0x00, 0x5a, 0x89, 0x89,
- 0x00, 0x00, 0x40, 0xc0, 0x43,
- 0x00, 0x00, 0x42, 0x3a, 0xc9,
- 0x00, 0x00, 0x43, 0x2f, 0x46,
- 0x00, 0x00, 0x43, 0x2f, 0x4c,
- 0x00, 0x00, 0x42, 0xc5, 0x08,
- 0x00, 0x00, 0x5e, 0xf4, 0x08,
- 0x00, 0x00, 0x5d, 0xe1, 0x09,
- 0x00, 0x00, 0x4d, 0x39, 0x0e,
- 0x00, 0x00, 0x43, 0xc1, 0x8b,
- 0x00, 0x00, 0x4c, 0x43, 0x0c,
- 0x00, 0x00, 0x40, 0x28, 0xc3,
- 0x00, 0x00, 0x47, 0xcd, 0xcc,
- 0x00, 0x00, 0x40, 0x28, 0xc9,
- 0x00, 0x00, 0x51, 0x5a, 0x07,
- 0x00, 0x00, 0x43, 0x5f, 0xcc,
- 0x00, 0x00, 0x4c, 0x5d, 0x0a,
- 0x00, 0x00, 0x40, 0x48, 0x84,
- 0x00, 0x00, 0x4b, 0xfa, 0x0d,
- 0x00, 0x00, 0x47, 0xcc, 0x88,
- 0x00, 0x00, 0x53, 0x24, 0x4d,
- 0x00, 0x00, 0x48, 0x23, 0x86,
- 0x00, 0x00, 0x45, 0x36, 0x4b,
- 0x00, 0x00, 0x5f, 0x05, 0x09,
- 0x00, 0x00, 0x46, 0x8f, 0x07,
- 0x00, 0x00, 0x5c, 0x3a, 0x86,
- 0x00, 0x00, 0x5d, 0x3b, 0xc9,
- 0x00, 0x00, 0x55, 0x8c, 0x8a,
- 0x00, 0x00, 0x51, 0xed, 0x88,
- 0x00, 0x00, 0x50, 0x97, 0x44,
- 0x00, 0x00, 0x4c, 0x1d, 0x07,
- 0x00, 0x00, 0x43, 0x1a, 0xc7,
- 0x00, 0x00, 0x53, 0x5b, 0x04,
- 0x00, 0x00, 0x41, 0xa5, 0x04,
- 0x00, 0x00, 0x40, 0x6a, 0xc9,
- 0x00, 0x00, 0x50, 0x18, 0x89,
- 0x00, 0x00, 0x5c, 0xee, 0xc8,
- 0x00, 0x00, 0x4c, 0xbe, 0x45,
- 0x00, 0x00, 0x40, 0xd7, 0x85,
- 0x00, 0x00, 0x40, 0x8b, 0x46,
- 0x00, 0x00, 0x4b, 0xf8, 0xc9,
- 0x00, 0x00, 0x52, 0x5b, 0x4d,
- 0x00, 0x00, 0x59, 0xe7, 0x88,
- 0x00, 0x00, 0x40, 0x8a, 0x47,
- 0x00, 0x00, 0x40, 0xd2, 0x48,
- 0x00, 0x00, 0x43, 0x79, 0x06,
- 0x00, 0x00, 0x43, 0x2b, 0x84,
- 0x00, 0x00, 0x46, 0x64, 0x85,
- 0x00, 0x00, 0x5e, 0xa3, 0xc6,
- 0x00, 0x00, 0x5e, 0xcf, 0x04,
- 0x00, 0x00, 0x40, 0x27, 0xc7,
- 0x00, 0x00, 0x40, 0x4e, 0x4a,
- 0x00, 0x00, 0x40, 0xea, 0xc4,
- 0x00, 0x00, 0x41, 0x56, 0xc6,
- 0x00, 0x00, 0x41, 0x75, 0x09,
- 0x00, 0x00, 0x41, 0x75, 0x0f,
- 0x00, 0x00, 0x41, 0x82, 0xcd,
- 0x00, 0x00, 0x41, 0x88, 0x06,
- 0x00, 0x00, 0x42, 0x0a, 0x10,
- 0x00, 0x00, 0x42, 0x0e, 0x06,
- 0x00, 0x00, 0x42, 0x27, 0xc7,
- 0x00, 0x00, 0x42, 0x34, 0x07,
- 0x00, 0x00, 0x42, 0x34, 0x0f,
- 0x00, 0x00, 0x42, 0x3e, 0xc9,
- 0x00, 0x00, 0x42, 0x70, 0xc6,
- 0x00, 0x00, 0x42, 0x7b, 0x47,
- 0x00, 0x00, 0x42, 0x7b, 0x48,
- 0x00, 0x00, 0x42, 0x7e, 0x89,
- 0x00, 0x00, 0x5c, 0x19, 0x88,
- 0x00, 0x00, 0x51, 0xc6, 0x07,
- 0x00, 0x00, 0x42, 0x9a, 0x03,
- 0x00, 0x00, 0x42, 0xe3, 0xc6,
- 0x00, 0x00, 0x53, 0x6a, 0xc8,
- 0x00, 0x00, 0x4d, 0x3b, 0xca,
- 0x00, 0x00, 0x40, 0x2f, 0x09,
- 0x00, 0x00, 0x41, 0x2e, 0x83,
- 0x00, 0x00, 0x56, 0xb0, 0x46,
- 0x00, 0x00, 0x59, 0x3b, 0x8a,
- 0x00, 0x00, 0x43, 0x45, 0xc7,
- 0x00, 0x00, 0x51, 0x58, 0x4a,
- 0x00, 0x00, 0x57, 0x3e, 0x4e,
- 0x00, 0x00, 0x42, 0x40, 0x06,
- 0x00, 0x00, 0x52, 0x1d, 0x07,
- 0x00, 0x00, 0x45, 0xe5, 0x86,
- 0x00, 0x00, 0x40, 0x29, 0x86,
- 0x00, 0x00, 0x5c, 0xb8, 0xcb,
- 0x00, 0x00, 0x5c, 0x1c, 0x4a,
- 0x00, 0x00, 0x5f, 0x38, 0x4d,
- 0x00, 0x00, 0x5d, 0x62, 0xc7,
- 0x00, 0x00, 0x4f, 0xff, 0x88,
- 0x00, 0x00, 0x4f, 0xff, 0x89,
- 0x00, 0x00, 0x4f, 0xff, 0x8f,
- 0x00, 0x00, 0x4b, 0x95, 0x4c,
- 0x00, 0x00, 0x58, 0x11, 0x49,
- 0x00, 0x00, 0x4b, 0xb0, 0x4e,
- 0x00, 0x00, 0x45, 0x78, 0xca,
- 0x00, 0x00, 0x57, 0x96, 0xc6,
- 0x00, 0x00, 0x4f, 0xbb, 0x86,
- 0x00, 0x00, 0x52, 0x3e, 0x8c,
- 0x00, 0x00, 0x5f, 0x15, 0x8c,
- 0x00, 0x00, 0x52, 0xb9, 0x88,
- 0x00, 0x00, 0x55, 0xe8, 0x47,
- 0x00, 0x00, 0x41, 0xc2, 0x85,
- 0x00, 0x00, 0x5c, 0xeb, 0xc4,
- 0x00, 0x00, 0x40, 0x22, 0x0e,
- 0x00, 0x00, 0x41, 0xca, 0x44,
- 0x00, 0x00, 0x5d, 0x39, 0x07,
- 0x00, 0x00, 0x5b, 0x3a, 0x8a,
- 0x00, 0x00, 0x5e, 0xbf, 0xd4,
- 0x00, 0x00, 0x42, 0xd6, 0xcf,
- 0x00, 0x00, 0x42, 0x35, 0xc8,
- 0x00, 0x00, 0x42, 0xe2, 0x88,
- 0x00, 0x00, 0x40, 0xf3, 0x8d,
- 0x00, 0x00, 0x40, 0xf3, 0x8e,
- 0x00, 0x00, 0x42, 0xe7, 0x09,
- 0x00, 0x00, 0x54, 0x92, 0x08,
- 0x00, 0x00, 0x54, 0x92, 0x0f,
- 0x00, 0x00, 0x43, 0x5c, 0xcc,
- 0x00, 0x00, 0x43, 0x5c, 0xcf,
- 0x00, 0x00, 0x43, 0x6d, 0x07,
- 0x00, 0x00, 0x43, 0xa0, 0x8a,
- 0x00, 0x00, 0x43, 0xaf, 0xcb,
- 0x00, 0x00, 0x43, 0xb9, 0x88,
- 0x00, 0x00, 0x43, 0xdc, 0x87,
- 0x00, 0x00, 0x47, 0x1d, 0x8d,
- 0x00, 0x00, 0x50, 0x22, 0xc6,
- 0x00, 0x00, 0x4b, 0xfb, 0xc6,
- 0x00, 0x00, 0x44, 0x25, 0x09,
- 0x00, 0x00, 0x47, 0x23, 0x48,
- 0x00, 0x00, 0x44, 0x82, 0x48,
- 0x00, 0x00, 0x44, 0x82, 0x4e,
- 0x00, 0x00, 0x46, 0xd4, 0x47,
- 0x00, 0x00, 0x50, 0xd0, 0x45,
- 0x00, 0x00, 0x44, 0xa4, 0x85,
- 0x00, 0x00, 0x41, 0xa3, 0x84,
- 0x00, 0x00, 0x58, 0xbd, 0x86,
- 0x00, 0x00, 0x5c, 0xed, 0xc8,
- 0x00, 0x00, 0x45, 0xf1, 0xc3,
- 0x00, 0x00, 0x4c, 0x54, 0x4e,
- 0x00, 0x00, 0x47, 0x21, 0x48,
- 0x00, 0x00, 0x41, 0xe2, 0x0b,
- 0x00, 0x00, 0x47, 0x69, 0xc7,
- 0x00, 0x00, 0x55, 0xf3, 0x85,
- 0x00, 0x00, 0x47, 0xcf, 0x46,
- 0x00, 0x00, 0x4b, 0xe7, 0x07,
- 0x00, 0x00, 0x54, 0xe5, 0x08,
- 0x00, 0x00, 0x57, 0x52, 0x09,
- 0x00, 0x00, 0x43, 0x29, 0xc5,
- 0x00, 0x00, 0x49, 0x51, 0x48,
- 0x00, 0x00, 0x50, 0xf3, 0x86,
- 0x00, 0x00, 0x5b, 0x31, 0xca,
- 0x00, 0x00, 0x40, 0x21, 0x09,
- 0x00, 0x00, 0x43, 0x60, 0x89,
- 0x00, 0x00, 0x43, 0x60, 0x8b,
- 0x00, 0x00, 0x54, 0x73, 0x08,
- 0x00, 0x00, 0x53, 0x59, 0xc9,
- 0x00, 0x00, 0x4c, 0x8a, 0x46,
- 0x00, 0x00, 0x47, 0xb2, 0x8a,
- 0x00, 0x00, 0x48, 0x53, 0xca,
- 0x00, 0x00, 0x43, 0xa2, 0x8c,
- 0x00, 0x00, 0x47, 0x34, 0x07,
- 0x00, 0x00, 0x47, 0xa0, 0x8a,
- 0x00, 0x00, 0x5c, 0x4d, 0x0b,
- 0x00, 0x00, 0x5c, 0x4d, 0x19,
- 0x00, 0x00, 0x4d, 0x66, 0xc8,
- 0x00, 0x00, 0x41, 0x30, 0x45,
- 0x00, 0x00, 0x47, 0x1f, 0x46,
- 0x00, 0x00, 0x57, 0x98, 0xc9,
- 0x00, 0x00, 0x55, 0xdf, 0x86,
- 0x00, 0x00, 0x4e, 0x48, 0x8a,
- 0x00, 0x00, 0x40, 0x64, 0xc6,
- 0x00, 0x00, 0x4e, 0x25, 0x04,
- 0x00, 0x00, 0x4e, 0x25, 0x0d,
- 0x00, 0x00, 0x53, 0xb4, 0x87,
- 0x00, 0x00, 0x55, 0xee, 0x09,
- 0x00, 0x00, 0x44, 0xec, 0x45,
- 0x00, 0x00, 0x44, 0xef, 0x08,
- 0x00, 0x00, 0x44, 0xf4, 0x49,
- 0x00, 0x00, 0x45, 0x16, 0x04,
- 0x00, 0x00, 0x45, 0x1c, 0xc7,
- 0x00, 0x00, 0x45, 0x1c, 0xc8,
- 0x00, 0x00, 0x45, 0x20, 0x07,
- 0x00, 0x00, 0x47, 0x7d, 0xc8,
- 0x00, 0x00, 0x45, 0xca, 0x47,
- 0x00, 0x00, 0x46, 0x92, 0x85,
- 0x00, 0x00, 0x46, 0x5b, 0x8c,
- 0x00, 0x00, 0x46, 0x5f, 0x89,
- 0x00, 0x00, 0x52, 0x92, 0x0a,
- 0x00, 0x00, 0x46, 0x87, 0x09,
- 0x00, 0x00, 0x42, 0x3b, 0xc9,
- 0x00, 0x00, 0x46, 0x8a, 0x4c,
- 0x00, 0x00, 0x46, 0xc1, 0x8b,
- 0x00, 0x00, 0x46, 0xd0, 0x08,
- 0x00, 0x00, 0x46, 0xd9, 0x48,
- 0x00, 0x00, 0x47, 0x0d, 0x04,
- 0x00, 0x00, 0x49, 0x26, 0x48,
- 0x00, 0x00, 0x49, 0x33, 0x49,
- 0x00, 0x00, 0x4c, 0x5d, 0xc7,
- 0x00, 0x00, 0x41, 0x77, 0x46,
- 0x00, 0x00, 0x4a, 0xd4, 0x87,
- 0x00, 0x00, 0x57, 0x0d, 0x89,
- 0x00, 0x00, 0x44, 0x5d, 0xcb,
- 0x00, 0x00, 0x5a, 0xef, 0x07,
- 0x00, 0x00, 0x4a, 0x08, 0x87,
- 0x00, 0x00, 0x45, 0x66, 0x87,
- 0x00, 0x00, 0x53, 0x23, 0xc4,
- 0x00, 0x00, 0x53, 0x23, 0xc5,
- 0x00, 0x00, 0x5a, 0xb0, 0x45,
- 0x00, 0x00, 0x55, 0xbe, 0x4b,
- 0x00, 0x00, 0x5e, 0x4b, 0xc4,
- 0x00, 0x00, 0x4d, 0xc6, 0x88,
- 0x00, 0x00, 0x4b, 0xd0, 0xca,
- 0x00, 0x00, 0x50, 0xf4, 0x47,
- 0x00, 0x00, 0x5e, 0xf0, 0x07,
- 0x00, 0x00, 0x49, 0xba, 0xd2,
- 0x00, 0x00, 0x48, 0xcf, 0x06,
- 0x00, 0x00, 0x43, 0x13, 0x86,
- 0x00, 0x00, 0x5d, 0xa7, 0x4e,
- 0x00, 0x00, 0x49, 0x8b, 0x06,
- 0x00, 0x00, 0x4a, 0x1c, 0x08,
- 0x00, 0x00, 0x4a, 0x2c, 0x8f,
- 0x00, 0x00, 0x53, 0x28, 0x08,
- 0x00, 0x00, 0x49, 0x69, 0x88,
- 0x00, 0x00, 0x51, 0x2b, 0xca,
- 0x00, 0x00, 0x51, 0x2b, 0xd1,
- 0x00, 0x00, 0x4b, 0x36, 0x0e,
- 0x00, 0x00, 0x47, 0xb9, 0xca,
- 0x00, 0x00, 0x47, 0xb9, 0xcc,
- 0x00, 0x00, 0x45, 0xd9, 0x47,
- 0x00, 0x00, 0x54, 0x94, 0x10,
- 0x00, 0x00, 0x5d, 0x32, 0x08,
- 0x00, 0x00, 0x4b, 0x38, 0x05,
- 0x00, 0x00, 0x4b, 0xef, 0xca,
- 0x00, 0x00, 0x5e, 0xcf, 0x4c,
- 0x00, 0x00, 0x40, 0xbf, 0x0d,
- 0x00, 0x00, 0x5c, 0xd9, 0x06,
- 0x00, 0x00, 0x5c, 0xd9, 0x07,
- 0x00, 0x00, 0x5c, 0xd9, 0x0c,
- 0x00, 0x00, 0x5f, 0x3d, 0xcc,
- 0x00, 0x00, 0x41, 0x1e, 0x4c,
- 0x00, 0x00, 0x52, 0xcf, 0x0b,
- 0x00, 0x00, 0x5a, 0x5b, 0xc4,
- 0x00, 0x00, 0x41, 0xd9, 0x84,
- 0x00, 0x00, 0x4c, 0x3b, 0xc9,
- 0x00, 0x00, 0x53, 0x87, 0x87,
- 0x00, 0x00, 0x42, 0xe0, 0x49,
- 0x00, 0x00, 0x48, 0x52, 0x09,
- 0x00, 0x00, 0x4c, 0x59, 0xc7,
- 0x00, 0x00, 0x4c, 0x5b, 0x86,
- 0x00, 0x00, 0x4c, 0x5b, 0x89,
- 0x00, 0x00, 0x4c, 0x5f, 0x83,
- 0x00, 0x00, 0x4a, 0xa9, 0x0a,
- 0x00, 0x00, 0x53, 0x69, 0x87,
- 0x00, 0x00, 0x5d, 0xd2, 0x4b,
- 0x00, 0x00, 0x5f, 0x36, 0xca,
- 0x00, 0x00, 0x45, 0x96, 0x04,
- 0x00, 0x00, 0x5e, 0xe6, 0x86,
- 0x00, 0x00, 0x49, 0x01, 0x09,
- 0x00, 0x00, 0x5b, 0xf3, 0xc4,
- 0x00, 0x00, 0x4e, 0xbc, 0xca,
- 0x00, 0x00, 0x50, 0x7c, 0xc5,
- 0x00, 0x00, 0x4d, 0x50, 0x05,
- 0x00, 0x00, 0x4d, 0x50, 0x0d,
- 0x00, 0x00, 0x4d, 0x53, 0x4e,
- 0x00, 0x00, 0x56, 0x35, 0x45,
- 0x00, 0x00, 0x54, 0x1b, 0xc6,
- 0x00, 0x00, 0x41, 0x2b, 0xc7,
- 0x00, 0x00, 0x43, 0x88, 0x4a,
- 0x00, 0x00, 0x41, 0xcd, 0x46,
- 0x00, 0x00, 0x4f, 0x46, 0xc4,
- 0x00, 0x00, 0x4f, 0x8c, 0x47,
- 0x00, 0x00, 0x5e, 0x11, 0x4b,
- 0x00, 0x00, 0x4f, 0xe2, 0x47,
- 0x00, 0x00, 0x48, 0xc2, 0x84,
- 0x00, 0x00, 0x51, 0x80, 0x46,
- 0x00, 0x00, 0x51, 0x80, 0x4d,
- 0x00, 0x00, 0x4f, 0x12, 0x0c,
- 0x00, 0x00, 0x41, 0x08, 0x86,
- 0x00, 0x00, 0x59, 0xe9, 0x8a,
- 0x00, 0x00, 0x41, 0xd4, 0x06,
- 0x00, 0x00, 0x42, 0x24, 0x88,
- 0x00, 0x00, 0x43, 0xa9, 0x47,
- 0x00, 0x00, 0x46, 0x65, 0xca,
- 0x00, 0x00, 0x55, 0x19, 0x86,
- 0x00, 0x00, 0x48, 0xd5, 0x03,
- 0x00, 0x00, 0x5c, 0xa1, 0x06,
- 0x00, 0x00, 0x44, 0xa6, 0xc8,
- 0x00, 0x00, 0x57, 0x5d, 0x0a,
- 0x00, 0x00, 0x49, 0xa3, 0x47,
- 0x00, 0x00, 0x49, 0xa3, 0x48,
- 0x00, 0x00, 0x49, 0xc0, 0x44,
- 0x00, 0x00, 0x48, 0xd1, 0x07,
- 0x00, 0x00, 0x58, 0x70, 0xc8,
- 0x00, 0x00, 0x43, 0x58, 0x48,
- 0x00, 0x00, 0x4c, 0xc7, 0x48,
- 0x00, 0x00, 0x4c, 0xcb, 0x4a,
- 0x00, 0x00, 0x4d, 0xfa, 0x85,
- 0x00, 0x00, 0x43, 0xdd, 0xc7,
- 0x00, 0x00, 0x47, 0xb8, 0x13,
- 0x00, 0x00, 0x48, 0x64, 0x46,
- 0x00, 0x00, 0x43, 0x5a, 0xc8,
- 0x00, 0x00, 0x42, 0x54, 0x49,
- 0x00, 0x00, 0x4d, 0x28, 0x88,
- 0x00, 0x00, 0x56, 0x0b, 0x4b,
- 0x00, 0x00, 0x4c, 0xe4, 0xc8,
- 0x00, 0x00, 0x50, 0xce, 0x84,
- 0x00, 0x00, 0x51, 0x60, 0x46,
- 0x00, 0x00, 0x52, 0xc5, 0x86,
- 0x00, 0x00, 0x59, 0xb3, 0xc9,
- 0x00, 0x00, 0x4d, 0xfe, 0x47,
- 0x00, 0x00, 0x46, 0x5c, 0x88,
- 0x00, 0x00, 0x56, 0xaa, 0x46,
- 0x00, 0x00, 0x58, 0xd0, 0x84,
- 0x00, 0x00, 0x53, 0x63, 0x05,
- 0x00, 0x00, 0x5d, 0x74, 0x08,
- 0x00, 0x00, 0x40, 0x15, 0x0a,
- 0x00, 0x00, 0x4e, 0x21, 0x88,
- 0x00, 0x00, 0x4e, 0x77, 0x86,
- 0x00, 0x00, 0x4a, 0x9c, 0xca,
- 0x00, 0x00, 0x40, 0x33, 0x08,
- 0x00, 0x00, 0x5a, 0x9d, 0xc8,
- 0x00, 0x00, 0x4e, 0xbf, 0x48,
- 0x00, 0x00, 0x4e, 0xc4, 0xc6,
- 0x00, 0x00, 0x4e, 0xe1, 0x06,
- 0x00, 0x00, 0x5a, 0xc9, 0xcc,
- 0x00, 0x00, 0x4e, 0xe6, 0xd0,
- 0x00, 0x00, 0x4e, 0xea, 0xc5,
- 0x00, 0x00, 0x52, 0x06, 0x88,
- 0x00, 0x00, 0x52, 0x06, 0x90,
- 0x00, 0x00, 0x53, 0x26, 0x10,
- 0x00, 0x00, 0x45, 0x7a, 0x4e,
- 0x00, 0x00, 0x5a, 0xc6, 0x4e,
- 0x00, 0x00, 0x5a, 0xc6, 0x54,
- 0x00, 0x00, 0x5b, 0x0b, 0x0f,
- 0x00, 0x00, 0x5b, 0x0e, 0xc6,
- 0x00, 0x00, 0x5e, 0xfd, 0x91,
- 0x00, 0x00, 0x54, 0x74, 0xd3,
- 0x00, 0x00, 0x5c, 0x3c, 0x08,
- 0x00, 0x00, 0x5c, 0x32, 0x05,
- 0x00, 0x00, 0x48, 0x97, 0x88,
- 0x00, 0x00, 0x5e, 0xab, 0xc5,
- 0x00, 0x00, 0x54, 0xf1, 0x0c,
- 0x00, 0x00, 0x41, 0x23, 0x49,
- 0x00, 0x00, 0x41, 0xc8, 0x89,
- 0x00, 0x00, 0x42, 0x97, 0x47,
- 0x00, 0x00, 0x5b, 0x35, 0xc9,
- 0x00, 0x00, 0x55, 0xdb, 0x47,
- 0x00, 0x00, 0x5a, 0x30, 0x46,
- 0x00, 0x00, 0x46, 0x62, 0x87,
- 0x00, 0x00, 0x48, 0xb3, 0x45,
- 0x00, 0x00, 0x40, 0xb5, 0x03,
- 0x00, 0x00, 0x41, 0x89, 0x03,
- 0x00, 0x00, 0x47, 0xfb, 0x84,
- 0x00, 0x00, 0x5d, 0x22, 0x8d,
- 0x00, 0x00, 0x5f, 0x1d, 0xcf,
- 0x00, 0x00, 0x58, 0xd0, 0xc5,
- 0x00, 0x00, 0x41, 0x22, 0x46,
- 0x00, 0x00, 0x5b, 0x74, 0xc7,
- 0x00, 0x00, 0x42, 0xc2, 0x87,
- 0x00, 0x00, 0x4d, 0x0c, 0x46,
- 0x00, 0x00, 0x4d, 0x0c, 0x4b,
- 0x00, 0x00, 0x4b, 0x47, 0x85,
- 0x00, 0x00, 0x41, 0xe0, 0xc6,
- 0x00, 0x00, 0x5b, 0x1d, 0x87,
- 0x00, 0x00, 0x45, 0xdc, 0x49,
- 0x00, 0x00, 0x56, 0x9d, 0xc6,
- 0x00, 0x00, 0x41, 0xe6, 0xc5,
- 0x00, 0x00, 0x53, 0xbc, 0xcb,
- 0x00, 0x00, 0x5c, 0xd2, 0x06,
- 0x00, 0x00, 0x42, 0x2b, 0x85,
- 0x00, 0x00, 0x45, 0x2c, 0x08,
- 0x00, 0x00, 0x49, 0xd4, 0xc8,
- 0x00, 0x00, 0x4b, 0x48, 0xcc,
- 0x00, 0x00, 0x4b, 0x48, 0xd0,
- 0x00, 0x00, 0x4b, 0x6f, 0x49,
- 0x00, 0x00, 0x4c, 0x77, 0x47,
- 0x00, 0x00, 0x4c, 0xc2, 0x8b,
- 0x00, 0x00, 0x4f, 0x69, 0x86,
- 0x00, 0x00, 0x51, 0xc4, 0xca,
- 0x00, 0x00, 0x4b, 0x05, 0x4b,
- 0x00, 0x00, 0x54, 0xe7, 0x4a,
- 0x00, 0x00, 0x57, 0x19, 0x46,
- 0x00, 0x00, 0x50, 0x35, 0x05,
- 0x00, 0x00, 0x53, 0x66, 0xc6,
- 0x00, 0x00, 0x49, 0x3d, 0x08,
- 0x00, 0x00, 0x49, 0xe1, 0x4a,
- 0x00, 0x00, 0x40, 0xf0, 0x1c,
- 0x00, 0x00, 0x50, 0x9c, 0x0c,
- 0x00, 0x00, 0x50, 0x9f, 0x08,
- 0x00, 0x00, 0x41, 0x2f, 0xc5,
- 0x00, 0x00, 0x41, 0xf8, 0x07,
- 0x00, 0x00, 0x4b, 0x2b, 0x46,
- 0x00, 0x00, 0x4d, 0x3f, 0xc5,
- 0x00, 0x00, 0x41, 0xb8, 0x86,
- 0x00, 0x00, 0x4d, 0x0e, 0x08,
- 0x00, 0x00, 0x4d, 0x2e, 0x07,
- 0x00, 0x00, 0x4d, 0x38, 0x08,
- 0x00, 0x00, 0x48, 0x65, 0x0a,
- 0x00, 0x00, 0x4f, 0x60, 0xcc,
- 0x00, 0x00, 0x45, 0xf4, 0x49,
- 0x00, 0x00, 0x41, 0xf2, 0x47,
- 0x00, 0x00, 0x42, 0x82, 0xc4,
- 0x00, 0x00, 0x42, 0x46, 0x06,
- 0x00, 0x00, 0x49, 0x65, 0x0a,
- 0x00, 0x00, 0x48, 0x53, 0x05,
- 0x00, 0x00, 0x41, 0xa1, 0x8c,
- 0x00, 0x00, 0x41, 0xa8, 0x48,
- 0x00, 0x00, 0x42, 0xd0, 0xc8,
- 0x00, 0x00, 0x42, 0xab, 0xcc,
- 0x00, 0x00, 0x59, 0x59, 0x8c,
- 0x00, 0x00, 0x42, 0xdc, 0x09,
- 0x00, 0x00, 0x42, 0xde, 0x47,
- 0x00, 0x00, 0x44, 0x74, 0x4c,
- 0x00, 0x00, 0x43, 0x3d, 0xc4,
- 0x00, 0x00, 0x44, 0xb4, 0x8a,
- 0x00, 0x00, 0x41, 0x7d, 0x0c,
- 0x00, 0x00, 0x48, 0x27, 0x4b,
- 0x00, 0x00, 0x59, 0x45, 0x0b,
- 0x00, 0x00, 0x5a, 0x63, 0x86,
- 0x00, 0x00, 0x45, 0xc1, 0xc7,
- 0x00, 0x00, 0x45, 0xd4, 0x47,
- 0x00, 0x00, 0x54, 0x96, 0x4f,
- 0x00, 0x00, 0x51, 0x70, 0x51,
- 0x00, 0x00, 0x4f, 0x37, 0xd2,
- 0x00, 0x00, 0x45, 0xd4, 0x4d,
- 0x00, 0x00, 0x45, 0xd4, 0x4e,
- 0x00, 0x00, 0x45, 0xd7, 0x8e,
- 0x00, 0x00, 0x5b, 0x0c, 0xc8,
- 0x00, 0x00, 0x5b, 0x0c, 0xd2,
- 0x00, 0x00, 0x44, 0x18, 0x48,
- 0x00, 0x00, 0x45, 0x01, 0xc7,
- 0x00, 0x00, 0x45, 0x6e, 0xca,
- 0x00, 0x00, 0x44, 0xb2, 0xc8,
- 0x00, 0x00, 0x49, 0x8a, 0xc5,
- 0x00, 0x00, 0x5b, 0xa8, 0xca,
- 0x00, 0x00, 0x42, 0x13, 0x47,
- 0x00, 0x00, 0x4e, 0x31, 0x84,
- 0x00, 0x00, 0x44, 0xe5, 0x83,
- 0x00, 0x00, 0x58, 0xff, 0x05,
- 0x00, 0x00, 0x51, 0x2e, 0x47,
- 0x00, 0x00, 0x4f, 0x99, 0x47,
- 0x00, 0x00, 0x40, 0xc1, 0x0e,
- 0x00, 0x00, 0x51, 0x61, 0x8d,
- 0x00, 0x00, 0x51, 0x7d, 0x09,
- 0x00, 0x00, 0x40, 0xe7, 0xc5,
- 0x00, 0x00, 0x52, 0x64, 0x03,
- 0x00, 0x00, 0x54, 0x42, 0x06,
- 0x00, 0x00, 0x46, 0xa9, 0x45,
- 0x00, 0x00, 0x41, 0xe4, 0x48,
- 0x00, 0x00, 0x53, 0xc1, 0x49,
- 0x00, 0x00, 0x47, 0x1f, 0x85,
- 0x00, 0x00, 0x47, 0x1f, 0x8f,
- 0x00, 0x00, 0x4b, 0xaa, 0x47,
- 0x00, 0x00, 0x40, 0xd0, 0x45,
- 0x00, 0x00, 0x47, 0x73, 0x8a,
- 0x00, 0x00, 0x40, 0xae, 0x06,
- 0x00, 0x00, 0x4a, 0x8c, 0x49,
- 0x00, 0x00, 0x55, 0x96, 0x4c,
- 0x00, 0x00, 0x57, 0xe9, 0x09,
- 0x00, 0x00, 0x41, 0x25, 0xc6,
- 0x00, 0x00, 0x4b, 0xce, 0xcc,
- 0x00, 0x00, 0x57, 0xf8, 0x46,
- 0x00, 0x00, 0x5e, 0x68, 0x88,
- 0x00, 0x00, 0x51, 0x55, 0x46,
- 0x00, 0x00, 0x47, 0xae, 0xc6,
- 0x00, 0x00, 0x4c, 0xa4, 0x04,
- 0x00, 0x00, 0x42, 0x23, 0x83,
- 0x00, 0x00, 0x4d, 0xfb, 0xca,
- 0x00, 0x00, 0x49, 0xca, 0xd1,
- 0x00, 0x00, 0x58, 0x13, 0x0a,
- 0x00, 0x00, 0x46, 0x57, 0x45,
- 0x00, 0x00, 0x46, 0x82, 0x87,
- 0x00, 0x00, 0x46, 0x2a, 0x47,
- 0x00, 0x00, 0x4d, 0x00, 0x44,
- 0x00, 0x00, 0x58, 0x71, 0xcb,
- 0x00, 0x00, 0x49, 0x44, 0x48,
- 0x00, 0x00, 0x4d, 0x0a, 0x06,
- 0x00, 0x00, 0x43, 0x36, 0x05,
- 0x00, 0x00, 0x47, 0x3d, 0x04,
- 0x00, 0x00, 0x47, 0x53, 0x89,
- 0x00, 0x00, 0x40, 0x08, 0xc4,
- 0x00, 0x00, 0x5e, 0xd8, 0x87,
- 0x00, 0x00, 0x58, 0x7e, 0x05,
- 0x00, 0x00, 0x58, 0x7e, 0x07,
- 0x00, 0x00, 0x5d, 0xa9, 0x85,
- 0x00, 0x00, 0x46, 0x0a, 0xc3,
- 0x00, 0x00, 0x45, 0x00, 0x88,
- 0x00, 0x00, 0x47, 0x7a, 0x0a,
- 0x00, 0x00, 0x40, 0x48, 0x03,
- 0x00, 0x00, 0x42, 0xc4, 0x8a,
- 0x00, 0x00, 0x40, 0x48, 0x06,
- 0x00, 0x00, 0x47, 0x1d, 0x0f,
- 0x00, 0x00, 0x46, 0xd3, 0xc9,
- 0x00, 0x00, 0x4c, 0x53, 0xd0,
- 0x00, 0x00, 0x5a, 0x76, 0x48,
- 0x00, 0x00, 0x4e, 0x7c, 0x89,
- 0x00, 0x00, 0x4a, 0x79, 0x07,
- 0x00, 0x00, 0x51, 0x7f, 0xcf,
- 0x00, 0x00, 0x53, 0x4b, 0xc4,
- 0x00, 0x00, 0x4e, 0x41, 0x04,
- 0x00, 0x00, 0x42, 0x0c, 0x86,
- 0x00, 0x00, 0x5b, 0x6d, 0x46,
- 0x00, 0x00, 0x54, 0xfd, 0x4a,
- 0x00, 0x00, 0x47, 0x37, 0x06,
- 0x00, 0x00, 0x4c, 0x28, 0xc7,
- 0x00, 0x00, 0x51, 0xc9, 0x48,
- 0x00, 0x00, 0x51, 0xcb, 0x47,
- 0x00, 0x00, 0x51, 0xdc, 0x47,
- 0x00, 0x00, 0x52, 0x0b, 0xca,
- 0x00, 0x00, 0x51, 0xe6, 0x4b,
- 0x00, 0x00, 0x50, 0x20, 0x45,
- 0x00, 0x00, 0x4f, 0x34, 0x08,
- 0x00, 0x00, 0x41, 0xff, 0x83,
- 0x00, 0x00, 0x5d, 0x11, 0x8c,
- 0x00, 0x00, 0x41, 0xc0, 0x0f,
- 0x00, 0x00, 0x43, 0xcc, 0x0d,
- 0x00, 0x00, 0x49, 0xab, 0x07,
- 0x00, 0x00, 0x42, 0xce, 0x09,
- 0x00, 0x00, 0x48, 0x41, 0x07,
- 0x00, 0x00, 0x4d, 0x91, 0xc8,
- 0x00, 0x00, 0x5e, 0xc1, 0xcc,
- 0x00, 0x00, 0x50, 0xcd, 0x88,
- 0x00, 0x00, 0x44, 0xd4, 0x08,
- 0x00, 0x00, 0x53, 0x82, 0x0e,
- 0x00, 0x00, 0x54, 0xba, 0x94,
- 0x00, 0x00, 0x54, 0xbf, 0xa4,
- 0x00, 0x00, 0x56, 0x72, 0xca,
- 0x00, 0x00, 0x58, 0x42, 0x0b,
- 0x00, 0x00, 0x55, 0xdc, 0x04,
- 0x00, 0x00, 0x55, 0xdc, 0x09,
- 0x00, 0x00, 0x4c, 0xba, 0x08,
- 0x00, 0x00, 0x44, 0xb7, 0x45,
- 0x00, 0x00, 0x5d, 0x58, 0xca,
- 0x00, 0x00, 0x49, 0x62, 0x87,
- 0x00, 0x00, 0x42, 0xff, 0xc4,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x4e, 0xe6, 0xc6,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x4e, 0xe6, 0xc6,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x58, 0xa7, 0xc3,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x17, 0x82,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0x1a, 0xc2,
- 0x00, 0x00, 0x40, 0x05, 0xc2,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x42, 0x8f, 0x84,
- 0x00, 0x00, 0x41, 0xe0, 0x02,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x5a, 0x63, 0x86,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x40, 0x26, 0x42,
- 0x00, 0x00, 0x42, 0x58, 0x42,
- 0x00, 0xc7, 0xc0, 0x3e, 0xc3,
- 0x00, 0xc8, 0x45, 0x59, 0x83,
- 0x00, 0x00, 0x06, 0x35, 0x86,
- 0x00, 0x00, 0x06, 0x35, 0x86,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x1d, 0xec, 0x0d,
- 0x00, 0x00, 0x1c, 0xec, 0x4a,
- 0x00, 0x00, 0x1a, 0x12, 0x46,
- 0x00, 0x00, 0x1d, 0x01, 0xcc,
- 0x00, 0xc9, 0xd1, 0xf1, 0x4d,
- 0x00, 0x00, 0x08, 0xf2, 0x8c,
- 0x00, 0xca, 0x85, 0x48, 0x4f,
- 0x00, 0x00, 0x1d, 0x8f, 0x0d,
- 0x00, 0x00, 0x07, 0x91, 0x84,
- 0x00, 0x00, 0x16, 0x90, 0x44,
- 0x00, 0x00, 0x0c, 0xdc, 0x84,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x09, 0x57, 0x09,
- 0x00, 0x00, 0x0a, 0x0f, 0xcc,
- 0x00, 0x00, 0x03, 0x42, 0xc7,
- 0x00, 0x00, 0x01, 0x2a, 0xc6,
- 0x00, 0x00, 0x01, 0x92, 0x88,
- 0x00, 0x00, 0x01, 0xf4, 0xc7,
- 0x00, 0x00, 0x02, 0x49, 0x88,
- 0x00, 0x00, 0x1b, 0xb4, 0xca,
- 0x00, 0x00, 0x11, 0xb4, 0x87,
- 0x00, 0x00, 0x0a, 0x12, 0x09,
- 0x00, 0xcb, 0x4d, 0x45, 0xc5,
- 0x00, 0x00, 0x0f, 0x48, 0xc9,
- 0x00, 0xcb, 0x83, 0x7e, 0x4b,
- 0x00, 0x00, 0x15, 0x11, 0xcb,
- 0x00, 0x00, 0x00, 0x2c, 0x4b,
- 0x00, 0x00, 0x17, 0x2b, 0xc8,
- 0x00, 0x00, 0x16, 0x12, 0x8a,
- 0x00, 0x00, 0x17, 0xc8, 0x8e,
- 0x00, 0xcc, 0x0b, 0x74, 0xca,
- 0x00, 0x00, 0x0e, 0x35, 0xcd,
- 0x00, 0x00, 0x02, 0xe7, 0x0d,
- 0x00, 0x02, 0x8d, 0x26, 0x8b,
- 0x00, 0x00, 0x0f, 0x10, 0xca,
- 0x00, 0x00, 0x02, 0x60, 0x84,
- 0x00, 0x00, 0x08, 0xa6, 0x46,
- 0x00, 0x00, 0x18, 0x96, 0xc8,
- 0x00, 0x00, 0x0c, 0x9a, 0x08,
- 0x00, 0x00, 0x03, 0x81, 0x07,
- 0x00, 0x00, 0x02, 0x6e, 0x45,
- 0x00, 0x00, 0x1e, 0x3b, 0x07,
- 0x00, 0x00, 0x0a, 0x24, 0xc9,
- 0x00, 0x00, 0x1d, 0x9d, 0x47,
- 0x00, 0x00, 0x00, 0x79, 0x08,
- 0x00, 0x00, 0x10, 0xf8, 0x49,
- 0x00, 0x00, 0x06, 0x0a, 0x04,
- 0x00, 0x00, 0x06, 0x85, 0xc5,
- 0x00, 0x00, 0x15, 0x44, 0x0e,
- 0x00, 0x00, 0x14, 0x55, 0xc7,
- 0x00, 0xcc, 0xc2, 0x71, 0xc6,
- 0x00, 0x00, 0x0b, 0xc8, 0x4d,
- 0x00, 0x00, 0x1d, 0x9b, 0xc8,
- 0x00, 0x00, 0x0f, 0x30, 0x08,
- 0x00, 0xcd, 0x48, 0x09, 0x86,
- 0x00, 0xce, 0x8b, 0x27, 0x88,
- 0x00, 0x00, 0x18, 0x2c, 0x0a,
- 0x00, 0x00, 0x06, 0x43, 0x48,
- 0x00, 0x00, 0x14, 0x31, 0x10,
- 0x00, 0x00, 0x06, 0x04, 0x8c,
- 0x00, 0x00, 0x07, 0x2c, 0x07,
- 0x00, 0x00, 0x07, 0x41, 0x07,
- 0x00, 0x00, 0x07, 0x9c, 0x87,
- 0x00, 0x00, 0x07, 0xfa, 0x47,
- 0x00, 0x00, 0x00, 0x8b, 0x02,
- 0x00, 0x00, 0x12, 0xa3, 0x87,
- 0x00, 0x00, 0x1c, 0x1e, 0x0c,
- 0x00, 0x00, 0x01, 0x4d, 0x05,
- 0x00, 0x00, 0x0c, 0xc1, 0x07,
- 0x00, 0x00, 0x0b, 0x6e, 0x06,
- 0x00, 0x00, 0x0b, 0x78, 0xc9,
- 0x00, 0x00, 0x0b, 0xac, 0x08,
- 0x00, 0x00, 0x01, 0x5f, 0xc2,
- 0x00, 0x00, 0x00, 0x05, 0xc2,
- 0x00, 0x00, 0x11, 0x6a, 0x86,
- 0x00, 0x00, 0x19, 0x4e, 0x0b,
- 0x00, 0x00, 0x17, 0x3c, 0xc6,
- 0x00, 0x00, 0x1d, 0xe6, 0x84,
- 0x00, 0x00, 0x1c, 0xf8, 0xc7,
- 0x00, 0x00, 0x08, 0x07, 0x89,
- 0x00, 0x00, 0x1e, 0x0b, 0x49,
- 0x00, 0x00, 0x1b, 0xa6, 0x88,
- 0x00, 0x00, 0x05, 0x10, 0xc2,
- 0x00, 0x00, 0x19, 0xa9, 0x89,
- 0x00, 0x00, 0x01, 0x15, 0x08,
- 0x00, 0x00, 0x0f, 0x0b, 0x8a,
- 0x00, 0x00, 0x0c, 0xeb, 0x48,
- 0x00, 0xcf, 0x4e, 0x09, 0x8b,
- 0x00, 0x00, 0x1d, 0xb9, 0xc9,
- 0x00, 0x00, 0x04, 0xb2, 0x06,
- 0x00, 0x00, 0x0e, 0x5a, 0x49,
- 0x00, 0x00, 0x0f, 0x10, 0x47,
- 0x00, 0x00, 0x0f, 0x19, 0x09,
- 0x00, 0x00, 0x0f, 0x2a, 0x48,
- 0x00, 0x00, 0x0f, 0x40, 0x87,
- 0x00, 0x00, 0x0f, 0x5a, 0x49,
- 0x00, 0x00, 0x0f, 0x8e, 0x05,
- 0x00, 0x00, 0x0f, 0x91, 0xd0,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x18, 0x1b, 0x86,
- 0x00, 0x00, 0x1c, 0xf8, 0x05,
- 0x00, 0x00, 0x0d, 0x98, 0x07,
- 0x00, 0x00, 0x04, 0x35, 0x0d,
- 0x00, 0x00, 0x1b, 0x77, 0xc9,
- 0x00, 0xd0, 0x4c, 0x88, 0xc3,
- 0x00, 0x00, 0x04, 0x71, 0x85,
- 0x00, 0x00, 0x1c, 0xbd, 0x46,
- 0x00, 0x00, 0x10, 0x4a, 0xc7,
- 0x00, 0x00, 0x10, 0xa9, 0x18,
- 0x00, 0x00, 0x1d, 0xa0, 0xc8,
- 0x00, 0x00, 0x08, 0x62, 0x4a,
- 0x00, 0x00, 0x01, 0xc5, 0x8e,
- 0x00, 0x00, 0x01, 0x00, 0x02,
- 0x00, 0xd0, 0xc5, 0x22, 0x8b,
- 0x00, 0xd1, 0x4e, 0x5b, 0x4a,
- 0x00, 0x00, 0x19, 0x42, 0xca,
- 0x00, 0x00, 0x06, 0x58, 0x4d,
- 0x00, 0x00, 0x00, 0x10, 0x42,
- 0x00, 0x00, 0x0d, 0xd0, 0xc6,
- 0x00, 0x00, 0x01, 0x5d, 0x46,
- 0x00, 0x00, 0x0c, 0x20, 0xc8,
- 0x00, 0x00, 0x0b, 0xa0, 0xca,
- 0x00, 0x00, 0x05, 0xa3, 0xc8,
- 0x00, 0x00, 0x1b, 0x95, 0x49,
- 0x00, 0x00, 0x11, 0xd9, 0x08,
- 0x00, 0x00, 0x07, 0x4c, 0x8e,
- 0x00, 0x00, 0x00, 0x63, 0x08,
- 0x00, 0x00, 0x14, 0x42, 0x07,
- 0x00, 0xd1, 0xcb, 0x26, 0xc4,
- 0x00, 0x00, 0x0c, 0xfc, 0x4d,
- 0x00, 0x00, 0x0c, 0xbd, 0x48,
- 0x00, 0x00, 0x11, 0x38, 0x45,
- 0x00, 0x00, 0x14, 0x6f, 0x48,
- 0x00, 0xd2, 0x58, 0x1f, 0x09,
- 0x00, 0x00, 0x03, 0x71, 0xc8,
- 0x00, 0xd2, 0x81, 0xf7, 0x4a,
- 0x00, 0x00, 0x00, 0x40, 0x42,
- 0x00, 0xd3, 0x4b, 0x24, 0xc8,
- 0x00, 0x00, 0x11, 0x9e, 0x46,
- 0x00, 0x00, 0x00, 0x5f, 0xc2,
- 0x00, 0x00, 0x0d, 0x05, 0x04,
- 0x00, 0x00, 0x07, 0x4b, 0x46,
- 0x00, 0xd3, 0x92, 0x3b, 0x48,
- 0x00, 0x00, 0x05, 0x41, 0x46,
- 0x00, 0xd4, 0x8d, 0xe5, 0x0b,
- 0x00, 0x00, 0x00, 0x36, 0x42,
- 0x00, 0xca, 0x43, 0xab, 0x84,
- 0x00, 0x00, 0x02, 0x19, 0x43,
- 0x00, 0x00, 0x16, 0xb4, 0x49,
- 0x00, 0x00, 0x00, 0x19, 0x08,
- 0x00, 0x00, 0x00, 0x25, 0x47,
- 0x00, 0x00, 0x02, 0xc0, 0xca,
- 0x00, 0x00, 0x07, 0x16, 0x87,
- 0x00, 0x00, 0x00, 0x04, 0x01,
- 0x00, 0x00, 0x00, 0x00, 0x81,
- 0x00, 0x00, 0x18, 0x86, 0x47,
- 0x00, 0x00, 0x11, 0x7e, 0x48,
- 0x00, 0x00, 0x0c, 0x70, 0xc8,
- 0x00, 0x00, 0x0c, 0x72, 0xc8,
- 0x00, 0x00, 0x0c, 0x74, 0xc8,
- 0x00, 0x00, 0x06, 0xcb, 0xc7,
- 0x00, 0x00, 0x0a, 0x86, 0x43,
- 0x00, 0xcd, 0xc3, 0xab, 0x84,
- 0x00, 0xce, 0x4d, 0x1f, 0xc3,
- 0x00, 0x00, 0x00, 0x00, 0xc1,
- 0x00, 0x00, 0x0f, 0xc9, 0x86,
- 0x00, 0x00, 0x00, 0x00, 0xc1,
- 0x00, 0x00, 0x00, 0x02, 0x01,
- 0x00, 0x00, 0x0f, 0xc9, 0x86,
- 0x00, 0x00, 0x0a, 0x86, 0x43,
- 0x00, 0xcf, 0xc4, 0xac, 0x44,
- 0x00, 0x00, 0x19, 0x0d, 0x04,
- 0x00, 0x00, 0x00, 0xe9, 0x85,
- 0x00, 0x00, 0x03, 0x9f, 0x45,
- 0x00, 0x00, 0x1c, 0xfa, 0x04,
- 0x00, 0x00, 0x00, 0x67, 0x84,
- 0x00, 0x00, 0x05, 0x15, 0x04,
- 0x00, 0x02, 0x81, 0x00, 0x87,
- 0x00, 0x02, 0x84, 0xab, 0x87,
- 0x00, 0x00, 0x1c, 0x74, 0x48,
- 0x00, 0x00, 0x1c, 0x14, 0x8c,
- 0x00, 0x00, 0x00, 0x0c, 0x01,
- 0x00, 0x00, 0x01, 0x4f, 0x83,
- 0x00, 0x00, 0x01, 0xec, 0xc4,
- 0x00, 0x00, 0x1b, 0xd0, 0x44,
- 0x00, 0x00, 0x02, 0x8d, 0x45,
- 0x00, 0x00, 0x1c, 0x74, 0x48,
- 0x00, 0xd4, 0x5c, 0x74, 0x48,
- 0x00, 0x00, 0x06, 0x8f, 0x03,
- 0x00, 0x00, 0x07, 0xe5, 0x83,
- 0x00, 0x00, 0x01, 0x2e, 0x03,
- 0x00, 0x00, 0x02, 0x26, 0x07,
- 0x00, 0x00, 0x00, 0x4a, 0x07,
- 0x00, 0x02, 0x9e, 0x51, 0x45,
- 0x00, 0x00, 0x05, 0x63, 0x44,
- 0x00, 0x00, 0x07, 0x2d, 0x47,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x03, 0x9f, 0x04,
- 0x00, 0x00, 0x1e, 0x0f, 0x4a,
- 0x00, 0x00, 0x40, 0x48, 0x84,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x45, 0x54, 0xc4,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x42, 0x53, 0x05,
- 0x00, 0x00, 0x41, 0x9d, 0xc3,
- 0x00, 0x00, 0x43, 0x73, 0x43,
- 0x00, 0x00, 0x53, 0xd8, 0x45,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x02, 0x35, 0xc3,
- 0x00, 0xd7, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x05, 0x54, 0xc4,
- 0x00, 0x00, 0x00, 0x3b, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x01, 0x81,
- 0x00, 0x00, 0x00, 0xf7, 0x43,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x42, 0x8f, 0x84,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x02, 0xb6, 0x43,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x05, 0xc2,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x10, 0xea, 0x47,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x13, 0x62, 0x85,
- 0x00, 0x00, 0x06, 0x05, 0xcf,
- 0x00, 0x00, 0x0e, 0x32, 0x46,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x02, 0x87, 0xe2, 0x48,
- 0x00, 0xd9, 0x40, 0x1b, 0xc2,
- 0x00, 0x00, 0x5d, 0xae, 0x48,
- 0x00, 0x00, 0x5c, 0xf3, 0x86,
- 0x00, 0x00, 0x4d, 0xb1, 0x06,
- 0x00, 0x00, 0x59, 0xd9, 0x47,
- 0x00, 0xd9, 0xc0, 0x87, 0xc2,
- 0x00, 0xda, 0x4c, 0x52, 0x48,
- 0x00, 0x00, 0x42, 0x9c, 0xca,
- 0x00, 0x00, 0x47, 0x26, 0x88,
- 0x00, 0x00, 0x40, 0x0b, 0x02,
- 0x00, 0x00, 0x53, 0x67, 0xc9,
- 0x00, 0x00, 0x50, 0x20, 0x87,
- 0x00, 0x00, 0x41, 0x76, 0xc6,
- 0x00, 0x00, 0x44, 0xfd, 0xc9,
- 0x00, 0x00, 0x43, 0xdf, 0x04,
- 0x00, 0x00, 0x58, 0xb9, 0xc6,
- 0x00, 0x00, 0x4d, 0xb5, 0x04,
- 0x00, 0x00, 0x42, 0x00, 0x44,
- 0x00, 0x00, 0x46, 0x4b, 0x09,
- 0x00, 0x00, 0x51, 0xb9, 0x46,
- 0x00, 0x00, 0x42, 0x7d, 0x45,
- 0x00, 0x00, 0x47, 0x83, 0xc5,
- 0x00, 0x00, 0x42, 0xf0, 0x07,
- 0x00, 0x00, 0x53, 0x40, 0x87,
- 0x00, 0x00, 0x5e, 0xdf, 0x44,
- 0x00, 0x00, 0x56, 0x04, 0x06,
- 0x00, 0x00, 0x4c, 0x64, 0x85,
- 0x00, 0x00, 0x5f, 0x31, 0xc5,
- 0x00, 0x00, 0x42, 0xf8, 0xc5,
- 0x00, 0x00, 0x43, 0x7a, 0xc7,
- 0x00, 0x00, 0x47, 0x68, 0x05,
- 0x00, 0x00, 0x44, 0xf8, 0xc9,
- 0x00, 0x00, 0x5d, 0xc5, 0x45,
- 0x00, 0x00, 0x54, 0xe6, 0x44,
- 0x00, 0x00, 0x41, 0xcc, 0x87,
- 0x00, 0x00, 0x53, 0xb0, 0x0e,
- 0x00, 0x00, 0x54, 0x6a, 0xc9,
- 0x00, 0x00, 0x5d, 0xa6, 0x09,
- 0x00, 0x00, 0x5b, 0xde, 0x06,
- 0x00, 0x00, 0x44, 0x3f, 0x48,
- 0x00, 0x00, 0x57, 0x8c, 0x0b,
- 0x00, 0x00, 0x4f, 0xec, 0x8c,
- 0x00, 0x00, 0x52, 0xdb, 0x46,
- 0x00, 0x00, 0x4c, 0x41, 0xc7,
- 0x00, 0x00, 0x4f, 0x83, 0x85,
- 0x00, 0x00, 0x51, 0x3a, 0xca,
- 0x00, 0x00, 0x5c, 0xef, 0xc9,
- 0x00, 0x00, 0x40, 0x0a, 0xc9,
- 0x00, 0x00, 0x4f, 0xbf, 0xc6,
- 0x00, 0x00, 0x5b, 0x1b, 0x45,
- 0x00, 0x00, 0x44, 0xb1, 0x05,
- 0x00, 0x00, 0x57, 0x50, 0x09,
- 0x00, 0x00, 0x42, 0xfa, 0x4b,
- 0x00, 0x00, 0x5c, 0xc9, 0x86,
- 0x00, 0x00, 0x55, 0x76, 0x86,
- 0x00, 0x00, 0x40, 0x8a, 0x44,
- 0x00, 0x00, 0x45, 0x29, 0x46,
- 0x00, 0x00, 0x50, 0xd0, 0xc8,
- 0x00, 0x00, 0x5d, 0x00, 0xc6,
- 0x00, 0x00, 0x47, 0xda, 0x46,
- 0x00, 0x00, 0x40, 0x42, 0x48,
- 0x00, 0x00, 0x40, 0x5a, 0x47,
- 0x00, 0x00, 0x40, 0x68, 0x89,
- 0x00, 0x00, 0x40, 0x74, 0x05,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x5e, 0x3a, 0x84,
- 0x00, 0x00, 0x51, 0xe2, 0xc4,
- 0x00, 0x00, 0x40, 0xd6, 0x05,
- 0x00, 0x00, 0x54, 0xa8, 0x09,
- 0x00, 0x00, 0x40, 0xda, 0x07,
- 0x00, 0x00, 0x40, 0xda, 0x0b,
- 0x00, 0x00, 0x42, 0x62, 0x0a,
- 0x00, 0x00, 0x42, 0x96, 0x85,
- 0x00, 0xda, 0xc0, 0x51, 0x82,
- 0x00, 0x00, 0x5f, 0x35, 0x87,
- 0x00, 0xdb, 0x42, 0x9a, 0x08,
- 0x00, 0x00, 0x5c, 0x58, 0x07,
- 0x00, 0x00, 0x4d, 0xf7, 0x45,
- 0x00, 0x00, 0x43, 0xb3, 0xca,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x48, 0xb0, 0x0b,
- 0x00, 0x00, 0x48, 0xd5, 0x8a,
- 0x00, 0x00, 0x47, 0x78, 0xc6,
- 0x00, 0x00, 0x55, 0xf3, 0x83,
- 0x00, 0x00, 0x40, 0x37, 0x4d,
- 0x00, 0x00, 0x5d, 0x7c, 0xcc,
- 0x00, 0x00, 0x40, 0xdc, 0x8d,
- 0x00, 0x00, 0x43, 0x65, 0x05,
- 0x00, 0x00, 0x41, 0x11, 0x85,
- 0x00, 0x00, 0x45, 0xf2, 0x07,
- 0x00, 0x00, 0x41, 0x8d, 0x49,
- 0x00, 0x00, 0x42, 0x9b, 0xc6,
- 0x00, 0x00, 0x47, 0x35, 0x85,
- 0x00, 0x00, 0x52, 0xac, 0x08,
- 0x00, 0x00, 0x43, 0xa7, 0x83,
- 0x00, 0x00, 0x4e, 0x44, 0xc8,
- 0x00, 0x00, 0x45, 0x28, 0x48,
- 0x00, 0x00, 0x5c, 0x49, 0x87,
- 0x00, 0x00, 0x43, 0xa7, 0x88,
- 0x00, 0x00, 0x43, 0xe2, 0x89,
- 0x00, 0x00, 0x57, 0xd0, 0x47,
- 0x00, 0x00, 0x5b, 0xdf, 0xc7,
- 0x00, 0x00, 0x5e, 0x4f, 0xc8,
- 0x00, 0x00, 0x41, 0x18, 0x84,
- 0x00, 0x00, 0x41, 0x18, 0x87,
- 0x00, 0x00, 0x48, 0x22, 0x88,
- 0x00, 0x00, 0x56, 0x7e, 0x86,
- 0x00, 0x00, 0x5c, 0x5f, 0xcf,
- 0x00, 0x00, 0x44, 0x4b, 0xc7,
- 0x00, 0x00, 0x55, 0xff, 0x06,
- 0x00, 0x00, 0x42, 0xdf, 0x85,
- 0x00, 0x00, 0x42, 0x59, 0xc3,
- 0x00, 0x00, 0x44, 0xd0, 0xc7,
- 0x00, 0x00, 0x58, 0xf6, 0x43,
- 0x00, 0x00, 0x45, 0x2f, 0xc6,
- 0x00, 0x00, 0x45, 0x62, 0x46,
- 0x00, 0x00, 0x45, 0x9a, 0x46,
- 0x00, 0x00, 0x49, 0xfe, 0x05,
- 0x00, 0x00, 0x47, 0x7d, 0xc3,
- 0x00, 0x00, 0x59, 0xa6, 0x08,
- 0x00, 0x00, 0x5a, 0x38, 0x89,
- 0x00, 0x00, 0x45, 0xa0, 0x8b,
- 0x00, 0x00, 0x45, 0xb5, 0x88,
- 0x00, 0x00, 0x45, 0xc7, 0x05,
- 0x00, 0x00, 0x45, 0xe8, 0x05,
- 0x00, 0xdb, 0xc5, 0x96, 0xc2,
- 0x00, 0x00, 0x46, 0x63, 0x49,
- 0x00, 0x00, 0x5d, 0x1c, 0x87,
- 0x00, 0x00, 0x41, 0xe1, 0x45,
- 0x00, 0x00, 0x46, 0x4a, 0x07,
- 0x00, 0x00, 0x46, 0x6e, 0x86,
- 0x00, 0x00, 0x46, 0x7e, 0x45,
- 0x00, 0x00, 0x46, 0xa7, 0x8b,
- 0x00, 0x00, 0x46, 0xd0, 0x04,
- 0x00, 0x00, 0x47, 0x18, 0x45,
- 0x00, 0x00, 0x47, 0x19, 0x87,
- 0x00, 0x00, 0x48, 0x5a, 0x46,
- 0x00, 0x00, 0x48, 0x67, 0x85,
- 0x00, 0x00, 0x49, 0x2a, 0x87,
- 0x00, 0x00, 0x49, 0x2f, 0xc7,
- 0x00, 0x00, 0x4c, 0x6b, 0x84,
- 0x00, 0x00, 0x4a, 0x0d, 0xca,
- 0x00, 0x00, 0x4b, 0x76, 0xc8,
- 0x00, 0x00, 0x57, 0x8e, 0xc9,
- 0x00, 0x00, 0x52, 0x01, 0x05,
- 0x00, 0x00, 0x47, 0x5a, 0x06,
- 0x00, 0x00, 0x50, 0xd2, 0x8a,
- 0x00, 0x00, 0x47, 0x82, 0xc6,
- 0x00, 0x00, 0x5e, 0xc5, 0x87,
- 0x00, 0x00, 0x47, 0xa4, 0x0d,
- 0x00, 0x00, 0x4b, 0x42, 0xc9,
- 0x00, 0x00, 0x58, 0x45, 0x45,
- 0x00, 0x00, 0x5c, 0x3f, 0x07,
- 0x00, 0x00, 0x5d, 0xb5, 0x08,
- 0x00, 0x00, 0x5d, 0xbe, 0xc8,
- 0x00, 0x00, 0x53, 0xab, 0xc7,
- 0x00, 0x00, 0x5c, 0x2e, 0x06,
- 0x00, 0x00, 0x41, 0x61, 0x07,
- 0x00, 0x00, 0x45, 0x56, 0xc3,
- 0x00, 0x00, 0x51, 0xb8, 0xc4,
- 0x00, 0x00, 0x58, 0x5d, 0x45,
- 0x00, 0x00, 0x5b, 0x02, 0x07,
- 0x00, 0x00, 0x5b, 0xa2, 0x89,
- 0x00, 0x00, 0x42, 0x5d, 0xc8,
- 0x00, 0x00, 0x5e, 0xc4, 0x85,
- 0x00, 0x00, 0x47, 0x38, 0x44,
- 0x00, 0x00, 0x44, 0xf3, 0x05,
- 0x00, 0x00, 0x45, 0xea, 0x4d,
- 0x00, 0x00, 0x40, 0x70, 0x02,
- 0x00, 0x00, 0x4d, 0x62, 0x46,
- 0x00, 0x00, 0x4c, 0x89, 0x46,
- 0x00, 0x00, 0x50, 0xe2, 0xca,
- 0x00, 0x00, 0x5a, 0x16, 0x06,
- 0x00, 0x00, 0x5a, 0xdc, 0x85,
- 0x00, 0x00, 0x58, 0xc3, 0xc5,
- 0x00, 0x00, 0x58, 0xc3, 0xc7,
- 0x00, 0x00, 0x5b, 0x30, 0x0c,
- 0x00, 0x00, 0x46, 0x41, 0xca,
- 0x00, 0x00, 0x49, 0xb0, 0x46,
- 0x00, 0x00, 0x4e, 0x0d, 0xc5,
- 0x00, 0x00, 0x45, 0x27, 0x86,
- 0x00, 0x00, 0x49, 0xb9, 0x07,
- 0x00, 0x00, 0x49, 0xde, 0x46,
- 0x00, 0x00, 0x49, 0xfd, 0x0c,
- 0x00, 0x00, 0x44, 0xff, 0x09,
- 0x00, 0xdc, 0x44, 0x44, 0x47,
- 0x00, 0x00, 0x4a, 0x30, 0x45,
- 0x00, 0x00, 0x4a, 0x30, 0x46,
- 0x00, 0x00, 0x4a, 0x35, 0x48,
- 0x00, 0x00, 0x44, 0x96, 0x85,
- 0x00, 0x00, 0x4b, 0x4f, 0x85,
- 0x00, 0x00, 0x4b, 0x57, 0x08,
- 0x00, 0x00, 0x4b, 0x59, 0x0a,
- 0x00, 0xdc, 0xc0, 0xbd, 0x82,
- 0x00, 0xdd, 0x40, 0x9b, 0x02,
- 0x00, 0x00, 0x4a, 0xfc, 0x05,
- 0x00, 0x00, 0x51, 0xc7, 0x03,
- 0x00, 0x00, 0x51, 0xdf, 0x88,
- 0x00, 0x00, 0x48, 0x5e, 0x43,
- 0x00, 0x00, 0x4b, 0x5b, 0x84,
- 0x00, 0x00, 0x4a, 0x8d, 0x8b,
- 0x00, 0x00, 0x4b, 0x90, 0xc8,
- 0x00, 0x00, 0x53, 0x3d, 0x88,
- 0x00, 0xdd, 0xd4, 0xda, 0x49,
- 0x00, 0x00, 0x4b, 0xbe, 0x09,
- 0x00, 0x00, 0x4b, 0xc7, 0x46,
- 0x00, 0x00, 0x4b, 0xe3, 0x88,
- 0x00, 0x00, 0x4b, 0xe5, 0x89,
- 0x00, 0x00, 0x4c, 0x02, 0x06,
- 0x00, 0x00, 0x4c, 0x03, 0x85,
- 0x00, 0x00, 0x44, 0xe1, 0x86,
- 0x00, 0x00, 0x4c, 0x08, 0x09,
- 0x00, 0x00, 0x4d, 0x7e, 0xc7,
- 0x00, 0x00, 0x45, 0x52, 0x06,
- 0x00, 0x00, 0x55, 0x85, 0x87,
- 0x00, 0x00, 0x55, 0x8e, 0x47,
- 0x00, 0x00, 0x5a, 0x0e, 0x04,
- 0x00, 0xde, 0x5e, 0x4e, 0x09,
- 0x00, 0x00, 0x4d, 0x42, 0x08,
- 0x00, 0x00, 0x4c, 0x51, 0x48,
- 0x00, 0x00, 0x58, 0xd2, 0xc7,
- 0x00, 0x00, 0x4e, 0x1e, 0x06,
- 0x00, 0x00, 0x5c, 0xc1, 0x89,
- 0x00, 0x00, 0x4d, 0xb7, 0xc7,
- 0x00, 0x00, 0x5a, 0xf3, 0xca,
- 0x00, 0x00, 0x5e, 0xdb, 0x48,
- 0x00, 0x00, 0x47, 0x75, 0xc7,
- 0x00, 0x00, 0x4e, 0x4d, 0x86,
- 0x00, 0x00, 0x5e, 0x0d, 0x4a,
- 0x00, 0x00, 0x54, 0x7c, 0x08,
- 0x00, 0x00, 0x4e, 0x88, 0x45,
- 0x00, 0x00, 0x4b, 0xa4, 0x85,
- 0x00, 0x00, 0x50, 0xa5, 0x47,
- 0x00, 0x00, 0x51, 0x85, 0x49,
- 0x00, 0x00, 0x51, 0x8a, 0x4b,
- 0x00, 0x00, 0x52, 0xee, 0x08,
- 0x00, 0x00, 0x5d, 0xc5, 0xc9,
- 0x00, 0x00, 0x45, 0xbc, 0x07,
- 0x00, 0x00, 0x4c, 0xed, 0x4c,
- 0x00, 0x00, 0x4c, 0xf2, 0x4c,
- 0x00, 0x00, 0x4c, 0xf5, 0x4a,
- 0x00, 0x00, 0x4c, 0xf7, 0xcc,
- 0x00, 0x00, 0x4d, 0xb0, 0x88,
- 0x00, 0x00, 0x4d, 0xb2, 0x88,
- 0x00, 0x00, 0x4d, 0xb4, 0x84,
- 0x00, 0x00, 0x4d, 0xc8, 0x89,
- 0x00, 0x00, 0x4d, 0xca, 0xc9,
- 0x00, 0x00, 0x4d, 0xcd, 0x0a,
- 0x00, 0x00, 0x4d, 0xcf, 0x89,
- 0x00, 0x00, 0x4d, 0xd3, 0x07,
- 0x00, 0x00, 0x5c, 0x5b, 0xcc,
- 0x00, 0x00, 0x5c, 0xab, 0x06,
- 0x00, 0x00, 0x47, 0x9d, 0xc8,
- 0x00, 0x00, 0x47, 0x83, 0x86,
- 0x00, 0x00, 0x51, 0x84, 0x06,
- 0x00, 0x00, 0x58, 0x44, 0x47,
- 0x00, 0x00, 0x59, 0x6b, 0xc8,
- 0x00, 0x00, 0x58, 0xbf, 0x8b,
- 0x00, 0x00, 0x4f, 0xc0, 0xc7,
- 0x00, 0x00, 0x46, 0x47, 0xc9,
- 0x00, 0x00, 0x46, 0x83, 0xc9,
- 0x00, 0x00, 0x48, 0xf5, 0x07,
- 0x00, 0x00, 0x4d, 0xb7, 0x44,
- 0x00, 0x00, 0x46, 0x73, 0x07,
- 0x00, 0x00, 0x4e, 0x92, 0x46,
- 0x00, 0x00, 0x41, 0x3f, 0x86,
- 0x00, 0x00, 0x54, 0xff, 0x05,
- 0x00, 0x00, 0x43, 0x0e, 0x48,
- 0x00, 0x00, 0x55, 0xda, 0x44,
- 0x00, 0x00, 0x55, 0xda, 0x46,
- 0x00, 0x00, 0x46, 0x40, 0x8b,
- 0x00, 0x00, 0x4a, 0xac, 0x09,
- 0x00, 0x00, 0x43, 0x79, 0xc6,
- 0x00, 0x00, 0x42, 0x65, 0x09,
- 0x00, 0x00, 0x40, 0xd6, 0xc6,
- 0x00, 0x00, 0x58, 0x78, 0xc8,
- 0x00, 0x00, 0x40, 0x77, 0x83,
- 0x00, 0x00, 0x5b, 0x1c, 0xc5,
- 0x00, 0x00, 0x41, 0x40, 0xc9,
- 0x00, 0x00, 0x40, 0x58, 0x05,
- 0x00, 0x00, 0x4f, 0xa4, 0xc4,
- 0x00, 0x00, 0x44, 0x49, 0x46,
- 0x00, 0x00, 0x47, 0xdb, 0x85,
- 0x00, 0x00, 0x46, 0x08, 0x06,
- 0x00, 0x00, 0x52, 0x30, 0x07,
- 0x00, 0x00, 0x5b, 0xee, 0xc6,
- 0x00, 0x00, 0x43, 0x4b, 0xcb,
- 0x00, 0x00, 0x47, 0xb1, 0x87,
- 0x00, 0x00, 0x48, 0x90, 0x46,
- 0x00, 0x00, 0x49, 0x2e, 0x46,
- 0x00, 0x00, 0x42, 0xf0, 0xc6,
- 0x00, 0x00, 0x5e, 0xdf, 0x09,
- 0x00, 0x00, 0x4b, 0x3d, 0x4a,
- 0x00, 0x00, 0x56, 0xd4, 0x05,
- 0x00, 0x00, 0x44, 0x51, 0x4d,
- 0x00, 0x00, 0x4b, 0x5a, 0x06,
- 0x00, 0x00, 0x4d, 0x00, 0xc6,
- 0x00, 0x00, 0x5a, 0x75, 0x46,
- 0x00, 0x00, 0x42, 0x24, 0x05,
- 0x00, 0x00, 0x4f, 0x94, 0xc7,
- 0x00, 0x00, 0x47, 0xc2, 0x47,
- 0x00, 0x00, 0x51, 0xbc, 0xce,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x4e, 0x1d, 0xc9,
- 0x00, 0x00, 0x5a, 0x29, 0x49,
- 0x00, 0x00, 0x51, 0x3e, 0xc7,
- 0x00, 0x00, 0x47, 0xc7, 0x07,
- 0x00, 0x00, 0x43, 0x59, 0xc5,
- 0x00, 0x00, 0x57, 0xb2, 0x05,
- 0x00, 0xde, 0xc0, 0x6c, 0x0f,
- 0x00, 0x00, 0x4e, 0x7e, 0xc7,
- 0x00, 0x00, 0x4e, 0x80, 0x88,
- 0x00, 0x00, 0x4e, 0x84, 0xc4,
- 0x00, 0x00, 0x4e, 0x87, 0x06,
- 0x00, 0xdf, 0x44, 0xa5, 0x02,
- 0x00, 0x00, 0x4e, 0xc7, 0x46,
- 0x00, 0x00, 0x4e, 0xe6, 0xc6,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x40, 0x1b, 0x8e,
- 0x00, 0x00, 0x4e, 0x43, 0x0a,
- 0x00, 0x00, 0x40, 0x3b, 0xc6,
- 0x00, 0x00, 0x41, 0x1b, 0x8a,
- 0x00, 0x00, 0x5c, 0xb2, 0xc9,
- 0x00, 0x00, 0x43, 0xb8, 0x45,
- 0x00, 0x00, 0x57, 0x1b, 0x48,
- 0x00, 0x00, 0x51, 0x64, 0x46,
- 0x00, 0x00, 0x4c, 0x3d, 0xc8,
- 0x00, 0x00, 0x50, 0x03, 0x08,
- 0x00, 0x00, 0x49, 0x4b, 0x8b,
- 0x00, 0x00, 0x59, 0xda, 0x45,
- 0x00, 0x00, 0x47, 0x68, 0x88,
- 0x00, 0x00, 0x40, 0x43, 0x8c,
- 0x00, 0x00, 0x4d, 0xf6, 0x07,
- 0x00, 0x00, 0x45, 0x6e, 0x06,
- 0x00, 0x00, 0x4e, 0x4a, 0xc8,
- 0x00, 0x00, 0x58, 0xbc, 0x48,
- 0x00, 0xdf, 0xc1, 0x22, 0x82,
- 0x00, 0x00, 0x5d, 0x4d, 0xcb,
- 0x00, 0x00, 0x54, 0xf5, 0x89,
- 0x00, 0x00, 0x58, 0xb8, 0x09,
- 0x00, 0x00, 0x40, 0x76, 0x07,
- 0x00, 0x00, 0x5c, 0xbb, 0x88,
- 0x00, 0xe0, 0x43, 0xeb, 0x48,
- 0x00, 0x00, 0x53, 0x2a, 0x8b,
- 0x00, 0x00, 0x45, 0x43, 0x89,
- 0x00, 0x00, 0x46, 0x6b, 0x4d,
- 0x00, 0x00, 0x54, 0xd4, 0xc8,
- 0x00, 0x00, 0x4d, 0x07, 0x08,
- 0x00, 0xe0, 0xc0, 0x15, 0x82,
- 0x00, 0x00, 0x41, 0xe7, 0xc4,
- 0x00, 0xe1, 0x42, 0x32, 0xc2,
- 0x00, 0x00, 0x56, 0xcf, 0x46,
- 0x00, 0xe1, 0xc0, 0xb7, 0xc2,
- 0x00, 0x00, 0x50, 0x79, 0x4a,
- 0x00, 0x00, 0x46, 0xc0, 0x06,
- 0x00, 0x00, 0x42, 0x6b, 0x88,
- 0x00, 0x00, 0x45, 0x06, 0x88,
- 0x00, 0x00, 0x46, 0x1b, 0xc6,
- 0x00, 0x00, 0x4c, 0x46, 0xc6,
- 0x00, 0x00, 0x50, 0xdb, 0x86,
- 0x00, 0x00, 0x41, 0xe3, 0xc5,
- 0x00, 0x00, 0x43, 0xbd, 0x04,
- 0x00, 0xe2, 0x58, 0x78, 0x44,
- 0x00, 0x00, 0x55, 0xcb, 0xc6,
- 0x00, 0x00, 0x45, 0x9e, 0xc7,
- 0x00, 0xe2, 0xc8, 0x8b, 0x47,
- 0x00, 0x00, 0x59, 0x79, 0xcb,
- 0x00, 0x00, 0x5c, 0x5a, 0x09,
- 0x00, 0x00, 0x41, 0x11, 0xca,
- 0x00, 0x00, 0x58, 0xc5, 0x04,
- 0x00, 0x00, 0x4d, 0xba, 0x88,
- 0x00, 0x00, 0x45, 0x4f, 0xcd,
- 0x00, 0x00, 0x50, 0x82, 0x49,
- 0x00, 0x00, 0x50, 0x84, 0x88,
- 0x00, 0x00, 0x50, 0x87, 0x09,
- 0x00, 0x00, 0x50, 0xa9, 0x04,
- 0x00, 0x00, 0x44, 0x17, 0x44,
- 0x00, 0x00, 0x48, 0x39, 0x05,
- 0x00, 0x00, 0x5b, 0x64, 0x4b,
- 0x00, 0x00, 0x4b, 0x90, 0x46,
- 0x00, 0x00, 0x55, 0xca, 0x05,
- 0x00, 0x00, 0x59, 0x00, 0x89,
- 0x00, 0x00, 0x56, 0x04, 0xc8,
- 0x00, 0x00, 0x46, 0x7e, 0x84,
- 0x00, 0x00, 0x51, 0x3c, 0x49,
- 0x00, 0x00, 0x43, 0x4b, 0x05,
- 0x00, 0x00, 0x53, 0x40, 0xc8,
- 0x00, 0x00, 0x5d, 0xaa, 0x07,
- 0x00, 0x00, 0x52, 0x0d, 0x88,
- 0x00, 0x00, 0x49, 0x03, 0x06,
- 0x00, 0x00, 0x5c, 0x18, 0x47,
- 0x00, 0x00, 0x4f, 0x26, 0x89,
- 0x00, 0x00, 0x53, 0xbe, 0x49,
- 0x00, 0x00, 0x42, 0x2c, 0x05,
- 0x00, 0x00, 0x45, 0x74, 0x45,
- 0x00, 0xe3, 0x41, 0xcd, 0x02,
- 0x00, 0x00, 0x54, 0xe4, 0x04,
- 0x00, 0x00, 0x4f, 0x63, 0x45,
- 0x00, 0x00, 0x59, 0xd8, 0x46,
- 0x00, 0x00, 0x58, 0x33, 0x05,
- 0x00, 0x00, 0x45, 0xe8, 0xc7,
- 0x00, 0x00, 0x4d, 0xaa, 0xc5,
- 0x00, 0x00, 0x48, 0x3e, 0x04,
- 0x00, 0x00, 0x5b, 0xde, 0xc6,
- 0x00, 0x00, 0x47, 0x36, 0x07,
- 0x00, 0x00, 0x44, 0xa5, 0x46,
- 0x00, 0x00, 0x5b, 0x21, 0x85,
- 0x00, 0x00, 0x40, 0xc8, 0x88,
- 0x00, 0x00, 0x5c, 0xf5, 0x85,
- 0x00, 0x00, 0x40, 0xf6, 0xc7,
- 0x00, 0x00, 0x41, 0xce, 0x89,
- 0x00, 0x00, 0x4a, 0xad, 0x4a,
- 0x00, 0x00, 0x42, 0x67, 0x07,
- 0x00, 0x00, 0x42, 0x67, 0x0c,
- 0x00, 0x00, 0x42, 0x7d, 0x06,
- 0x00, 0x00, 0x43, 0xf5, 0x09,
- 0x00, 0x00, 0x44, 0x7c, 0xc5,
- 0x00, 0x00, 0x44, 0x95, 0xc8,
- 0x00, 0x00, 0x41, 0x5d, 0x43,
- 0x00, 0x00, 0x4c, 0x8a, 0x05,
- 0x00, 0x00, 0x4f, 0x5e, 0x05,
- 0x00, 0x00, 0x49, 0x0b, 0x47,
- 0x00, 0xe3, 0xc0, 0x0b, 0x82,
- 0x00, 0x00, 0x50, 0x42, 0x07,
- 0x00, 0x00, 0x4d, 0xd6, 0x86,
- 0x00, 0x00, 0x5e, 0x34, 0x06,
- 0x00, 0x00, 0x4e, 0x85, 0x86,
- 0x00, 0x00, 0x58, 0xbb, 0x86,
- 0x00, 0x00, 0x44, 0xcb, 0xc8,
- 0x00, 0x00, 0x48, 0x98, 0xc5,
- 0x00, 0x00, 0x55, 0xff, 0xc7,
- 0x00, 0x00, 0x55, 0xff, 0xcd,
- 0x00, 0x00, 0x44, 0xe5, 0x83,
- 0x00, 0x00, 0x5c, 0xaf, 0xc5,
- 0x00, 0x00, 0x47, 0x71, 0x47,
- 0x00, 0x00, 0x50, 0x45, 0x48,
- 0x00, 0x00, 0x47, 0x6d, 0x05,
- 0x00, 0x00, 0x41, 0x98, 0xc8,
- 0x00, 0x00, 0x42, 0x8a, 0x46,
- 0x00, 0x00, 0x51, 0x6b, 0x87,
- 0x00, 0x00, 0x4f, 0x44, 0xc5,
- 0x00, 0x00, 0x59, 0xda, 0xc6,
- 0x00, 0x00, 0x59, 0xab, 0x85,
- 0x00, 0x00, 0x5c, 0xcd, 0x4a,
- 0x00, 0x00, 0x4f, 0x6a, 0xc6,
- 0x00, 0x00, 0x4d, 0x76, 0x47,
- 0x00, 0x00, 0x42, 0x7a, 0x05,
- 0x00, 0x00, 0x4f, 0x7f, 0x47,
- 0x00, 0x00, 0x4f, 0x8b, 0xc4,
- 0x00, 0x00, 0x4f, 0xa4, 0x46,
- 0x00, 0x00, 0x57, 0x1a, 0x85,
- 0x00, 0x00, 0x42, 0xc9, 0x8b,
- 0x00, 0x00, 0x4e, 0x90, 0xc9,
- 0x00, 0x00, 0x58, 0xa8, 0xca,
- 0x00, 0x00, 0x42, 0x2c, 0x88,
- 0x00, 0x00, 0x50, 0xb5, 0x88,
- 0x00, 0x00, 0x51, 0x0b, 0x8c,
- 0x00, 0x00, 0x51, 0x14, 0x07,
- 0x00, 0x00, 0x51, 0x47, 0x88,
- 0x00, 0x00, 0x55, 0xc5, 0x08,
- 0x00, 0x00, 0x56, 0xd0, 0x85,
- 0x00, 0x00, 0x52, 0x9a, 0x4a,
- 0x00, 0x00, 0x52, 0x64, 0x09,
- 0x00, 0xe4, 0x40, 0x19, 0x82,
- 0x00, 0x00, 0x4a, 0x06, 0x86,
- 0x00, 0x00, 0x43, 0x0c, 0x84,
- 0x00, 0x00, 0x43, 0x0c, 0x89,
- 0x00, 0x00, 0x42, 0x86, 0xc9,
- 0x00, 0x00, 0x51, 0x9a, 0x47,
- 0x00, 0x00, 0x47, 0xf8, 0x87,
- 0x00, 0x00, 0x48, 0x50, 0x89,
- 0x00, 0x00, 0x4c, 0xcd, 0x48,
- 0x00, 0x00, 0x4c, 0xcd, 0x4f,
- 0x00, 0x00, 0x41, 0x6b, 0x06,
- 0x00, 0x00, 0x4f, 0x04, 0x8b,
- 0x00, 0x00, 0x45, 0xf9, 0xc5,
- 0x00, 0x00, 0x45, 0xf9, 0xc7,
- 0x00, 0x00, 0x55, 0x4f, 0xc9,
- 0x00, 0x00, 0x42, 0x4c, 0x86,
- 0x00, 0x00, 0x51, 0x3b, 0xc7,
- 0x00, 0x00, 0x4f, 0x3b, 0x45,
- 0x00, 0x00, 0x43, 0x27, 0xc4,
- 0x00, 0x00, 0x5b, 0x6a, 0x46,
- 0x00, 0x00, 0x40, 0xdb, 0xc4,
- 0x00, 0x00, 0x4c, 0xd5, 0x07,
- 0x00, 0x00, 0x53, 0x98, 0x48,
- 0x00, 0xe4, 0xdb, 0x1a, 0x48,
- 0x00, 0x00, 0x5c, 0x70, 0x05,
- 0x00, 0x00, 0x5e, 0x52, 0x47,
- 0x00, 0x00, 0x4d, 0x68, 0x49,
- 0x00, 0x00, 0x40, 0xe2, 0x04,
- 0x00, 0x00, 0x44, 0x6d, 0x48,
- 0x00, 0xe5, 0x50, 0xa3, 0x88,
- 0x00, 0x00, 0x4d, 0x00, 0x44,
- 0x00, 0x00, 0x50, 0x0d, 0x08,
- 0x00, 0x00, 0x5c, 0x3b, 0x44,
- 0x00, 0x00, 0x5b, 0x71, 0x49,
- 0x00, 0x00, 0x5a, 0x74, 0x85,
- 0x00, 0xe5, 0xc3, 0xd9, 0x42,
- 0x00, 0x00, 0x41, 0x6b, 0x45,
- 0x00, 0x00, 0x4e, 0xa7, 0x85,
- 0x00, 0x00, 0x53, 0xb7, 0x88,
- 0x00, 0x00, 0x43, 0x6b, 0x47,
- 0x00, 0xe6, 0x40, 0x08, 0xc2,
- 0x00, 0x00, 0x5c, 0xc5, 0x45,
- 0x00, 0x00, 0x4e, 0xaf, 0xc6,
- 0x00, 0x00, 0x46, 0x7b, 0x06,
- 0x00, 0x00, 0x54, 0xe3, 0xc8,
- 0x00, 0x00, 0x55, 0x0c, 0x88,
- 0x00, 0x00, 0x58, 0x32, 0xc6,
- 0x00, 0x00, 0x58, 0x3e, 0x86,
- 0x00, 0x00, 0x50, 0xeb, 0x09,
- 0x00, 0x00, 0x5e, 0x33, 0x46,
- 0x00, 0x00, 0x42, 0x4b, 0x4b,
- 0x00, 0x00, 0x4f, 0xf3, 0xc5,
- 0x00, 0x00, 0x5a, 0x65, 0x86,
- 0x00, 0x00, 0x4a, 0xc8, 0x48,
- 0x00, 0x00, 0x50, 0x23, 0xc6,
- 0x00, 0x00, 0x4a, 0x23, 0x46,
- 0x00, 0x00, 0x41, 0xb0, 0x0a,
- 0x00, 0x00, 0x5a, 0x00, 0x8a,
- 0x00, 0x00, 0x45, 0xed, 0x45,
- 0x00, 0x00, 0x49, 0xb7, 0x47,
- 0x00, 0x00, 0x48, 0x34, 0x86,
- 0x00, 0xe6, 0xc0, 0x46, 0xc2,
- 0x00, 0x00, 0x47, 0x72, 0x87,
- 0x00, 0x00, 0x5e, 0x49, 0x05,
- 0x00, 0x00, 0x50, 0xd2, 0x04,
- 0x00, 0x00, 0x50, 0xd2, 0x05,
- 0x00, 0x00, 0x4d, 0xb9, 0x86,
- 0x00, 0x00, 0x58, 0x89, 0x47,
- 0x00, 0x00, 0x42, 0x0c, 0x85,
- 0x00, 0x00, 0x42, 0x87, 0x84,
- 0x00, 0x00, 0x4c, 0x61, 0x88,
- 0x00, 0x00, 0x4a, 0x24, 0x05,
- 0x00, 0x00, 0x4f, 0x52, 0x47,
- 0x00, 0x00, 0x53, 0x6d, 0x05,
- 0x00, 0x00, 0x58, 0x17, 0x85,
- 0x00, 0x00, 0x41, 0x27, 0x84,
- 0x00, 0x00, 0x54, 0x66, 0x09,
- 0x00, 0x00, 0x4c, 0x62, 0xc8,
- 0x00, 0x00, 0x44, 0x9d, 0x86,
- 0x00, 0x00, 0x5a, 0xaa, 0xc6,
- 0x00, 0x00, 0x53, 0xc3, 0xc6,
- 0x00, 0xe7, 0x52, 0xd1, 0x48,
- 0x00, 0x00, 0x50, 0xb4, 0x07,
- 0x00, 0x00, 0x59, 0x15, 0xcd,
- 0x00, 0x00, 0x56, 0x6a, 0x0c,
- 0x00, 0x00, 0x5e, 0x00, 0x89,
- 0x00, 0x00, 0x5e, 0x91, 0x09,
- 0x00, 0xe7, 0xd7, 0xd9, 0x42,
- 0x00, 0x00, 0x5e, 0x84, 0x43,
- 0x00, 0x00, 0x42, 0x81, 0x83,
- 0x00, 0x00, 0x4e, 0x93, 0x05,
- 0x00, 0x00, 0x5b, 0x03, 0x0a,
- 0x00, 0x00, 0x54, 0x46, 0x06,
- 0x00, 0x00, 0x5e, 0xc8, 0xc5,
- 0x00, 0x00, 0x52, 0x34, 0x04,
- 0x00, 0x00, 0x52, 0x34, 0x0b,
- 0x00, 0x00, 0x53, 0x91, 0x0c,
- 0x00, 0x00, 0x53, 0x9a, 0x4c,
- 0x00, 0x00, 0x53, 0x9d, 0x55,
- 0x00, 0x00, 0x53, 0xc7, 0x4d,
- 0x00, 0x00, 0x53, 0xeb, 0x8f,
- 0x00, 0x00, 0x53, 0xef, 0x52,
- 0x00, 0x00, 0x53, 0xf3, 0xcf,
- 0x00, 0x00, 0x53, 0xf7, 0x92,
- 0x00, 0x00, 0x53, 0xfc, 0x13,
- 0x00, 0x00, 0x54, 0x00, 0xcd,
- 0x00, 0x00, 0x54, 0x06, 0x8d,
- 0x00, 0x00, 0x54, 0x0a, 0x0e,
- 0x00, 0x00, 0x54, 0x13, 0x0e,
- 0x00, 0x00, 0x54, 0x19, 0x8c,
- 0x00, 0x00, 0x54, 0x1d, 0x4c,
- 0x00, 0x00, 0x54, 0x21, 0x8b,
- 0x00, 0x00, 0x54, 0x2c, 0x0e,
- 0x00, 0x00, 0x54, 0x35, 0x12,
- 0x00, 0x00, 0x54, 0x43, 0xcc,
- 0x00, 0x00, 0x54, 0x48, 0xd0,
- 0x00, 0x00, 0x55, 0x1b, 0x12,
- 0x00, 0x00, 0x55, 0x27, 0x8c,
- 0x00, 0x00, 0x55, 0x2e, 0x4d,
- 0x00, 0x00, 0x55, 0x31, 0x8c,
- 0x00, 0x00, 0x55, 0x64, 0x91,
- 0x00, 0x00, 0x55, 0x78, 0x0d,
- 0x00, 0x00, 0x55, 0x9f, 0x8d,
- 0x00, 0x00, 0x55, 0xa5, 0x8a,
- 0x00, 0x00, 0x55, 0xa8, 0x0c,
- 0x00, 0x00, 0x55, 0xbc, 0x0c,
- 0x00, 0x00, 0x55, 0xc7, 0x0c,
- 0x00, 0x00, 0x55, 0xd1, 0x8c,
- 0x00, 0x00, 0x56, 0x22, 0x13,
- 0x00, 0x00, 0x56, 0x2c, 0x10,
- 0x00, 0x00, 0x56, 0x30, 0x10,
- 0x00, 0x00, 0x56, 0x36, 0x8d,
- 0x00, 0x00, 0x56, 0x3c, 0x8c,
- 0x00, 0x00, 0x56, 0x70, 0x09,
- 0x00, 0x00, 0x56, 0x91, 0x4d,
- 0x00, 0x00, 0x56, 0x94, 0x93,
- 0x00, 0x00, 0x56, 0xba, 0x11,
- 0x00, 0x00, 0x56, 0xc2, 0x13,
- 0x00, 0x00, 0x56, 0xd5, 0x4f,
- 0x00, 0x00, 0x56, 0xd9, 0x0c,
- 0x00, 0x00, 0x56, 0xdc, 0x0f,
- 0x00, 0x00, 0x56, 0xdf, 0xcd,
- 0x00, 0x00, 0x56, 0xe5, 0xcf,
- 0x00, 0x00, 0x56, 0xe9, 0x90,
- 0x00, 0x00, 0x56, 0xf4, 0x0e,
- 0x00, 0x00, 0x57, 0x56, 0x4e,
- 0x00, 0x00, 0x57, 0x5f, 0x90,
- 0x00, 0x00, 0x57, 0x6a, 0xcd,
- 0x00, 0x00, 0x57, 0x74, 0x4e,
- 0x00, 0x00, 0x57, 0x77, 0xcc,
- 0x00, 0x00, 0x57, 0x86, 0x93,
- 0x00, 0x00, 0x57, 0xab, 0xce,
- 0x00, 0x00, 0x57, 0xb7, 0x10,
- 0x00, 0x00, 0x57, 0xbb, 0x11,
- 0x00, 0x00, 0x57, 0xbf, 0x4f,
- 0x00, 0x00, 0x57, 0xc3, 0x13,
- 0x00, 0x00, 0x57, 0xd4, 0xcd,
- 0x00, 0x00, 0x57, 0xd8, 0x0f,
- 0x00, 0x00, 0x57, 0xdb, 0xce,
- 0x00, 0x00, 0x57, 0xe1, 0x50,
- 0x00, 0x00, 0x57, 0xe5, 0x49,
- 0x00, 0x00, 0x57, 0xf9, 0xd0,
- 0x00, 0x00, 0x57, 0xfe, 0xcf,
- 0x00, 0x00, 0x58, 0x05, 0x4f,
- 0x00, 0x00, 0x58, 0x09, 0x12,
- 0x00, 0x00, 0x58, 0x24, 0x8e,
- 0x00, 0x00, 0x58, 0x2f, 0x4d,
- 0x00, 0x00, 0x58, 0x35, 0xcd,
- 0x00, 0x00, 0x58, 0x39, 0x0d,
- 0x00, 0x00, 0x58, 0x46, 0x8d,
- 0x00, 0x00, 0x58, 0x49, 0xcd,
- 0x00, 0x00, 0x58, 0x4d, 0x10,
- 0x00, 0x00, 0x58, 0x51, 0x0b,
- 0x00, 0x00, 0x58, 0x5b, 0x0c,
- 0x00, 0x00, 0x58, 0x5e, 0x8c,
- 0x00, 0x00, 0x58, 0x64, 0x8c,
- 0x00, 0x00, 0x58, 0x67, 0x8e,
- 0x00, 0x00, 0x59, 0x47, 0xd0,
- 0x00, 0x00, 0x59, 0x65, 0x12,
- 0x00, 0x00, 0x59, 0x69, 0x8b,
- 0x00, 0x00, 0x59, 0x6d, 0xce,
- 0x00, 0x00, 0x59, 0x71, 0x4e,
- 0x00, 0x00, 0x59, 0x80, 0x4e,
- 0x00, 0x00, 0x59, 0x85, 0xcb,
- 0x00, 0xe8, 0x59, 0x89, 0x56,
- 0x00, 0x00, 0x59, 0x98, 0x8d,
- 0x00, 0x00, 0x59, 0xa1, 0x54,
- 0x00, 0x00, 0x59, 0xae, 0x4d,
- 0x00, 0x00, 0x59, 0xd0, 0x95,
- 0x00, 0x00, 0x59, 0xf2, 0x8d,
- 0x00, 0x00, 0x59, 0xfc, 0x0f,
- 0x00, 0x00, 0x5a, 0x04, 0x0f,
- 0x00, 0x00, 0x5a, 0x3a, 0xcf,
- 0x00, 0x00, 0x5a, 0x3e, 0x8e,
- 0x00, 0x00, 0x5a, 0x42, 0x0d,
- 0x00, 0x00, 0x5a, 0x57, 0x51,
- 0x00, 0x00, 0x5a, 0x8e, 0xcc,
- 0x00, 0x00, 0x5a, 0x91, 0xcc,
- 0x00, 0x00, 0x5a, 0x94, 0xcb,
- 0x00, 0x00, 0x5a, 0x97, 0x8c,
- 0x00, 0x00, 0x5a, 0x9f, 0xcf,
- 0x00, 0x00, 0x5a, 0xa3, 0x92,
- 0x00, 0x00, 0x5a, 0xb1, 0x8d,
- 0x00, 0x00, 0x5a, 0xc3, 0xcc,
- 0x00, 0x00, 0x5a, 0xcc, 0xcc,
- 0x00, 0x00, 0x5a, 0xcf, 0xcd,
- 0x00, 0x00, 0x5a, 0xd3, 0x0f,
- 0x00, 0x00, 0x5a, 0xd6, 0xce,
- 0x00, 0x00, 0x5a, 0xff, 0xcc,
- 0x00, 0x00, 0x5b, 0x05, 0x8d,
- 0x00, 0x00, 0x5b, 0x08, 0xcb,
- 0x00, 0x00, 0x5b, 0x15, 0x4c,
- 0x00, 0x00, 0x5b, 0x26, 0x8d,
- 0x00, 0x00, 0x5b, 0x29, 0xce,
- 0x00, 0x00, 0x5b, 0x2d, 0x49,
- 0x00, 0x00, 0x5b, 0x3d, 0x13,
- 0x00, 0x00, 0x5b, 0x44, 0xcd,
- 0x00, 0x00, 0x5b, 0x4b, 0xcd,
- 0x00, 0x00, 0x5b, 0x51, 0xcc,
- 0x00, 0x00, 0x5b, 0x58, 0x8e,
- 0x00, 0x00, 0x5b, 0x7c, 0x4f,
- 0x00, 0x00, 0x5b, 0x80, 0x0c,
- 0x00, 0x00, 0x5b, 0x83, 0x0d,
- 0x00, 0x00, 0x5b, 0x86, 0x4f,
- 0x00, 0x00, 0x5b, 0x8a, 0x0c,
- 0x00, 0x00, 0x5b, 0x90, 0x0c,
- 0x00, 0x00, 0x5b, 0x9d, 0x4c,
- 0x00, 0x00, 0x5b, 0xa0, 0x4c,
- 0x00, 0x00, 0x5b, 0xac, 0x4d,
- 0x00, 0x00, 0x5b, 0xaf, 0x92,
- 0x00, 0x00, 0x5b, 0xba, 0x0c,
- 0x00, 0x00, 0x5b, 0xbd, 0x0c,
- 0x00, 0x00, 0x5b, 0xc0, 0x11,
- 0x00, 0x00, 0x5b, 0xc4, 0x4f,
- 0x00, 0x00, 0x5b, 0xc8, 0x0f,
- 0x00, 0x00, 0x5b, 0xcb, 0xd3,
- 0x00, 0x00, 0x5b, 0xf8, 0x4e,
- 0x00, 0x00, 0x5b, 0xfb, 0xcf,
- 0x00, 0x00, 0x5b, 0xff, 0x8c,
- 0x00, 0xe8, 0xdc, 0x06, 0x4e,
- 0x00, 0x00, 0x5c, 0x09, 0xcf,
- 0x00, 0x00, 0x5c, 0x0d, 0x96,
- 0x00, 0x00, 0x5c, 0x44, 0xd2,
- 0x00, 0x00, 0x5c, 0x7a, 0x8c,
- 0x00, 0x00, 0x5c, 0x81, 0x8f,
- 0x00, 0x00, 0x5c, 0x88, 0x0d,
- 0x00, 0x00, 0x5d, 0xf1, 0x0f,
- 0x00, 0x00, 0x5d, 0xf4, 0xcc,
- 0x00, 0x00, 0x5d, 0xf7, 0xcd,
- 0x00, 0x00, 0x5d, 0xfb, 0x0d,
- 0x00, 0x00, 0x5e, 0x16, 0x8e,
- 0x00, 0x00, 0x5e, 0x2b, 0x8c,
- 0x00, 0x00, 0x5e, 0x5b, 0x4c,
- 0x00, 0x00, 0x5e, 0x5e, 0x50,
- 0x00, 0x00, 0x5e, 0x77, 0xd1,
- 0x00, 0x00, 0x5e, 0x7c, 0x0b,
- 0x00, 0x00, 0x5e, 0x80, 0x4c,
- 0x00, 0x00, 0x5e, 0x83, 0x4e,
- 0x00, 0x00, 0x5e, 0x96, 0x51,
- 0x00, 0x00, 0x5e, 0x9a, 0x8e,
- 0x00, 0x00, 0x5e, 0x9e, 0x0d,
- 0x00, 0x00, 0x5e, 0xfb, 0x4b,
- 0x00, 0x00, 0x5f, 0x0c, 0x4f,
- 0x00, 0x00, 0x5f, 0x18, 0x94,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x43, 0x83,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x43, 0x83,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x3c, 0xc2,
- 0x00, 0x00, 0x44, 0xe1, 0xc5,
- 0x00, 0x00, 0x5e, 0x93, 0x4c,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x3c, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x4a, 0x43, 0xc5,
- 0x00, 0x00, 0x4a, 0xad, 0x45,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0xb7, 0x82,
- 0x00, 0x00, 0x4a, 0x43, 0xc5,
- 0x00, 0x00, 0x53, 0xcd, 0x09,
- 0x00, 0x00, 0x56, 0xb7, 0x0c,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x44, 0xe1, 0xc5,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0xb7, 0x82,
- 0x00, 0x00, 0x53, 0xcd, 0x09,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x4a, 0xad, 0x45,
- 0x00, 0x00, 0x40, 0x62, 0xc2,
- 0x00, 0x00, 0x4a, 0xad, 0x45,
- 0x00, 0x00, 0x56, 0xb7, 0x0c,
- 0x00, 0x00, 0x5e, 0x93, 0x4c,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x29, 0xcf,
- 0x00, 0x00, 0x13, 0xb5, 0x48,
- 0x00, 0x00, 0x07, 0x4d, 0xc4,
- 0x00, 0x00, 0x0e, 0x70, 0x08,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0xea, 0xc0, 0x22, 0x02,
- 0x00, 0x00, 0x44, 0x57, 0xc3,
- 0x00, 0x00, 0x4f, 0x16, 0x84,
- 0x00, 0x00, 0x40, 0x3b, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x04,
- 0x00, 0x00, 0x43, 0x13, 0x86,
- 0x00, 0x00, 0x44, 0x48, 0x43,
- 0x00, 0x00, 0x44, 0x48, 0x04,
- 0x00, 0x00, 0x49, 0xeb, 0xc5,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x5b, 0x6f, 0x0a,
- 0x00, 0x00, 0x5a, 0x63, 0x86,
- 0x00, 0x00, 0x59, 0x74, 0xcc,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x4e, 0xe6, 0xc6,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x01, 0xe1, 0x43,
- 0x00, 0x00, 0x0b, 0x71, 0x88,
- 0x00, 0xec, 0x1e, 0xd1, 0xc5,
- 0x00, 0x00, 0x07, 0xe6, 0x07,
- 0x00, 0x00, 0x05, 0x00, 0x85,
- 0x00, 0x00, 0x01, 0x79, 0x47,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x0a, 0x2a, 0x04,
- 0x00, 0x00, 0x0a, 0x2a, 0x0a,
- 0x00, 0x00, 0x00, 0x2d, 0x89,
- 0x00, 0x00, 0x00, 0x1a, 0xc2,
- 0x00, 0x00, 0x1c, 0x92, 0x8a,
- 0x00, 0xed, 0xdd, 0xd7, 0xc5,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x03, 0x42, 0xc7,
- 0x00, 0x00, 0x00, 0x62, 0x08,
- 0x00, 0x00, 0x00, 0x99, 0x0e,
- 0x00, 0x00, 0x09, 0x78, 0x52,
- 0x00, 0x00, 0x13, 0x60, 0x8b,
- 0x00, 0x00, 0x11, 0xb5, 0x86,
- 0x00, 0xee, 0x4d, 0x45, 0xc5,
- 0x00, 0xee, 0xcd, 0x45, 0xcc,
- 0x00, 0x00, 0x1e, 0x4a, 0x87,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x0d, 0xc2, 0x4a,
- 0x00, 0x00, 0x03, 0xf1, 0x50,
- 0x00, 0x00, 0x14, 0xdc, 0x45,
- 0x00, 0x00, 0x0b, 0xa9, 0x4b,
- 0x00, 0x00, 0x0c, 0x9a, 0x08,
- 0x00, 0x00, 0x03, 0x81, 0x07,
- 0x00, 0x00, 0x12, 0xbc, 0xcb,
- 0x00, 0x00, 0x0a, 0x24, 0xc9,
- 0x00, 0x00, 0x04, 0xe3, 0x87,
- 0x00, 0x00, 0x1d, 0x9d, 0x47,
- 0x00, 0x00, 0x1c, 0xcb, 0xc7,
- 0x00, 0x00, 0x03, 0x80, 0x46,
- 0x00, 0x00, 0x00, 0x79, 0x08,
- 0x00, 0xef, 0x83, 0x41, 0x06,
- 0x00, 0x00, 0x05, 0xa3, 0x07,
- 0x00, 0x00, 0x02, 0x8b, 0x86,
- 0x00, 0x00, 0x0b, 0xc8, 0x4d,
- 0x00, 0x00, 0x0d, 0xbc, 0x10,
- 0x00, 0xf0, 0x00, 0x4c, 0x02,
- 0x00, 0x00, 0x1d, 0x9b, 0xc8,
- 0x00, 0x00, 0x19, 0x30, 0x90,
- 0x00, 0x00, 0x19, 0x37, 0xcc,
- 0x00, 0xf0, 0xda, 0x4a, 0x0d,
- 0x00, 0x00, 0x06, 0xb4, 0xc8,
- 0x00, 0x00, 0x06, 0xbd, 0xcb,
- 0x00, 0x00, 0x07, 0xb6, 0xc7,
- 0x00, 0x00, 0x10, 0x30, 0xc9,
- 0x00, 0x00, 0x06, 0x36, 0x46,
- 0x00, 0x00, 0x0a, 0x37, 0x48,
- 0x00, 0x00, 0x01, 0x73, 0x82,
- 0x00, 0x00, 0x06, 0x80, 0x8a,
- 0x00, 0x00, 0x03, 0xee, 0xc7,
- 0x00, 0x00, 0x0c, 0xc1, 0x07,
- 0x00, 0x00, 0x0b, 0x78, 0xc9,
- 0x00, 0x00, 0x0b, 0xac, 0x08,
- 0x00, 0x00, 0x1c, 0x9f, 0x45,
- 0x00, 0x00, 0x06, 0xba, 0x47,
- 0x00, 0x00, 0x11, 0x6a, 0x86,
- 0x00, 0x00, 0x17, 0x3c, 0xc6,
- 0x00, 0x00, 0x10, 0x97, 0xce,
- 0x00, 0x00, 0x04, 0x8b, 0xce,
- 0x00, 0x00, 0x05, 0xe4, 0x4f,
- 0x00, 0x00, 0x08, 0x07, 0x89,
- 0x00, 0x00, 0x1e, 0x0b, 0x49,
- 0x00, 0x00, 0x0a, 0xf7, 0x8b,
- 0x00, 0x00, 0x0d, 0xe0, 0x4f,
- 0x00, 0x00, 0x19, 0xcd, 0x8c,
- 0x00, 0x00, 0x0d, 0x84, 0x4b,
- 0x00, 0x00, 0x12, 0x91, 0x48,
- 0x00, 0x00, 0x19, 0x78, 0xc7,
- 0x00, 0x00, 0x1a, 0x51, 0xc8,
- 0x00, 0x00, 0x0c, 0x1b, 0x0b,
- 0x00, 0x00, 0x0c, 0x26, 0x8c,
- 0x00, 0x00, 0x0c, 0x2a, 0x8c,
- 0x00, 0x00, 0x0c, 0x2e, 0x8c,
- 0x00, 0x00, 0x0c, 0x31, 0x8d,
- 0x00, 0x00, 0x1b, 0xa6, 0x88,
- 0x00, 0x00, 0x07, 0xe5, 0xc2,
- 0x00, 0x00, 0x19, 0xa9, 0x89,
- 0x00, 0x00, 0x0a, 0x8a, 0xc8,
- 0x00, 0x00, 0x0d, 0xe9, 0x4b,
- 0x00, 0x00, 0x0e, 0x20, 0x06,
- 0x00, 0x00, 0x0e, 0xa8, 0xcb,
- 0x00, 0x00, 0x14, 0x30, 0x4b,
- 0x00, 0x00, 0x0f, 0x32, 0x8a,
- 0x00, 0x00, 0x0f, 0x42, 0x45,
- 0x00, 0x00, 0x0f, 0x91, 0xd0,
- 0x00, 0x00, 0x10, 0x09, 0x86,
- 0x00, 0x00, 0x1b, 0xf0, 0x06,
- 0x00, 0x00, 0x1c, 0xf8, 0x05,
- 0x00, 0x00, 0x0d, 0x98, 0x07,
- 0x00, 0x00, 0x10, 0x10, 0x48,
- 0x00, 0x00, 0x10, 0x4a, 0xc7,
- 0x00, 0x00, 0x10, 0x4d, 0x87,
- 0x00, 0x00, 0x17, 0x2e, 0x07,
- 0x00, 0x00, 0x02, 0x02, 0x86,
- 0x00, 0x00, 0x16, 0xcd, 0x8a,
- 0x00, 0x00, 0x0b, 0x4c, 0x0a,
- 0x00, 0x00, 0x01, 0x5d, 0x46,
- 0x00, 0x00, 0x0c, 0xbe, 0xcd,
- 0x00, 0x00, 0x05, 0xa3, 0xc8,
- 0x00, 0x00, 0x11, 0xd9, 0x08,
- 0x00, 0x00, 0x01, 0x26, 0xc9,
- 0x00, 0x00, 0x08, 0x60, 0x09,
- 0x00, 0x00, 0x0d, 0xd5, 0x85,
- 0x00, 0x00, 0x16, 0x7f, 0xcc,
- 0x00, 0x00, 0x0c, 0x33, 0x8b,
- 0x00, 0x00, 0x01, 0xf0, 0x09,
- 0x00, 0x00, 0x11, 0x89, 0x84,
- 0x00, 0x00, 0x11, 0x9c, 0x09,
- 0x00, 0x00, 0x11, 0x9e, 0x46,
- 0x00, 0x00, 0x01, 0x32, 0x06,
- 0x00, 0x00, 0x00, 0x26, 0x42,
- 0x00, 0x00, 0x05, 0x41, 0x46,
- 0x00, 0x00, 0x08, 0x61, 0x8b,
- 0x00, 0x00, 0x12, 0x60, 0xc7,
- 0x00, 0x00, 0x12, 0x62, 0x87,
- 0x00, 0x00, 0x00, 0x36, 0x42,
- 0x00, 0x00, 0x0e, 0x3d, 0xc5,
- 0x00, 0x00, 0x00, 0x63, 0x84,
- 0x00, 0x00, 0x00, 0x01, 0x01,
- 0x00, 0x00, 0x05, 0xc1, 0x83,
- 0x00, 0xef, 0x56, 0x0f, 0xc6,
- 0x00, 0x00, 0x0d, 0x1f, 0xc3,
- 0x00, 0x00, 0x00, 0x03, 0x82,
- 0x00, 0x00, 0x00, 0x0e, 0x44,
- 0x00, 0x00, 0x00, 0x0b, 0x02,
- 0x00, 0x00, 0x01, 0x4f, 0x04,
- 0x00, 0x00, 0x00, 0x08, 0x82,
- 0x00, 0x00, 0x00, 0x8b, 0x82,
- 0x00, 0x00, 0x00, 0x8a, 0x42,
- 0x00, 0x00, 0x06, 0x97, 0x82,
- 0x00, 0x00, 0x00, 0x17, 0x82,
- 0x00, 0x00, 0x02, 0x19, 0x42,
- 0x00, 0x00, 0x00, 0x34, 0x02,
- 0x00, 0x00, 0x15, 0x47, 0xc2,
- 0x00, 0x00, 0x03, 0x19, 0x82,
- 0x00, 0x00, 0x05, 0x43, 0x02,
- 0x00, 0x00, 0x00, 0x23, 0xc2,
- 0x00, 0x00, 0x05, 0x64, 0x42,
- 0x00, 0x00, 0x01, 0xf6, 0x03,
- 0x00, 0x00, 0x00, 0x09, 0x42,
- 0x00, 0x00, 0x00, 0x13, 0x42,
- 0x00, 0x00, 0x00, 0xfd, 0x02,
- 0x00, 0x00, 0x00, 0x81, 0x02,
- 0x00, 0x00, 0x00, 0x06, 0x42,
- 0x00, 0x00, 0x02, 0x96, 0x02,
- 0x00, 0x00, 0x01, 0x5f, 0xc2,
- 0x00, 0x00, 0x00, 0x14, 0x42,
- 0x00, 0x00, 0x00, 0x41, 0x42,
- 0x00, 0x00, 0x00, 0x05, 0xc2,
- 0x00, 0x00, 0x01, 0x1e, 0x43,
- 0x00, 0x00, 0x00, 0x2b, 0x82,
- 0x00, 0x00, 0x00, 0x4b, 0x02,
- 0x00, 0x00, 0x05, 0x10, 0xc2,
- 0x00, 0x00, 0x00, 0x69, 0x82,
- 0x00, 0x00, 0x00, 0x6b, 0x42,
- 0x00, 0x00, 0x00, 0x95, 0x82,
- 0x00, 0x00, 0x01, 0xa2, 0x02,
- 0x00, 0x00, 0x00, 0x20, 0x42,
- 0x00, 0x00, 0x00, 0x0e, 0xc2,
- 0x00, 0x00, 0x19, 0x4c, 0xc2,
- 0x00, 0x00, 0x07, 0xd2, 0x02,
- 0x00, 0x00, 0x00, 0x70, 0xc2,
- 0x00, 0x00, 0x01, 0x09, 0xc3,
- 0x00, 0x00, 0x00, 0x06, 0x02,
- 0x00, 0x00, 0x01, 0x22, 0x82,
- 0x00, 0x00, 0x00, 0x1f, 0x42,
- 0x00, 0x00, 0x01, 0x62, 0x82,
- 0x00, 0x00, 0x02, 0x2b, 0x85,
- 0x00, 0x00, 0x00, 0x4f, 0x82,
- 0x00, 0x00, 0x01, 0xa0, 0x02,
- 0x00, 0x00, 0x1d, 0xea, 0x83,
- 0x00, 0x00, 0x00, 0x06, 0x82,
- 0x00, 0x00, 0x01, 0x00, 0x02,
- 0x00, 0x00, 0x00, 0x10, 0x42,
- 0x00, 0x00, 0x00, 0x1a, 0x42,
- 0x00, 0x00, 0x01, 0x0a, 0x82,
- 0x00, 0x00, 0x00, 0x08, 0xc2,
- 0x00, 0x00, 0x00, 0x5f, 0xc2,
- 0x00, 0x00, 0x00, 0x26, 0x42,
- 0x00, 0x00, 0x00, 0x2c, 0x45,
- 0x00, 0xf1, 0x40, 0x3c, 0xc2,
- 0x00, 0xf1, 0xd0, 0x93, 0x43,
- 0x00, 0x00, 0x01, 0x5c, 0x43,
- 0x00, 0xf2, 0x40, 0x3c, 0xc2,
- 0x00, 0x00, 0x01, 0x5c, 0x43,
- 0x00, 0x00, 0x0e, 0x18, 0xc7,
- 0x00, 0x00, 0x40, 0xd5, 0xc3,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x05, 0xc3,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x4d, 0x20, 0x03,
- 0x00, 0x00, 0x1a, 0x56, 0x43,
- 0x00, 0x00, 0x1a, 0x56, 0x44,
- 0x00, 0x00, 0x17, 0x97, 0xc6,
- 0x00, 0x00, 0x0d, 0xd5, 0xc4,
- 0x00, 0x00, 0x10, 0x05, 0x05,
- 0x00, 0x00, 0x10, 0xcf, 0x85,
- 0x00, 0x00, 0x1c, 0x36, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x01, 0x81,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x42, 0xb6, 0x43,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1d, 0xf0, 0x04,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x64, 0xc3,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x49, 0x0a, 0x43,
- 0x00, 0x00, 0x43, 0x45, 0x03,
- 0x00, 0x00, 0x4b, 0x48, 0x43,
- 0x00, 0x00, 0x44, 0xc8, 0xc3,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x5c, 0x7f, 0x44,
- 0x00, 0x00, 0x41, 0x11, 0x03,
- 0x00, 0x00, 0x00, 0x28, 0xc3,
- 0x00, 0x00, 0x44, 0xa6, 0x43,
- 0x00, 0x00, 0x53, 0x36, 0x48,
- 0x00, 0x00, 0x55, 0x8c, 0x84,
- 0x00, 0x00, 0x40, 0x02, 0x0a,
- 0x00, 0x00, 0x45, 0xef, 0xc6,
- 0x00, 0x00, 0x0d, 0x66, 0x44,
- 0x00, 0x00, 0x5b, 0x41, 0x87,
- 0x00, 0x00, 0x42, 0x37, 0x0a,
- 0x00, 0x00, 0x41, 0x69, 0xc9,
- 0x00, 0x00, 0x5c, 0x91, 0x07,
- 0x00, 0x00, 0x5c, 0xe4, 0x8a,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x4a, 0xfc, 0x8b,
- 0x00, 0x00, 0x57, 0x94, 0x49,
- 0x00, 0x00, 0x58, 0x6f, 0xc5,
- 0x00, 0x00, 0x5b, 0x93, 0x07,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x46, 0xa5, 0xc7,
- 0x00, 0x00, 0x50, 0xfe, 0xc5,
- 0x00, 0x00, 0x4d, 0xb6, 0x09,
- 0x00, 0x00, 0x01, 0xb9, 0x8e,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x43, 0xbf, 0xc6,
- 0x00, 0x00, 0x4d, 0x96, 0xc3,
- 0x00, 0x00, 0x0d, 0xd7, 0x03,
- 0x00, 0x00, 0x12, 0x25, 0x06,
- 0x00, 0x00, 0x1d, 0xd6, 0x86,
- 0x00, 0x00, 0x1f, 0x41, 0x47,
- 0x00, 0x00, 0x41, 0x42, 0xc6,
- 0x00, 0x00, 0x41, 0xaa, 0x05,
- 0x00, 0x00, 0x40, 0x74, 0xc7,
- 0x00, 0x00, 0x4e, 0xf9, 0xc7,
- 0x00, 0xf7, 0xc0, 0x55, 0x03,
- 0x00, 0x00, 0x55, 0x29, 0xc7,
- 0x00, 0x00, 0x48, 0x5f, 0xc3,
- 0x00, 0x00, 0x0b, 0xa3, 0x49,
- 0x00, 0x00, 0x50, 0x1f, 0x85,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x4c, 0x7b, 0xc8,
- 0x00, 0x00, 0x5e, 0x86, 0x8c,
- 0x00, 0x00, 0x4c, 0x6e, 0xc5,
- 0x00, 0x00, 0x4b, 0x44, 0x46,
- 0x00, 0x00, 0x50, 0xfd, 0x87,
- 0x00, 0x00, 0x41, 0xf3, 0x07,
- 0x00, 0x00, 0x48, 0x91, 0x87,
- 0x00, 0x00, 0x48, 0x9c, 0x08,
- 0x00, 0x00, 0x52, 0x11, 0x8f,
- 0x00, 0x00, 0x5b, 0xec, 0x05,
- 0x00, 0x00, 0x40, 0x52, 0x47,
- 0x00, 0x00, 0x4d, 0x4d, 0x07,
- 0x00, 0x00, 0x5d, 0x84, 0x8a,
- 0x00, 0x00, 0x52, 0xaa, 0x49,
- 0x00, 0x00, 0x52, 0x26, 0x45,
- 0x00, 0x00, 0x53, 0x39, 0x0a,
- 0x00, 0x00, 0x12, 0x70, 0xc6,
- 0x00, 0x00, 0x0c, 0xdd, 0x87,
- 0x00, 0x00, 0x4d, 0x97, 0x45,
- 0x00, 0x00, 0x59, 0x13, 0xc4,
- 0x00, 0x00, 0x54, 0x57, 0x86,
- 0x00, 0x00, 0x18, 0x20, 0xc6,
- 0x00, 0x00, 0x59, 0x3a, 0x47,
- 0x00, 0x00, 0x4d, 0xff, 0x87,
- 0x00, 0x00, 0x53, 0xad, 0x48,
- 0x00, 0x00, 0x40, 0xf8, 0x45,
- 0x00, 0x00, 0x46, 0xa4, 0xc6,
- 0x00, 0x00, 0x02, 0x05, 0x08,
- 0x00, 0x00, 0x47, 0xd9, 0xc5,
- 0x00, 0x00, 0x02, 0x64, 0x46,
- 0x00, 0x00, 0x47, 0x15, 0x85,
- 0x00, 0x00, 0x44, 0x30, 0x04,
- 0x00, 0x00, 0x44, 0x0f, 0xc7,
- 0x00, 0x00, 0x44, 0xca, 0x0a,
- 0x00, 0x00, 0x4a, 0xdd, 0x88,
- 0x00, 0x00, 0x40, 0x5e, 0x46,
- 0x00, 0x00, 0x03, 0xdd, 0xc3,
- 0x00, 0x00, 0x4d, 0xfa, 0x85,
- 0x00, 0x00, 0x4b, 0x00, 0x06,
- 0x00, 0x00, 0x5c, 0x5e, 0x06,
- 0x00, 0x00, 0x40, 0x1e, 0x46,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x5a, 0xb4, 0x07,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x4d, 0x4c, 0x85,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x4f, 0x35, 0x4d,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x53, 0xae, 0x48,
- 0x00, 0x00, 0x47, 0xfc, 0x04,
- 0x00, 0x00, 0x4b, 0x03, 0x05,
- 0x00, 0x00, 0x4b, 0x5b, 0xc6,
- 0x00, 0x00, 0x5d, 0x4b, 0x46,
- 0x00, 0x00, 0x5a, 0x64, 0x87,
- 0x00, 0x00, 0x45, 0x42, 0x47,
- 0x00, 0x00, 0x47, 0xe9, 0x45,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x56, 0xac, 0x87,
- 0x00, 0x00, 0x5a, 0x2e, 0x09,
- 0x00, 0x00, 0x52, 0x3c, 0x89,
- 0x00, 0x00, 0x58, 0x93, 0x4a,
- 0x00, 0x00, 0x40, 0x72, 0x02,
- 0x00, 0x00, 0x50, 0x1f, 0x44,
- 0x00, 0x00, 0x52, 0x44, 0x44,
- 0x00, 0x00, 0x50, 0x3f, 0x07,
- 0x00, 0x00, 0x50, 0x40, 0xc8,
- 0x00, 0x00, 0x50, 0x5b, 0x49,
- 0x00, 0x00, 0x5c, 0xae, 0x89,
- 0x00, 0x00, 0x50, 0x67, 0x87,
- 0x00, 0x00, 0x11, 0x16, 0x09,
- 0x00, 0x00, 0x40, 0xbc, 0x06,
- 0x00, 0x00, 0x10, 0x95, 0x46,
- 0x00, 0x00, 0x50, 0xa9, 0x04,
- 0x00, 0x00, 0x42, 0xe9, 0x8a,
- 0x00, 0x00, 0x50, 0xcb, 0xc8,
- 0x00, 0x00, 0x50, 0xda, 0x49,
- 0x00, 0x00, 0x50, 0xde, 0x86,
- 0x00, 0x00, 0x4c, 0xb1, 0x45,
- 0x00, 0x00, 0x4a, 0xdc, 0x48,
- 0x00, 0x00, 0x4e, 0x22, 0x8a,
- 0x00, 0x00, 0x47, 0xb4, 0x03,
- 0x00, 0x00, 0x53, 0x58, 0x46,
- 0x00, 0x00, 0x50, 0x68, 0x87,
- 0x00, 0x00, 0x55, 0xeb, 0x45,
- 0x00, 0x00, 0x08, 0x2b, 0x48,
- 0x00, 0x00, 0x5b, 0xd3, 0x45,
- 0x00, 0x00, 0x41, 0x30, 0xc3,
- 0x00, 0x00, 0x41, 0xbc, 0x04,
- 0x00, 0x00, 0x04, 0xd5, 0x09,
- 0x00, 0x00, 0x4b, 0xa4, 0x45,
- 0x00, 0x00, 0x49, 0x30, 0xc7,
- 0x00, 0x00, 0x4c, 0x64, 0x05,
- 0x00, 0x00, 0x4f, 0x0a, 0x46,
- 0x00, 0x00, 0x10, 0x62, 0xc5,
- 0x00, 0x00, 0x40, 0x3c, 0x83,
- 0x00, 0x00, 0x40, 0x3c, 0x89,
- 0x00, 0x00, 0x4b, 0x00, 0xcc,
- 0x00, 0x00, 0x4d, 0x72, 0x0c,
- 0x00, 0x00, 0x51, 0x42, 0xc8,
- 0x00, 0x00, 0x4a, 0x3b, 0xc7,
- 0x00, 0x00, 0x51, 0x56, 0xc8,
- 0x00, 0x00, 0x11, 0x5d, 0x07,
- 0x00, 0x00, 0x51, 0x68, 0x8a,
- 0x00, 0x00, 0x51, 0x6e, 0x8b,
- 0x00, 0x00, 0x57, 0x95, 0x88,
- 0x00, 0x00, 0x5d, 0x4c, 0x48,
- 0x00, 0x00, 0x43, 0x87, 0x46,
- 0x00, 0x00, 0x5e, 0x19, 0x85,
- 0x00, 0x00, 0x54, 0x71, 0x0a,
- 0x00, 0x00, 0x42, 0x98, 0xc5,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x4e, 0x0f, 0x47,
- 0x00, 0x00, 0x48, 0xe5, 0x06,
- 0x00, 0x00, 0x57, 0xf2, 0x05,
- 0x00, 0x00, 0x51, 0x8d, 0x09,
- 0x00, 0x00, 0x5b, 0xe7, 0x85,
- 0x00, 0x00, 0x1d, 0x7b, 0x08,
- 0x00, 0x00, 0x4b, 0xc3, 0x85,
- 0x00, 0x00, 0x50, 0x1b, 0x49,
- 0x00, 0x00, 0x52, 0xba, 0xc6,
- 0x00, 0x00, 0x5d, 0x3a, 0x08,
- 0x00, 0x00, 0x4b, 0x03, 0xc3,
- 0x00, 0x00, 0x40, 0xb5, 0x86,
- 0x00, 0x00, 0x44, 0x48, 0x86,
- 0x00, 0x00, 0x52, 0x56, 0xc5,
- 0x00, 0x00, 0x52, 0x56, 0xc9,
- 0x00, 0x00, 0x48, 0x3f, 0x09,
- 0x00, 0x00, 0x46, 0x99, 0xc7,
- 0x00, 0x00, 0x12, 0x98, 0x04,
- 0x00, 0x00, 0x52, 0x98, 0x07,
- 0x00, 0x00, 0x5c, 0xad, 0x89,
- 0x00, 0x00, 0x42, 0x39, 0x05,
- 0x00, 0x00, 0x03, 0xbe, 0x08,
- 0x00, 0x00, 0x5c, 0x2b, 0x85,
- 0x00, 0x00, 0x5d, 0x56, 0x85,
- 0x00, 0x00, 0x5d, 0xbc, 0x09,
- 0x00, 0x00, 0x40, 0x61, 0x82,
- 0x00, 0x00, 0x50, 0xf5, 0xc4,
- 0x00, 0x00, 0x40, 0x16, 0x02,
- 0x00, 0x00, 0x40, 0x2b, 0x82,
- 0x00, 0x00, 0x4e, 0xcd, 0x85,
- 0x00, 0x00, 0x52, 0x58, 0xc8,
- 0x00, 0x00, 0x4c, 0xdb, 0x45,
- 0x00, 0x00, 0x4d, 0xd4, 0xc3,
- 0x00, 0x00, 0x4d, 0xd4, 0xc5,
- 0x00, 0x00, 0x4e, 0xc9, 0x43,
- 0x00, 0x00, 0x40, 0xa7, 0x02,
- 0x00, 0x00, 0x42, 0xb3, 0x84,
- 0x00, 0x00, 0x40, 0x32, 0x83,
- 0x00, 0x00, 0x40, 0x67, 0x02,
- 0x00, 0x00, 0x50, 0x17, 0xc4,
- 0x00, 0x00, 0x51, 0xda, 0xc3,
- 0x00, 0x00, 0x40, 0xbc, 0x02,
- 0x00, 0x00, 0x45, 0x4b, 0x83,
- 0x00, 0x00, 0x41, 0x5c, 0xc4,
- 0x00, 0x00, 0x50, 0xe0, 0x03,
- 0x00, 0x00, 0x45, 0x94, 0x44,
- 0x00, 0x00, 0x40, 0x48, 0x42,
- 0x00, 0x00, 0x41, 0x95, 0x83,
- 0x00, 0x00, 0x41, 0x1c, 0x83,
- 0x00, 0x00, 0x40, 0x3c, 0x82,
- 0x00, 0x00, 0x4b, 0x26, 0xc2,
- 0x00, 0x00, 0x48, 0x3d, 0x49,
- 0x00, 0x00, 0x40, 0x74, 0x82,
- 0x00, 0x00, 0x49, 0x86, 0x84,
- 0x00, 0x00, 0x40, 0x20, 0x02,
- 0x00, 0x00, 0x46, 0x16, 0x44,
- 0x00, 0x00, 0x40, 0xbb, 0xc4,
- 0x00, 0x00, 0x50, 0xba, 0x04,
- 0x00, 0x00, 0x40, 0x26, 0x42,
- 0x00, 0x00, 0x43, 0x83, 0x82,
- 0x00, 0x00, 0x42, 0xdd, 0xc3,
- 0x00, 0x00, 0x51, 0xf0, 0x43,
- 0x00, 0x00, 0x4c, 0xd7, 0xc4,
- 0x00, 0x00, 0x50, 0x6a, 0x04,
- 0x00, 0x00, 0x52, 0x99, 0x84,
- 0x00, 0x00, 0x53, 0x38, 0x04,
- 0x00, 0x00, 0x52, 0x50, 0x43,
- 0x00, 0x00, 0x57, 0x05, 0xc3,
- 0x00, 0x00, 0x52, 0x70, 0x44,
- 0x00, 0x00, 0x52, 0xb3, 0x44,
- 0x00, 0x00, 0x52, 0xb4, 0x86,
- 0x00, 0x00, 0x5d, 0x55, 0x02,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x04, 0x71, 0x83,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x36, 0xc5,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x4d, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x48, 0x40, 0x04,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x50, 0xaf, 0x04,
- 0x00, 0x00, 0x53, 0x0b, 0x03,
- 0x00, 0x00, 0x43, 0x75, 0xc3,
- 0x00, 0x00, 0x58, 0x32, 0x04,
- 0x00, 0x00, 0x5c, 0x29, 0x86,
- 0x00, 0x00, 0x40, 0x78, 0x43,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x41, 0xda, 0xc3,
- 0x00, 0xfb, 0x42, 0x4f, 0x08,
- 0x00, 0x00, 0x45, 0x34, 0xc3,
- 0x00, 0x00, 0x4c, 0x90, 0xc3,
- 0x00, 0x00, 0x47, 0x21, 0xc3,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x5c, 0xc4, 0x45,
- 0x00, 0x00, 0x1b, 0x92, 0xc3,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x46, 0xa4, 0x83,
- 0x00, 0x00, 0x40, 0x2b, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x46, 0x27, 0x84,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x4b, 0x2b, 0x44,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x51, 0xaf, 0x85,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x14, 0x82,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x18, 0xc2,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x1d, 0x46, 0x44,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x41, 0x4f, 0x04,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x1c, 0x36, 0xc3,
- 0x00, 0x00, 0x12, 0xd9, 0xc4,
- 0x00, 0x00, 0x40, 0x48, 0x84,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x45, 0x54, 0xc4,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x40, 0x15, 0x82,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x43, 0x73, 0x43,
- 0x00, 0x00, 0x01, 0xbc, 0x04,
- 0x00, 0x00, 0x53, 0xd8, 0x45,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x5d, 0xe9, 0x83,
- 0x00, 0x00, 0x1a, 0x23, 0x89,
- 0x00, 0x00, 0x0f, 0x15, 0x06,
- 0x00, 0x00, 0x08, 0x4b, 0x08,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x01, 0x01, 0xdb, 0xed, 0x87,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x05, 0xc2,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x8f, 0x42,
- 0x00, 0x00, 0x00, 0x00, 0x82,
- 0x00, 0x00, 0x01, 0xbc, 0x04,
- 0x00, 0x00, 0x00, 0x00, 0xc2,
- 0x00, 0x00, 0x1c, 0xe6, 0x47,
- 0x00, 0x00, 0x01, 0x72, 0xc9,
- 0x00, 0x00, 0x00, 0x27, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x06, 0x97, 0x43,
- 0x01, 0x02, 0xd2, 0xe8, 0x87,
- 0x00, 0x00, 0x00, 0x66, 0x43,
- 0x00, 0x00, 0x0a, 0x05, 0x88,
- 0x00, 0x00, 0x01, 0xf6, 0x03,
- 0x00, 0x00, 0x05, 0x82, 0x87,
- 0x00, 0x00, 0x00, 0x55, 0x03,
- 0x00, 0x00, 0x03, 0xfa, 0x46,
- 0x00, 0x00, 0x01, 0x1e, 0x43,
- 0x00, 0x00, 0x04, 0x23, 0x08,
- 0x00, 0x00, 0x0d, 0x89, 0xc8,
- 0x00, 0x00, 0x1d, 0x14, 0x83,
- 0x00, 0x00, 0x08, 0x7f, 0x86,
- 0x01, 0x03, 0x13, 0x75, 0x85,
- 0x00, 0x00, 0x1d, 0xb6, 0xc5,
- 0x00, 0x00, 0x00, 0x65, 0x43,
- 0x00, 0x00, 0x09, 0xb1, 0x08,
- 0x00, 0x00, 0x0e, 0x5d, 0xc8,
- 0x00, 0x00, 0x05, 0xe9, 0x43,
- 0x01, 0x03, 0x8f, 0x4f, 0x06,
- 0x00, 0x00, 0x0f, 0xa8, 0x85,
- 0x00, 0x00, 0x1a, 0x2b, 0x04,
- 0x00, 0x00, 0x03, 0x62, 0x87,
- 0x00, 0x00, 0x01, 0x09, 0xc3,
- 0x00, 0x00, 0x00, 0x46, 0x43,
- 0x00, 0x00, 0x01, 0xf1, 0x43,
- 0x00, 0x00, 0x01, 0x5c, 0x82,
- 0x00, 0x00, 0x18, 0xfd, 0x0a,
- 0x00, 0x00, 0x02, 0x0e, 0x43,
- 0x01, 0x04, 0x40, 0xff, 0x4c,
- 0x00, 0x00, 0x0c, 0xdc, 0x03,
- 0x00, 0x00, 0x01, 0x50, 0xc4,
- 0x00, 0x00, 0x12, 0x0f, 0x8b,
- 0x00, 0x00, 0x12, 0x15, 0x48,
- 0x00, 0x00, 0x09, 0xcf, 0xc2,
- 0x00, 0x00, 0x12, 0x3b, 0x43,
- 0x00, 0x02, 0x81, 0x00, 0x87,
- 0x00, 0x02, 0x93, 0xb2, 0x87,
- 0x00, 0x02, 0x91, 0xa8, 0x88,
- 0x00, 0x02, 0x92, 0x3b, 0x43,
- 0x00, 0x00, 0x1c, 0x74, 0x48,
- 0x00, 0x02, 0x8e, 0x9e, 0x04,
- 0x00, 0x00, 0x0f, 0xd6, 0xcb,
- 0x00, 0x00, 0x00, 0xd8, 0x42,
- 0x00, 0x00, 0x13, 0x7a, 0x47,
- 0x00, 0x00, 0x14, 0xe6, 0xc4,
- 0x00, 0x00, 0x0f, 0x0c, 0x87,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x42, 0x02, 0x83,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x01, 0xe1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1c, 0x36, 0xc3,
- 0x00, 0x00, 0x01, 0xf6, 0x03,
- 0x01, 0x08, 0x80, 0x55, 0x03,
- 0x00, 0x00, 0x07, 0xe6, 0x07,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x43, 0xdd, 0xc3,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x40, 0x00, 0xc1,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x02, 0x01,
- 0x00, 0x00, 0x53, 0xec, 0x82,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x42, 0x0a, 0x05,
- 0x00, 0x00, 0x40, 0x01, 0x01,
- 0x00, 0x00, 0x00, 0x66, 0x43,
- 0x00, 0x00, 0x03, 0x21, 0x84,
- 0x00, 0x00, 0x40, 0x0c, 0xc1,
- 0x00, 0x00, 0x40, 0x05, 0x01,
- 0x00, 0x00, 0x40, 0x0b, 0xc1,
- 0x00, 0x00, 0x44, 0xe1, 0x42,
- 0x00, 0x00, 0x58, 0xf6, 0x44,
- 0x00, 0x00, 0x44, 0xe1, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0x41,
- 0x00, 0x00, 0x40, 0x08, 0x01,
- 0x00, 0x00, 0x40, 0x01, 0x81,
- 0x00, 0x00, 0x02, 0xda, 0xc6,
- 0x00, 0x00, 0x1e, 0x3c, 0x4c,
- 0x00, 0x00, 0x40, 0x07, 0x01,
- 0x00, 0x00, 0x56, 0x8e, 0xc7,
- 0x00, 0x00, 0x50, 0x52, 0x4f,
- 0x00, 0x00, 0x5e, 0x53, 0xc6,
- 0x00, 0x00, 0x40, 0x04, 0xc1,
- 0x00, 0x00, 0x52, 0xda, 0x06,
- 0x00, 0x00, 0x40, 0x0e, 0xc1,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x40, 0x05, 0x81,
- 0x00, 0x00, 0x5b, 0x8c, 0x8e,
- 0x00, 0x00, 0x40, 0x03, 0xc1,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x14, 0x01,
- 0x01, 0x0a, 0x4c, 0xbd, 0xc4,
- 0x00, 0x00, 0x44, 0x1c, 0x05,
- 0x00, 0x00, 0x41, 0x5c, 0x82,
- 0x00, 0x00, 0x41, 0x2f, 0xc5,
- 0x00, 0x00, 0x40, 0x04, 0x01,
- 0x00, 0x00, 0x40, 0x07, 0x41,
- 0x00, 0x00, 0x40, 0x07, 0xc1,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x40, 0x00, 0x81,
- 0x00, 0x00, 0x40, 0x0f, 0x81,
- 0x00, 0x00, 0x40, 0x8f, 0x81,
- 0x00, 0x00, 0x40, 0x53, 0x81,
- 0x00, 0x00, 0x40, 0x18, 0x41,
- 0x00, 0x00, 0x00, 0x56, 0x82,
- 0x00, 0x00, 0x05, 0xc4, 0xc9,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x05, 0x8a, 0x88,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x9d, 0xc3,
- 0x00, 0x00, 0x1f, 0x2c, 0x83,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x09, 0xcf, 0x08,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x00, 0x3d, 0x43,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x01, 0x0c, 0x5d, 0xdc, 0x88,
- 0x00, 0x00, 0x1f, 0x29, 0x43,
- 0x00, 0x00, 0x1d, 0x78, 0x08,
- 0x00, 0x00, 0x08, 0x86, 0xc2,
- 0x00, 0x00, 0x00, 0x29, 0xc3,
- 0x00, 0x00, 0x00, 0x4c, 0x02,
- 0x00, 0x00, 0x00, 0x26, 0x42,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x09, 0xda, 0x06,
- 0x00, 0x00, 0x13, 0xb9, 0x47,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x0f, 0xc2, 0x04,
- 0x00, 0x02, 0x99, 0x33, 0x48,
- 0x00, 0x00, 0x04, 0xaf, 0xc4,
- 0x00, 0x00, 0x12, 0xae, 0x87,
- 0x00, 0x00, 0x06, 0x14, 0xc4,
- 0x00, 0x00, 0x05, 0x04, 0x4c,
- 0x00, 0x00, 0x1e, 0x86, 0x84,
- 0x00, 0x00, 0x06, 0x65, 0x45,
- 0x00, 0x00, 0x05, 0xc4, 0xc9,
- 0x00, 0x00, 0x1a, 0x7f, 0x47,
- 0x00, 0x00, 0x0d, 0x0b, 0x08,
- 0x00, 0x00, 0x02, 0xb6, 0xc6,
- 0x00, 0x00, 0x1c, 0x20, 0x8a,
- 0x00, 0x02, 0x9e, 0x71, 0xca,
- 0x00, 0x00, 0x13, 0x0e, 0x44,
- 0x00, 0x02, 0x98, 0x2c, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x28, 0xc3,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x4e, 0x40, 0x84,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x49, 0xf0, 0x85,
- 0x00, 0x00, 0x47, 0x7a, 0x04,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x0e, 0xc2,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x01, 0x3d, 0xc3,
- 0x00, 0x02, 0x96, 0x8c, 0x05,
- 0x00, 0x00, 0x0f, 0x2f, 0x86,
- 0x00, 0x00, 0x0c, 0x15, 0x04,
- 0x00, 0x00, 0x12, 0xd5, 0x06,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x0e, 0xc2,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x27, 0x09,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x4b, 0xb7, 0x89,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x08, 0x8a, 0x84,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x50, 0xa7, 0x08,
- 0x00, 0x00, 0x43, 0x8c, 0x07,
- 0x00, 0x00, 0x53, 0xd8, 0x45,
- 0x00, 0x00, 0x0a, 0x0c, 0xc6,
- 0x00, 0x00, 0x13, 0x3f, 0x48,
- 0x00, 0x00, 0x13, 0xd8, 0x49,
- 0x00, 0x00, 0x1e, 0x7e, 0x48,
- 0x00, 0x00, 0x1c, 0xe6, 0x47,
- 0x00, 0x00, 0x10, 0x43, 0x4a,
- 0x00, 0x00, 0x07, 0x6a, 0xcb,
- 0x00, 0x00, 0x12, 0xdc, 0x47,
- 0x00, 0x00, 0x04, 0x3e, 0x08,
- 0x00, 0x00, 0x1e, 0x4c, 0x4a,
- 0x00, 0x00, 0x0c, 0x87, 0x48,
- 0x00, 0x00, 0x01, 0x72, 0xc9,
- 0x00, 0x00, 0x02, 0x89, 0x07,
- 0x00, 0x00, 0x1b, 0x63, 0x07,
- 0x00, 0x00, 0x1b, 0xda, 0x88,
- 0x00, 0x00, 0x0a, 0x05, 0x88,
- 0x00, 0x00, 0x04, 0x6a, 0x4f,
- 0x00, 0x00, 0x0a, 0xdf, 0x45,
- 0x00, 0x00, 0x0a, 0x08, 0x87,
- 0x00, 0x00, 0x03, 0xfa, 0x46,
- 0x00, 0x00, 0x1e, 0xc7, 0x87,
- 0x00, 0x00, 0x12, 0x27, 0x86,
- 0x00, 0x00, 0x04, 0x23, 0x08,
- 0x00, 0x00, 0x0f, 0xb0, 0x06,
- 0x00, 0x00, 0x1d, 0x38, 0x07,
- 0x00, 0x00, 0x14, 0x58, 0xc9,
- 0x00, 0x00, 0x1b, 0x79, 0xc7,
- 0x00, 0x00, 0x1c, 0x78, 0x49,
- 0x00, 0x00, 0x0c, 0xe6, 0xc9,
- 0x00, 0x00, 0x0d, 0x63, 0xc6,
- 0x00, 0x00, 0x0d, 0x89, 0xc8,
- 0x00, 0x00, 0x13, 0x42, 0x05,
- 0x00, 0x00, 0x07, 0x62, 0x8a,
- 0x00, 0x00, 0x0e, 0x5d, 0xc8,
- 0x00, 0x00, 0x05, 0xe9, 0x43,
- 0x00, 0x00, 0x0e, 0xcb, 0xc8,
- 0x00, 0x00, 0x03, 0x62, 0x87,
- 0x00, 0x00, 0x0f, 0xe6, 0x05,
- 0x00, 0x00, 0x16, 0x28, 0x10,
- 0x00, 0x00, 0x00, 0x46, 0x43,
- 0x00, 0x00, 0x1a, 0xf2, 0x47,
- 0x00, 0x00, 0x01, 0x62, 0x05,
- 0x00, 0x00, 0x10, 0x50, 0x88,
- 0x00, 0x00, 0x0f, 0xff, 0x05,
- 0x00, 0x00, 0x0c, 0xdc, 0x03,
- 0x00, 0x00, 0x0f, 0xd9, 0x48,
- 0x00, 0x00, 0x1d, 0x53, 0xc6,
- 0x00, 0x00, 0x05, 0xe2, 0x49,
- 0x00, 0x00, 0x0b, 0xe7, 0x87,
- 0x00, 0x00, 0x1a, 0x26, 0x4b,
- 0x00, 0x00, 0x11, 0x83, 0x84,
- 0x00, 0x00, 0x11, 0x98, 0x04,
- 0x00, 0x00, 0x12, 0x0f, 0x8b,
- 0x00, 0x00, 0x12, 0x15, 0x48,
- 0x00, 0x00, 0x12, 0x24, 0x07,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x44, 0x31, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x0a, 0xf8, 0xcb,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x34, 0x02,
- 0x00, 0x00, 0x00, 0x0e, 0xc2,
- 0x00, 0x00, 0x00, 0x0f, 0x82,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x13, 0x2d, 0x09,
- 0x00, 0x00, 0x1c, 0x74, 0x48,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x05, 0xc2,
- 0x00, 0x00, 0x40, 0x1e, 0x42,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x01, 0x22, 0x46,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x01, 0xbc, 0x04,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x40, 0xf4, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x50, 0xc4,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0x41, 0x83,
- 0x00, 0x00, 0x40, 0xfd, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x36, 0xc3,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x49, 0x65, 0x84,
- 0x00, 0x00, 0x52, 0x1b, 0x86,
- 0x00, 0x00, 0x44, 0x1f, 0x83,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x1d, 0xd9, 0xcb,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x49, 0xf0, 0x85,
- 0x00, 0x00, 0x4e, 0x0f, 0x47,
- 0x00, 0x00, 0x1d, 0x6c, 0x43,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x1b, 0xd4, 0x04,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x01, 0x9a, 0x43,
- 0x01, 0x1b, 0x51, 0x30, 0x0c,
- 0x00, 0x00, 0x0d, 0xb9, 0x03,
- 0x00, 0x00, 0x07, 0x1b, 0xc7,
- 0x00, 0x00, 0x08, 0x62, 0xc6,
- 0x00, 0x00, 0x1d, 0xb7, 0x4c,
- 0x00, 0x00, 0x0d, 0x98, 0x07,
- 0x00, 0x00, 0x1d, 0x3f, 0x85,
- 0x00, 0x00, 0x41, 0x03, 0x42,
- 0x00, 0x00, 0x45, 0xa5, 0x03,
- 0x00, 0x00, 0x4d, 0x9c, 0x43,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x01, 0x1d, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0x1a, 0xc2,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x3b, 0x43,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x5b, 0xec, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x01, 0x1d, 0xc0, 0x24, 0xc2,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x43, 0x30, 0xc3,
- 0x00, 0x00, 0x40, 0x63, 0x83,
- 0x00, 0x00, 0x41, 0x0a, 0x43,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x1c, 0x36, 0xc3,
- 0x00, 0x00, 0x42, 0xff, 0xc4,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x43, 0x92, 0xc4,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x42, 0x1e, 0x44,
- 0x00, 0x00, 0x42, 0x8f, 0x84,
- 0x00, 0x00, 0x4e, 0xe6, 0xc6,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x48, 0xe5, 0x06,
- 0x00, 0x00, 0x03, 0xc9, 0x0b,
- 0x00, 0x00, 0x03, 0x41, 0x06,
- 0x00, 0x00, 0x04, 0x36, 0xca,
- 0x00, 0x00, 0x12, 0x6b, 0x8a,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x42, 0x04, 0xc4,
- 0x01, 0x20, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x54, 0xd2, 0x84,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x41, 0x28, 0x04,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x52, 0x5e, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x01, 0xcc, 0xc3,
- 0x00, 0x00, 0x54, 0xec, 0x4b,
- 0x00, 0x00, 0x5d, 0xfe, 0x4a,
- 0x00, 0x00, 0x5f, 0x24, 0x4c,
- 0x00, 0x00, 0x0f, 0x58, 0x48,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x42, 0xf2, 0xc5,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x40, 0x0e, 0xc2,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x42, 0x8f, 0x84,
- 0x00, 0x00, 0x40, 0x18, 0xc2,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x42, 0x4b, 0x42,
- 0x00, 0x00, 0x05, 0x0b, 0x03,
- 0x00, 0x00, 0x00, 0x2d, 0xc2,
- 0x00, 0x00, 0x4d, 0xa3, 0x09,
- 0x00, 0x00, 0x47, 0xad, 0x48,
- 0x00, 0x00, 0x43, 0x9b, 0x49,
- 0x00, 0x00, 0x5a, 0x0c, 0x49,
- 0x00, 0x00, 0x40, 0x3e, 0x8a,
- 0x00, 0x00, 0x41, 0x89, 0x8a,
- 0x00, 0x00, 0x40, 0x52, 0x02,
- 0x00, 0x00, 0x55, 0x47, 0xc2,
- 0x00, 0x00, 0x00, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x42, 0xdd, 0x42,
- 0x00, 0x00, 0x44, 0x69, 0x46,
- 0x00, 0x00, 0x53, 0x0c, 0x02,
- 0x00, 0x00, 0x04, 0x14, 0x02,
- 0x00, 0x00, 0x40, 0x0d, 0x82,
- 0x00, 0x00, 0x47, 0x6e, 0x4e,
- 0x00, 0x00, 0x41, 0x95, 0xce,
- 0x00, 0x00, 0x41, 0x09, 0x47,
- 0x00, 0x00, 0x41, 0x17, 0x42,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0xc7, 0xc2,
- 0x00, 0x00, 0x40, 0x05, 0xc2,
- 0x00, 0x00, 0x00, 0xfc, 0x83,
- 0x00, 0x00, 0x43, 0x94, 0xcf,
- 0x00, 0x00, 0x44, 0x6c, 0x82,
- 0x00, 0x00, 0x4c, 0x60, 0x07,
- 0x00, 0x00, 0x4c, 0xac, 0x07,
- 0x00, 0x00, 0x52, 0xef, 0xc7,
- 0x00, 0x00, 0x4c, 0xa4, 0xcc,
- 0x00, 0x00, 0x4e, 0x01, 0x0c,
- 0x00, 0x00, 0x42, 0xfc, 0x84,
- 0x00, 0x00, 0x48, 0x37, 0x4a,
- 0x00, 0x00, 0x41, 0x95, 0x02,
- 0x00, 0x00, 0x40, 0x69, 0x82,
- 0x00, 0x00, 0x4d, 0x02, 0x44,
- 0x00, 0x00, 0x40, 0x07, 0x02,
- 0x00, 0x00, 0x43, 0x7b, 0x42,
- 0x00, 0x00, 0x4e, 0x03, 0x44,
- 0x00, 0x00, 0x41, 0x77, 0x02,
- 0x00, 0x00, 0x40, 0x6b, 0x42,
- 0x00, 0x00, 0x00, 0xf7, 0x43,
- 0x00, 0x00, 0x4f, 0xb0, 0x87,
- 0x00, 0x00, 0x4e, 0x4c, 0x85,
- 0x00, 0x00, 0x41, 0xa2, 0x02,
- 0x00, 0x00, 0x51, 0xe5, 0x84,
- 0x00, 0x00, 0x59, 0x4c, 0xc2,
- 0x00, 0x00, 0x4f, 0x50, 0x88,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x57, 0xf5, 0x88,
- 0x00, 0x00, 0x40, 0x1f, 0x82,
- 0x00, 0x00, 0x42, 0xfe, 0x45,
- 0x00, 0x00, 0x59, 0xe4, 0x46,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x4f, 0x82,
- 0x00, 0x00, 0x50, 0x5d, 0x87,
- 0x00, 0x00, 0x01, 0x5c, 0x82,
- 0x00, 0x00, 0x44, 0xc4, 0xc5,
- 0x00, 0x00, 0x54, 0xd7, 0x85,
- 0x00, 0x00, 0x40, 0x10, 0xc2,
- 0x00, 0x00, 0x43, 0xf6, 0x02,
- 0x00, 0x00, 0x49, 0x43, 0x0a,
- 0x00, 0x00, 0x47, 0xe7, 0xca,
- 0x00, 0x00, 0x47, 0xcc, 0xc2,
- 0x00, 0x00, 0x4a, 0xd3, 0x44,
- 0x00, 0x00, 0x40, 0x2a, 0x42,
- 0x00, 0x00, 0x50, 0x1e, 0x08,
- 0x00, 0x00, 0x41, 0x81, 0x82,
- 0x00, 0x00, 0x5d, 0x20, 0x48,
- 0x00, 0x00, 0x00, 0x11, 0x01,
- 0x00, 0x00, 0x51, 0xb3, 0x47,
- 0x00, 0x00, 0x51, 0xc0, 0x49,
- 0x00, 0x00, 0x44, 0xc5, 0x42,
- 0x00, 0x00, 0x52, 0x2f, 0x85,
- 0x00, 0x00, 0x47, 0x5c, 0x45,
- 0x00, 0x00, 0x40, 0xf9, 0x0b,
- 0x00, 0x00, 0x52, 0x21, 0x0c,
- 0x00, 0x00, 0x42, 0xd4, 0x08,
- 0x00, 0x00, 0x53, 0x78, 0x88,
- 0x00, 0x00, 0x5d, 0x55, 0x02,
- 0x00, 0x00, 0x49, 0xe0, 0x02,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x40, 0x03, 0x82,
- 0x00, 0x00, 0x40, 0x18, 0xc2,
- 0x00, 0x00, 0x40, 0x03, 0xc2,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x2e, 0xc2,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x01, 0x22, 0xc0, 0x22, 0x02,
- 0x00, 0x00, 0x11, 0x31, 0x44,
- 0x00, 0x00, 0x03, 0xab, 0x05,
- 0x01, 0x24, 0xc0, 0x55, 0x03,
- 0x00, 0x00, 0x0b, 0xd5, 0x04,
- 0x00, 0x00, 0x40, 0xf7, 0x43,
- 0x00, 0x00, 0x40, 0x0e, 0xc2,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x58, 0x1c, 0xc3,
- 0x01, 0x25, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x50, 0x36, 0x43,
- 0x00, 0x00, 0x4e, 0x19, 0xc6,
- 0x00, 0x00, 0x19, 0x14, 0x45,
- 0x00, 0x02, 0xc1, 0x3d, 0xc3,
- 0x00, 0x00, 0x1a, 0x28, 0x85,
- 0x00, 0x00, 0x14, 0xcb, 0x05,
- 0x00, 0x00, 0x14, 0xee, 0xcb,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x01, 0x23, 0x53, 0xc4, 0xc8,
- 0x00, 0x00, 0x06, 0x71, 0xc7,
- 0x01, 0x23, 0xcd, 0x24, 0x0a,
- 0x00, 0x00, 0x0f, 0x9d, 0x4c,
- 0x00, 0x00, 0x1b, 0x94, 0x87,
- 0x00, 0x00, 0x1c, 0xf8, 0x05,
- 0x01, 0x24, 0x59, 0x18, 0x89,
- 0x00, 0x00, 0x02, 0x06, 0xcd,
- 0x00, 0x00, 0x03, 0xf8, 0x42,
- 0x00, 0x00, 0x12, 0x1b, 0x42,
- 0x00, 0x00, 0x00, 0x0c, 0x01,
- 0x00, 0x00, 0x0f, 0xbc, 0x44,
- 0x00, 0x00, 0x0b, 0x7d, 0x8a,
- 0x00, 0x00, 0x07, 0xe6, 0x07,
- 0x00, 0x00, 0x13, 0x55, 0x0f,
- 0x00, 0x00, 0x01, 0x6f, 0x44,
- 0x00, 0x00, 0x01, 0x6f, 0x83,
- 0x00, 0x00, 0x01, 0x6f, 0x84,
- 0x00, 0x00, 0x06, 0xa0, 0x8b,
- 0x00, 0x00, 0x00, 0xa7, 0x4d,
- 0x01, 0x26, 0x40, 0xbb, 0x02,
- 0x01, 0x26, 0xc0, 0x0b, 0x02,
- 0x01, 0x27, 0x40, 0x28, 0xc2,
- 0x01, 0x27, 0xc0, 0x19, 0x42,
- 0x01, 0x28, 0x40, 0x36, 0x02,
- 0x01, 0x28, 0xc0, 0x17, 0x82,
- 0x00, 0x00, 0x0f, 0x08, 0xc7,
- 0x01, 0x29, 0x40, 0x22, 0x02,
- 0x01, 0x29, 0xc1, 0x18, 0x02,
- 0x01, 0x2a, 0x42, 0x3f, 0xc2,
- 0x01, 0x2a, 0xc0, 0x23, 0xc2,
- 0x00, 0x00, 0x41, 0x95, 0xc3,
- 0x00, 0x00, 0x1b, 0x73, 0x44,
- 0x01, 0x2b, 0x45, 0x8a, 0x88,
- 0x00, 0x00, 0x42, 0xdf, 0x83,
- 0x01, 0x2b, 0xc1, 0x3f, 0x42,
- 0x00, 0x00, 0x06, 0xb4, 0xc8,
- 0x01, 0x2c, 0x40, 0x87, 0x82,
- 0x00, 0x00, 0x05, 0x5d, 0xc7,
- 0x00, 0x00, 0x1b, 0x54, 0x07,
- 0x01, 0x2c, 0xc0, 0x00, 0x42,
- 0x01, 0x2d, 0x40, 0x35, 0x82,
- 0x01, 0x2d, 0xc0, 0x01, 0x82,
- 0x01, 0x2e, 0x40, 0x30, 0x42,
- 0x01, 0x2e, 0xc0, 0x41, 0x42,
- 0x01, 0x2f, 0x40, 0x05, 0xc2,
- 0x00, 0x00, 0x18, 0x6f, 0x05,
- 0x00, 0x00, 0x41, 0x36, 0xc3,
- 0x00, 0x00, 0x5b, 0xf3, 0xc4,
- 0x01, 0x2f, 0xc0, 0x07, 0x02,
- 0x01, 0x30, 0x43, 0x98, 0x42,
- 0x01, 0x30, 0xc0, 0x4f, 0xc2,
- 0x00, 0x00, 0x09, 0x3e, 0x0b,
- 0x01, 0x31, 0x40, 0x5e, 0x42,
- 0x01, 0x32, 0x44, 0xe4, 0x42,
- 0x01, 0x32, 0xc0, 0x0e, 0xc2,
- 0x01, 0x33, 0x40, 0x1e, 0x42,
- 0x00, 0x00, 0x09, 0xb1, 0x08,
- 0x01, 0x33, 0xc1, 0xcf, 0x02,
- 0x01, 0x34, 0x40, 0x15, 0x02,
- 0x01, 0x34, 0xc0, 0x3c, 0x42,
- 0x01, 0x35, 0x47, 0xd2, 0x02,
- 0x01, 0x35, 0xc0, 0x24, 0xc2,
- 0x01, 0x36, 0x40, 0x37, 0x02,
- 0x01, 0x36, 0xc0, 0x18, 0xc2,
- 0x01, 0x37, 0x41, 0xe3, 0x42,
- 0x01, 0x37, 0xc0, 0xab, 0xc2,
- 0x01, 0x38, 0x43, 0x9e, 0x02,
- 0x00, 0x00, 0x11, 0xff, 0x04,
- 0x00, 0x00, 0x5c, 0xcd, 0x03,
- 0x01, 0x38, 0xc3, 0x6d, 0x82,
- 0x01, 0x39, 0x40, 0x67, 0x82,
- 0x01, 0x39, 0xc0, 0xae, 0xc2,
- 0x01, 0x3a, 0x40, 0x06, 0xc2,
- 0x01, 0x3a, 0xc0, 0x03, 0xc2,
- 0x01, 0x3b, 0x40, 0x67, 0x02,
- 0x00, 0x00, 0x10, 0x2a, 0xc8,
- 0x00, 0x00, 0x0a, 0xfa, 0x47,
- 0x01, 0x3b, 0xc0, 0x2b, 0x02,
- 0x01, 0x3c, 0x40, 0x2b, 0x42,
- 0x01, 0x3c, 0xc0, 0x2e, 0xc2,
- 0x01, 0x3d, 0x40, 0x6e, 0xc2,
- 0x00, 0x00, 0x16, 0x7f, 0xcc,
- 0x01, 0x3d, 0xc0, 0x1b, 0x42,
- 0x01, 0x3e, 0x42, 0x72, 0xc2,
- 0x01, 0x3e, 0xc1, 0x60, 0x02,
- 0x01, 0x3f, 0x40, 0x46, 0xc2,
- 0x01, 0x3f, 0xc0, 0xc3, 0x02,
- 0x01, 0x40, 0x40, 0x40, 0x02,
- 0x01, 0x40, 0xc0, 0x59, 0xc2,
- 0x01, 0x41, 0x40, 0x3d, 0xc2,
- 0x01, 0x41, 0xc8, 0x4e, 0x42,
- 0x01, 0x42, 0x48, 0x57, 0x82,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x01, 0xa4, 0x03,
- 0x00, 0x00, 0x0d, 0x58, 0xc3,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x01, 0x31, 0xd3, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x5c, 0xc4, 0xc4,
- 0x00, 0x00, 0x43, 0x9a, 0x46,
- 0x00, 0x00, 0x50, 0xe5, 0x43,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x5d, 0xe0, 0x09,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x5d, 0xbe, 0x43,
- 0x00, 0x00, 0x4c, 0xe9, 0xc3,
- 0x00, 0x00, 0x53, 0xb7, 0x05,
- 0x00, 0x00, 0x40, 0x3b, 0x43,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x4e, 0xdd, 0xc3,
- 0x00, 0x00, 0x44, 0x0b, 0x83,
- 0x00, 0x00, 0x42, 0x16, 0x49,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x40, 0x2d, 0xc2,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0xa4, 0x03,
- 0x01, 0x43, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x5a, 0x0e, 0x83,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x14, 0xb7, 0x42,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x01, 0x44, 0x90, 0x0d, 0xc2,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x00, 0x0c, 0xc1,
- 0x00, 0x00, 0x40, 0x48, 0x84,
- 0x00, 0x00, 0x46, 0x64, 0x83,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x5a, 0x31, 0x83,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x45, 0x54, 0xc4,
- 0x00, 0x00, 0x5d, 0x64, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x43, 0x5e, 0xc3,
- 0x00, 0x00, 0x43, 0x73, 0x43,
- 0x00, 0x00, 0x53, 0xd8, 0x45,
- 0x00, 0x00, 0x44, 0x0b, 0x83,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x00, 0x08, 0x82,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x53, 0xc6, 0x83,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x43, 0x13, 0x86,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x41, 0x1e, 0x43,
- 0x00, 0x00, 0x49, 0x47, 0x44,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x96, 0x83,
- 0x00, 0x00, 0x00, 0x62, 0x04,
- 0x00, 0x00, 0x15, 0x47, 0xc2,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x01, 0xa6, 0xc3,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x00, 0x0e, 0xc2,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x07, 0x2e, 0x84,
- 0x00, 0x00, 0x07, 0x25, 0x44,
- 0x00, 0x00, 0x00, 0xc6, 0x42,
- 0x00, 0x02, 0x98, 0xa7, 0xc7,
- 0x00, 0x00, 0x00, 0x72, 0x47,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x03, 0x41, 0x06,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x0f, 0x7e, 0x06,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x53, 0x34, 0xc8,
- 0x00, 0x00, 0x53, 0x76, 0xc9,
- 0x00, 0x00, 0x54, 0x81, 0x09,
- 0x00, 0x00, 0x55, 0x0a, 0xc8,
- 0x00, 0x00, 0x5a, 0x0a, 0x88,
- 0x00, 0x00, 0x5a, 0x0a, 0x89,
- 0x00, 0x00, 0x52, 0xe0, 0xca,
- 0x00, 0x00, 0x56, 0x49, 0xca,
- 0x00, 0x00, 0x59, 0xb1, 0x8a,
- 0x00, 0x00, 0x5a, 0x1a, 0x4a,
- 0x00, 0x00, 0x5d, 0xfe, 0x4a,
- 0x00, 0x00, 0x5e, 0xba, 0xcb,
- 0x00, 0x00, 0x44, 0x14, 0x4d,
- 0x00, 0x00, 0x44, 0x2a, 0x0f,
- 0x00, 0x00, 0x44, 0x61, 0x50,
- 0x00, 0x00, 0x56, 0x89, 0x4d,
- 0x00, 0x00, 0x58, 0x61, 0x8c,
- 0x00, 0x00, 0x5a, 0x17, 0x8b,
- 0x00, 0x00, 0x17, 0x84, 0xc7,
- 0x00, 0x00, 0x13, 0x09, 0x8e,
- 0x00, 0x00, 0x13, 0x53, 0x0a,
- 0x00, 0x00, 0x13, 0x73, 0xc9,
- 0x00, 0x00, 0x14, 0x81, 0x09,
- 0x00, 0x00, 0x16, 0x77, 0x09,
- 0x00, 0x00, 0x16, 0x79, 0x4a,
- 0x00, 0x00, 0x17, 0x17, 0x49,
- 0x00, 0x00, 0x17, 0x24, 0xc9,
- 0x00, 0x00, 0x17, 0x2f, 0xcb,
- 0x00, 0x00, 0x00, 0x62, 0x08,
- 0x00, 0x00, 0x10, 0x0c, 0x08,
- 0x00, 0x00, 0x00, 0x17, 0x09,
- 0x00, 0x02, 0x89, 0x56, 0x07,
- 0x00, 0x00, 0x0e, 0x3d, 0xc5,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x01, 0xf1, 0x43,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x5b, 0xe4, 0x45,
- 0x00, 0x00, 0x40, 0x6b, 0x03,
- 0x01, 0x4d, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x59, 0x07, 0xc7,
- 0x00, 0x00, 0x47, 0x21, 0xc3,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x42, 0xb6, 0x43,
- 0x00, 0x00, 0x40, 0xf4, 0xc3,
- 0x00, 0x00, 0x40, 0x34, 0xc3,
- 0x00, 0x00, 0x41, 0xd7, 0x83,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x5a, 0x63, 0x86,
- 0x00, 0x00, 0x43, 0xd9, 0x42,
- 0x00, 0x00, 0x40, 0x0f, 0x83,
- 0x00, 0x00, 0x1b, 0x96, 0x88,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x45, 0x0b, 0x03,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x66, 0x43,
- 0x00, 0x00, 0x41, 0xf6, 0x03,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x45, 0x03, 0xc4,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0x09, 0xc3,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
- 0x00, 0x00, 0x41, 0x3d, 0xc3,
- 0x00, 0x00, 0x00, 0x72, 0x47,
- 0x00, 0x00, 0x00, 0xd8, 0x42,
- 0x00, 0x00, 0x13, 0xad, 0x44,
- 0x00, 0x02, 0x94, 0x47, 0x46,
- 0x00, 0x00, 0x40, 0x00, 0xc2,
- 0x00, 0x00, 0x40, 0x22, 0x02,
- 0x00, 0x00, 0x40, 0x55, 0x03,
- 0x00, 0x00, 0x40, 0x65, 0x43,
- 0x00, 0x00, 0x41, 0xf1, 0x43,
-}
+//
+//go:embed data/nodes
+var nodes uint40String
// children is the list of nodes' children, the parent's wildcard bit and the
// parent's node type. If a node has no children then their children index
@@ -9906,681 +59,12 @@ var nodes = [...]uint8{
// [ 2 bits] node type
// [14 bits] high nodes index (exclusive) of children
// [14 bits] low nodes index (inclusive) of children
-var children = [...]uint32{
- 0x0,
- 0x10000000,
- 0x20000000,
- 0x40000000,
- 0x50000000,
- 0x60000000,
- 0x179c5e0,
- 0x17a05e7,
- 0x17a45e8,
- 0x17c45e9,
- 0x191c5f1,
- 0x1930647,
- 0x194464c,
- 0x1958651,
- 0x1974656,
- 0x199865d,
- 0x19b0666,
- 0x1a0066c,
- 0x1a04680,
- 0x1a3c681,
- 0x1a4068f,
- 0x1a58690,
- 0x1a5c696,
- 0x1a60697,
- 0x1aa4698,
- 0x1aa86a9,
- 0x1aac6aa,
- 0x21ab06ab,
- 0x61ab86ac,
- 0x21ac06ae,
- 0x1b086b0,
- 0x1b146c2,
- 0x21b186c5,
- 0x1b3c6c6,
- 0x1b406cf,
- 0x1b546d0,
- 0x1b586d5,
- 0x1b786d6,
- 0x1ba86de,
- 0x1bc86ea,
- 0x1bd06f2,
- 0x1bf86f4,
- 0x1c146fe,
- 0x21c18705,
- 0x21c1c706,
- 0x1c20707,
- 0x1cb8708,
- 0x1ccc72e,
- 0x1ce0733,
- 0x1d18738,
- 0x1d28746,
- 0x1d3c74a,
- 0x1d5474f,
- 0x1df8755,
- 0x202c77e,
- 0x203480b,
- 0x2203880d,
- 0x2203c80e,
- 0x20a880f,
- 0x211482a,
- 0x212c845,
- 0x214084b,
- 0x2144850,
- 0x2148851,
- 0x2150852,
- 0x2168854,
- 0x216c85a,
- 0x218885b,
- 0x21dc862,
- 0x21e0877,
- 0x221e4878,
- 0x2208879,
- 0x2220c882,
- 0x2210883,
- 0x2214884,
- 0x2244885,
- 0x62248891,
- 0x22250892,
- 0x22254894,
- 0x2298895,
- 0x6229c8a6,
- 0x22b08a7,
- 0x23108ac,
- 0x223148c4,
- 0x223188c5,
- 0x223208c6,
- 0x223248c8,
- 0x223288c9,
- 0x232c8ca,
- 0x23348cb,
- 0x23388cd,
- 0x223448ce,
- 0x2234c8d1,
- 0x235c8d3,
- 0x236c8d7,
- 0x24208db,
- 0x2424908,
- 0x22434909,
- 0x2243890d,
- 0x2244090e,
- 0x249c910,
- 0x24a0927,
- 0x24a4928,
- 0x24a8929,
- 0x2a9c92a,
- 0x2aa0aa7,
- 0x22b48aa8,
- 0x22b4cad2,
- 0x22b50ad3,
- 0x22b5cad4,
- 0x22b60ad7,
- 0x22b6cad8,
- 0x22b70adb,
- 0x22b74adc,
- 0x22b78add,
- 0x22b7cade,
- 0x22b80adf,
- 0x22b8cae0,
- 0x22b90ae3,
- 0x22b9cae4,
- 0x22ba0ae7,
- 0x22ba4ae8,
- 0x22ba8ae9,
- 0x22bb4aea,
- 0x22bb8aed,
- 0x22bc4aee,
- 0x22bc8af1,
- 0x22bccaf2,
- 0x22bd0af3,
- 0x2bd4af4,
- 0x22bd8af5,
- 0x22be4af6,
- 0x22be8af9,
- 0x2becafa,
- 0x2bf4afb,
- 0x62c00afd,
- 0x22c08b00,
- 0x2c4cb02,
- 0x22c6cb13,
- 0x22c70b1b,
- 0x22c74b1c,
- 0x22c7cb1d,
- 0x22c84b1f,
- 0x22c88b21,
- 0x22c8cb22,
- 0x22c94b23,
- 0x22c98b25,
- 0x22c9cb26,
- 0x22ca0b27,
- 0x2ca4b28,
- 0x22cd0b29,
- 0x22cd4b34,
- 0x22cd8b35,
- 0x2cdcb36,
- 0x22ce0b37,
- 0x22ce4b38,
- 0x22cf0b39,
- 0x22cf4b3c,
- 0x2cf8b3d,
- 0x2d00b3e,
- 0x2d0cb40,
- 0x2d14b43,
- 0x2d30b45,
- 0x2d48b4c,
- 0x2d60b52,
- 0x2d70b58,
- 0x2d7cb5c,
- 0x2db0b5f,
- 0x2db8b6c,
- 0x22dbcb6e,
- 0x2dd4b6f,
- 0x22ddcb75,
- 0x22de0b77,
- 0x22de8b78,
- 0x2ef8b7a,
- 0x22efcbbe,
- 0x2f04bbf,
- 0x2f08bc1,
- 0x22f0cbc2,
- 0x22f10bc3,
- 0x22f14bc4,
- 0x2f18bc5,
- 0x2f64bc6,
- 0x2f68bd9,
- 0x2f6cbda,
- 0x2f88bdb,
- 0x2f9cbe2,
- 0x2fc4be7,
- 0x2fecbf1,
- 0x2ff0bfb,
- 0x62ff4bfc,
- 0x3024bfd,
- 0x3028c09,
- 0x2302cc0a,
- 0x3030c0b,
- 0x3058c0c,
- 0x305cc16,
- 0x3080c17,
- 0x3084c20,
- 0x309cc21,
- 0x30a0c27,
- 0x30a4c28,
- 0x30c4c29,
- 0x30e4c31,
- 0x230e8c39,
- 0x30ecc3a,
- 0x230f0c3b,
- 0x30f4c3c,
- 0x30f8c3d,
- 0x30fcc3e,
- 0x3100c3f,
- 0x3120c40,
- 0x23124c48,
- 0x2312cc49,
- 0x3130c4b,
- 0x3158c4c,
- 0x316cc56,
- 0x31ecc5b,
- 0x31f4c7b,
- 0x31f8c7d,
- 0x3214c7e,
- 0x322cc85,
- 0x3230c8b,
- 0x3244c8c,
- 0x325cc91,
- 0x3278c97,
- 0x3290c9e,
- 0x329cca4,
- 0x32b8ca7,
- 0x32d0cae,
- 0x32d4cb4,
- 0x32fccb5,
- 0x331ccbf,
- 0x3338cc7,
- 0x333ccce,
- 0x33a0ccf,
- 0x33bcce8,
- 0x33e4cef,
- 0x33e8cf9,
- 0x3400cfa,
- 0x3444d00,
- 0x34c4d11,
- 0x3504d31,
- 0x3508d41,
- 0x350cd42,
- 0x3518d43,
- 0x3538d46,
- 0x3544d4e,
- 0x3564d51,
- 0x356cd59,
- 0x35b0d5b,
- 0x3604d6c,
- 0x3608d81,
- 0x371cd82,
- 0x23724dc7,
- 0x23728dc9,
- 0x2372cdca,
- 0x23730dcb,
- 0x23734dcc,
- 0x23738dcd,
- 0x2373cdce,
- 0x23740dcf,
- 0x3744dd0,
- 0x3748dd1,
- 0x2374cdd2,
- 0x2375cdd3,
- 0x23764dd7,
- 0x2376cdd9,
- 0x23770ddb,
- 0x23778ddc,
- 0x2377cdde,
- 0x23780ddf,
- 0x3798de0,
- 0x37bcde6,
- 0x37dcdef,
- 0x3e54df7,
- 0x23e58f95,
- 0x23e5cf96,
- 0x23e60f97,
- 0x23e64f98,
- 0x3e74f99,
- 0x3e94f9d,
- 0x4054fa5,
- 0x4125015,
- 0x4195049,
- 0x41ed065,
- 0x42d507b,
- 0x432d0b5,
- 0x43690cb,
- 0x44650da,
- 0x4531119,
- 0x45c914c,
- 0x4659172,
- 0x46bd196,
- 0x48f51af,
- 0x49ad23d,
- 0x4a7926b,
- 0x4ac529e,
- 0x4b4d2b1,
- 0x4b892d3,
- 0x4bd92e2,
- 0x4c512f6,
- 0x64c55314,
- 0x64c59315,
- 0x64c5d316,
- 0x4cd9317,
- 0x4d35336,
- 0x4db134d,
- 0x4e2936c,
- 0x4ea938a,
- 0x4f153aa,
- 0x50413c5,
- 0x5099410,
- 0x6509d426,
- 0x5135427,
- 0x513d44d,
- 0x2514144f,
- 0x51c9450,
- 0x5215472,
- 0x527d485,
- 0x532549f,
- 0x53ed4c9,
- 0x54554fb,
- 0x5569515,
- 0x6556d55a,
- 0x6557155b,
- 0x55cd55c,
- 0x5629573,
- 0x56b958a,
- 0x57355ae,
- 0x57795cd,
- 0x585d5de,
- 0x5891617,
- 0x58f1624,
- 0x596563c,
- 0x59ed659,
- 0x5a2d67b,
- 0x5a9d68b,
- 0x65aa16a7,
- 0x5ac56a8,
- 0x5ac96b1,
- 0x5af96b2,
- 0x5b156be,
- 0x5b596c5,
- 0x5b696d6,
- 0x5b816da,
- 0x5bf96e0,
- 0x5c016fe,
- 0x5c1d700,
- 0x5c31707,
- 0x5c5170c,
- 0x25c55714,
- 0x5c7d715,
- 0x5c8171f,
- 0x5c89720,
- 0x5c9d722,
- 0x5cb9727,
- 0x5cc172e,
- 0x5ccd730,
- 0x5cd1733,
- 0x5d0d734,
- 0x5d11743,
- 0x5d19744,
- 0x5d2d746,
- 0x5d5574b,
- 0x5d5d755,
- 0x5d61757,
- 0x5d85758,
- 0x5da9761,
- 0x5dc176a,
- 0x5dc5770,
- 0x5dcd771,
- 0x5dd5773,
- 0x5de9775,
- 0x5ea177a,
- 0x5ea57a8,
- 0x5ead7a9,
- 0x5eb17ab,
- 0x5ed57ac,
- 0x5ef57b5,
- 0x5f117bd,
- 0x5f217c4,
- 0x5f357c8,
- 0x5f3d7cd,
- 0x5f457cf,
- 0x5f497d1,
- 0x5f517d2,
- 0x5f6d7d4,
- 0x5f7d7db,
- 0x5f817df,
- 0x5f9d7e0,
- 0x68257e7,
- 0x685da09,
- 0x6889a17,
- 0x68a1a22,
- 0x68c5a28,
- 0x68e5a31,
- 0x6929a39,
- 0x6931a4a,
- 0x26935a4c,
- 0x26939a4d,
- 0x6941a4e,
- 0x6b89a50,
- 0x26b8dae2,
- 0x26b91ae3,
- 0x6ba5ae4,
- 0x26ba9ae9,
- 0x6badaea,
- 0x6bb5aeb,
- 0x26bc1aed,
- 0x26bd1af0,
- 0x26bd9af4,
- 0x26be5af6,
- 0x6be9af9,
- 0x26bedafa,
- 0x26c05afb,
- 0x26c0db01,
- 0x26c15b03,
- 0x26c19b05,
- 0x26c21b06,
- 0x26c25b08,
- 0x6c29b09,
- 0x26c2db0a,
- 0x6c31b0b,
- 0x26c3db0c,
- 0x6c45b0f,
- 0x6c59b11,
- 0x6c5db16,
- 0x6c85b17,
- 0x6cc1b21,
- 0x6cc5b30,
- 0x6cfdb31,
- 0x6d1db3f,
- 0x7879b47,
- 0x787de1e,
- 0x7881e1f,
- 0x27885e20,
- 0x7889e21,
- 0x2788de22,
- 0x7891e23,
- 0x2789de24,
- 0x78a1e27,
- 0x78a5e28,
- 0x278a9e29,
- 0x78ade2a,
- 0x278b5e2b,
- 0x78b9e2d,
- 0x78bde2e,
- 0x278cde2f,
- 0x78d1e33,
- 0x78d5e34,
- 0x78d9e35,
- 0x78dde36,
- 0x278e1e37,
- 0x78e5e38,
- 0x78e9e39,
- 0x78ede3a,
- 0x78f1e3b,
- 0x278f9e3c,
- 0x78fde3e,
- 0x7901e3f,
- 0x7905e40,
- 0x27909e41,
- 0x790de42,
- 0x27915e43,
- 0x27919e45,
- 0x7935e46,
- 0x7945e4d,
- 0x7985e51,
- 0x7989e61,
- 0x79ade62,
- 0x79c1e6b,
- 0x79c5e70,
- 0x79d1e71,
- 0x7b99e74,
- 0x27b9dee6,
- 0x27ba5ee7,
- 0x27ba9ee9,
- 0x27badeea,
- 0x7bb5eeb,
- 0x7c91eed,
- 0x27c9df24,
- 0x27ca1f27,
- 0x27ca5f28,
- 0x27ca9f29,
- 0x7cadf2a,
- 0x7cd9f2b,
- 0x7cf1f36,
- 0x7cf5f3c,
- 0x7d15f3d,
- 0x7d21f45,
- 0x7d41f48,
- 0x7d45f50,
- 0x7d7df51,
- 0x8045f5f,
- 0x8102011,
- 0x8106040,
- 0x810a041,
- 0x811e042,
- 0x8122047,
- 0x8156048,
- 0x818e055,
- 0x28192063,
- 0x81ae064,
- 0x81d206b,
- 0x81d6074,
- 0x81f6075,
- 0x821207d,
- 0x8236084,
- 0x824608d,
- 0x824a091,
- 0x824e092,
- 0x828a093,
- 0x82960a2,
- 0x82be0a5,
- 0x282c20af,
- 0x835e0b0,
- 0x283620d7,
- 0x83660d8,
- 0x83760d9,
- 0x2837a0dd,
- 0x83920de,
- 0x83ae0e4,
- 0x83ce0eb,
- 0x83d20f3,
- 0x83e60f4,
- 0x83fa0f9,
- 0x83fe0fe,
- 0x84060ff,
- 0x840a101,
- 0x842a102,
- 0x84e210a,
- 0x284e6138,
- 0x84ea139,
- 0x850a13a,
- 0x8536142,
- 0x2854614d,
- 0x854a151,
- 0x8556152,
- 0x859a155,
- 0x859e166,
- 0x85b2167,
- 0x85d216c,
- 0x85ee174,
- 0x85f217b,
- 0x85fe17c,
- 0x861e17f,
- 0x864e187,
- 0x865a193,
- 0x872a196,
- 0x872e1ca,
- 0x87421cb,
- 0x87461d0,
- 0x875e1d1,
- 0x87621d7,
- 0x876e1d8,
- 0x877a1db,
- 0x877e1de,
- 0x87861df,
- 0x878a1e1,
- 0x87ae1e2,
- 0x87ea1eb,
- 0x87ee1fa,
- 0x880e1fb,
- 0x8846203,
- 0x8876211,
- 0x2887a21d,
- 0x887e21e,
- 0x888621f,
- 0x88de221,
- 0x88e2237,
- 0x88e6238,
- 0x88ea239,
- 0x892e23a,
- 0x893e24b,
- 0x897a24f,
- 0x897e25e,
- 0x89ae25f,
- 0x8afa26b,
- 0x8b1e2be,
- 0x8b5e2c7,
- 0x8b8e2d7,
- 0x28b962e3,
- 0x28b9a2e5,
- 0x28b9e2e6,
- 0x8ba62e7,
- 0x8bbe2e9,
- 0x8ce22ef,
- 0x8cee338,
- 0x8cfa33b,
- 0x8d0633e,
- 0x8d12341,
- 0x8d1e344,
- 0x8d2a347,
- 0x8d3634a,
- 0x8d4234d,
- 0x8d4e350,
- 0x8d5a353,
- 0x28d5e356,
- 0x8d6a357,
- 0x8d7635a,
- 0x8d8235d,
- 0x8d8a360,
- 0x8d96362,
- 0x8da2365,
- 0x8dae368,
- 0x8dba36b,
- 0x8dc636e,
- 0x8dd2371,
- 0x8dde374,
- 0x8dea377,
- 0x8df637a,
- 0x8e0237d,
- 0x8e0e380,
- 0x8e3a383,
- 0x8e4638e,
- 0x8e52391,
- 0x8e5e394,
- 0x8e6a397,
- 0x8e7639a,
- 0x8e7e39d,
- 0x8e8a39f,
- 0x8e963a2,
- 0x8ea23a5,
- 0x8eae3a8,
- 0x8eba3ab,
- 0x8ec63ae,
- 0x8ed23b1,
- 0x8ede3b4,
- 0x8eea3b7,
- 0x8ef63ba,
- 0x8f023bd,
- 0x8f0a3c0,
- 0x8f163c2,
- 0x8f1e3c5,
- 0x8f2a3c7,
- 0x8f363ca,
- 0x8f423cd,
- 0x8f4e3d0,
- 0x8f5a3d3,
- 0x8f663d6,
- 0x8f723d9,
- 0x8f7e3dc,
- 0x8f823df,
- 0x8f8e3e0,
- 0x8fa63e3,
- 0x8faa3e9,
- 0x8fba3ea,
- 0x8fda3ee,
- 0x8fde3f6,
- 0x902e3f7,
- 0x903240b,
- 0x904640c,
- 0x907a411,
- 0x909a41e,
- 0x909e426,
- 0x90a6427,
- 0x90ca429,
- 0x90e2432,
- 0x90fa438,
- 0x911243e,
- 0x913a444,
- 0x914e44e,
- 0x9166453,
- 0x916a459,
- 0x291b245a,
- 0x91b646c,
- 0x91e246d,
- 0x91f2478,
- 0x920647c,
-}
+//
+//go:embed data/children
+var children uint32String
-// max children 669 (capacity 1023)
-// max text offset 32017 (capacity 65535)
+// max children 718 (capacity 1023)
+// max text offset 32976 (capacity 65535)
// max text length 36 (capacity 63)
-// max hi 9345 (capacity 16383)
-// max lo 9340 (capacity 16383)
+// max hi 9656 (capacity 16383)
+// max lo 9651 (capacity 16383)
diff --git a/vendor/golang.org/x/sys/execabs/execabs_go119.go b/vendor/golang.org/x/sys/execabs/execabs_go119.go
index 1e7a9ada0b0dd..46c5b525e7b8f 100644
--- a/vendor/golang.org/x/sys/execabs/execabs_go119.go
+++ b/vendor/golang.org/x/sys/execabs/execabs_go119.go
@@ -7,9 +7,11 @@
package execabs
-import "strings"
+import (
+ "errors"
+ "os/exec"
+)
func isGo119ErrDot(err error) bool {
- // TODO: return errors.Is(err, exec.ErrDot)
- return strings.Contains(err.Error(), "current directory")
+ return errors.Is(err, exec.ErrDot)
}
diff --git a/vendor/golang.org/x/sys/unix/sockcmsg_unix.go b/vendor/golang.org/x/sys/unix/sockcmsg_unix.go
index 453a942c5db30..3865943f6e27d 100644
--- a/vendor/golang.org/x/sys/unix/sockcmsg_unix.go
+++ b/vendor/golang.org/x/sys/unix/sockcmsg_unix.go
@@ -52,6 +52,20 @@ func ParseSocketControlMessage(b []byte) ([]SocketControlMessage, error) {
return msgs, nil
}
+// ParseOneSocketControlMessage parses a single socket control message from b, returning the message header,
+// message data (a slice of b), and the remainder of b after that single message.
+// When there are no remaining messages, len(remainder) == 0.
+func ParseOneSocketControlMessage(b []byte) (hdr Cmsghdr, data []byte, remainder []byte, err error) {
+ h, dbuf, err := socketControlMessageHeaderAndData(b)
+ if err != nil {
+ return Cmsghdr{}, nil, nil, err
+ }
+ if i := cmsgAlignOf(int(h.Len)); i < len(b) {
+ remainder = b[i:]
+ }
+ return *h, dbuf, remainder, nil
+}
+
func socketControlMessageHeaderAndData(b []byte) (*Cmsghdr, []byte, error) {
h := (*Cmsghdr)(unsafe.Pointer(&b[0]))
if h.Len < SizeofCmsghdr || uint64(h.Len) > uint64(len(b)) {
diff --git a/vendor/golang.org/x/sys/unix/syscall_linux.go b/vendor/golang.org/x/sys/unix/syscall_linux.go
index e044d5b546bde..c5a98440eca1b 100644
--- a/vendor/golang.org/x/sys/unix/syscall_linux.go
+++ b/vendor/golang.org/x/sys/unix/syscall_linux.go
@@ -1554,6 +1554,7 @@ func sendmsgN(fd int, iov []Iovec, oob []byte, ptr unsafe.Pointer, salen _Sockle
var iova [1]Iovec
iova[0].Base = &dummy
iova[0].SetLen(1)
+ iov = iova[:]
}
}
msg.Control = &oob[0]
diff --git a/vendor/golang.org/x/sys/windows/syscall_windows.go b/vendor/golang.org/x/sys/windows/syscall_windows.go
index 7a6ba43a7eeac..a49853e9d3af5 100644
--- a/vendor/golang.org/x/sys/windows/syscall_windows.go
+++ b/vendor/golang.org/x/sys/windows/syscall_windows.go
@@ -367,6 +367,7 @@ func NewCallbackCDecl(fn interface{}) uintptr {
//sys IsWindowUnicode(hwnd HWND) (isUnicode bool) = user32.IsWindowUnicode
//sys IsWindowVisible(hwnd HWND) (isVisible bool) = user32.IsWindowVisible
//sys GetGUIThreadInfo(thread uint32, info *GUIThreadInfo) (err error) = user32.GetGUIThreadInfo
+//sys GetLargePageMinimum() (size uintptr)
// Volume Management Functions
//sys DefineDosDevice(flags uint32, deviceName *uint16, targetPath *uint16) (err error) = DefineDosDeviceW
diff --git a/vendor/golang.org/x/sys/windows/zsyscall_windows.go b/vendor/golang.org/x/sys/windows/zsyscall_windows.go
index 96ba8559c374e..ac60052e44a79 100644
--- a/vendor/golang.org/x/sys/windows/zsyscall_windows.go
+++ b/vendor/golang.org/x/sys/windows/zsyscall_windows.go
@@ -252,6 +252,7 @@ var (
procGetFileType = modkernel32.NewProc("GetFileType")
procGetFinalPathNameByHandleW = modkernel32.NewProc("GetFinalPathNameByHandleW")
procGetFullPathNameW = modkernel32.NewProc("GetFullPathNameW")
+ procGetLargePageMinimum = modkernel32.NewProc("GetLargePageMinimum")
procGetLastError = modkernel32.NewProc("GetLastError")
procGetLogicalDriveStringsW = modkernel32.NewProc("GetLogicalDriveStringsW")
procGetLogicalDrives = modkernel32.NewProc("GetLogicalDrives")
@@ -2180,6 +2181,12 @@ func GetFullPathName(path *uint16, buflen uint32, buf *uint16, fname **uint16) (
return
}
+func GetLargePageMinimum() (size uintptr) {
+ r0, _, _ := syscall.Syscall(procGetLargePageMinimum.Addr(), 0, 0, 0, 0)
+ size = uintptr(r0)
+ return
+}
+
func GetLastError() (lasterr error) {
r0, _, _ := syscall.Syscall(procGetLastError.Addr(), 0, 0, 0, 0)
if r0 != 0 {
diff --git a/vendor/golang.org/x/term/terminal.go b/vendor/golang.org/x/term/terminal.go
index 4b48a5899d1f8..f636667fb0425 100644
--- a/vendor/golang.org/x/term/terminal.go
+++ b/vendor/golang.org/x/term/terminal.go
@@ -233,7 +233,6 @@ func (t *Terminal) queue(data []rune) {
t.outBuf = append(t.outBuf, []byte(string(data))...)
}
-var eraseUnderCursor = []rune{' ', keyEscape, '[', 'D'}
var space = []rune{' '}
func isPrintable(key rune) bool {
diff --git a/vendor/golang.org/x/text/unicode/bidi/trieval.go b/vendor/golang.org/x/text/unicode/bidi/trieval.go
index 4c459c4b72e0e..6a796e2214c69 100644
--- a/vendor/golang.org/x/text/unicode/bidi/trieval.go
+++ b/vendor/golang.org/x/text/unicode/bidi/trieval.go
@@ -37,18 +37,6 @@ const (
unknownClass = ^Class(0)
)
-var controlToClass = map[rune]Class{
- 0x202D: LRO, // LeftToRightOverride,
- 0x202E: RLO, // RightToLeftOverride,
- 0x202A: LRE, // LeftToRightEmbedding,
- 0x202B: RLE, // RightToLeftEmbedding,
- 0x202C: PDF, // PopDirectionalFormat,
- 0x2066: LRI, // LeftToRightIsolate,
- 0x2067: RLI, // RightToLeftIsolate,
- 0x2068: FSI, // FirstStrongIsolate,
- 0x2069: PDI, // PopDirectionalIsolate,
-}
-
// A trie entry has the following bits:
// 7..5 XOR mask for brackets
// 4 1: Bracket open, 0: Bracket close
diff --git a/vendor/modules.txt b/vendor/modules.txt
index d1c97427cab6f..ebe9df96b9da2 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1346,7 +1346,7 @@ go4.org/intern
# go4.org/unsafe/assume-no-moving-gc v0.0.0-20220617031537-928513b29760
## explicit; go 1.11
go4.org/unsafe/assume-no-moving-gc
-# golang.org/x/crypto v0.1.0
+# golang.org/x/crypto v0.4.0
## explicit; go 1.17
golang.org/x/crypto/argon2
golang.org/x/crypto/bcrypt
@@ -1365,7 +1365,7 @@ golang.org/x/exp/slices
# golang.org/x/mod v0.6.0
## explicit; go 1.17
golang.org/x/mod/semver
-# golang.org/x/net v0.1.0
+# golang.org/x/net v0.3.0
## explicit; go 1.17
golang.org/x/net/bpf
golang.org/x/net/context
@@ -1399,7 +1399,7 @@ golang.org/x/oauth2/jwt
## explicit
golang.org/x/sync/errgroup
golang.org/x/sync/semaphore
-# golang.org/x/sys v0.1.0
+# golang.org/x/sys v0.3.0
## explicit; go 1.17
golang.org/x/sys/cpu
golang.org/x/sys/execabs
@@ -1409,10 +1409,10 @@ golang.org/x/sys/unix
golang.org/x/sys/windows
golang.org/x/sys/windows/registry
golang.org/x/sys/windows/svc/eventlog
-# golang.org/x/term v0.1.0
+# golang.org/x/term v0.3.0
## explicit; go 1.17
golang.org/x/term
-# golang.org/x/text v0.4.0
+# golang.org/x/text v0.5.0
## explicit; go 1.17
golang.org/x/text/cases
golang.org/x/text/encoding
type: build
masked_commit_message: bump golang.org/x/crypto from 0.1.0 to 0.4.0 (#7883)

hash: e69b64f6bf3fb62fc3f3cf61db01dd7c453b14c4
date: 2020-08-13 20:59:36
author: Ed Welch
commit_message: promtail: Drop stage (#2496)
is_merge: false
diff --git a/docs/sources/clients/promtail/pipelines.md b/docs/sources/clients/promtail/pipelines.md
index ef4b4f315c90e..00d0f05711cd2 100644
--- a/docs/sources/clients/promtail/pipelines.md
+++ b/docs/sources/clients/promtail/pipelines.md
@@ -219,3 +219,4 @@ Action stages:
Filtering stages:
* [match](../stages/match/): Conditionally run stages based on the label set.
+ * [drop](../stages/drop/): Conditionally drop log lines based on several options.
diff --git a/docs/sources/clients/promtail/stages/_index.md b/docs/sources/clients/promtail/stages/_index.md
index afe7e4d9eeff3..76bed1c43c434 100644
--- a/docs/sources/clients/promtail/stages/_index.md
+++ b/docs/sources/clients/promtail/stages/_index.md
@@ -29,4 +29,5 @@ Action stages:
Filtering stages:
* [match](match/): Conditionally run stages based on the label set.
+ * [drop](drop/): Conditionally drop log lines based on several options.
diff --git a/docs/sources/clients/promtail/stages/drop.md b/docs/sources/clients/promtail/stages/drop.md
new file mode 100644
index 0000000000000..0a8b0e6562912
--- /dev/null
+++ b/docs/sources/clients/promtail/stages/drop.md
@@ -0,0 +1,193 @@
+---
+title: drop
+---
+# `drop` stage
+
+The `drop` stage is a filtering stage that lets you drop logs based on several options.
+
+Note that if you provide multiple options, they are treated as an AND clause:
+every option must be true for the log line to be dropped.
+
+If you wish to drop with an OR clause, then specify multiple drop stages.
+
+There are examples below to help explain.
+
+## Drop stage schema
+
+```yaml
+drop:
+ # Name from extracted data to parse. If empty, uses the log message.
+ [source: <string>]
+
+  # RE2 regular expression. If source is provided, the regex attempts to match the source;
+  # if no source is provided, the regex attempts to match the log line.
+  # If the regex matches the log line or the provided source, the line will be dropped.
+ [expression: <string>]
+
+  # value can only be specified when source is specified. It is an error to specify both value and expression.
+ # If the value provided is an exact match for the `source` the line will be dropped.
+ [value: <string>]
+
+ # older_than will be parsed as a Go duration: https://golang.org/pkg/time/#ParseDuration
+ # If the log line timestamp is older than the current time minus the provided duration it will be dropped.
+ [older_than: <duration>]
+
+  # longer_than is a value in bytes; any log line longer than this value will be dropped.
+ # Can be specified as an exact number of bytes in integer format: 8192
+ # Or can be expressed with a suffix such as 8kb
+ [longer_than: <string>|<int>]
+
+ # Every time a log line is dropped the metric `logentry_dropped_lines_total`
+ # will be incremented. By default the reason label will be `drop_stage`
+ # however you can optionally specify a custom value to be used in the `reason`
+ # label of that metric here.
+ [drop_counter_reason: <string> | default = "drop_stage"]
+```
+
+## Examples
+
+The following are examples showing the use of the `drop` stage.
+
+### Simple drops
+
+Simple `drop` stage configurations only specify one of the options, or two options when using the `source` option.
+
+#### Regex match a line
+
+Given the pipeline:
+
+```yaml
+- drop:
+ expression: ".*debug.*"
+```
+
+Would drop any log line with the word `debug` in it.
+
+#### Regex match a source
+
+Given the pipeline:
+
+```yaml
+- json:
+ expressions:
+ level:
+ msg:
+- drop:
+ source: "level"
+ expression: "(error|ERROR)"
+```
+
+Would drop both of these log lines:
+
+```
+{"time":"2019-01-01T01:00:00.000000001Z", "level": "error", "msg":"11.11.11.11 - "POST /loki/api/push/ HTTP/1.1" 200 932 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6"}
+{"time":"2019-01-01T01:00:00.000000001Z", "level": "ERROR", "msg":"11.11.11.11 - "POST /loki/api/push/ HTTP/1.1" 200 932 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6"}
+```
+
+#### Value match a source
+
+Given the pipeline:
+
+```yaml
+- json:
+ expressions:
+ level:
+ msg:
+- drop:
+ source: "level"
+ value: "error"
+```
+
+Would drop this log line:
+
+```
+{"time":"2019-01-01T01:00:00.000000001Z", "level": "error", "msg":"11.11.11.11 - "POST /loki/api/push/ HTTP/1.1" 200 932 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6"}
+```
+
+#### Drop old log lines
+
+**NOTE** For `older_than` to work, you must be using the [timestamp](timestamp.md) stage to set the timestamp from the ingested log line _before_ applying the `drop` stage.
+
+Given the pipeline:
+
+```yaml
+- json:
+ expressions:
+ time:
+ msg:
+- timestamp:
+ source: time
+ format: RFC3339
+- drop:
+ older_than: 24h
+ drop_counter_reason: "line_too_old"
+```
+
+With a current ingestion time of 2020-08-12T12:00:00Z, this pipeline would drop this log line when read from a file:
+
+```
+{"time":"2020-08-11T11:00:00Z", "level": "error", "msg":"11.11.11.11 - "POST /loki/api/push/ HTTP/1.1" 200 932 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6"}
+```
+
+However it would _not_ drop this log line:
+
+```
+{"time":"2020-08-11T13:00:00Z", "level": "error", "msg":"11.11.11.11 - "POST /loki/api/push/ HTTP/1.1" 200 932 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 GTB6"}
+```
+
+In this example the current time is 2020-08-12T12:00:00Z and `older_than` is 24h. All log lines which have a timestamp older than 2020-08-11T12:00:00Z will be dropped.
+
+All lines dropped by this drop stage would also increment the `logentry_dropped_lines_total` metric with the label `reason="line_too_old"`.
+
+#### Dropping long log lines
+
+Given the pipeline:
+
+```yaml
+- drop:
+ longer_than: 8kb
+ drop_counter_reason: "line_too_long"
+```
+
+Would drop any log line longer than 8kb. This is useful when Loki would reject a line for being too long.
+
+All lines dropped by this drop stage would also increment the `logentry_dropped_lines_total` metric with the label `reason="line_too_long"`.
+
+### Complex drops
+
+Complex `drop` stage configurations specify multiple options in one stage or specify multiple drop stages.
+
+#### Drop logs by regex AND length
+
+Given the pipeline:
+
+```yaml
+- drop:
+ expression: ".*debug.*"
+ longer_than: 1kb
+```
+
+Would drop all logs that contain the word _debug_ *AND* are longer than 1kb.
+
+#### Drop logs by time OR length OR regex
+
+Given the pipeline:
+
+```yaml
+- json:
+ expressions:
+ time:
+ msg:
+- timestamp:
+ source: time
+ format: RFC3339
+- drop:
+ older_than: 24h
+- drop:
+ longer_than: 8kb
+- drop:
+ source: msg
+ regex: ".*trace.*"
+```
+
+This would drop all logs that are older than 24h, OR longer than 8kb, OR have a JSON `msg` field containing the word _trace_.
\ No newline at end of file
diff --git a/docs/sources/clients/promtail/stages/match.md b/docs/sources/clients/promtail/stages/match.md
index a380a4924662c..2016c5b5dd5e5 100644
--- a/docs/sources/clients/promtail/stages/match.md
+++ b/docs/sources/clients/promtail/stages/match.md
@@ -24,6 +24,12 @@ match:
# and no later metrics will be recorded.
# Stages must be not defined when dropping entries.
[action: <string> | default = "keep"]
+
+ # If you specify `action: drop` the metric `logentry_dropped_lines_total`
+ # will be incremented for every line dropped. By default the reason
+  # label will be `match_stage`; however, you can optionally specify a custom value
+ # to be used in the `reason` label of that metric here.
+ [drop_counter_reason: <string> | default = "match_stage"]
# Nested set of pipeline stages only if the selector
# matches the labels of the log entries:
@@ -72,6 +78,7 @@ pipeline_stages:
- match:
selector: '{app="promtail"} |~ ".*noisy error.*"'
action: drop
+ drop_counter_reason: promtail_noisy_error
- output:
source: msg
```
@@ -97,7 +104,8 @@ label of `app` whose value is `pokey`. This does **not** match in our case, so
the nested `json` stage is not run.
The fifth stage will drop any entries from the application `promtail` that matches
-the regex `.*noisy error`.
+the regex `.*noisy error`, and will also increment the `logentry_dropped_lines_total`
+metric with the label `reason="promtail_noisy_error"`.
The final `output` stage changes the contents of the log line to be the value of
`msg` from the extracted map. In this case, the log line is changed to `app1 log
diff --git a/pkg/logentry/stages/drop.go b/pkg/logentry/stages/drop.go
new file mode 100644
index 0000000000000..f8ad8398b078a
--- /dev/null
+++ b/pkg/logentry/stages/drop.go
@@ -0,0 +1,233 @@
+package stages
+
+import (
+ "fmt"
+ "reflect"
+ "regexp"
+ "time"
+
+ "github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
+ "github.com/mitchellh/mapstructure"
+ "github.com/pkg/errors"
+ "github.com/prometheus/common/model"
+
+ "github.com/grafana/loki/pkg/util/flagext"
+)
+
+const (
+ ErrDropStageEmptyConfig = "drop stage config must contain at least one of `source`, `expression`, `older_than` or `longer_than`"
+ ErrDropStageInvalidDuration = "drop stage invalid duration, %v cannot be converted to a duration: %v"
+ ErrDropStageInvalidConfig = "drop stage config error, `value` and `expression` cannot both be defined at the same time."
+ ErrDropStageInvalidRegex = "drop stage regex compilation error: %v"
+ ErrDropStageInvalidByteSize = "drop stage failed to parse longer_than to bytes: %v"
+)
+
+var (
+ defaultDropReason = "drop_stage"
+)
+
+// DropConfig contains the configuration for a dropStage
+type DropConfig struct {
+ DropReason *string `mapstructure:"drop_counter_reason"`
+ Source *string `mapstructure:"source"`
+ Value *string `mapstructure:"value"`
+ Expression *string `mapstructure:"expression"`
+ regex *regexp.Regexp
+ OlderThan *string `mapstructure:"older_than"`
+ olderThan time.Duration
+ LongerThan *string `mapstructure:"longer_than"`
+ longerThan flagext.ByteSize
+}
+
+// validateDropConfig validates the DropConfig for the dropStage
+func validateDropConfig(cfg *DropConfig) error {
+ if cfg == nil ||
+ (cfg.Source == nil && cfg.Expression == nil && cfg.OlderThan == nil && cfg.LongerThan == nil) {
+ return errors.New(ErrDropStageEmptyConfig)
+ }
+ if cfg.DropReason == nil || *cfg.DropReason == "" {
+ cfg.DropReason = &defaultDropReason
+ }
+ if cfg.OlderThan != nil {
+ dur, err := time.ParseDuration(*cfg.OlderThan)
+ if err != nil {
+ return errors.Errorf(ErrDropStageInvalidDuration, *cfg.OlderThan, err)
+ }
+ cfg.olderThan = dur
+ }
+ if cfg.Value != nil && cfg.Expression != nil {
+ return errors.New(ErrDropStageInvalidConfig)
+ }
+ if cfg.Expression != nil {
+ expr, err := regexp.Compile(*cfg.Expression)
+ if err != nil {
+ return errors.Errorf(ErrDropStageInvalidRegex, err)
+ }
+ cfg.regex = expr
+ }
+ if cfg.LongerThan != nil {
+ err := cfg.longerThan.Set(*cfg.LongerThan)
+ if err != nil {
+ return errors.Errorf(ErrDropStageInvalidByteSize, err)
+ }
+ }
+ return nil
+}
+
+// newDropStage creates a DropStage from config
+func newDropStage(logger log.Logger, config interface{}) (Stage, error) {
+ cfg := &DropConfig{}
+ err := mapstructure.WeakDecode(config, cfg)
+ if err != nil {
+ return nil, err
+ }
+ err = validateDropConfig(cfg)
+ if err != nil {
+ return nil, err
+ }
+
+ return &dropStage{
+ logger: log.With(logger, "component", "stage", "type", "drop"),
+ cfg: cfg,
+ }, nil
+}
+
+// dropStage drops log entries when all of its configured conditions are met
+type dropStage struct {
+ logger log.Logger
+ cfg *DropConfig
+}
+
+// Process implements Stage
+func (m *dropStage) Process(labels model.LabelSet, extracted map[string]interface{}, t *time.Time, entry *string) {
+ // There are many options for dropping a log and if multiple are defined it's treated like an AND condition
+ // where all drop conditions must be met to drop the log.
+ // Therefore if at any point there is a condition which does not match we can return.
+ // The order is what I roughly think would be fastest check to slowest check to try to quit early whenever possible
+	// The checks are ordered roughly from fastest to slowest, so we can return early whenever possible.
+ if m.cfg.LongerThan != nil {
+ if len([]byte(*entry)) > m.cfg.longerThan.Val() {
+ // Too long, drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", fmt.Sprintf("line met drop criteria for length %v > %v", len([]byte(*entry)), m.cfg.longerThan.Val()))
+ }
+ } else {
+ if Debug {
+ level.Debug(m.logger).Log("msg", fmt.Sprintf("line will not be dropped, it did not meet criteria for drop length %v is not greater than %v", len([]byte(*entry)), m.cfg.longerThan.Val()))
+ }
+ return
+ }
+ }
+
+ if m.cfg.OlderThan != nil {
+ ct := time.Now()
+ if t.Before(ct.Add(-m.cfg.olderThan)) {
+ // Too old, drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", fmt.Sprintf("line met drop criteria for age; current time=%v, drop before=%v, log timestamp=%v", ct, ct.Add(-m.cfg.olderThan), t))
+ }
+ } else {
+ if Debug {
+ level.Debug(m.logger).Log("msg", fmt.Sprintf("line will not be dropped, it did not meet drop criteria for age; current time=%v, drop before=%v, log timestamp=%v", ct, ct.Add(-m.cfg.olderThan), t))
+ }
+ return
+ }
+ }
+
+ if m.cfg.Source != nil && m.cfg.Expression == nil {
+ if v, ok := extracted[*m.cfg.Source]; ok {
+ if m.cfg.Value == nil {
+ // Found in map, no value set meaning drop if found in map
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line met drop criteria for finding source key in extracted map")
+ }
+ } else {
+ if *m.cfg.Value == v {
+ // Found in map with value set for drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line met drop criteria for finding source key in extracted map with value matching desired drop value")
+ }
+ } else {
+ // Value doesn't match, don't drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", fmt.Sprintf("line will not be dropped, source key was found in extracted map but value '%v' did not match desired value '%v'", v, *m.cfg.Value))
+ }
+ return
+ }
+ }
+ } else {
+			// Not found in extracted map, don't drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line will not be dropped, the provided source was not found in the extracted map")
+ }
+ return
+ }
+ }
+
+ if m.cfg.Expression != nil {
+ if m.cfg.Source != nil {
+ if v, ok := extracted[*m.cfg.Source]; ok {
+ s, err := getString(v)
+ if err != nil {
+ if Debug {
+						level.Debug(m.logger).Log("msg", "failed to convert extracted map value to string; cannot test regex, line will not be dropped", "err", err, "type", reflect.TypeOf(v))
+ }
+ return
+ }
+ match := m.cfg.regex.FindStringSubmatch(s)
+ if match == nil {
+ // Not a match to the regex, don't drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", fmt.Sprintf("line will not be dropped, the provided regular expression did not match the value found in the extracted map for source key: %v", *m.cfg.Source))
+ }
+ return
+ } else {
+ // regex match, will be dropped
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line met drop criteria, regex matched the value in the extracted map source key")
+ }
+ }
+ } else {
+			// Not found in extracted map, don't drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line will not be dropped, the provided source was not found in the extracted map")
+ }
+ return
+ }
+ } else {
+ if entry != nil {
+ match := m.cfg.regex.FindStringSubmatch(*entry)
+ if match == nil {
+ // Not a match to the regex, don't drop
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line will not be dropped, the provided regular expression did not match the log line")
+ }
+ return
+ } else {
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line met drop criteria, the provided regular expression matched the log line")
+ }
+ }
+ } else {
+			// Entry was nil, do not drop (we can't regex match against nil)
+ if Debug {
+ level.Debug(m.logger).Log("msg", "line will not be dropped, because it was nil and we can't regex match to nil")
+ }
+ return
+ }
+ }
+ }
+
+ // Everything matched, drop the line
+ if Debug {
+		level.Debug(m.logger).Log("msg", "all criteria met, line will be dropped")
+ }
+ // Adds the drop label to not be sent by the api.EntryHandler
+ labels[dropLabel] = model.LabelValue(*m.cfg.DropReason)
+}
+
+// Name implements Stage
+func (m *dropStage) Name() string {
+ return StageTypeDrop
+}
diff --git a/pkg/logentry/stages/drop_test.go b/pkg/logentry/stages/drop_test.go
new file mode 100644
index 0000000000000..39b9ce01f6237
--- /dev/null
+++ b/pkg/logentry/stages/drop_test.go
@@ -0,0 +1,377 @@
+package stages
+
+import (
+ "errors"
+ "fmt"
+ "testing"
+ "time"
+
+ "github.com/cortexproject/cortex/pkg/util"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/common/model"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+ ww "github.com/weaveworks/common/server"
+)
+
+// Not all of these are tested, but they are here to make sure the different types marshal without error
+var testDropYaml = `
+pipeline_stages:
+- json:
+ expressions:
+ app:
+ msg:
+- drop:
+ source: src
+ expression: ".*test.*"
+ older_than: 24h
+ longer_than: 8kb
+- drop:
+ expression: ".*app1.*"
+- drop:
+ source: app
+ value: loki
+- drop:
+ longer_than: 10000
+`
+
+func Test_dropStage_Process(t *testing.T) {
+ // Enable debug logging
+ cfg := &ww.Config{}
+ cfg.LogLevel.Set("debug")
+ util.InitLogger(cfg)
+ Debug = true
+
+ tests := []struct {
+ name string
+ config *DropConfig
+ labels model.LabelSet
+ extracted map[string]interface{}
+ t *time.Time
+ entry *string
+ shouldDrop bool
+ }{
+ {
+ name: "Longer Than Should Drop",
+ config: &DropConfig{
+ LongerThan: ptrFromString("10b"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{},
+ t: nil,
+ entry: ptrFromString("12345678901"),
+ shouldDrop: true,
+ },
+ {
+ name: "Longer Than Should Not Drop When Equal",
+ config: &DropConfig{
+ LongerThan: ptrFromString("10b"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{},
+ t: nil,
+ entry: ptrFromString("1234567890"),
+ shouldDrop: false,
+ },
+ {
+ name: "Longer Than Should Not Drop When Less",
+ config: &DropConfig{
+ LongerThan: ptrFromString("10b"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{},
+ t: nil,
+ entry: ptrFromString("123456789"),
+ shouldDrop: false,
+ },
+ {
+ name: "Older than Should Drop",
+ config: &DropConfig{
+ OlderThan: ptrFromString("1h"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{},
+ t: ptrFromTime(time.Now().Add(-2 * time.Hour)),
+ entry: nil,
+ shouldDrop: true,
+ },
+ {
+ name: "Older than Should Not Drop",
+ config: &DropConfig{
+ OlderThan: ptrFromString("1h"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{},
+ t: ptrFromTime(time.Now().Add(-5 * time.Minute)),
+ entry: nil,
+ shouldDrop: false,
+ },
+ {
+ name: "Matched Source",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "",
+ },
+ shouldDrop: true,
+ },
+ {
+ name: "Did not match Source",
+ config: &DropConfig{
+ Source: ptrFromString("key1"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "val1",
+ },
+ shouldDrop: false,
+ },
+ {
+ name: "Matched Source and Value",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ Value: ptrFromString("val1"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "val1",
+ },
+ shouldDrop: true,
+ },
+ {
+ name: "Did not match Source and Value",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ Value: ptrFromString("val1"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "VALRUE1",
+ },
+ shouldDrop: false,
+ },
+ {
+ name: "Regex Matched Source and Value",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ Expression: ptrFromString(".*val.*"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "val1",
+ },
+ shouldDrop: true,
+ },
+ {
+ name: "Regex Did not match Source and Value",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ Expression: ptrFromString(".*val.*"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "pal1",
+ },
+ shouldDrop: false,
+ },
+ {
+ name: "Regex No Matching Source",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ Expression: ptrFromString(".*val.*"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "pokey": "pal1",
+ },
+ shouldDrop: false,
+ },
+ {
+ name: "Regex Did Not Match Line",
+ config: &DropConfig{
+ Expression: ptrFromString(".*val.*"),
+ },
+ labels: model.LabelSet{},
+ entry: ptrFromString("this is a line which does not match the regex"),
+ extracted: map[string]interface{}{},
+ shouldDrop: false,
+ },
+ {
+ name: "Regex Matched Line",
+ config: &DropConfig{
+ Expression: ptrFromString(".*val.*"),
+ },
+ labels: model.LabelSet{},
+ entry: ptrFromString("this is a line with the word value in it"),
+ extracted: map[string]interface{}{},
+ shouldDrop: true,
+ },
+ {
+ name: "Match Source and Length Both Match",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ LongerThan: ptrFromString("10b"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "pal1",
+ },
+ t: nil,
+ entry: ptrFromString("12345678901"),
+ shouldDrop: true,
+ },
+ {
+ name: "Match Source and Length Only First Matches",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ LongerThan: ptrFromString("10b"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "pal1",
+ },
+ t: nil,
+ entry: ptrFromString("123456789"),
+ shouldDrop: false,
+ },
+ {
+ name: "Match Source and Length Only Second Matches",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ LongerThan: ptrFromString("10b"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "WOOOOOOOOOOOOOO": "pal1",
+ },
+ t: nil,
+ entry: ptrFromString("123456789012"),
+ shouldDrop: false,
+ },
+ {
+ name: "Everything Must Match",
+ config: &DropConfig{
+ Source: ptrFromString("key"),
+ Expression: ptrFromString(".*val.*"),
+ OlderThan: ptrFromString("1h"),
+ LongerThan: ptrFromString("10b"),
+ },
+ labels: model.LabelSet{},
+ extracted: map[string]interface{}{
+ "key": "must contain value to match",
+ },
+ t: ptrFromTime(time.Now().Add(-2 * time.Hour)),
+ entry: ptrFromString("12345678901"),
+ shouldDrop: true,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ err := validateDropConfig(tt.config)
+ if err != nil {
+ t.Error(err)
+ }
+ m := &dropStage{
+ cfg: tt.config,
+ logger: util.Logger,
+ }
+ m.Process(tt.labels, tt.extracted, tt.t, tt.entry)
+ if tt.shouldDrop {
+ assert.Contains(t, tt.labels.String(), dropLabel)
+ } else {
+ assert.NotContains(t, tt.labels.String(), dropLabel)
+ }
+ })
+ }
+}
+
+func ptrFromString(str string) *string {
+ return &str
+}
+
+func ptrFromTime(t time.Time) *time.Time {
+ return &t
+}
+
+// TestDropPipeline is used to verify we properly parse the yaml config and create a working pipeline
+func TestDropPipeline(t *testing.T) {
+ registry := prometheus.NewRegistry()
+ plName := "test_pipeline"
+ pl, err := NewPipeline(util.Logger, loadConfig(testDropYaml), &plName, registry)
+ require.NoError(t, err)
+ lbls := model.LabelSet{}
+ ts := time.Now()
+
+ // Process the first log line which should be dropped
+ entry := testMatchLogLineApp1
+ extracted := map[string]interface{}{}
+ pl.Process(lbls, extracted, &ts, &entry)
+ assert.Contains(t, lbls.String(), dropLabel)
+
+ // Process the second line which should not be dropped.
+ entry = testMatchLogLineApp2
+ extracted = map[string]interface{}{}
+ lbls = model.LabelSet{}
+ pl.Process(lbls, extracted, &ts, &entry)
+ assert.NotContains(t, lbls.String(), dropLabel)
+}
+
+var (
+ dropInvalidDur = "10y"
+ dropVal = "msg"
+ dropRegex = ".*blah"
+ dropInvalidRegex = "(?P<ts[0-9]+).*"
+ dropInvalidByteSize = "23QB"
+)
+
+func Test_validateDropConfig(t *testing.T) {
+ tests := []struct {
+ name string
+ config *DropConfig
+ wantErr error
+ }{
+ {
+ name: "ErrEmpty",
+ config: &DropConfig{},
+ wantErr: errors.New(ErrDropStageEmptyConfig),
+ },
+ {
+ name: "Invalid Duration",
+ config: &DropConfig{
+ OlderThan: &dropInvalidDur,
+ },
+ wantErr: fmt.Errorf(ErrDropStageInvalidDuration, dropInvalidDur, "time: unknown unit y in duration 10y"),
+ },
+ {
+ name: "Invalid Config",
+ config: &DropConfig{
+ Value: &dropVal,
+ Expression: &dropRegex,
+ },
+ wantErr: errors.New(ErrDropStageInvalidConfig),
+ },
+ {
+ name: "Invalid Regex",
+ config: &DropConfig{
+ Expression: &dropInvalidRegex,
+ },
+ wantErr: fmt.Errorf(ErrDropStageInvalidRegex, "error parsing regexp: invalid named capture: `(?P<ts[0-9]+).*`"),
+ },
+ {
+ name: "Invalid Bytesize",
+ config: &DropConfig{
+ LongerThan: &dropInvalidByteSize,
+ },
+ wantErr: fmt.Errorf(ErrDropStageInvalidByteSize, "strconv.UnmarshalText: parsing \"23QB\": invalid syntax"),
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ if err := validateDropConfig(tt.config); ((err != nil) && (err.Error() != tt.wantErr.Error())) || (err == nil && tt.wantErr != nil) {
+ t.Errorf("validateDropConfig() error = %v, wantErr = %v", err, tt.wantErr)
+ }
+ })
+ }
+}
diff --git a/pkg/logentry/stages/match.go b/pkg/logentry/stages/match.go
index 8dac39d2a5dfd..5d9f5eae4001b 100644
--- a/pkg/logentry/stages/match.go
+++ b/pkg/logentry/stages/match.go
@@ -32,6 +32,7 @@ type MatcherConfig struct {
Selector string `mapstructure:"selector"`
Stages PipelineStages `mapstructure:"stages"`
Action string `mapstructure:"action"`
+ DropReason *string `mapstructure:"drop_counter_reason"`
}
// validateMatcherConfig validates the MatcherConfig for the matcherStage
@@ -99,20 +100,27 @@ func newMatcherStage(logger log.Logger, jobName *string, config interface{}, reg
return nil, errors.Wrap(err, "error parsing filter")
}
+ dropReason := "match_stage"
+ if cfg.DropReason != nil && *cfg.DropReason != "" {
+ dropReason = *cfg.DropReason
+ }
+
return &matcherStage{
- matchers: selector.Matchers(),
- pipeline: pl,
- action: cfg.Action,
- filter: filter,
+ dropReason: dropReason,
+ matchers: selector.Matchers(),
+ pipeline: pl,
+ action: cfg.Action,
+ filter: filter,
}, nil
}
// matcherStage applies Label matchers to determine if the include stages should be run
type matcherStage struct {
- matchers []*labels.Matcher
- filter logql.LineFilter
- pipeline Stage
- action string
+ dropReason string
+ matchers []*labels.Matcher
+ filter logql.LineFilter
+ pipeline Stage
+ action string
}
// Process implements Stage
@@ -126,7 +134,7 @@ func (m *matcherStage) Process(labels model.LabelSet, extracted map[string]inter
switch m.action {
case MatchActionDrop:
// Adds the drop label to not be sent by the api.EntryHandler
- labels[dropLabel] = ""
+ labels[dropLabel] = model.LabelValue(m.dropReason)
case MatchActionKeep:
m.pipeline.Process(labels, extracted, t, entry)
}
diff --git a/pkg/logentry/stages/match_test.go b/pkg/logentry/stages/match_test.go
index 843e2b8dd24a3..112876ca895a0 100644
--- a/pkg/logentry/stages/match_test.go
+++ b/pkg/logentry/stages/match_test.go
@@ -163,6 +163,7 @@ func TestMatcher(t *testing.T) {
tt.selector,
stages,
tt.action,
+ nil,
}
s, err := newMatcherStage(util.Logger, nil, matchConfig, prometheus.DefaultRegisterer)
if (err != nil) != tt.wantErr {
diff --git a/pkg/logentry/stages/pipeline.go b/pkg/logentry/stages/pipeline.go
index 2974850bc5cee..dafa6e50e80f3 100644
--- a/pkg/logentry/stages/pipeline.go
+++ b/pkg/logentry/stages/pipeline.go
@@ -26,6 +26,7 @@ type Pipeline struct {
stages []Stage
jobName *string
plDuration *prometheus.HistogramVec
+ dropCount *prometheus.CounterVec
}
// NewPipeline creates a new log entry pipeline from a configuration
@@ -45,6 +46,20 @@ func NewPipeline(logger log.Logger, stgs PipelineStages, jobName *string, regist
panic(err)
}
}
+ dropCount := prometheus.NewCounterVec(prometheus.CounterOpts{
+ Namespace: "logentry",
+ Name: "dropped_lines_total",
+ Help: "A count of all log lines dropped as a result of a pipeline stage",
+ }, []string{"reason"})
+ err = registerer.Register(dropCount)
+ if err != nil {
+ if existing, ok := err.(prometheus.AlreadyRegisteredError); ok {
+ dropCount = existing.ExistingCollector.(*prometheus.CounterVec)
+ } else {
+ // Same behavior as MustRegister if the error is not for AlreadyRegistered
+ panic(err)
+ }
+ }
st := []Stage{}
for _, s := range stgs {
@@ -73,6 +88,7 @@ func NewPipeline(logger log.Logger, stgs PipelineStages, jobName *string, regist
stages: st,
jobName: jobName,
plDuration: hist,
+ dropCount: dropCount,
}, nil
}
@@ -112,7 +128,11 @@ func (p *Pipeline) Wrap(next api.EntryHandler) api.EntryHandler {
extracted := map[string]interface{}{}
p.Process(labels, extracted, ×tamp, &line)
// if the labels set contains the __drop__ label we don't send this entry to the next EntryHandler
- if _, ok := labels[dropLabel]; ok {
+ if reason, ok := labels[dropLabel]; ok {
+ if reason == "" {
+ reason = "undefined"
+ }
+ p.dropCount.WithLabelValues(string(reason)).Inc()
return nil
}
return next.Handle(labels, timestamp, line)
diff --git a/pkg/logentry/stages/stage.go b/pkg/logentry/stages/stage.go
index 16da6fd9f205d..df770aea0f680 100644
--- a/pkg/logentry/stages/stage.go
+++ b/pkg/logentry/stages/stage.go
@@ -23,6 +23,7 @@ const (
StageTypeTemplate = "template"
StageTypePipeline = "pipeline"
StageTypeTenant = "tenant"
+ StageTypeDrop = "drop"
)
// Stage takes an existing set of labels, timestamp and log entry and returns either a possibly mutated
@@ -106,6 +107,11 @@ func New(logger log.Logger, jobName *string, stageType string,
if err != nil {
return nil, err
}
+ case StageTypeDrop:
+ s, err = newDropStage(logger, cfg)
+ if err != nil {
+ return nil, err
+ }
default:
return nil, errors.Errorf("Unknown stage type: %s", stageType)
}
type: promtail
masked_commit_message: Drop stage (#2496)

hash: 7be37ad8f2a349fa70c61d47128aafe2da6c7e40
date: 2025-01-16 18:45:11
author: renovate[bot]
commit_message: fix(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.0 (#15784)
is_merge: false
diff --git a/tools/lambda-promtail/go.mod b/tools/lambda-promtail/go.mod
index 1443b3b44d384..21cd8a9948c1b 100644
--- a/tools/lambda-promtail/go.mod
+++ b/tools/lambda-promtail/go.mod
@@ -5,7 +5,7 @@ go 1.22
require (
github.com/aws/aws-lambda-go v1.47.0
github.com/aws/aws-sdk-go-v2 v1.33.0
- github.com/aws/aws-sdk-go-v2/config v1.28.11
+ github.com/aws/aws-sdk-go-v2/config v1.29.0
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.0
github.com/go-kit/log v0.2.1
github.com/gogo/protobuf v1.3.2
@@ -25,8 +25,8 @@ require (
github.com/alecthomas/units v0.0.0-20240626203959-61d1e3462e30 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 // indirect
- github.com/aws/aws-sdk-go-v2/credentials v1.17.52 // indirect
- github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.23 // indirect
+ github.com/aws/aws-sdk-go-v2/credentials v1.17.53 // indirect
+ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
@@ -35,9 +35,9 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.0 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9 // indirect
- github.com/aws/aws-sdk-go-v2/service/sso v1.24.9 // indirect
- github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.8 // indirect
- github.com/aws/aws-sdk-go-v2/service/sts v1.33.7 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sso v1.24.10 // indirect
+ github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sts v1.33.8 // indirect
github.com/aws/smithy-go v1.22.1 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500 // indirect
diff --git a/tools/lambda-promtail/go.sum b/tools/lambda-promtail/go.sum
index c523cd641688c..15fcb444e3ef6 100644
--- a/tools/lambda-promtail/go.sum
+++ b/tools/lambda-promtail/go.sum
@@ -52,12 +52,12 @@ github.com/aws/aws-sdk-go-v2 v1.33.0 h1:Evgm4DI9imD81V0WwD+TN4DCwjUMdc94TrduMLbg
github.com/aws/aws-sdk-go-v2 v1.33.0/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 h1:lL7IfaFzngfx0ZwUGOZdsFFnQ5uLvR0hWqqhyE7Q9M8=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7/go.mod h1:QraP0UcVlQJsmHfioCrveWOC1nbiWUl3ej08h4mXWoc=
-github.com/aws/aws-sdk-go-v2/config v1.28.11 h1:7Ekru0IkRHRnSRWGQLnLN6i0o1Jncd0rHo2T130+tEQ=
-github.com/aws/aws-sdk-go-v2/config v1.28.11/go.mod h1:x78TpPvBfHH16hi5tE3OCWQ0pzNfyXA349p5/Wp82Yo=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.52 h1:I4ymSk35LHogx2Re2Wu6LOHNTRaRWkLVoJgWS5Wd40M=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.52/go.mod h1:vAkqKbMNUcher8fDXP2Ge2qFXKMkcD74qvk1lJRMemM=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.23 h1:IBAoD/1d8A8/1aA8g4MBVtTRHhXRiNAgwdbo/xRM2DI=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.23/go.mod h1:vfENuCM7dofkgKpYzuzf1VT1UKkA/YL3qanfBn7HCaA=
+github.com/aws/aws-sdk-go-v2/config v1.29.0 h1:Vk/u4jof33or1qAQLdofpjKV7mQQT7DcUpnYx8kdmxY=
+github.com/aws/aws-sdk-go-v2/config v1.29.0/go.mod h1:iXAZK3Gxvpq3tA+B9WaDYpZis7M8KFgdrDPMmHrgbJM=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.53 h1:lwrVhiEDW5yXsuVKlFVUnR2R50zt2DklhOyeLETqDuE=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.53/go.mod h1:CkqM1bIw/xjEpBMhBnvqUXYZbpCFuj6dnCAyDk2AtAY=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 h1:5grmdTdMsovn9kPZPI23Hhvp0ZyNm5cRO+IZFIYiAfw=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24/go.mod h1:zqi7TVKTswH3Ozq28PkmBmgzG1tona7mo9G2IJg4Cis=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 h1:igORFSiH3bfq4lxKFkTSYDhJEUCYo6C8VKiWJjYwQuQ=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28/go.mod h1:3So8EA/aAYm36L7XIvCVwLa0s5N0P7o2b1oqnx/2R4g=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28 h1:1mOW9zAUMhTSrMDssEHS/ajx8JcAj/IcftzcmNlmVLI=
@@ -76,12 +76,12 @@ github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9 h1:2aInXbh02XsbO0
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9/go.mod h1:dgXS1i+HgWnYkPXqNoPIPKeUsUUYHaUbThC90aDnNiE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.0 h1:sHF4brL/726nbTldh8GGDKFS5LsQ8FwOTKEyvKp9DB4=
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.0/go.mod h1:rGHXqEgGFrz7j58tIGKKAfD1fJzYXeKkN/Jn3eIRZYE=
-github.com/aws/aws-sdk-go-v2/service/sso v1.24.9 h1:YqtxripbjWb2QLyzRK9pByfEDvgg95gpC2AyDq4hFE8=
-github.com/aws/aws-sdk-go-v2/service/sso v1.24.9/go.mod h1:lV8iQpg6OLOfBnqbGMBKYjilBlf633qwHnBEiMSPoHY=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.8 h1:6dBT1Lz8fK11m22R+AqfRsFn8320K0T5DTGxxOQBSMw=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.8/go.mod h1:/kiBvRQXBc6xeJTYzhSdGvJ5vm1tjaDEjH+MSeRJnlY=
-github.com/aws/aws-sdk-go-v2/service/sts v1.33.7 h1:qwGa9MA8G7mBq2YphHFaygdPe5t9OA7SvaJdwWTlEds=
-github.com/aws/aws-sdk-go-v2/service/sts v1.33.7/go.mod h1:+8h7PZb3yY5ftmVLD7ocEoE98hdc8PoKS0H3wfx1dlc=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.10 h1:DyZUj3xSw3FR3TXSwDhPhuZkkT14QHBiacdbUVcD0Dg=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.10/go.mod h1:Ro744S4fKiCCuZECXgOi760TiYylUM8ZBf6OGiZzJtY=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9 h1:I1TsPEs34vbpOnR81GIcAq4/3Ud+jRHVGwx6qLQUHLs=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9/go.mod h1:Fzsj6lZEb8AkTE5S68OhcbBqeWPsR8RnGuKPr8Todl8=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.8 h1:pqEJQtlKWvnv3B6VRt60ZmsHy3SotlEBvfUBPB1KVcM=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.8/go.mod h1:f6vjfZER1M17Fokn0IzssOTMT2N8ZSq+7jnNF0tArvw=
github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps=
type: fix
masked_commit_message: update module github.com/aws/aws-sdk-go-v2/config to v1.29.0 (#15784)

hash: c178cc62df569cf5d7ff1ad9647dc82336cae473
date: 2024-04-23 18:10:19
author: Paul Rogers
commit_message: test: Data race updates for memchunk tests (#12752)
is_merge: false
diff --git a/pkg/chunkenc/memchunk_test.go b/pkg/chunkenc/memchunk_test.go
index 8fc3eaab5ab34..09eab22f74be4 100644
--- a/pkg/chunkenc/memchunk_test.go
+++ b/pkg/chunkenc/memchunk_test.go
@@ -184,7 +184,7 @@ func TestBlock(t *testing.T) {
}
}
- var noopStreamPipeline = log.NewNoopPipeline().ForStream(labels.Labels{})
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
it, err := chk.Iterator(context.Background(), time.Unix(0, 0), time.Unix(0, math.MaxInt64), logproto.FORWARD, noopStreamPipeline)
require.NoError(t, err)
@@ -212,7 +212,7 @@ func TestBlock(t *testing.T) {
require.NoError(t, it.Close())
require.Equal(t, len(cases), idx)
- countExtractor = func() log.StreamSampleExtractor {
+ countExtractor := func() log.StreamSampleExtractor {
ex, err := log.NewLineSampleExtractor(log.CountExtractor, nil, nil, false, false)
if err != nil {
panic(err)
@@ -276,6 +276,7 @@ func TestCorruptChunk(t *testing.T) {
ctx, start, end := context.Background(), time.Unix(0, 0), time.Unix(0, math.MaxInt64)
for i, c := range cases {
chk.blocks = []block{{b: c.data}}
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
it, err := chk.Iterator(ctx, start, end, logproto.FORWARD, noopStreamPipeline)
require.NoError(t, err, "case %d", i)
@@ -309,6 +310,7 @@ func TestReadFormatV1(t *testing.T) {
t.Fatal(err)
}
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
it, err := r.Iterator(context.Background(), time.Unix(0, 0), time.Unix(0, math.MaxInt64), logproto.FORWARD, noopStreamPipeline)
if err != nil {
t.Fatal(err)
@@ -340,6 +342,7 @@ func TestRoundtripV2(t *testing.T) {
assertLines := func(c *MemChunk) {
require.Equal(t, enc, c.Encoding())
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
it, err := c.Iterator(context.Background(), time.Unix(0, 0), time.Unix(0, math.MaxInt64), logproto.FORWARD, noopStreamPipeline)
if err != nil {
t.Fatal(err)
@@ -529,6 +532,7 @@ func TestChunkFilling(t *testing.T) {
require.Equal(t, int64(lines), i)
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
it, err := chk.Iterator(context.Background(), time.Unix(0, 0), time.Unix(0, 100), logproto.FORWARD, noopStreamPipeline)
require.NoError(t, err)
i = 0
@@ -711,6 +715,7 @@ func TestChunkStats(t *testing.T) {
expectedSize := inserted * (len(entry.Line) + 3*binary.MaxVarintLen64)
statsCtx, ctx := stats.NewContext(context.Background())
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
it, err := c.Iterator(ctx, first.Add(-time.Hour), entry.Timestamp.Add(time.Hour), logproto.BACKWARD, noopStreamPipeline)
if err != nil {
t.Fatal(err)
@@ -789,6 +794,7 @@ func TestIteratorClose(t *testing.T) {
} {
c := newMemChunkWithFormat(f.chunkFormat, enc, f.headBlockFmt, testBlockSize, testTargetSize)
inserted := fillChunk(c)
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
iter, err := c.Iterator(context.Background(), time.Unix(0, 0), time.Unix(0, inserted), logproto.BACKWARD, noopStreamPipeline)
if err != nil {
t.Fatal(err)
@@ -916,6 +922,7 @@ func BenchmarkBackwardIterator(b *testing.B) {
_ = fillChunk(c)
b.ResetTimer()
for n := 0; n < b.N; n++ {
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
iterator, err := c.Iterator(context.Background(), time.Unix(0, 0), time.Now(), logproto.BACKWARD, noopStreamPipeline)
if err != nil {
panic(err)
@@ -938,6 +945,7 @@ func TestGenerateDataSize(t *testing.T) {
bytesRead := uint64(0)
for _, c := range chunks {
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
// use forward iterator for benchmark -- backward iterator does extra allocations by keeping entries in memory
iterator, err := c.Iterator(context.TODO(), time.Unix(0, 0), time.Now(), logproto.FORWARD, noopStreamPipeline)
if err != nil {
@@ -977,6 +985,7 @@ func BenchmarkHeadBlockIterator(b *testing.B) {
b.ResetTimer()
for n := 0; n < b.N; n++ {
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
iter := h.Iterator(context.Background(), logproto.BACKWARD, 0, math.MaxInt64, noopStreamPipeline)
for iter.Next() {
@@ -1061,6 +1070,7 @@ func TestMemChunk_IteratorBounds(t *testing.T) {
tt := tt
c := createChunk()
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
// testing headchunk
it, err := c.Iterator(context.Background(), tt.mint, tt.maxt, tt.direction, noopStreamPipeline)
require.NoError(t, err)
@@ -1091,6 +1101,7 @@ func TestMemchunkLongLine(t *testing.T) {
for i := 1; i <= 10; i++ {
require.NoError(t, c.Append(&logproto.Entry{Timestamp: time.Unix(0, int64(i)), Line: strings.Repeat("e", 200000)}))
}
+ noopStreamPipeline := log.NewNoopPipeline().ForStream(labels.Labels{})
it, err := c.Iterator(context.Background(), time.Unix(0, 0), time.Unix(0, 100), logproto.FORWARD, noopStreamPipeline)
require.NoError(t, err)
for i := 1; i <= 10; i++ {
|
test
|
Data race updates for memchunk tests (#12752)
|
00d4fcb71267c09ce25398322c56dd114a1bc1db
|
2025-01-22 11:22:49
|
Sandeep Sukhani
|
fix: do not update delete requests tracking metrics for users whose delete requests are not supposed to be processed (#15855)
| false
|
diff --git a/pkg/compactor/deletion/delete_requests_manager.go b/pkg/compactor/deletion/delete_requests_manager.go
index ba99625b2dd96..c9d03b354fc7a 100644
--- a/pkg/compactor/deletion/delete_requests_manager.go
+++ b/pkg/compactor/deletion/delete_requests_manager.go
@@ -92,6 +92,16 @@ func (d *DeleteRequestsManager) updateMetrics() error {
oldestPendingRequestCreatedAt := model.Time(0)
for _, deleteRequest := range deleteRequests {
+ // do not consider requests from users whose delete requests should not be processed as per their config
+ processRequest, err := d.shouldProcessRequest(deleteRequest)
+ if err != nil {
+ return err
+ }
+
+ if !processRequest {
+ continue
+ }
+
// adding an extra minute here to avoid a race between cancellation of request and picking up the request for processing
if deleteRequest.Status != StatusReceived || deleteRequest.CreatedAt.Add(d.deleteRequestCancelPeriod).Add(time.Minute).After(model.Now()) {
continue
|
fix
|
do not update delete requests tracking metrics for users whose delete requests are not supposed to be processed (#15855)
|
babed450c7a997a296184c66f5f0f620f68964c9
|
2021-06-03 00:58:17
|
Danny Kopping
|
ruler: Recording Rules (#3766)
| false
|
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index ffbcd8951a118..28beb64bcbb54 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -15,7 +15,6 @@ import (
"github.com/cortexproject/cortex/pkg/frontend/transport"
"github.com/cortexproject/cortex/pkg/frontend/v1/frontendv1pb"
- "github.com/grafana/loki/pkg/ruler/manager"
"github.com/grafana/loki/pkg/runtime"
"github.com/grafana/loki/pkg/storage/stores/shipper/compactor"
"github.com/grafana/loki/pkg/validation"
@@ -517,7 +516,7 @@ func (t *Loki) initRulerStorage() (_ services.Service, err error) {
}
}
- t.RulerStorage, err = cortex_ruler.NewLegacyRuleStore(t.Cfg.Ruler.StoreConfig, manager.GroupLoader{}, util_log.Logger)
+ t.RulerStorage, err = cortex_ruler.NewLegacyRuleStore(t.Cfg.Ruler.StoreConfig, ruler.GroupLoader{}, util_log.Logger)
return
}
diff --git a/pkg/ruler/appender.go b/pkg/ruler/appender.go
new file mode 100644
index 0000000000000..21bacc8b430da
--- /dev/null
+++ b/pkg/ruler/appender.go
@@ -0,0 +1,197 @@
+package ruler
+
+import (
+ "context"
+ "errors"
+
+ "github.com/cortexproject/cortex/pkg/cortexpb"
+ "github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
+ "github.com/prometheus/prometheus/pkg/exemplar"
+ "github.com/prometheus/prometheus/pkg/labels"
+ "github.com/prometheus/prometheus/promql"
+ "github.com/prometheus/prometheus/rules"
+ "github.com/prometheus/prometheus/storage"
+
+ "github.com/grafana/loki/pkg/util"
+)
+
+type RemoteWriteAppendable struct {
+ groupAppender map[string]*RemoteWriteAppender
+
+ userID string
+ cfg Config
+ overrides RulesLimits
+ logger log.Logger
+
+ metrics *remoteWriteMetrics
+}
+
+func newRemoteWriteAppendable(cfg Config, overrides RulesLimits, logger log.Logger, userID string, metrics *remoteWriteMetrics) *RemoteWriteAppendable {
+ return &RemoteWriteAppendable{
+ logger: logger,
+ userID: userID,
+ cfg: cfg,
+ overrides: overrides,
+ groupAppender: make(map[string]*RemoteWriteAppender),
+ metrics: metrics,
+ }
+}
+
+type RemoteWriteAppender struct {
+ logger log.Logger
+ ctx context.Context
+ remoteWriter remoteWriter
+ userID string
+ groupKey string
+
+ queue *util.EvictingQueue
+ metrics *remoteWriteMetrics
+}
+
+func (a *RemoteWriteAppendable) Appender(ctx context.Context) storage.Appender {
+ groupKey := retrieveGroupKeyFromContext(ctx)
+
+ capacity := a.overrides.RulerRemoteWriteQueueCapacity(a.userID)
+
+ // create or retrieve an appender associated with this groupKey (unique ID for rule group)
+ appender, found := a.groupAppender[groupKey]
+ if found {
+ err := appender.WithQueueCapacity(capacity)
+ if err != nil {
+ level.Warn(a.logger).Log("msg", "attempting to set capacity failed", "err", err)
+ }
+
+ return appender
+ }
+
+ client, err := newRemoteWriter(a.cfg, a.userID)
+ if err != nil {
+ level.Error(a.logger).Log("msg", "error creating remote-write client; setting appender as noop", "err", err, "tenant", a.userID)
+ return &NoopAppender{}
+ }
+
+ queue, err := util.NewEvictingQueue(capacity, a.onEvict(a.userID, groupKey))
+ if err != nil {
+ level.Error(a.logger).Log("msg", "queue creation error; setting appender as noop", "err", err, "tenant", a.userID)
+ return &NoopAppender{}
+ }
+
+ appender = &RemoteWriteAppender{
+ ctx: ctx,
+ logger: a.logger,
+ remoteWriter: client,
+ groupKey: groupKey,
+ userID: a.userID,
+
+ queue: queue,
+ metrics: a.metrics,
+ }
+
+ // only track reference if groupKey was retrieved
+ if groupKey == "" {
+ level.Warn(a.logger).Log("msg", "blank group key passed via context; creating new appender")
+ return appender
+ }
+
+ a.groupAppender[groupKey] = appender
+ return appender
+}
+
+func (a *RemoteWriteAppendable) onEvict(userID, groupKey string) func() {
+ return func() {
+ a.metrics.samplesEvicted.WithLabelValues(userID, groupKey).Inc()
+ }
+}
+
+func (a *RemoteWriteAppender) Append(_ uint64, l labels.Labels, t int64, v float64) (uint64, error) {
+ a.queue.Append(queueEntry{
+ labels: l,
+ sample: cortexpb.Sample{
+ Value: v,
+ TimestampMs: t,
+ },
+ })
+
+ a.metrics.samplesQueued.WithLabelValues(a.userID, a.groupKey).Set(float64(a.queue.Length()))
+ a.metrics.samplesQueuedTotal.WithLabelValues(a.userID, a.groupKey).Inc()
+
+ return 0, nil
+}
+
+func (a *RemoteWriteAppender) AppendExemplar(_ uint64, _ labels.Labels, _ exemplar.Exemplar) (uint64, error) {
+ return 0, errors.New("exemplars are unsupported")
+}
+
+func (a *RemoteWriteAppender) Commit() error {
+ if a.queue.Length() <= 0 {
+ return nil
+ }
+
+ if a.remoteWriter == nil {
+ level.Warn(a.logger).Log("msg", "no remote_write client defined, skipping commit")
+ return nil
+ }
+
+ level.Debug(a.logger).Log("msg", "writing samples to remote_write target", "target", a.remoteWriter.Endpoint(), "count", a.queue.Length())
+
+ req, err := a.remoteWriter.PrepareRequest(a.queue)
+ if err != nil {
+ level.Error(a.logger).Log("msg", "could not prepare remote-write request", "err", err)
+ a.metrics.remoteWriteErrors.WithLabelValues(a.userID, a.groupKey).Inc()
+ return err
+ }
+
+ err = a.remoteWriter.Store(a.ctx, req)
+ if err != nil {
+ level.Error(a.logger).Log("msg", "could not store recording rule samples", "err", err)
+ a.metrics.remoteWriteErrors.WithLabelValues(a.userID, a.groupKey).Inc()
+ return err
+ }
+
+ // Clear the queue on a successful response
+ a.queue.Clear()
+
+ a.metrics.samplesQueued.WithLabelValues(a.userID, a.groupKey).Set(0)
+
+ return nil
+}
+
+func (a *RemoteWriteAppender) Rollback() error {
+ a.queue.Clear()
+
+ return nil
+}
+
+func (a *RemoteWriteAppender) WithQueueCapacity(capacity int) error {
+ if err := a.queue.SetCapacity(capacity); err != nil {
+ return err
+ }
+
+ a.metrics.samplesQueueCapacity.WithLabelValues(a.userID).Set(float64(capacity))
+ return nil
+}
+
+func retrieveGroupKeyFromContext(ctx context.Context) string {
+ data, found := ctx.Value(promql.QueryOrigin{}).(map[string]interface{})
+ if !found {
+ return ""
+ }
+
+ ruleGroup, found := data["ruleGroup"].(map[string]string)
+ if !found {
+ return ""
+ }
+
+ file, found := ruleGroup["file"]
+ if !found {
+ return ""
+ }
+
+ name, found := ruleGroup["name"]
+ if !found {
+ return ""
+ }
+
+ return rules.GroupKey(file, name)
+}
diff --git a/pkg/ruler/appender_test.go b/pkg/ruler/appender_test.go
new file mode 100644
index 0000000000000..34e959af72489
--- /dev/null
+++ b/pkg/ruler/appender_test.go
@@ -0,0 +1,327 @@
+package ruler
+
+import (
+ "context"
+ "fmt"
+ "net/url"
+ "testing"
+ "time"
+
+ "github.com/cortexproject/cortex/pkg/cortexpb"
+ "github.com/go-kit/kit/log"
+ "github.com/prometheus/client_golang/prometheus"
+ promConfig "github.com/prometheus/common/config"
+ "github.com/prometheus/prometheus/config"
+ "github.com/prometheus/prometheus/pkg/labels"
+ "github.com/prometheus/prometheus/promql"
+ "github.com/prometheus/prometheus/rules"
+ "github.com/stretchr/testify/mock"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/pkg/util"
+ "github.com/grafana/loki/pkg/validation"
+)
+
+var (
+ logger = log.NewNopLogger()
+ fakeUserID = "fake"
+ emptyWriteRequest = []byte{}
+ queueCapacity = 10
+ metrics = newRemoteWriteMetrics(prometheus.DefaultRegisterer)
+)
+
+func TestGroupKeyRetrieval(t *testing.T) {
+ ruleFile := "/my/file"
+ groupName := "my-group"
+
+ ctx := createOriginContext(ruleFile, groupName)
+ // group key should match value derived from context
+ require.Equal(t, rules.GroupKey(ruleFile, groupName), retrieveGroupKeyFromContext(ctx))
+
+ // group key should be blank if context does not contain expected data
+ require.Equal(t, "", retrieveGroupKeyFromContext(context.TODO()))
+}
+
+// TestMemoizedAppenders tests that appenders are memoized by their associated group key
+func TestMemoizedAppenders(t *testing.T) {
+ ctx := createOriginContext("/rule/file", "rule-group")
+ appendable := createBasicAppendable(queueCapacity)
+
+ // context passing a valid group key will allow the appender to be memoized
+ appender := appendable.Appender(ctx)
+ require.Same(t, appender, appendable.Appender(ctx))
+
+ // a missing or invalid group key will force a new appender to be created each time
+ ctx = promql.NewOriginContext(context.TODO(), nil)
+ appender = appendable.Appender(ctx)
+ require.NotSame(t, appender, appendable.Appender(ctx))
+}
+
+// TestMemoizedAppendersWithRuntimeCapacityChange tests that memoized appenders can reconfigure their capacity
+func TestMemoizedAppendersWithRuntimeCapacityChange(t *testing.T) {
+ ctx := createOriginContext("/rule/file", "rule-group")
+ appendable := createBasicAppendable(queueCapacity)
+
+ appender := appendable.Appender(ctx)
+
+ // appender is configured with default queue capacity initially
+ capacity := appender.(*RemoteWriteAppender).queue.Capacity()
+ require.Equal(t, queueCapacity, capacity)
+
+ newCapacity := 123
+
+ // reconfigure the overrides to simulate a runtime config change
+ appendable.overrides = fakeLimits(newCapacity)
+
+ // appender is reconfigured with new queue capacity when retrieved again
+ appender = appendable.Appender(ctx)
+ capacity = appender.(*RemoteWriteAppender).queue.Capacity()
+ require.Equal(t, newCapacity, capacity)
+}
+
+func TestAppenderSeparationByRuleGroup(t *testing.T) {
+ ctxA := createOriginContext("/rule/fileA", "rule-groupA")
+ ctxB := createOriginContext("/rule/fileB", "rule-groupB")
+ appendable := createBasicAppendable(queueCapacity)
+
+ appenderA := appendable.Appender(ctxA)
+ appenderB := appendable.Appender(ctxB)
+ require.NotSame(t, appenderA, appenderB)
+}
+
+func TestQueueCapacity(t *testing.T) {
+ ctx := createOriginContext("/rule/file", "rule-group")
+ appendable := createBasicAppendable(queueCapacity)
+
+ appender := appendable.Appender(ctx).(*RemoteWriteAppender)
+ require.Equal(t, appender.queue.Capacity(), queueCapacity)
+}
+
+func TestQueueCapacityTenantOverride(t *testing.T) {
+ ctx := createOriginContext("/rule/file", "rule-group")
+ appendable := createBasicAppendable(queueCapacity)
+
+ overriddenCapacity := 999
+ overrides, err := validation.NewOverrides(validation.Limits{}, func(userID string) *validation.Limits {
+ return &validation.Limits{
+ RulerRemoteWriteQueueCapacity: overriddenCapacity,
+ }
+ })
+ require.Nil(t, err)
+ appendable.overrides = overrides
+
+ appender := appendable.Appender(ctx).(*RemoteWriteAppender)
+ require.Equal(t, appender.queue.Capacity(), overriddenCapacity)
+}
+
+func TestAppendSample(t *testing.T) {
+ ctx := createOriginContext("/rule/file", "rule-group")
+ appendable := createBasicAppendable(queueCapacity)
+ appender := appendable.Appender(ctx).(*RemoteWriteAppender)
+
+ labels := labels.Labels{
+ labels.Label{
+ Name: "cluster",
+ Value: "us-central1",
+ },
+ }
+ ts := time.Now().Unix()
+ val := 91.2
+
+ sample := queueEntry{
+ labels: labels,
+ sample: cortexpb.Sample{
+ Value: val,
+ TimestampMs: ts,
+ },
+ }
+
+ _, err := appender.Append(0, labels, ts, val)
+ require.Nil(t, err)
+
+ require.Equal(t, appender.queue.Entries()[0], sample)
+}
+
+func TestSuccessfulRemoteWriteSample(t *testing.T) {
+ client := &MockRemoteWriteClient{}
+
+ appendable := createBasicAppendable(queueCapacity)
+
+ appender := appendable.Appender(context.TODO()).(*RemoteWriteAppender)
+ appender.remoteWriter = client
+
+ client.On("PrepareRequest", mock.Anything).Return(emptyWriteRequest, nil).Once()
+ client.On("Store", mock.Anything, mock.Anything).Return(nil).Once()
+
+ _, err := appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.2)
+ require.Nil(t, err)
+
+ // commit didn't return any error, which means a successful write
+ err = appender.Commit()
+ require.Nil(t, err)
+
+ // queue should be cleared on successful write
+ require.Zero(t, appender.queue.Length())
+
+ client.AssertExpectations(t)
+}
+
+func TestUnsuccessfulRemoteWritePrepare(t *testing.T) {
+ client := &MockRemoteWriteClient{}
+
+ appendable := createBasicAppendable(queueCapacity)
+
+ appender := appendable.Appender(context.TODO()).(*RemoteWriteAppender)
+ appender.remoteWriter = client
+
+ client.On("PrepareRequest", mock.Anything).Return(emptyWriteRequest, fmt.Errorf("some error")).Once()
+ _, err := appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.2)
+ require.Nil(t, err)
+
+ // commit fails if PrepareRequest returns an error
+ err = appender.Commit()
+ require.NotNil(t, err)
+
+ // queue should NOT be cleared on unsuccessful write
+ require.NotZero(t, appender.queue.Length())
+
+ client.AssertExpectations(t)
+}
+
+func TestUnsuccessfulRemoteWriteStore(t *testing.T) {
+ client := &MockRemoteWriteClient{}
+
+ appendable := createBasicAppendable(queueCapacity)
+
+ appender := appendable.Appender(context.TODO()).(*RemoteWriteAppender)
+ appender.remoteWriter = client
+
+ client.On("PrepareRequest", mock.Anything).Return(emptyWriteRequest, nil).Once()
+ client.On("Store", mock.Anything, mock.Anything).Return(fmt.Errorf("some error")).Once()
+ _, err := appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.2)
+ require.Nil(t, err)
+
+ // commit fails if Store returns an error
+ err = appender.Commit()
+ require.NotNil(t, err)
+
+ // queue should NOT be cleared on unsuccessful write
+ require.NotZero(t, appender.queue.Length())
+
+ client.AssertExpectations(t)
+}
+
+func TestEmptyRemoteWrite(t *testing.T) {
+ client := &MockRemoteWriteClient{}
+
+ appendable := createBasicAppendable(queueCapacity)
+ appender := appendable.Appender(context.TODO()).(*RemoteWriteAppender)
+ appender.remoteWriter = client
+
+ // queue should be empty
+ require.Zero(t, appender.queue.Length())
+
+ // no error returned
+ err := appender.Commit()
+ require.Nil(t, err)
+
+ // PrepareRequest & Store were not called either
+ client.AssertExpectations(t)
+}
+
+func TestAppenderRollback(t *testing.T) {
+ appendable := createBasicAppendable(queueCapacity)
+ appender := appendable.Appender(context.TODO()).(*RemoteWriteAppender)
+
+ appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.2) //nolint:errcheck
+ appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.2) //nolint:errcheck
+ appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.2) //nolint:errcheck
+
+ require.Equal(t, 3, appender.queue.Length())
+
+ require.Nil(t, appender.Rollback())
+ require.Zero(t, appender.queue.Length())
+}
+
+func TestAppenderEvictOldest(t *testing.T) {
+ capacity := 2
+ appendable := createBasicAppendable(capacity)
+
+ appender := appendable.Appender(context.TODO()).(*RemoteWriteAppender)
+
+ appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.2) //nolint:errcheck
+ appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.3) //nolint:errcheck
+ appender.Append(0, labels.Labels{}, time.Now().UnixNano(), 11.4) //nolint:errcheck
+
+ // capacity is enforced
+ require.Equal(t, capacity, appender.queue.Length())
+
+ // only two newest samples are kept
+ require.Equal(t, appender.queue.Entries()[0].(queueEntry).sample.Value, 11.3)
+ require.Equal(t, appender.queue.Entries()[1].(queueEntry).sample.Value, 11.4)
+}
+
+// context is created by ruler, passing along details of the rule being executed
+// see github.com/prometheus/prometheus/rules/manager.go
+// -> func (g *Group) run(ctx context.Context)
+func createOriginContext(ruleFile, groupName string) context.Context {
+ return promql.NewOriginContext(context.TODO(), map[string]interface{}{
+ "ruleGroup": map[string]string{
+ "file": ruleFile,
+ "name": groupName,
+ },
+ })
+}
+
+func createBasicAppendable(queueCapacity int) *RemoteWriteAppendable {
+ target, err := url.Parse("http://some/target")
+ if err != nil {
+ panic(err)
+ }
+
+ return newRemoteWriteAppendable(
+ Config{
+ RemoteWrite: RemoteWriteConfig{
+ Enabled: true,
+ Client: config.RemoteWriteConfig{
+ URL: &promConfig.URL{URL: target},
+ },
+ },
+ },
+ fakeLimits(queueCapacity),
+ logger,
+ fakeUserID,
+ metrics,
+ )
+}
+
+func fakeLimits(queueCapacity int) RulesLimits {
+ o, err := validation.NewOverrides(validation.Limits{
+ RulerRemoteWriteQueueCapacity: queueCapacity,
+ }, nil)
+ if err != nil {
+ panic(err)
+ }
+ return o
+}
+
+type MockRemoteWriteClient struct {
+ mock.Mock
+}
+
+// Store stores the given samples in the remote storage.
+func (c *MockRemoteWriteClient) Store(ctx context.Context, data []byte) error {
+ args := c.Called(ctx, data)
+ return args.Error(0)
+}
+
+// Name uniquely identifies the remote storage.
+func (c *MockRemoteWriteClient) Name() string { return "" }
+
+// Endpoint is the remote read or write endpoint for the storage client.
+func (c *MockRemoteWriteClient) Endpoint() string { return "" }
+
+func (c *MockRemoteWriteClient) PrepareRequest(queue *util.EvictingQueue) ([]byte, error) {
+ args := c.Called(queue)
+ return args.Get(0).([]byte), args.Error(1)
+}
diff --git a/pkg/ruler/manager/compat.go b/pkg/ruler/compat.go
similarity index 87%
rename from pkg/ruler/manager/compat.go
rename to pkg/ruler/compat.go
index 05a56692762a3..1ef56ca8ed357 100644
--- a/pkg/ruler/manager/compat.go
+++ b/pkg/ruler/compat.go
@@ -1,4 +1,4 @@
-package manager
+package ruler
import (
"bytes"
@@ -7,8 +7,11 @@ import (
"strings"
"time"
+ "github.com/prometheus/prometheus/storage"
+
"github.com/cortexproject/cortex/pkg/ruler"
"github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/model"
@@ -30,7 +33,9 @@ import (
// RulesLimits is the one function we need from limits.Overrides, and
// is here to limit coupling.
type RulesLimits interface {
- EvaluationDelay(usedID string) time.Duration
+ ruler.RulesLimits
+
+ RulerRemoteWriteQueueCapacity(userID string) int
}
// engineQueryFunc returns a new query function using the rules.EngineQueryFunc function
@@ -84,11 +89,12 @@ func (m *MultiTenantManager) ValidateRuleGroup(grp rulefmt.RuleGroup) []error {
}
func MemstoreTenantManager(
- cfg ruler.Config,
+ cfg Config,
engine *logql.Engine,
overrides RulesLimits,
) ruler.ManagerFactory {
- var metrics *Metrics
+ var msMetrics *memstoreMetrics
+ var rwMetrics *remoteWriteMetrics
return ruler.ManagerFactory(func(
ctx context.Context,
@@ -97,17 +103,24 @@ func MemstoreTenantManager(
logger log.Logger,
reg prometheus.Registerer,
) ruler.RulesManager {
- // We'll ignore the passed registere and use the default registerer to avoid prefix issues and other weirdness.
+ // We'll ignore the passed registerer and use the default registerer to avoid prefix issues and other weirdness.
// This closure prevents re-registering.
- if metrics == nil {
- metrics = NewMetrics(prometheus.DefaultRegisterer)
+ registerer := prometheus.DefaultRegisterer
+
+ if msMetrics == nil {
+ msMetrics = newMemstoreMetrics(registerer)
+ }
+
+ if rwMetrics == nil {
+ rwMetrics = newRemoteWriteMetrics(registerer)
}
+
logger = log.With(logger, "user", userID)
queryFunc := engineQueryFunc(engine, overrides, userID)
- memStore := NewMemStore(userID, queryFunc, metrics, 5*time.Minute, log.With(logger, "subcomponent", "MemStore"))
+ memStore := NewMemStore(userID, queryFunc, msMetrics, 5*time.Minute, log.With(logger, "subcomponent", "MemStore"))
mgr := rules.NewManager(&rules.ManagerOptions{
- Appendable: NoopAppender{},
+ Appendable: newAppendable(cfg, overrides, logger, userID, rwMetrics),
Queryable: memStore,
QueryFunc: queryFunc,
Context: user.InjectOrgID(ctx, userID),
@@ -128,6 +141,15 @@ func MemstoreTenantManager(
})
}
+func newAppendable(cfg Config, overrides RulesLimits, logger log.Logger, userID string, metrics *remoteWriteMetrics) storage.Appendable {
+ if !cfg.RemoteWrite.Enabled {
+ level.Info(logger).Log("msg", "remote-write is disabled")
+ return &NoopAppender{}
+ }
+
+ return newRemoteWriteAppendable(cfg, overrides, logger, userID, metrics)
+}
+
type GroupLoader struct{}
func (GroupLoader) Parse(query string) (parser.Expr, error) {
diff --git a/pkg/ruler/manager/compat_test.go b/pkg/ruler/compat_test.go
similarity index 73%
rename from pkg/ruler/manager/compat_test.go
rename to pkg/ruler/compat_test.go
index 7cddb3f915281..8ac490ac6f8f4 100644
--- a/pkg/ruler/manager/compat_test.go
+++ b/pkg/ruler/compat_test.go
@@ -1,13 +1,22 @@
-package manager
+package ruler
import (
+ "context"
"fmt"
"io/ioutil"
"os"
"strings"
"testing"
+ "time"
+ "github.com/cortexproject/cortex/pkg/ruler"
+ "github.com/go-kit/kit/log"
+ "github.com/prometheus/prometheus/config"
"github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/pkg/iter"
+ "github.com/grafana/loki/pkg/logql"
+ "github.com/grafana/loki/pkg/validation"
)
func Test_Load(t *testing.T) {
@@ -265,3 +274,65 @@ groups:
}
}
+
+// TestInvalidRemoteWriteConfig tests that an incomplete remote-write config fails validation
+func TestInvalidRemoteWriteConfig(t *testing.T) {
+	// if remote-write is not enabled, validation passes
+ cfg := Config{
+ Config: ruler.Config{},
+ RemoteWrite: RemoteWriteConfig{
+ Enabled: false,
+ },
+ }
+ require.Nil(t, cfg.RemoteWrite.Validate())
+
+ // if no remote-write URL is configured, validation fails
+ cfg = Config{
+ Config: ruler.Config{},
+ RemoteWrite: RemoteWriteConfig{
+ Enabled: true,
+ Client: config.RemoteWriteConfig{
+ URL: nil,
+ },
+ },
+ }
+ require.Error(t, cfg.RemoteWrite.Validate())
+}
+
+// TestNoopAppender tests that a NoopAppender is created when remote-write is disabled
+func TestNoopAppender(t *testing.T) {
+ cfg := Config{
+ Config: ruler.Config{},
+ RemoteWrite: RemoteWriteConfig{
+ Enabled: false,
+ },
+ }
+ require.False(t, cfg.RemoteWrite.Enabled)
+
+ appendable := newAppendable(cfg, &validation.Overrides{}, log.NewNopLogger(), "fake", metrics)
+ appender := appendable.Appender(context.TODO())
+ require.IsType(t, NoopAppender{}, appender)
+}
+
+// TestNonMetricQuery tests that only metric queries can be executed in the query function,
+// as both alert and recording rules rely on metric queries being run
+func TestNonMetricQuery(t *testing.T) {
+ overrides, err := validation.NewOverrides(validation.Limits{}, nil)
+ require.Nil(t, err)
+
+ engine := logql.NewEngine(logql.EngineOpts{}, &FakeQuerier{}, overrides)
+ queryFunc := engineQueryFunc(engine, overrides, "fake")
+
+ _, err = queryFunc(context.TODO(), `{job="nginx"}`, time.Now())
+ require.Error(t, err, "rule result is not a vector or scalar")
+}
+
+type FakeQuerier struct{}
+
+func (q *FakeQuerier) SelectLogs(context.Context, logql.SelectLogParams) (iter.EntryIterator, error) {
+ return iter.NoopIterator, nil
+}
+
+func (q *FakeQuerier) SelectSamples(context.Context, logql.SelectSampleParams) (iter.SampleIterator, error) {
+ return iter.NoopIterator, nil
+}
diff --git a/pkg/ruler/config.go b/pkg/ruler/config.go
new file mode 100644
index 0000000000000..8a40c77e5d95d
--- /dev/null
+++ b/pkg/ruler/config.go
@@ -0,0 +1,52 @@
+package ruler
+
+import (
+ "flag"
+ "fmt"
+
+ "github.com/cortexproject/cortex/pkg/ruler"
+ "github.com/pkg/errors"
+ "github.com/prometheus/prometheus/config"
+)
+
+type Config struct {
+ ruler.Config `yaml:",inline"`
+
+ RemoteWrite RemoteWriteConfig `yaml:"remote_write,omitempty"`
+}
+
+func (c *Config) RegisterFlags(f *flag.FlagSet) {
+ c.Config.RegisterFlags(f)
+ c.RemoteWrite.RegisterFlags(f)
+}
+
+// Validate overrides the embedded cortex variant which expects a cortex limits struct. Instead copy the relevant bits over.
+func (c *Config) Validate() error {
+ if err := c.StoreConfig.Validate(); err != nil {
+ return fmt.Errorf("invalid ruler store config: %w", err)
+ }
+
+ if err := c.RemoteWrite.Validate(); err != nil {
+ return fmt.Errorf("invalid ruler remote-write config: %w", err)
+ }
+
+ return nil
+}
+
+type RemoteWriteConfig struct {
+ Client config.RemoteWriteConfig `yaml:"client"`
+ Enabled bool `yaml:"enabled"`
+}
+
+func (c *RemoteWriteConfig) Validate() error {
+ if c.Enabled && c.Client.URL == nil {
+ return errors.New("remote-write enabled but client URL is not configured")
+ }
+
+ return nil
+}
+
+// RegisterFlags adds the flags required to config this to the given FlagSet.
+func (c *RemoteWriteConfig) RegisterFlags(f *flag.FlagSet) {
+ f.BoolVar(&c.Enabled, "ruler.remote-write.enabled", false, "Remote-write recording rule samples to Prometheus-compatible remote-write receiver.")
+}
diff --git a/pkg/ruler/manager/memstore.go b/pkg/ruler/memstore.go
similarity index 91%
rename from pkg/ruler/manager/memstore.go
rename to pkg/ruler/memstore.go
index f7853d79773ed..82f0aac9c39a0 100644
--- a/pkg/ruler/manager/memstore.go
+++ b/pkg/ruler/memstore.go
@@ -1,4 +1,4 @@
-package manager
+package ruler
import (
"context"
@@ -46,23 +46,23 @@ func ForStateMetric(base labels.Labels, alertName string) labels.Labels {
return b.Labels()
}
-type Metrics struct {
- Evaluations *prometheus.CounterVec
- Samples prometheus.Gauge // in memory samples
- CacheHits *prometheus.CounterVec // cache hits on in memory samples
+type memstoreMetrics struct {
+ evaluations *prometheus.CounterVec
+ samples prometheus.Gauge // in memory samples
+ cacheHits *prometheus.CounterVec // cache hits on in memory samples
}
-func NewMetrics(r prometheus.Registerer) *Metrics {
- return &Metrics{
- Evaluations: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+func newMemstoreMetrics(r prometheus.Registerer) *memstoreMetrics {
+ return &memstoreMetrics{
+ evaluations: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
Namespace: "loki",
Name: "ruler_memory_for_state_evaluations_total",
}, []string{"status", "tenant"}),
- Samples: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ samples: promauto.With(r).NewGauge(prometheus.GaugeOpts{
Namespace: "loki",
Name: "ruler_memory_samples",
}),
- CacheHits: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ cacheHits: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
Namespace: "loki",
Name: "ruler_memory_for_state_cache_hits_total",
}, []string{"tenant"}),
@@ -77,7 +77,7 @@ type MemStore struct {
mtx sync.Mutex
userID string
queryFunc rules.QueryFunc
- metrics *Metrics
+ metrics *memstoreMetrics
mgr RuleIter
logger log.Logger
rules map[string]*RuleCache
@@ -87,7 +87,7 @@ type MemStore struct {
cleanupInterval time.Duration
}
-func NewMemStore(userID string, queryFunc rules.QueryFunc, metrics *Metrics, cleanupInterval time.Duration, logger log.Logger) *MemStore {
+func NewMemStore(userID string, queryFunc rules.QueryFunc, metrics *memstoreMetrics, cleanupInterval time.Duration, logger log.Logger) *MemStore {
s := &MemStore{
userID: userID,
metrics: metrics,
@@ -243,7 +243,7 @@ func (m *memStoreQuerier) Select(sortSeries bool, params *storage.SelectHints, m
smpl, cached := cache.Get(m.ts, ls)
if cached {
- m.metrics.CacheHits.WithLabelValues(m.userID).Inc()
+ m.metrics.cacheHits.WithLabelValues(m.userID).Inc()
level.Debug(m.logger).Log("msg", "result cached", "rule", ruleKey)
// Assuming the result is cached but the desired series is not in the result, it wouldn't be considered active.
if smpl == nil {
@@ -265,10 +265,10 @@ func (m *memStoreQuerier) Select(sortSeries bool, params *storage.SelectHints, m
vec, err := m.queryFunc(m.ctx, rule.Query().String(), m.ts.Add(-rule.HoldDuration()))
if err != nil {
level.Info(m.logger).Log("msg", "error querying for rule", "rule", ruleKey, "err", err.Error())
- m.metrics.Evaluations.WithLabelValues(statusFailure, m.userID).Inc()
+ m.metrics.evaluations.WithLabelValues(statusFailure, m.userID).Inc()
return storage.NoopSeriesSet()
}
- m.metrics.Evaluations.WithLabelValues(statusSuccess, m.userID).Inc()
+ m.metrics.evaluations.WithLabelValues(statusSuccess, m.userID).Inc()
level.Debug(m.logger).Log("msg", "rule state successfully restored", "rule", ruleKey, "len", len(vec))
// translate the result into the ALERTS_FOR_STATE series for caching,
@@ -322,11 +322,11 @@ func (*memStoreQuerier) Close() error { return nil }
type RuleCache struct {
mtx sync.Mutex
- metrics *Metrics
+ metrics *memstoreMetrics
data map[int64]map[uint64]promql.Sample
}
-func NewRuleCache(metrics *Metrics) *RuleCache {
+func NewRuleCache(metrics *memstoreMetrics) *RuleCache {
return &RuleCache{
data: make(map[int64]map[uint64]promql.Sample),
metrics: metrics,
@@ -345,7 +345,7 @@ func (c *RuleCache) Set(ts time.Time, vec promql.Vector) {
for _, sample := range vec {
tsMap[sample.Metric.Hash()] = sample
}
- c.metrics.Samples.Add(float64(len(vec)))
+ c.metrics.samples.Add(float64(len(vec)))
}
// Get returns ok if that timestamp's result is cached.
@@ -377,7 +377,7 @@ func (c *RuleCache) CleanupOldSamples(olderThan time.Time) (empty bool) {
for ts, tsMap := range c.data {
if ts < ns {
delete(c.data, ts)
- c.metrics.Samples.Add(-float64(len(tsMap)))
+ c.metrics.samples.Add(-float64(len(tsMap)))
}
}
diff --git a/pkg/ruler/manager/memstore_test.go b/pkg/ruler/memstore_test.go
similarity index 98%
rename from pkg/ruler/manager/memstore_test.go
rename to pkg/ruler/memstore_test.go
index 2d2569ee882c7..2849257cc8555 100644
--- a/pkg/ruler/manager/memstore_test.go
+++ b/pkg/ruler/memstore_test.go
@@ -1,4 +1,4 @@
-package manager
+package ruler
import (
"context"
@@ -15,7 +15,7 @@ import (
)
var (
- NilMetrics = NewMetrics(nil)
+ NilMetrics = newMemstoreMetrics(nil)
NilLogger = log.NewNopLogger()
)
diff --git a/pkg/ruler/remote_write.go b/pkg/ruler/remote_write.go
new file mode 100644
index 0000000000000..13b4f910df532
--- /dev/null
+++ b/pkg/ruler/remote_write.go
@@ -0,0 +1,138 @@
+package ruler
+
+import (
+ "fmt"
+
+ "github.com/cortexproject/cortex/pkg/cortexpb"
+ "github.com/golang/snappy"
+ "github.com/pkg/errors"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/prometheus/prometheus/pkg/labels"
+ "github.com/prometheus/prometheus/storage/remote"
+
+ "github.com/grafana/loki/pkg/util"
+ "github.com/grafana/loki/pkg/util/build"
+)
+
+var UserAgent = fmt.Sprintf("loki-remote-write/%s", build.Version)
+
+type queueEntry struct {
+ labels labels.Labels
+ sample cortexpb.Sample
+}
+
+type remoteWriter interface {
+ remote.WriteClient
+
+ PrepareRequest(queue *util.EvictingQueue) ([]byte, error)
+}
+
+type remoteWriteClient struct {
+ remote.WriteClient
+
+ labels []labels.Labels
+ samples []cortexpb.Sample
+}
+
+type remoteWriteMetrics struct {
+ samplesEvicted *prometheus.CounterVec
+ samplesQueuedTotal *prometheus.CounterVec
+ samplesQueued *prometheus.GaugeVec
+ samplesQueueCapacity *prometheus.GaugeVec
+ remoteWriteErrors *prometheus.CounterVec
+}
+
+func newRemoteWriter(cfg Config, userID string) (remoteWriter, error) {
+ writeClient, err := remote.NewWriteClient("recording_rules", &remote.ClientConfig{
+ URL: cfg.RemoteWrite.Client.URL,
+ Timeout: cfg.RemoteWrite.Client.RemoteTimeout,
+ HTTPClientConfig: cfg.RemoteWrite.Client.HTTPClientConfig,
+ Headers: util.MergeMaps(cfg.RemoteWrite.Client.Headers, map[string]string{
+ "X-Scope-OrgID": userID,
+ "User-Agent": UserAgent,
+ }),
+ })
+ if err != nil {
+ return nil, errors.Wrapf(err, "could not create remote-write client for tenant: %v", userID)
+ }
+
+ return &remoteWriteClient{
+ WriteClient: writeClient,
+ }, nil
+}
+
+func (r *remoteWriteClient) prepare(queue *util.EvictingQueue) error {
+ // reuse slices, resize if they are not big enough
+ if cap(r.labels) < queue.Length() {
+ r.labels = make([]labels.Labels, 0, queue.Length())
+ }
+ if cap(r.samples) < queue.Length() {
+ r.samples = make([]cortexpb.Sample, 0, queue.Length())
+ }
+
+ r.labels = r.labels[:0]
+ r.samples = r.samples[:0]
+
+ for _, entry := range queue.Entries() {
+ entry, ok := entry.(queueEntry)
+ if !ok {
+ return fmt.Errorf("queue contains invalid entry of type: %T", entry)
+ }
+
+ r.labels = append(r.labels, entry.labels)
+ r.samples = append(r.samples, entry.sample)
+ }
+
+ return nil
+}
+
+// PrepareRequest takes the given queue and serializes it into a compressed
+// proto write request that will be sent to Cortex
+func (r *remoteWriteClient) PrepareRequest(queue *util.EvictingQueue) ([]byte, error) {
+ // prepare labels and samples from queue
+ err := r.prepare(queue)
+ if err != nil {
+ return nil, err
+ }
+
+ req := cortexpb.ToWriteRequest(r.labels, r.samples, nil, cortexpb.RULE)
+ defer cortexpb.ReuseSlice(req.Timeseries)
+
+ reqBytes, err := req.Marshal()
+ if err != nil {
+ return nil, err
+ }
+
+ return snappy.Encode(nil, reqBytes), nil
+}
+
+func newRemoteWriteMetrics(r prometheus.Registerer) *remoteWriteMetrics {
+ return &remoteWriteMetrics{
+ samplesEvicted: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: "loki",
+ Name: "recording_rules_samples_evicted_total",
+ Help: "Number of samples evicted from queue; queue is full!",
+ }, []string{"tenant", "group_key"}),
+ samplesQueuedTotal: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: "loki",
+ Name: "recording_rules_samples_queued_total",
+ Help: "Number of samples queued in total.",
+ }, []string{"tenant", "group_key"}),
+ samplesQueued: promauto.With(r).NewGaugeVec(prometheus.GaugeOpts{
+ Namespace: "loki",
+ Name: "recording_rules_samples_queued_current",
+ Help: "Number of samples queued to be remote-written.",
+ }, []string{"tenant", "group_key"}),
+ samplesQueueCapacity: promauto.With(r).NewGaugeVec(prometheus.GaugeOpts{
+ Namespace: "loki",
+ Name: "recording_rules_samples_queue_capacity",
+ Help: "Number of samples that can be queued before eviction of oldest samples occurs.",
+ }, []string{"tenant"}),
+ remoteWriteErrors: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: "loki",
+ Name: "recording_rules_remote_write_errors",
+ Help: "Number of samples that failed to be remote-written due to error.",
+ }, []string{"tenant", "group_key"}),
+ }
+}
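The `prepare` method above reuses its label and sample slices across calls, reallocating only when the queue outgrows their capacity. A minimal standalone sketch of that buffer-reuse idiom (the `buffer` type and its field names are illustrative, not Loki's actual code):

```go
package main

import "fmt"

// buffer reuses its backing array across fills, reallocating only
// when the incoming batch exceeds the current capacity.
type buffer struct {
	vals []int
}

func (b *buffer) fill(batch []int) {
	if cap(b.vals) < len(batch) {
		// not enough room: allocate a fresh backing array
		b.vals = make([]int, 0, len(batch))
	}
	b.vals = b.vals[:0] // keep capacity, drop old contents
	b.vals = append(b.vals, batch...)
}

func main() {
	b := &buffer{}
	b.fill([]int{1, 2, 3, 4, 5})
	fmt.Println(len(b.vals), cap(b.vals)) // 5 5
	b.fill([]int{9, 8})
	fmt.Println(len(b.vals), cap(b.vals)) // 2 5 (capacity reused, no new allocation)
}
```

The second fill truncates with `b.vals[:0]` instead of reallocating, which is what keeps per-evaluation allocations low on the remote-write hot path.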
diff --git a/pkg/ruler/remote_write_test.go b/pkg/ruler/remote_write_test.go
new file mode 100644
index 0000000000000..43b77bbf374c5
--- /dev/null
+++ b/pkg/ruler/remote_write_test.go
@@ -0,0 +1,113 @@
+package ruler
+
+import (
+ "math/rand"
+ "testing"
+ "time"
+
+ "github.com/cortexproject/cortex/pkg/cortexpb"
+ "github.com/golang/snappy"
+ "github.com/prometheus/prometheus/pkg/labels"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/pkg/util"
+)
+
+func TestPrepare(t *testing.T) {
+ client := remoteWriteClient{}
+ queue, err := util.NewEvictingQueue(1000, func() {})
+ require.Nil(t, err)
+
+ lbs := labels.Labels{
+ labels.Label{
+ Name: "cluster",
+ Value: "us-central1",
+ },
+ }
+ sample := cortexpb.Sample{
+ Value: rand.Float64(),
+ TimestampMs: time.Now().Unix(),
+ }
+
+ // first start with 10 items
+ for i := 0; i < 10; i++ {
+ queue.Append(queueEntry{labels: lbs, sample: sample})
+ }
+ require.Nil(t, client.prepare(queue))
+
+ assert.Equal(t, len(client.labels), queue.Length())
+ assert.Equal(t, len(client.samples), queue.Length())
+ assert.Equal(t, cap(client.labels), 10)
+ assert.Equal(t, cap(client.samples), 10)
+
+ queue.Clear()
+
+ // then resize the internal slices to 100
+ for i := 0; i < 100; i++ {
+ queue.Append(queueEntry{labels: lbs, sample: sample})
+ }
+ require.Nil(t, client.prepare(queue))
+
+ assert.Equal(t, len(client.labels), queue.Length())
+ assert.Equal(t, len(client.samples), queue.Length())
+ assert.Equal(t, cap(client.labels), 100)
+ assert.Equal(t, cap(client.samples), 100)
+
+ queue.Clear()
+
+ // then reuse the existing slice (no resize necessary since 5 < 100)
+ for i := 0; i < 5; i++ {
+ queue.Append(queueEntry{labels: lbs, sample: sample})
+ }
+ require.Nil(t, client.prepare(queue))
+
+ assert.Equal(t, len(client.labels), queue.Length())
+ assert.Equal(t, len(client.samples), queue.Length())
+ // cap remains 100 since no resize was necessary
+ assert.Equal(t, cap(client.labels), 100)
+ assert.Equal(t, cap(client.samples), 100)
+}
+
+func TestPrepareRequest(t *testing.T) {
+ appender := createBasicAppender(t)
+
+ lbs := labels.Labels{
+ labels.Label{
+ Name: "cluster",
+ Value: "us-central1",
+ },
+ }
+ sample := cortexpb.Sample{
+ Value: 70,
+ TimestampMs: time.Now().Unix(),
+ }
+
+ appender.Append(0, lbs, sample.TimestampMs, sample.Value) //nolint:errcheck
+
+ bytes, err := appender.remoteWriter.PrepareRequest(appender.queue)
+ require.Nil(t, err)
+
+ var req cortexpb.WriteRequest
+
+ reqBytes, err := snappy.Decode(nil, bytes)
+ require.Nil(t, err)
+
+ require.Nil(t, req.Unmarshal(reqBytes))
+
+ require.Equal(t, req.Timeseries[0].Labels[0].Name, lbs[0].Name)
+ require.Equal(t, req.Timeseries[0].Labels[0].Value, lbs[0].Value)
+ require.Equal(t, req.Timeseries[0].Samples[0], sample)
+}
+
+func createBasicAppender(t *testing.T) *RemoteWriteAppender {
+ ctx := createOriginContext("/rule/file", "rule-group")
+ appendable := createBasicAppendable(100)
+
+ appender := appendable.Appender(ctx).(*RemoteWriteAppender)
+ client, err := newRemoteWriter(appendable.cfg, "fake")
+ require.Nil(t, err)
+
+ appender.remoteWriter = client
+ return appender
+}
diff --git a/pkg/ruler/ruler.go b/pkg/ruler/ruler.go
index 2e38448af2f52..3dc9d05303668 100644
--- a/pkg/ruler/ruler.go
+++ b/pkg/ruler/ruler.go
@@ -4,30 +4,16 @@ import (
"github.com/cortexproject/cortex/pkg/ruler"
"github.com/cortexproject/cortex/pkg/ruler/rulestore"
"github.com/go-kit/kit/log"
- "github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/grafana/loki/pkg/logql"
- "github.com/grafana/loki/pkg/ruler/manager"
)
-type Config struct {
- ruler.Config `yaml:",inline"`
-}
-
-// Override the embedded cortex variant which expects a cortex limits struct. Instead copy the relevant bits over.
-func (cfg *Config) Validate() error {
- if err := cfg.StoreConfig.Validate(); err != nil {
- return errors.Wrap(err, "invalid storage config")
- }
- return nil
-}
-
-func NewRuler(cfg Config, engine *logql.Engine, reg prometheus.Registerer, logger log.Logger, ruleStore rulestore.RuleStore, limits ruler.RulesLimits) (*ruler.Ruler, error) {
+func NewRuler(cfg Config, engine *logql.Engine, reg prometheus.Registerer, logger log.Logger, ruleStore rulestore.RuleStore, limits RulesLimits) (*ruler.Ruler, error) {
mgr, err := ruler.NewDefaultMultiTenantManager(
cfg.Config,
- manager.MemstoreTenantManager(
- cfg.Config,
+ MemstoreTenantManager(
+ cfg,
engine,
limits,
),
@@ -39,7 +25,7 @@ func NewRuler(cfg Config, engine *logql.Engine, reg prometheus.Registerer, logge
}
return ruler.NewRuler(
cfg.Config,
- manager.MultiTenantManagerAdapter(mgr),
+ MultiTenantManagerAdapter(mgr),
reg,
logger,
ruleStore,
diff --git a/pkg/util/evicting_queue.go b/pkg/util/evicting_queue.go
new file mode 100644
index 0000000000000..f18238ce5884b
--- /dev/null
+++ b/pkg/util/evicting_queue.go
@@ -0,0 +1,93 @@
+package util
+
+import (
+ "errors"
+ "sync"
+)
+
+type EvictingQueue struct {
+ sync.RWMutex
+
+ capacity int
+ entries []interface{}
+ onEvict func()
+}
+
+func NewEvictingQueue(capacity int, onEvict func()) (*EvictingQueue, error) {
+ if err := validateCapacity(capacity); err != nil {
+ return nil, err
+ }
+
+ queue := &EvictingQueue{
+ onEvict: onEvict,
+ entries: make([]interface{}, 0, capacity),
+ }
+
+ err := queue.SetCapacity(capacity)
+ if err != nil {
+ return nil, err
+ }
+
+ return queue, nil
+}
+
+func (q *EvictingQueue) Append(entry interface{}) {
+ q.Lock()
+ defer q.Unlock()
+
+ if len(q.entries) >= q.capacity {
+ q.evictOldest()
+ }
+
+ q.entries = append(q.entries, entry)
+}
+
+func (q *EvictingQueue) evictOldest() {
+ q.onEvict()
+
+ start := (len(q.entries) - q.Capacity()) + 1
+ q.entries = append(q.entries[:0], q.entries[start:]...)
+}
+
+func (q *EvictingQueue) Entries() []interface{} {
+ q.RLock()
+ defer q.RUnlock()
+
+ return q.entries
+}
+
+func (q *EvictingQueue) Length() int {
+ q.RLock()
+ defer q.RUnlock()
+
+ return len(q.entries)
+}
+
+func (q *EvictingQueue) Capacity() int {
+ return q.capacity
+}
+
+func (q *EvictingQueue) SetCapacity(capacity int) error {
+ if err := validateCapacity(capacity); err != nil {
+ return err
+ }
+
+ q.capacity = capacity
+ return nil
+}
+
+func (q *EvictingQueue) Clear() {
+ q.Lock()
+ defer q.Unlock()
+
+ q.entries = q.entries[:0]
+}
+
+func validateCapacity(capacity int) error {
+ if capacity <= 0 {
+ // a queue of 0 (or smaller) capacity is invalid
+ return errors.New("queue cannot have a zero or negative capacity")
+ }
+
+ return nil
+}
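The eviction behavior of `EvictingQueue` can be seen in a small standalone sketch (a simplified, lock-free re-implementation for illustration only; the real type above also guards access with an RWMutex and validates capacity):

```go
package main

import "fmt"

// miniQueue mimics EvictingQueue's append path: appending beyond
// capacity drops the oldest entries and fires onEvict once per eviction.
type miniQueue struct {
	capacity int
	entries  []interface{}
	onEvict  func()
}

func (q *miniQueue) append(entry interface{}) {
	if len(q.entries) >= q.capacity {
		q.onEvict()
		// evict the oldest entries, leaving room for the new one
		start := (len(q.entries) - q.capacity) + 1
		q.entries = append(q.entries[:0], q.entries[start:]...)
	}
	q.entries = append(q.entries, entry)
}

func main() {
	evictions := 0
	q := &miniQueue{capacity: 3, onEvict: func() { evictions++ }}
	for i := 1; i <= 5; i++ {
		q.append(i)
	}
	fmt.Println(q.entries, evictions) // [3 4 5] 2
}
```

Five appends into a capacity-3 queue keep only the three newest entries, matching `TestQueueEvict` and `TestQueueEvictionCallback` below.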
diff --git a/pkg/util/evicting_queue_test.go b/pkg/util/evicting_queue_test.go
new file mode 100644
index 0000000000000..c33af3e2ac1d9
--- /dev/null
+++ b/pkg/util/evicting_queue_test.go
@@ -0,0 +1,130 @@
+package util
+
+import (
+ "math"
+ "math/rand"
+ "sync"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+func noopOnEvict() {}
+
+func TestQueueAppend(t *testing.T) {
+ q, err := NewEvictingQueue(10, noopOnEvict)
+ require.Nil(t, err)
+
+ q.Append(1)
+ q.Append(2)
+ q.Append(3)
+ q.Append(4)
+ q.Append(5)
+
+ require.Equal(t, 5, q.Length())
+}
+
+func TestQueueCapacity(t *testing.T) {
+ q, err := NewEvictingQueue(9, noopOnEvict)
+ require.Nil(t, err)
+ require.Equal(t, 9, q.Capacity())
+
+ q.capacity = 11
+ require.Equal(t, 11, q.Capacity())
+}
+
+func TestZeroCapacityQueue(t *testing.T) {
+ q, err := NewEvictingQueue(0, noopOnEvict)
+ require.Error(t, err)
+ require.Nil(t, q)
+}
+
+func TestNegativeCapacityQueue(t *testing.T) {
+ q, err := NewEvictingQueue(-1, noopOnEvict)
+ require.Error(t, err)
+ require.Nil(t, q)
+}
+
+func TestQueueEvict(t *testing.T) {
+ q, err := NewEvictingQueue(3, noopOnEvict)
+ require.Nil(t, err)
+
+ // appending 5 entries will cause the first (oldest) 2 entries to be evicted
+ entries := []interface{}{1, 2, 3, 4, 5}
+ for _, entry := range entries {
+ q.Append(entry)
+ }
+
+ require.Equal(t, 3, q.Length())
+ require.Equal(t, entries[2:], q.Entries())
+}
+
+func TestQueueClear(t *testing.T) {
+ q, err := NewEvictingQueue(3, noopOnEvict)
+ require.Nil(t, err)
+
+ q.Append(1)
+ q.Clear()
+
+ require.Equal(t, 0, q.Length())
+}
+
+func TestQueueEvictionCallback(t *testing.T) {
+ var evictionCallbackCalls int
+
+ q, err := NewEvictingQueue(3, func() {
+ evictionCallbackCalls++
+ })
+ require.Nil(t, err)
+
+ for i := 0; i < 5; i++ {
+ q.Append(i)
+ }
+
+ require.Equal(t, 2, evictionCallbackCalls)
+}
+
+func TestSafeConcurrentAccess(t *testing.T) {
+ q, err := NewEvictingQueue(3, noopOnEvict)
+ require.Nil(t, err)
+
+ var wg sync.WaitGroup
+
+ for w := 0; w < 30; w++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+
+ for i := 0; i < 500; i++ {
+ q.Append(rand.Int())
+ }
+ }()
+ }
+
+ wg.Wait()
+
+ require.Equal(t, 3, q.Length())
+}
+
+type queueEntry struct {
+ key string
+ value interface{}
+}
+
+func BenchmarkAppendAndEvict(b *testing.B) {
+ capacity := 5000
+ q, err := NewEvictingQueue(capacity, noopOnEvict)
+ require.Nil(b, err)
+
+ b.ResetTimer()
+ b.ReportAllocs()
+
+ for n := 0; n < b.N; n++ {
+ q.Append(&queueEntry{
+ key: "hello",
+ value: "world",
+ })
+ }
+
+ require.EqualValues(b, math.Min(float64(b.N), float64(capacity)), q.Length())
+}
diff --git a/pkg/util/mapmerge.go b/pkg/util/mapmerge.go
new file mode 100644
index 0000000000000..f218644003dc0
--- /dev/null
+++ b/pkg/util/mapmerge.go
@@ -0,0 +1,35 @@
+package util
+
+// CopyMap makes a copy of the given map
+func CopyMap(m map[string]string) map[string]string {
+	if m == nil {
+		return nil
+	}
+
+	newMap := make(map[string]string, len(m))
+
+ for k, v := range m {
+ newMap[k] = v
+ }
+
+ return newMap
+}
+
+// MergeMaps merges the overlay map onto the base map, with overlay taking precedence
+// NOTE: this treats the given base and overlay maps as immutable, and returns a copy
+func MergeMaps(base map[string]string, overlay map[string]string) map[string]string {
+ if base == nil {
+ return CopyMap(overlay)
+ }
+
+ newMap := CopyMap(base)
+ if overlay == nil {
+ return newMap
+ }
+
+ for k, v := range overlay {
+ newMap[k] = v
+ }
+
+ return newMap
+}
diff --git a/pkg/util/mapmerge_test.go b/pkg/util/mapmerge_test.go
new file mode 100644
index 0000000000000..b8ba2c88f8894
--- /dev/null
+++ b/pkg/util/mapmerge_test.go
@@ -0,0 +1,96 @@
+package util
+
+import (
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+func TestMerge(t *testing.T) {
+ var base = map[string]string{
+ "a": "b",
+ "c": "10",
+ "y": "z",
+ }
+
+ var overlay = map[string]string{
+ "a": "z",
+ "c": "10",
+ "d": "e",
+ }
+
+ merged := MergeMaps(base, overlay)
+ require.Equal(t, merged, map[string]string{
+ "a": "z",
+ "c": "10",
+ "d": "e",
+ "y": "z",
+ })
+}
+
+func TestCopy(t *testing.T) {
+ var base = map[string]string{
+ "a": "b",
+ "c": "10",
+ "y": "z",
+ }
+
+	copied := CopyMap(base)
+	require.EqualValues(t, base, copied)
+	require.NotSame(t, base, copied)
+}
+
+func TestNilCopy(t *testing.T) {
+ var base map[string]string
+
+	copied := CopyMap(base)
+	require.EqualValues(t, base, copied)
+	require.NotSame(t, base, copied)
+}
+
+func TestNilBase(t *testing.T) {
+ var overlay = map[string]string{
+ "a": "z",
+ "c": "10",
+ "d": "e",
+ }
+
+ merged := MergeMaps(nil, overlay)
+ require.Equal(t, merged, overlay)
+}
+
+func TestNilOverlay(t *testing.T) {
+ var base = map[string]string{
+ "a": "b",
+ "c": "10",
+ "y": "z",
+ }
+
+ merged := MergeMaps(base, nil)
+ require.Equal(t, merged, base)
+}
+
+// TestImmutability tests that both given maps are unaltered
+func TestImmutability(t *testing.T) {
+ var base = map[string]string{
+ "a": "b",
+ "c": "10",
+ "y": "z",
+ }
+
+ var overlay = map[string]string{
+ "a": "z",
+ "c": "10",
+ "d": "e",
+ }
+
+ beforeBase := CopyMap(base)
+ beforeOverlay := CopyMap(overlay)
+ require.EqualValues(t, base, beforeBase)
+ require.EqualValues(t, overlay, beforeOverlay)
+
+ MergeMaps(base, overlay)
+
+ require.EqualValues(t, base, beforeBase)
+ require.EqualValues(t, overlay, beforeOverlay)
+}
diff --git a/pkg/validation/limits.go b/pkg/validation/limits.go
index d97cef3dfaa45..88950df26bc0e 100644
--- a/pkg/validation/limits.go
+++ b/pkg/validation/limits.go
@@ -60,9 +60,10 @@ type Limits struct {
QuerySplitDuration model.Duration `yaml:"split_queries_by_interval" json:"split_queries_by_interval"`
// Ruler defaults and limits.
- RulerEvaluationDelay model.Duration `yaml:"ruler_evaluation_delay_duration" json:"ruler_evaluation_delay_duration"`
- RulerMaxRulesPerRuleGroup int `yaml:"ruler_max_rules_per_rule_group" json:"ruler_max_rules_per_rule_group"`
- RulerMaxRuleGroupsPerTenant int `yaml:"ruler_max_rule_groups_per_tenant" json:"ruler_max_rule_groups_per_tenant"`
+ RulerEvaluationDelay model.Duration `yaml:"ruler_evaluation_delay_duration" json:"ruler_evaluation_delay_duration"`
+ RulerMaxRulesPerRuleGroup int `yaml:"ruler_max_rules_per_rule_group" json:"ruler_max_rules_per_rule_group"`
+ RulerMaxRuleGroupsPerTenant int `yaml:"ruler_max_rule_groups_per_tenant" json:"ruler_max_rule_groups_per_tenant"`
+ RulerRemoteWriteQueueCapacity int `yaml:"ruler_remote_write_queue_capacity" json:"ruler_remote_write_queue_capacity"`
// Global and per tenant retention
RetentionPeriod model.Duration `yaml:"retention_period" json:"retention_period"`
@@ -122,6 +123,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&l.RulerMaxRulesPerRuleGroup, "ruler.max-rules-per-rule-group", 0, "Maximum number of rules per rule group per-tenant. 0 to disable.")
f.IntVar(&l.RulerMaxRuleGroupsPerTenant, "ruler.max-rule-groups-per-tenant", 0, "Maximum number of rule groups per-tenant. 0 to disable.")
+ f.IntVar(&l.RulerRemoteWriteQueueCapacity, "ruler.remote-write.queue-capacity", 10000, "Capacity of remote-write queues; if a queue exceeds its capacity it will evict oldest samples.")
f.StringVar(&l.PerTenantOverrideConfig, "limits.per-user-override-config", "", "File name of per-user overrides.")
_ = l.RetentionPeriod.Set("744h")
@@ -354,6 +356,11 @@ func (o *Overrides) RulerMaxRuleGroupsPerTenant(userID string) int {
return o.getOverridesForUser(userID).RulerMaxRuleGroupsPerTenant
}
+// RulerRemoteWriteQueueCapacity returns the remote-write queue capacity for a given user.
+func (o *Overrides) RulerRemoteWriteQueueCapacity(userID string) int {
+ return o.getOverridesForUser(userID).RulerRemoteWriteQueueCapacity
+}
+
// RetentionPeriod returns the retention period for a given user.
func (o *Overrides) RetentionPeriod(userID string) time.Duration {
return time.Duration(o.getOverridesForUser(userID).RetentionPeriod)
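The override accessors above all follow the same shape: resolve the tenant's limits, falling back to the defaults when no per-tenant override exists. A standalone sketch of that pattern (types and field names here are made up for illustration, not Loki's actual `Overrides` API):

```go
package main

import "fmt"

type limits struct {
	remoteWriteQueueCapacity int
}

// overrides resolves per-tenant limits, falling back to the defaults
// when a tenant has no explicit entry.
type overrides struct {
	defaults limits
	tenants  map[string]limits
}

func (o *overrides) queueCapacity(tenant string) int {
	if l, ok := o.tenants[tenant]; ok {
		return l.remoteWriteQueueCapacity
	}
	return o.defaults.remoteWriteQueueCapacity
}

func main() {
	o := &overrides{
		defaults: limits{remoteWriteQueueCapacity: 10000},
		tenants:  map[string]limits{"team-a": {remoteWriteQueueCapacity: 500}},
	}
	fmt.Println(o.queueCapacity("team-a")) // 500
	fmt.Println(o.queueCapacity("team-b")) // 10000 (default)
}
```

This is why `RulerRemoteWriteQueueCapacity` gets both a flag-registered default (10000) and a per-tenant accessor on `Overrides`.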
|
ruler
|
Recording Rules (#3766)
|
dc283741884bab9dd65ee0b4a746fb414ba9a378
|
2025-02-15 02:33:32
|
J Stickler
|
docs: fix spacing in Cardinality URL (#16291)
| false
|
diff --git a/docs/sources/get-started/labels/cardinality.md b/docs/sources/get-started/labels/cardinality.md
index 7987931c2d405..7c7cb9b62b47a 100644
--- a/docs/sources/get-started/labels/cardinality.md
+++ b/docs/sources/get-started/labels/cardinality.md
@@ -19,11 +19,11 @@ Other examples of high cardinality attributes include the following:
- Customer ID
- Trace ID
-When we talk about _cardinality_ in Loki we are referring to the combination of labels and values and the number of log streams they create. Loki was not designed or built to support high cardinality label values. In fact, it was built for exactly the opposite. It was built for very long-lived streams and very low cardinality in the labels. In Loki, the fewer labels you use, the better. This is why Loki has a default limit of 15 index labels.
+When we talk about _cardinality_ in Loki we are referring to the combination of labels and values and the number of log streams they create. In Loki, the fewer labels you use, the better. This is why Loki has a default limit of 15 index labels.
High cardinality can result from using labels with a large range of possible values, **or** combining many labels, even if they have a small and finite set of values, such as combining `status_code` and `action`. A typical set of status codes (200, 404, 500) and actions (GET, POST, PUT, PATCH, DELETE) would create 15 unique streams. But, adding just one more label like `endpoint` (/cart, /products, /customers) would triple this to 45 unique streams.
-To see an example of series labels and cardinality, refer to the [LogCLI tutorial] (https://grafana.com/docs/loki/<LOKI_VERSION>/query/logcli/logcli-tutorial/#checking-series-cardinality). As you can see, the cardinality for individual labels can be quite high, even before you begin combining labels for a particular log stream, which increases the cardinality even further.
+To see an example of series labels and cardinality, refer to the [LogCLI tutorial](https://grafana.com/docs/loki/<LOKI_VERSION>/query/logcli/logcli-tutorial/#checking-series-cardinality). As you can see, the cardinality for individual labels can be quite high, even before you begin combining labels for a particular log stream, which increases the cardinality even further.
To view the cardinality of your current labels, you can use [logcli](https://grafana.com/docs/loki/<LOKI_VERSION>/query/logcli/getting-started/).
|
docs
|
fix spacing in Cardinality URL (#16291)
|
0f9088bc89cf4fe4c3b64f9424225e38a4279943
|
2024-07-04 20:52:23
|
benclive
|
chore: use read-only index store for ingester RF1 (#13419)
| false
|
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 2b3fb852918d1..9734d29a7e031 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -899,7 +899,7 @@ func (t *Loki) updateConfigForShipperStore() {
t.Cfg.StorageConfig.TSDBShipperConfig.Mode = indexshipper.ModeWriteOnly
t.Cfg.StorageConfig.TSDBShipperConfig.IngesterDBRetainPeriod = shipperQuerierIndexUpdateDelay(t.Cfg.StorageConfig.IndexCacheValidity, t.Cfg.StorageConfig.TSDBShipperConfig.ResyncInterval)
- case t.Cfg.isTarget(Querier), t.Cfg.isTarget(Ruler), t.Cfg.isTarget(Read), t.Cfg.isTarget(Backend), t.isModuleActive(IndexGateway), t.Cfg.isTarget(BloomCompactor), t.Cfg.isTarget(BloomPlanner), t.Cfg.isTarget(BloomBuilder):
+ case t.Cfg.isTarget(IngesterRF1), t.Cfg.isTarget(Querier), t.Cfg.isTarget(Ruler), t.Cfg.isTarget(Read), t.Cfg.isTarget(Backend), t.isModuleActive(IndexGateway), t.Cfg.isTarget(BloomCompactor), t.Cfg.isTarget(BloomPlanner), t.Cfg.isTarget(BloomBuilder):
// We do not want query to do any updates to index
t.Cfg.StorageConfig.BoltDBShipperConfig.Mode = indexshipper.ModeReadOnly
t.Cfg.StorageConfig.TSDBShipperConfig.Mode = indexshipper.ModeReadOnly
|
chore
|
use read-only index store for ingester RF1 (#13419)
|
8c8454b9db35901896113d3e19eb3862359aeba8
|
2024-07-03 16:28:37
|
benclive
|
feat: exclude and from creating new tokens in patterns (#13395)
| false
|
diff --git a/pkg/pattern/drain/drain_test.go b/pkg/pattern/drain/drain_test.go
index 16e2841e794ea..f725439e89d30 100644
--- a/pkg/pattern/drain/drain_test.go
+++ b/pkg/pattern/drain/drain_test.go
@@ -31,34 +31,28 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
inputFile: `testdata/agent-logfmt.txt`,
format: FormatLogfmt,
patterns: []string{
- `ts=2024-04-16T15:10:42.<_> level=info msg="finished node evaluation" controller_id=module.http.cloudwatch_pipelines node_id=prometheus.scrape.<_> duration=<_>.<_>`,
`ts=2024-04-16T15:10:43.192290389Z caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*19a1cce8-5f04-46e0-a124-292b0dd9b343/testcoordinator/*.log:{batch_kubernetes_io_controller_uid=\"25ec5edf-f78e-468b-b6f3-3b9685f0cc8f\", batch_kubernetes_io_job_name=\"testcoordinator-job-2665838\", container=\"testcoordinator\", controller_uid=\"25ec5edf-f78e-468b-b6f3-3b9685f0cc8f\", job=\"k6-cloud/testcoordinator\", job_name=\"testcoordinator-job-2665838\", name=\"testcoordinator\", namespace=\"k6-cloud\", pod=\"testcoordinator-job-2665838-9g8ds\"}"`,
- `ts=2024-04-16T15:10:43.551782223Z caller=tailer.go:245 level=info component=logs logs_config=default component=tailer msg="stopped tailing file" path=/var/log/pods/grafana-com_marketplaces-api-f67ff7567-gqrvb_35649bfd-52ff-4281-9294-5f65fd5a89fc/marketplaces-api/0.log`,
- `ts=2024-04-16T15:10:43.<_> caller=filetargetmanager.go:<_> level=info component=logs logs_config=default msg="<_> target" key="/var/log/pods/*<_>/<_>/*.log:{<_>=\"<_>\", <_>=\"<_><_><_><_><_><_> <_><_><_><_><_>\", namespace=\"<_>\", pod=\"<_>\", <_>=\"<_>\"}"`,
- `ts=2024-04-16T15:10:43.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_><_> <_> <_> <_><_> <_> <_><_> <_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_> <_><_><_>`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:192 level=info component=logs logs_config=default msg="filetarget: watcher closed, tailer stopped, positions saved" path=/var/log/pods/*<_>/<_>/*.log`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:313 level=info component=logs logs_config=default msg="watching new directory" directory=/var/log/pods/<_>/<_>`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:313 level=info component=logs logs_config=default msg="watching new directory" directory=/var/log/pods/hosted-grafana_.<_>/<_>`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetarget.go:326 level=info component=logs logs_config=default msg="removing directory from watcher" directory=/var/log/pods/hosted-grafana_.<_>/<_>`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/<_>/<_>/<_>.log op=CREATE`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/<_><_><_>/<_><_><_>.<_> op=CREATE`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/<_><_><_>/<_><_><_>.<_>.<_> op=CREATE`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/hosted-grafana_.<_>/<_>/0.log.<_>.<_> op=CREATE`,
- `ts=2024-04-16T15:10:<_>.<_> caller=filetargetmanager.go:<_> level=info component=logs logs_config=default msg="<_> target" key="/var/log/pods/*<_>/<_>/*.log:{app=\"grafana\", conprof=\"true\", container=\"<_>\", instanceId=\"<_>\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"<_>\", plan=\"free\", pod=<_>`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Re-opening moved/deleted file /var/log/pods/<_>/<_>/<_>.log ..."`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Re-opening moved/deleted file /var/log/pods/hosted-grafana_.<_>/<_>/0.log ..."`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Seeked /var/log/pods/<_>/<_>/0.log - &{Offset:0 Whence:0}"`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Seeked /var/log/pods/hosted-grafana_.<_>/<_>/0.log - &{Offset:0 Whence:0}"`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Successfully reopened /var/log/pods/<_>/<_>/<_>.log"`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Successfully reopened /var/log/pods/hosted-grafana_.<_>/<_>/0.log"`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Waiting for /var/log/pods/<_>/<_>/0.log to appear..."`,
- `ts=2024-04-16T15:10:<_>.<_> caller=log.go:168 component=logs logs_config=default level=info msg="Waiting for /var/log/pods/hosted-grafana_.<_>/<_>/0.log to appear..."`,
- `ts=2024-04-16T15:10:<_>.<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="bufio.Scanner: token too long"`,
- `ts=2024-04-16T15:10:<_>.<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="logfmt syntax error at pos <_> on line 1: unexpected '\"'"`,
- `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:245 level=info component=logs logs_config=default component=tailer msg="stopped tailing file" path=/var/log/pods/hosted-grafana_.<_>/<_>/0.log`,
- `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_>: <_>" path=/var/log/pods/<_>/<_>/0.log`,
- `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_>: <_>" path=/var/log/pods/hosted-grafana_.<_>/<_>/0.log`,
- `ts=2024-04-16T15:10:<_>.<_> caller=tailer.go:<_> level=info component=logs logs_config=default component=tailer msg="<_> <_><_> <_> <_> <_><_> <_> <_><_> <_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_><_> <_><_><_>`,
+ `ts=2024-04-16T15:10:43.551543875Z caller=filetargetmanager.go:397 level=info component=logs logs_config=default msg="Removing target" key="/var/log/pods/*35649bfd-52ff-4281-9294-5f65fd5a89fc/marketplaces-api/*.log:{container=\"marketplaces-api\", job=\"grafana-com/marketplaces-api\", name=\"marketplaces-api\", namespace=\"grafana-com\", pod=\"marketplaces-api-f67ff7567-gqrvb\", pod_template_hash=\"f67ff7567\"}"`,
+ `ts=<_> caller=filetarget.go:192 level=info component=logs logs_config=default msg="filetarget: watcher closed, tailer stopped, positions saved" path=/var/log/pods/*<_>/<_>/*.log`,
+ `ts=<_> caller=filetarget.go:313 level=info component=logs logs_config=default msg="watching new directory" directory=/var/log/pods/<_>/<_>`,
+ `ts=<_> caller=filetarget.go:326 level=info component=logs logs_config=default msg="removing directory from watcher" directory=/var/log/pods/<_>/<_>`,
+ `ts=<_> caller=filetargetmanager.go:181 level=info component=logs logs_config=default msg="received file watcher event" name=/var/log/pods/<_>/<_>/<_> op=CREATE`,
+ `ts=<_> caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*<_>/kube-proxy/*.log:{component=\"kube-proxy\", container=\"kube-proxy\", job=\"kube-system/<_>\", namespace=\"kube-system\", pod=\"kube-proxy-gke-ops-us-east-0-main-n2s32-1-1dd39c-32ae1dde-hmhw\", tier=\"node\"}"`,
+ `ts=<_> caller=filetargetmanager.go:361 level=info component=logs logs_config=default msg="Adding target" key="/var/log/pods/*b92ee988-5c26-4c64-bba3-ff6a01723759/<_>/*.log:{app=\"grafana\", conprof=\"true\", container=\"<_>\", instanceId=\"i1111\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"orgnamehere\", plan=\"free\", pod=\"orgnamehere-grafana-7c65678f86-9zhlb\", pod_template_hash=\"7c65678f86\", resource_version=\"143638246\", slug=\"orgnamehere\", stackId=\"866772\"}"`,
+ `ts=<_> caller=filetargetmanager.go:397 level=info component=logs logs_config=default msg="Removing target" key="/var/log/pods/*<_>/<_>/*.log:{app=\"grafana\", conprof=\"true\", container=\"<_>\", instanceId=\"<_>\", job=\"hosted-grafana/grafana\", name=\"grafana\", namespace=\"hosted-grafana\", org=\"<_>\", plan=\"free\", pod=\"<_>\", pod_template_hash=\"<_>\<_>`,
+ `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Re-opening moved/deleted file /var/log/pods/<_>/<_>/<_> ..."`,
+ `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Seeked /var/log/pods/<_>/<_>/0.log - &{Offset:0 Whence:0}"`,
+ `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Successfully reopened /var/log/pods/<_>/<_>/<_>"`,
+ `ts=<_> caller=log.go:168 component=logs logs_config=default level=info msg="Waiting for /var/log/pods/<_>/<_>/0.log to appear..."`,
+ `ts=<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="bufio.Scanner: token too long"`,
+ `ts=<_> caller=logfmt.go:139 level=error component=logs logs_config=default component=file_pipeline component=stage type=logfmt msg="failed to decode logfmt" err="logfmt syntax error at pos <_> on line 1: unexpected '\"'"`,
+ `ts=<_> caller=tailer.go:118 level=info component=logs logs_config=default component=tailer msg="position timer: exited" path=/var/log/pods/<_>/<_>/0.log`,
+ `ts=<_> caller=tailer.go:147 level=info component=logs logs_config=default component=tailer msg="tail routine: started" path=/var/log/pods/<_>/<_>/0.log`,
+ `ts=<_> caller=tailer.go:155 level=info component=logs logs_config=default component=tailer msg="tail routine: exited" path=/var/log/pods/<_>/<_>/0.log`,
+ `ts=<_> caller=tailer.go:164 level=info component=logs logs_config=default component=tailer msg="tail routine: tail channel closed, stopping tailer" path=/var/log/pods/<_>/<_>/0.log reason=null`,
+ `ts=<_> caller=tailer.go:207 level=info component=logs logs_config=default component=tailer msg="skipping update of position for a file which does not currently exist" path=/var/log/pods/<_>/<_>/0.log`,
+ `ts=<_> caller=tailer.go:245 level=info component=logs logs_config=default component=tailer msg="stopped tailing file" path=/var/log/pods/<_>/<_>/0.log`,
+ `ts=<_> level=info msg="finished node evaluation" controller_id=module.http.cloudwatch_pipelines node_id=<_> duration=<_>`,
},
},
{
@@ -67,21 +61,20 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
format: FormatLogfmt,
patterns: []string{
`ts=2024-04-17T09:52:46.363974185Z caller=http.go:194 level=debug traceID=1b48f5156a61ca69 msg="GET /debug/pprof/delta_mutex (200) 1.161082ms"`,
- `ts=2024-04-17T09:52:46.<_> caller=head.go:216 level=debug tenant=987678 msg="profile is empty after delta computation" metricName=memory`,
- `ts=2024-04-17T09:52:46.<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /ingester.v1.IngesterService/Push (200) <_>.<_>"`,
- },
+ `ts=<_> caller=head.go:216 level=debug tenant=987678 msg="profile is empty after delta computation" metricName=memory`,
+ `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /ingester.v1.IngesterService/Push (200) <_>"`},
},
{
drain: New(DefaultConfig(), nil),
inputFile: `testdata/drone-json.txt`,
format: FormatJSON,
patterns: []string{
- `{"duration":<_>,"level":"debug","method":"GET","msg":"request completed","referer":"","remote":"10.136.105.40:52702","request":"/metrics","status":200,"time":"<_>:<_>:<_>","user-agent":"GrafanaAgent/v0.40.3 (flow; linux; helm)"}`,
- `{"id":"<_>","level":"debug","max-pool":4,"min-pool":0,"msg":"check capacity","pending-builds":0,"running-builds":0,"server-buffer":0,"server-capacity":0,"server-count":0,"time":"<_>:<_>:<_>"}`,
- `{"id":"<_>","level":"debug","msg":"calculate server capacity","time":"<_>:<_>:<_>"}`,
- `{"id":"<_>","level":"debug","msg":"calculate unfinished jobs","time":"<_>:<_>:<_>"}`,
- `{"id":"<_>","level":"debug","msg":"check capacity complete","time":"<_>:<_>:<_>"}`,
- `{"id":"<_>","level":"debug","msg":"no capacity changes required","time":"<_>:<_>:<_>"}`,
+ `{"duration"<_>,"level":"debug","method":"GET","msg":"request completed","referer":"","remote":"10.136.105.40:52702","request":"/metrics","status":200,"time":"<_>","user-agent":"GrafanaAgent/v0.40.3 (flow; linux; helm)"}`,
+ `{"id":"<_>","level":"debug","max-pool":4,"min-pool":0,"msg":"check capacity","pending-builds":0,"running-builds":0,"server-buffer":0,"server-capacity":0,"server-count":0,"time":"<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"calculate server capacity","time":"<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"calculate unfinished jobs","time":"<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"check capacity complete","time":"<_>"}`,
+ `{"id":"<_>","level":"debug","msg":"no capacity changes required","time":"<_>"}`,
},
},
{
@@ -89,12 +82,11 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
inputFile: "testdata/distributor-logfmt.txt",
format: FormatLogfmt,
patterns: []string{
- `ts=2024-05-02T12:17:22.851228301Z caller=http.go:194 level=debug traceID=1e1fe5ba1756bc38 orgID=1819 msg="POST /pyroscope/ingest?aggregationType=sum&from=1714652230&name=flamegraph.com%7Bapp_kubernetes_io_instance%3Dflamegraph-com%2Capp_kubernetes_io_name%3Dflamegraph-com%2Ccluster%3Dflamegraph.com%2Cinstance%3D10.0.11.146%3A8001%2Cjob%3Dkubernetes-pods%2Cnamespace%3Dflamegraph-com%2Cpod%3Dflamegraph-com-backend-79c858c7bf-jw2hn%2Cpod_template_hash%3D79c858c7bf%2Cpyroscope_tenant%3Dpyroscope%2Ctier%3Dbackend%7D&sampleRate=0&spyName=scrape&units=samples&until=1714652240 (200) 22.345191ms"`,
- `ts=2024-05-02T12:17:22.<_> caller=http.go:194 level=debug traceID=<_> orgID=75 msg="POST /ingest?aggregationType=&from=1714652227232613927&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=<_>&spyName=gospy&units=&until=1714652242232506798 (200) <_>.<_>"`,
- `ts=2024-05-02T12:17:22.<_> caller=http.go:194 level=debug traceID=<_> orgID=75 msg="POST /ingest?aggregationType=<_>&from=<_>&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=<_>&spyName=gospy&units=<_>&until=<_> (200) <_>.<_>"`,
- `ts=2024-05-02T12:17:<_>.<_> caller=http.go:194 level=debug traceID=<_> orgID=1819 msg="POST /pyroscope/ingest?aggregationType=sum&from=1714652230&name=flamegraph.com.frontend%7Bapp_kubernetes_io_instance%3Dflamegraph-com%2Capp_kubernetes_io_name%3Dflamegraph-com%2Ccluster%3Dflamegraph.com%2Cinstance%3D10.0.9.115%3A9091%2Cjob%3Dkubernetes-pods%2Cnamespace%3Dflamegraph-com%2Cpod%3Dflamegraph-com-frontend-6fb87f8785-pd87k%2Cpod_template_hash%3D6fb87f8785%2Cpyroscope_tenant%3Dpyroscope%2Ctier%3Dfrontend%7D&sampleRate=0&spyName=scrape&units=samples&until=1714652240 (200) <_>.<_>"`,
- `ts=2024-05-02T12:17:<_>.<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /push.v1.PusherService/Push (<_>) <_>.<_>"`,
- },
+ `ts=2024-05-02T12:17:22.115385619Z caller=http.go:194 level=debug traceID=7836a12bb7f1964e orgID=75 msg="POST /ingest?aggregationType=sum&from=1714652227107641016&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=100&spyName=gospy&units=samples&until=1714652242109516917 (200) 1.562143ms"`,
+ `ts=2024-05-02T12:17:22.242343806Z caller=http.go:194 level=debug traceID=404c6a83a18e66a4 orgID=75 msg="POST /ingest?aggregationType=average&from=1714652227232613927&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=0&spyName=gospy&units=goroutines&until=1714652242232506798 (200) 2.902485ms"`,
+ `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=1819 msg="POST /pyroscope/ingest?aggregationType=sum&from=1714652230&name=<_>%7Bapp_kubernetes_io_instance%3Dflamegraph-com%2Capp_kubernetes_io_name%3Dflamegraph-com%2Ccluster%3Dflamegraph.com%2Cinstance%<_>%<_>%2Cjob%3Dkubernetes-pods%2Cnamespace%3Dflamegraph-com%2Cpod%<_>%2Cpod_template_hash%<_>%2Cpyroscope_tenant%3Dpyroscope%2Ctier%<_>%7D&sampleRate=0&spyName=scrape&units=samples&until=1714652240 (200) <_>"`,
+ `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=75 msg="POST /ingest?aggregationType=&from=1714652227232613927&name=checkoutservice%7B__session_id__%3D294b9729f5a7de95%2Cnamespace%3Dotel-demo%7D&sampleRate=<_>&spyName=gospy&units=&until=1714652242232506798 (200) <_>"`,
+ `ts=<_> caller=http.go:194 level=debug traceID=<_> orgID=<_> msg="POST /push.v1.PusherService/Push (<_>) <_>"`},
},
{
drain: New(DefaultConfig(), nil),
@@ -104,69 +96,84 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
` ln --force -s /proc/$(pidof hgrun-pause)/root/bin/hgrun /bin/hgrun;`,
` while [ "$(pidof plugins-pause)" = "" ]; do sleep 0.5; done;`,
` ts=2024-05-07T11:59:32.025687537Z level=error caller=http_client.go:56 app=hgrun hgrun_version=0.1.453-59-gf3f63162a msg="request`,
- ` ts=2024-05-07T11:59:<_>.<_> level=error caller=http_client.go:56 app=hgrun hgrun_version=0.1.<_> msg="request failed" error="Get \"http://127.0.0.1:3000/api/health\": dial tcp 127.0.0.1:3000: connect: connection refused" method=GET url=http://127.0.0.1:3000/api/health`,
+ ` ts=<_> level=error caller=http_client.go:56 app=hgrun hgrun_version=<_> msg="request failed" error="Get \"http://127.0.0.1:3000/api/health\": dial tcp 127.0.0.1:3000: connect: connection refused" method=GET url=http://127.0.0.1:3000/api/health`,
`2024-05-07T11:59:43.484606Z INFO ExtHandler ExtHandler Downloading agent manifest`,
- `2024-05-07T11:59:<_>.<_> INFO TelemetryEventsCollector ExtHandler Collected 2 events for extension: Microsoft.Azure.Extensions.CustomScript`,
- `<_>.scope: Consumed <_>.<_> CPU time.`,
- `<_>.scope: Deactivated successfully.`,
+ `<_> Consumed <_> CPU time.`,
+ `<_> INFO TelemetryEventsCollector ExtHandler Collected 2 events for extension: Microsoft.Azure.Extensions.CustomScript`,
`AVC apparmor="DENIED" operation="ptrace" profile="cri-containerd.apparmor.d" pid=<_> comm="pidof" requested_mask="read" denied_mask="read" peer="unconfined"`,
+ `E0507 11:59:29.725681 3089 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"azure-resourcemanager-exporter\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=azure-resourcemanager-exporter pod=azure-resourcemanager-exporter-6b5b58c666-rsttd_infra-exporters(5a95f801-309c-4f33-864a-406262c6ece6)\"" pod="infra-exporters/azure-resourcemanager-exporter-6b5b58c666-rsttd" podUID="5a95f801-309c-4f33-864a-406262c6ece6"`,
+ `E0507 11:59:31.554203 4531 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frontend\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=frontend pod=otel-demo-alt-dev-frontend-79ccf98858-mbj4x_otel-demo-alt(d08e620e-00d0-49f1-a195-820a62e8de8f)\"" pod="otel-demo-alt/otel-demo-alt-dev-frontend-79ccf98858-mbj4x" podUID="d08e620e-00d0-49f1-a195-820a62e8de8f"`,
`E0507 11:59:31.928148 4734 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[terraform-drift-detector-data], unattached volumes=[terraform-drift-detector-data], failed to process volumes=[]: context deadline exceeded" pod="terraform-drift-detector/terraform-drift-detector-d68b4c545-jg2vj" podUID="6c607496-ef26-454e-b2f2-4cb75b233fa3"`,
+ `E0507 11:59:34.856101 4727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana-render-security\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/hosted-grafana/hosted-grafana-security:0.1.181\\\"\"" pod="integration/grafana-render-service-cbff479fc-cj9tp" podUID="0e3114d1-2f3a-49d6-a71d-dbc75050d8e0"`,
`E0507 11:59:34.923938 3027 kuberuntime_manager.go:1261] container &Container{Name:mysqld-exporter,Image:prom/mysqld-exporter:v0.13.0,Command:[],Args:[--collect.info_schema.innodb_metrics],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_USER,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:username,Optional:nil,},},},EnvVar{Name:MYSQL_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:password,Optional:nil,},},},EnvVar{Name:MYSQL_HOST,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:endpoint,Optional:nil,},},},EnvVar{Name:MYSQL_PORT,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:testcrossplane-user-exporter,},Key:port,Optional:nil,},},},EnvVar{Name:MYSQL_TLS_MODE,Value:preferred,ValueFrom:nil,},EnvVar{Name:DATA_SOURCE_NAME,Value:$(MYSQL_USER):$(MYSQL_PASSWORD)@tcp($(MYSQL_HOST):$(MYSQL_PORT))/?tls=$(MYSQL_TLS_MODE),ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzx7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod testcrossplane-exporter-c67cfc58f-vbzl4_crossplane-playground(3d49134d-3378-4ec3-824c-5ff4ea2590a5): CreateContainerConfigError: secret "testcrossplane-user-exporter" not found`,
+ `E0507 11:59:34.923984 3027 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysqld-exporter\" with CreateContainerConfigError: \"secret \\\"testcrossplane-user-exporter\\\" not found\"" pod="crossplane-playground/testcrossplane-exporter-c67cfc58f-vbzl4" podUID="3d49134d-3378-4ec3-824c-5ff4ea2590a5"`,
`E0507 11:59:35.928465 4734 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[custom-grafana-agent], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="loki-dev-010/custom-grafana-agent-856948968f-6jfks" podUID="17b244cc-ecb9-4fbc-beaa-8fa47fafe013"`,
- `E0507 11:59:<_>.<_> <_> kuberuntime_manager.go:1256] container &Container{Name:grafana,Image:us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>,Command:[/bin/sh],Args:[-c set -e; while [ "$(pidof hgrun-pause)" = "" ]; do sleep 0.5; done;`,
- `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with CrashLoopBackOff: \"back-off <_> restarting failed container=<_> pod=<_>(<_>)\"" pod="<_>/<_>" podUID="<_>"`,
- `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with CreateContainerConfigError: \"secret \\\"<_>\\\" not found\"" pod="<_>/<_>" podUID="<_>"`,
- `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/hosted-grafana/<_>:<_>.<_>.<_>\\\"\"" pod="<_>/<_>" podUID="<_>"`,
- `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"<_>\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/kubernetes-dev/<_>:<_>\\\"\"" pod="<_>/<_>" podUID="<_>"`,
- `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with ErrImagePull: \"[rpc error: code = NotFound desc = failed to pull and unpack image \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\\\": failed to resolve reference \\\"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\\\": us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>: not <_>`,
- `E0507 11:59:<_>.<_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pdc\" with ErrImageNeverPull: \"Container image \\\"us.gcr.io/hosted-grafana/pdc:0.1.415\\\" is not present with pull policy of Never\"" pod="pdc/<_>" podUID="<_>"`,
- `E0507 11:59:<_>.<_> <_> prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found" probeType="Readiness" pod="hosted-grafana/<_>" podUID="<_>" containerName="grafana"`,
- `E0507 11:59:<_>.<_> <_> prober.go:239] "Unable to write all bytes from execInContainer" err="short write" expectedBytes=<_> actualBytes=10240`,
- `E0507 11:59:<_>.<_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>: not found" image="us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>"`,
- `E0507 11:59:<_>.<_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": unexpected status from HEAD request to https://us.gcr.io/v2/hosted-grafana/hosted-grafana-pro/manifests/<_>.1.<_>: 403 Forbidden" image="us.gcr.io/hosted-grafana/<_>`,
- `E0507 11:59:<_>.<_> <_> remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found" containerID="<_>"`,
- `E0507 11:59:<_>.<_> <_> remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found" containerID="<_>" cmd=["/bin/hgrun","check"]`,
+ `E0507 11:59:37.252214 4736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ksm\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ksm pod=new-relic-nri-bundle-nrk8s-ksm-6c785668f5-jcxh2_integration(f7cc3cca-2ffb-4fde-a73e-a4ba8b0f6b3c)\"" pod="integration/new-relic-nri-bundle-nrk8s-ksm-6c785668f5-jcxh2" podUID="f7cc3cca-2ffb-4fde-a73e-a4ba8b0f6b3c"`,
+ `E0507 11:59:39.149450 4729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-agent\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cluster-agent pod=appdynamics-cluster-agent-appdynamics-cluster-agent-56667dmbnkv_integration(69bc5e6c-0451-443e-af8a-c831871afbb8)\"" pod="integration/appdynamics-cluster-agent-appdynamics-cluster-agent-56667dmbnkv" podUID="69bc5e6c-0451-443e-af8a-c831871afbb8"`,
+ `E0507 <_> 4731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"overrides-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/kubernetes-dev/enterprise-logs:callum-shard-firstlast-08\\\"\"" pod="loki-dev-010/overrides-exporter-98c77fd66-6zj6m" podUID="1ff5bf3e-5856-4f6f-ae04-273f2dee170b"`,
+ `E0507 <_> 4733 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=prometheus pod=bryan-prometheus-0_bryan-prometheus(6dadfe71-eb19-4231-a96e-c64bb5499a1e)\"" pod="bryan-prometheus/bryan-prometheus-0" podUID="6dadfe71-eb19-4231-a96e-c64bb5499a1e"`,
+ `E0507 <_> <_> kuberuntime_manager.go:1256] container &Container{Name:grafana,Image:us.gcr.io/hosted-grafana/<_>,Command:[/bin/sh],Args:[-c set -e; while [ "$(pidof hgrun-pause)" = "" ]; do sleep 0.5; done;`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"agent\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=agent pod=<_>(<_>)\"" pod="jaeger/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cortex-gw\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cortex-gw pod=<_>(<_>)\"" pod="faro/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gcom-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/kubernetes-dev/frontend-monitoring:6a8eb5a\\\"\"" pod="faro/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldpinger\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=goldpinger pod=<_>(<_>)\"" pod="goldpinger/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with CrashLoopBackOff: \"back-off <_> restarting failed container=grafana pod=<_>(<_>)\"" pod="<_>/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with ErrImagePull: \"[rpc error: code = NotFound desc = failed to pull and unpack image \\\"us.gcr.io/hosted-grafana/<_>\\\": failed to resolve reference \\\"us.gcr.io/hosted-grafana/<_>\\\": us.gcr.io/hosted-grafana/<_> not found, failed to pull and unpack image \\\"us.gcr.io/hosted-grafana/<_>\\\": failed to resolve reference \\\"us.gcr.io/hosted-grafana/<_>\\\": unexpected status from <_>`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"grafana\" with ImagePullBackOff: \"Back-off pulling image \\\"us.gcr.io/hosted-grafana/<_>\\\"\"" pod="hosted-grafana/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pdc\" with ErrImageNeverPull: \"Container image \\\"us.gcr.io/hosted-grafana/pdc:0.1.415\\\" is not present with pull policy of Never\"" pod="pdc/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ruler\" with CreateContainerConfigError: \"secret \\\"ruler-alertmanager-token\\\" not found\"" pod="ge-metrics-federation/<_>" podUID="<_>"`,
+ `E0507 <_> <_> pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"support-agent\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=support-agent pod=<_>(<_>)\"" pod="support-agent/<_>" podUID="<_>"`,
+ `E0507 <_> <_> prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found" probeType="Readiness" pod="hosted-grafana/<_>" podUID="<_>" containerName="grafana"`,
+ `E0507 <_> <_> prober.go:239] "Unable to write all bytes from execInContainer" err="short write" expectedBytes=<_> actualBytes=10240`,
+ `E0507 <_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/<_>\": us.gcr.io/hosted-grafana/<_> not found" image="us.gcr.io/hosted-grafana/<_>"`,
+ `E0507 <_> <_> remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/<_>\": unexpected status from HEAD request to https://us.gcr.io/v2/hosted-grafana/hosted-grafana-pro/manifests/<_> 403 Forbidden" image="us.gcr.io/hosted-grafana/<_>"`,
+ `E0507 <_> <_> remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found" containerID="<_>"`,
+ `E0507 <_> <_> remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found" containerID="<_>" cmd=["/bin/hgrun","check"]`,
`I0507 11:59:31.815514 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hosted-grafana-pro) is not from ACR, return empty authentication`,
`I0507 11:59:34.518822 3224 kuberuntime_container.go:745] "Killing container with a grace period" pod="hosted-grafana/hosted-grafana-api-7b6bd9b949-9csb4" podUID="25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" containerName="hgapi" containerID="containerd://c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e" gracePeriod=30`,
`I0507 11:59:34.834734 3224 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95j2t\" (UniqueName: \"kubernetes.io/projected/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-kube-api-access-95j2t\") pod \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\" (UID: \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\") "`,
`I0507 11:59:34.834794 3224 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"pdc-certs\" (UniqueName: \"kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-pdc-certs\") pod \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\" (UID: \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\") "`,
`I0507 11:59:34.834835 3224 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcs-serviceaccount\" (UniqueName: \"kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-gcs-serviceaccount\") pod \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\" (UID: \"25cb986c-3d6c-4ed0-abf3-ee59ed6175f9\") "`,
+ `I0507 11:59:34.836955 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-pdc-certs" (OuterVolumeSpecName: "pdc-certs") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "pdc-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""`,
`I0507 11:59:34.841404 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-kube-api-access-95j2t" (OuterVolumeSpecName: "kube-api-access-95j2t") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "kube-api-access-95j2t". PluginName "kubernetes.io/projected", VolumeGidValue ""`,
+ `I0507 11:59:34.841447 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-gcs-serviceaccount" (OuterVolumeSpecName: "gcs-serviceaccount") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "gcs-serviceaccount". PluginName "kubernetes.io/secret", VolumeGidValue ""`,
+ `I0507 11:59:34.935951 3224 reconciler_common.go:300] "Volume detached for volume \"gcs-serviceaccount\" (UniqueName: \"kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-gcs-serviceaccount\") on node \"ip-10-60-2-58.us-east-2.compute.internal\" DevicePath \"\""`,
+ `I0507 11:59:34.935988 3224 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-95j2t\" (UniqueName: \"kubernetes.io/projected/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-kube-api-access-95j2t\") on node \"ip-10-60-2-58.us-east-2.compute.internal\" DevicePath \"\""`,
`I0507 11:59:34.936025 3224 reconciler_common.go:300] "Volume detached for volume \"pdc-certs\" (UniqueName: \"kubernetes.io/secret/25cb986c-3d6c-4ed0-abf3-ee59ed6175f9-pdc-certs\") on node \"ip-10-60-2-58.us-east-2.compute.internal\" DevicePath \"\""`,
- `I0507 11:59:34.<_> 3224 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/<_>" (OuterVolumeSpecName: "<_>") pod "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9" (UID: "25cb986c-3d6c-4ed0-abf3-ee59ed6175f9"). InnerVolumeSpecName "<_>". PluginName "kubernetes.io/secret", VolumeGidValue ""`,
- `I0507 11:59:34.<_> 3224 reconciler_common.go:300] "Volume detached for volume \"<_>\" (UniqueName: \"kubernetes.io/<_>/<_>\") on node \"ip-10-60-2-58.us-east-2.compute.internal\" DevicePath \"\""`,
- `I0507 11:59:37.<_> <_> prober.go:107] "Probe failed" probeType="Readiness" pod="<_>/<_>" podUID="<_>" containerName="<_>" probeResult="failure" output="HTTP probe failed with statuscode: <_>"`,
+ `I0507 11:59:38.092172 4527 kubelet.go:2426] "SyncLoop (PLEG): event for pod" pod="otel-demo/otel-demo-dev-checkoutservice-6ddf9b978b-zqrsr" event={"ID":"f263b787-926e-459a-95a0-f9ef8e4e9bc2","Type":"ContainerStarted","Data":"95bf586cd79d43120ff44582d4dbd2476de61744411f8515b9b2c527a41fd5d9"}`,
`I0507 11:59:38.116658 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hg-plugins) is not from ACR, return empty authentication`,
`I0507 11:59:39.168633 2776 kubelet.go:2493] "SyncLoop (probe)" probe="readiness" status="" pod="hosted-grafana/dafdeveuwest2-grafana-7845d969b5-f8h5q"`,
- `I0507 11:59:<_>.<_> 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hgrun) is not from ACR, return empty authentication`,
- `I0507 11:59:<_>.<_> 6247 prober.go:107] "Probe failed" probeType="Readiness" pod="grafana-agent/grafana-agent-helm-4" podUID="c36c5200-1cd6-4093-893c-c022f91af996" containerName="grafana-agent" probeResult="failure" output="Get \"http://10.0.99.125:3090/-/ready\": dial tcp 10.0.99.125:3090: connect: connection refused"`,
- `I0507 11:59:<_>.<_> <_> generic.go:334] "Generic (PLEG): container finished" podID="<_>" containerID="<_>" exitCode=1`,
- `I0507 11:59:<_>.<_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hosted-grafana/<_>"`,
- `I0507 11:59:<_>.<_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="readiness" status="ready" pod="hosted-grafana/<_>"`,
- `I0507 11:59:<_>.<_> <_> kubelet.go:<_>] "SyncLoop (PLEG): event for pod" pod="<_>/<_>" event={"ID":"<_>","Type":"<_>","Data":"<_>"}`,
- `I0507 11:59:<_>.<_> <_> kubelet.go:<_>] "SyncLoop DELETE" source="api" pods=["hosted-grafana/<_>"]`,
- `I0507 11:59:<_>.<_> <_> kubelet.go:<_>] "SyncLoop REMOVE" source="api" pods=["hosted-grafana/<_>"]`,
- `I0507 11:59:<_>.<_> <_> kubelet_getters.go:187] "Pod status updated" pod="kube-system/<_>" status="Running"`,
- `I0507 11:59:<_>.<_> <_> kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="<_>/<_>" secret="" err="secret \"<_>\" not found"`,
- `I0507 11:59:<_>.<_> <_> kubelet_volumes.go:<_>] "Cleaned up orphaned pod volumes dir" podUID="<_>" path="/var/lib/kubelet/pods/<_>/volumes"`,
- `I0507 11:59:<_>.<_> <_> pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"<_>"} err="failed to get container status \"<_>\": rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found"`,
- `I0507 11:59:<_>.<_> <_> prober.go:107] "Probe failed" probeType="Readiness" pod="hosted-grafana/<_>" podUID="<_>" containerName="grafana" probeResult="failure" output=<`,
- `I0507 11:59:<_>.<_> <_> scope.go:117] "RemoveContainer" containerID="<_>"`,
- `I0507 11:59:<_>.<_> <_> cache.go:40] re-using cached key and certificate`,
- `IPv4: martian source 10.132.<_>.<_> from 10.132.<_>.<_>, on dev eth0`,
+ `I0507 <_> 2791 azure_credentials.go:220] image(us.gcr.io/hosted-grafana/hgrun) is not from ACR, return empty authentication`,
+ `I0507 <_> 6247 prober.go:107] "Probe failed" probeType="Readiness" pod="grafana-agent/grafana-agent-helm-4" podUID="c36c5200-1cd6-4093-893c-c022f91af996" containerName="grafana-agent" probeResult="failure" output="Get \"http://10.0.99.125:3090/-/ready\": dial tcp 10.0.99.125:3090: connect: connection refused"`,
+ `I0507 <_> <_> <_>] "Cleaned up orphaned pod volumes dir" podUID="<_>" path="/var/lib/kubelet/pods/<_>/volumes"`,
+ `I0507 <_> <_> <_>] "SyncLoop (PLEG): event for pod" pod="hosted-grafana/<_>" event={"ID":"<_>","Type":"<_>","Data":"<_>"}`,
+ `I0507 <_> <_> <_>] "SyncLoop DELETE" source="api" pods=["hosted-grafana/<_>"]`,
+ `I0507 <_> <_> <_>] "SyncLoop REMOVE" source="api" pods=["hosted-grafana/<_>"]`,
+ `I0507 <_> <_> generic.go:334] "Generic (PLEG): container finished" podID="<_>" containerID="<_>" exitCode=1`,
+ `I0507 <_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hosted-grafana/<_>"`,
+ `I0507 <_> <_> kubelet.go:2498] "SyncLoop (probe)" probe="readiness" status="ready" pod="hosted-grafana/<_>"`,
+ `I0507 <_> <_> kubelet_getters.go:187] "Pod status updated" pod="kube-system/<_>" status="Running"`,
+ `I0507 <_> <_> kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="<_>/<_>" secret="" err="secret \"<_>\" not found"`,
+ `I0507 <_> <_> kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="grafana-apps/<_>" secret="" err="secret \"dockerhub\" not found"`,
+ `I0507 <_> <_> pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"<_>"} err="failed to get container status \"<_>\": rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found"`,
+ `I0507 <_> <_> prober.go:107] "Probe failed" probeType="Readiness" pod="<_>/<_>" podUID="<_>" containerName="<_>" probeResult="failure" output="HTTP probe failed with statuscode: <_>"`,
+ `I0507 <_> <_> prober.go:107] "Probe failed" probeType="Readiness" pod="hosted-grafana/<_>" podUID="<_>" containerName="grafana" probeResult="failure" output=<`,
+ `I0507 <_> <_> scope.go:117] "RemoveContainer" containerID="<_>"`,
+ `I0507 <_> <_> cache.go:40] re-using cached key and certificate`,
+ `IPv4: martian source <_> from <_>, on dev eth0`,
`PRC: Renewing lease on eth0.`,
`RCV: Reply message on eth0 from fe80::e9:7eff:fedf:3d37.`,
`Removed slice libcontainer container kubepods-burstable-pod25cb986c_3d6c_4ed0_abf3_ee59ed6175f9.slice.`,
- `Started cri-containerd-95bf586cd79d43120ff44582d4dbd2476de61744411f8515b9b2c527a41fd5d9.scope.`,
- `Started libcontainer container <_>.`,
+ `Started libcontainer container <_>`,
`XMT: Renew on eth0, interval 9700ms.`,
- `XMT: Solicit on eth0, interval <_>.`,
- `audit: type=1400 audit(<_>.<_>:<_>): apparmor="DENIED" operation="ptrace" profile="cri-containerd.apparmor.d" pid=<_> comm="pidof" requested_mask="read" denied_mask="read" peer="unconfined"`,
+ `XMT: Solicit on eth0, interval <_>`,
+ `audit: type=1400 audit(<_>): apparmor="DENIED" operation="ptrace" profile="cri-containerd.apparmor.d" pid=<_> comm="pidof" requested_mask="read" denied_mask="read" peer="unconfined"`,
`kauditd_printk_skb: <_> callbacks suppressed`,
`ll header: 00000000: 42 01 0a 80 00 <_> 42 01 0a 80 00 01 08 00`,
`net_ratelimit: 2 callbacks suppressed`,
- `run-containerd-io.containerd.runtime.v2.task-k8s.<_>.mount: Deactivated successfully.`,
- `run-containerd-runc-k8s.io-e5f17d69eee483ec8d43b26d5d628246984ba92f794ee5f3748935f5b6448b9b-runc.6eAyHn.mount: Deactivated successfully.`,
+ `time="2024-05-07T11:59:31.813758661Z" level=info msg="ImageCreate event name:\"us.gcr.io/hosted-grafana/hosted-grafana-pro@sha256:0853965a142fb95648de3281a7c71de0d05fb51616bc32b523dc2f1da6ca06dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:32.755926053Z" level=info msg="CreateContainer within sandbox \"81e019a0248a0300a328fd59f9939c3eaa1b98aa7f325a7f6e00592633275ef6\" for container &ContainerMetadata{Name:checkoutservice,Attempt:3417,}"`,
+ `time="2024-05-07T11:59:33.579013658Z" level=info msg="ImageUpdate event name:\"us.gcr.io/hosted-grafana/hosted-grafana-pro@sha256:0853965a142fb95648de3281a7c71de0d05fb51616bc32b523dc2f1da6ca06dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
`time="2024-05-07T11:59:34.519591759Z" level=info msg="StopContainer for \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" with timeout 30 (s)"`,
`time="2024-05-07T11:59:34.520032214Z" level=info msg="Stop container \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" with signal terminated"`,
`time="2024-05-07T11:59:34.591282703Z" level=info msg="StopContainer for \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" returns successfully"`,
@@ -174,33 +181,37 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`time="2024-05-07T11:59:34.592084495Z" level=info msg="Container to stop \"c91436db00920ec961b9d5d6b4859d80a912e862e34fb5c45d8a85684fe6a97e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""`,
`time="2024-05-07T11:59:34.706960850Z" level=info msg="TearDown network for sandbox \"c605ad2cdc74c6b5288f2532ad71cce81a28ef6965f97a89ff6609deb825553a\" successfully"`,
`time="2024-05-07T11:59:34.707025668Z" level=info msg="StopPodSandbox for \"c605ad2cdc74c6b5288f2532ad71cce81a28ef6965f97a89ff6609deb825553a\" returns successfully"`,
- `time="2024-05-07T11:59:38.117772842Z" level=info msg="PullImage \"us.gcr.io/hosted-grafana/hg-plugins:2024-05-07-v545244-f51851984\""`,
+ `time="2024-05-07T11:59:36.177858616Z" level=info msg="CreateContainer within sandbox \"81e019a0248a0300a328fd59f9939c3eaa1b98aa7f325a7f6e00592633275ef6\" for &ContainerMetadata{Name:checkoutservice,Attempt:3417,} returns container id \"95bf586cd79d43120ff44582d4dbd2476de61744411f8515b9b2c527a41fd5d9\""`,
+ `time="2024-05-07T11:59:37.332685003Z" level=info msg="ImageCreate event name:\"us.gcr.io/hosted-grafana/hgrun@sha256:b492dbbbee9faf9dba63c9fd89e6f9e148239765454c6a54c4284a2828dec153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="2024-05-07T11:59:38.115073809Z" level=info msg="ImageUpdate event name:\"us.gcr.io/hosted-grafana/hgrun@sha256:b492dbbbee9faf9dba63c9fd89e6f9e148239765454c6a54c4284a2828dec153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
`time="2024-05-07T11:59:38.484586527Z" level=error msg="Failed to delete exec process \"d9e0a1867ce73695ad859f2b0a76fe8f5053db8a5e49142d747e53a445729bd4\" for container \"6ad3e55547f2192f865518e75009243418b177091c1c781236e2ac8f0324b408\"" error="ttrpc: closed: unknown"`,
- `time="2024-05-07T11:59:<_>.<_>" level=error msg="ContainerStatus for \"<_>\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found"`,
- `time="2024-05-07T11:59:<_>.<_>" level=error msg="ExecSync for \"<_>\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found"`,
- `time="2024-05-07T11:59:<_>.<_>" level=error msg="PullImage \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\" failed" error="failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": unexpected status from HEAD request to https://us.gcr.io/v2/hosted-grafana/hosted-grafana-pro/manifests/<_>.1.<_>: 403 Forbidden"`,
- `time="2024-05-07T11:59:<_>.<_>" level=error msg="PullImage \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>\": us.gcr.io/hosted-grafana/hosted-grafana-pro:<_>.1.<_>: not found"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="CreateContainer within sandbox \"<_>\" for &ContainerMetadata{Name:<_>,Attempt:<_>,} returns container id \"<_>\""`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="CreateContainer within sandbox \"<_>\" for container &ContainerMetadata{Name:<_>,Attempt:<_>,}"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageCreate event name:\"sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageCreate event name:\"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageCreate event name:\"us.gcr.io/hosted-grafana/<_>@sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageUpdate event name:\"sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageUpdate event name:\"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="ImageUpdate event name:\"us.gcr.io/hosted-grafana/<_>@sha256:<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="PullImage \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" returns image reference \"sha256:<_>\""`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="PullImage \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\""`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="Pulled image \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\" with image id \"sha256:<_>\", repo tag \"us.gcr.io/hosted-grafana/<_>:<_>.1.<_>\", repo digest \"us.gcr.io/hosted-grafana/<_>@sha256:<_>\", size \"<_>\" in <_>.<_>"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="RemoveContainer for \"<_>\" returns successfully"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="RemoveContainer for \"<_>\""`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="StartContainer for \"<_>\" returns successfully"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="StartContainer for \"<_>\""`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="cleaning up dead shim" namespace=k8s.io`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="shim disconnected" id=<_> namespace=k8s.io`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="stop pulling image us.gcr.io/hosted-grafana/<_>:<_>.1.<_>: active requests=0, bytes read=<_>"`,
- `time="2024-05-07T11:59:<_>.<_>" level=info msg="trying next host - response was http.StatusNotFound" host=us.gcr.io`,
- `time="2024-05-07T11:59:<_>.<_>" level=warning msg="cleaning up after shim disconnected" id=<_> namespace=k8s.io`,
- `var-lib-containerd-tmpmounts-containerd\<_>.mount: Deactivated successfully.`,
+ `time="2024-05-07T11:59:43.941729092Z" level=info msg="CreateContainer within sandbox \"ee9dc07bca79ef7dffe2a6eb326e27236e9e97c35913c7aae16ee0a62632fc25\" for container &ContainerMetadata{Name:cortex-gw,Attempt:1660,}"`,
+ `time="2024-05-07T11:59:43.954289531Z" level=info msg="CreateContainer within sandbox \"ee9dc07bca79ef7dffe2a6eb326e27236e9e97c35913c7aae16ee0a62632fc25\" for &ContainerMetadata{Name:cortex-gw,Attempt:1660,} returns container id \"93fa5decd62691912f90c9b27526f5e00183239bfa4d3f4ea8578a7873b9c2b4\""`,
+ `time="<_>" level=error msg="ContainerStatus for \"<_>\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"<_>\": not found"`,
+ `time="<_>" level=error msg="ExecSync for \"<_>\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task <_> not found: not found"`,
+ `time="<_>" level=error msg="PullImage \"us.gcr.io/hosted-grafana/<_>\" failed" error="failed to pull and unpack image \"us.gcr.io/hosted-grafana/<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/<_>\": unexpected status from HEAD request to https://us.gcr.io/v2/hosted-grafana/hosted-grafana-pro/manifests/<_> 403 Forbidden"`,
+ `time="<_>" level=error msg="PullImage \"us.gcr.io/hosted-grafana/<_>\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"us.gcr.io/hosted-grafana/<_>\": failed to resolve reference \"us.gcr.io/hosted-grafana/<_>\": us.gcr.io/hosted-grafana/<_> not found"`,
+ `time="<_>" level=info msg="CreateContainer within sandbox \"<_>\" for &ContainerMetadata{Name:grafana,<_>,} returns container id \"<_>\""`,
+ `time="<_>" level=info msg="CreateContainer within sandbox \"<_>\" for &ContainerMetadata{Name:hgrun,Attempt:0,} returns container id \"<_>\""`,
+ `time="<_>" level=info msg="CreateContainer within sandbox \"<_>\" for container &ContainerMetadata{Name:grafana,<_>,}"`,
+ `time="<_>" level=info msg="CreateContainer within sandbox \"<_>\" for container &ContainerMetadata{Name:hgrun,Attempt:0,}"`,
+ `time="<_>" level=info msg="ImageCreate event name:\"<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="<_>" level=info msg="ImageCreate event name:\"us.gcr.io/hosted-grafana/<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="<_>" level=info msg="ImageUpdate event name:\"<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="<_>" level=info msg="ImageUpdate event name:\"us.gcr.io/hosted-grafana/<_>\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"`,
+ `time="<_>" level=info msg="PullImage \"us.gcr.io/hosted-grafana/<_>\" returns image reference \"<_>\""`,
+ `time="<_>" level=info msg="PullImage \"us.gcr.io/hosted-grafana/<_>\""`,
+ `time="<_>" level=info msg="Pulled image \"us.gcr.io/hosted-grafana/<_>\" with image id \"<_>\", repo tag \"us.gcr.io/hosted-grafana/<_>\", repo digest \"us.gcr.io/hosted-grafana/<_>@<_>\", size \"<_>\" in <_>"`,
+ `time="<_>" level=info msg="RemoveContainer for \"<_>\" returns successfully"`,
+ `time="<_>" level=info msg="RemoveContainer for \"<_>\""`,
+ `time="<_>" level=info msg="StartContainer for \"<_>\" returns successfully"`,
+ `time="<_>" level=info msg="StartContainer for \"<_>\""`,
+ `time="<_>" level=info msg="cleaning up dead shim" namespace=k8s.io`,
+ `time="<_>" level=info msg="shim disconnected" id=<_> namespace=k8s.io`,
+ `time="<_>" level=info msg="stop pulling image us.gcr.io/hosted-grafana/<_> active requests=0, bytes read=<_>"`,
+ `time="<_>" level=info msg="trying next host - response was http.StatusNotFound" host=us.gcr.io`,
+ `time="<_>" level=warning msg="cleaning up after shim disconnected" id=<_> namespace=k8s.io`,
+ `var-lib-containerd-tmpmounts-containerd\<_> Deactivated successfully.`,
},
},
{
@@ -211,17 +222,17 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`[2024-05-07 10:55:40,626] INFO [LocalLog partition=ingest-6, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=180391157, size=16991045, lastModifiedTime=1715075754780, largestRecordTimestamp=Some(1715075754774)),LogSegment(baseOffset=180393429, size=16997692, lastModifiedTime=1715075760206, largestRecordTimestamp=Some(1715075760186)),LogSegment(baseOffset=180395889, size=16998200, lastModifiedTime=1715075765542, largestRecordTimestamp=Some(1715075765526)),LogSegment(baseOffset=180398373, size=16977347, lastModifiedTime=1715075770515, largestRecordTimestamp=Some(1715075770504)) (kafka.log.LocalLog$)`,
`[2024-05-07 10:55:53,038] INFO [LocalLog partition=mimir-dev-09-aggregations-offsets-1, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=447957, size=948, lastModifiedTime=1715059232052, largestRecordTimestamp=Some(1715059232002)),LogSegment(baseOffset=447969, size=948, lastModifiedTime=1715059424352, largestRecordTimestamp=Some(1715059424301)) (kafka.log.LocalLog$)`,
`[2024-05-07 10:55:53,<_>] INFO [LocalLog partition=mimir-dev-09-aggregations-offsets-0, dir=/bitnami/kafka/data] Deleting segment files LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.LocalLog$)`,
- `[2024-05-07 10:55:<_>,<_>] INFO Deleted log /bitnami/kafka/data/<_>/<_>.log.deleted. (kafka.log.LogSegment)`,
- `[2024-05-07 10:55:<_>,<_>] INFO Deleted offset index /bitnami/kafka/data/<_>/<_>.index.deleted. (kafka.log.LogSegment)`,
- `[2024-05-07 10:55:<_>,<_>] INFO Deleted producer state snapshot /bitnami/kafka/data/<_>/<_>.snapshot.deleted (kafka.log.SnapshotFile)`,
- `[2024-05-07 10:55:<_>,<_>] INFO Deleted time index /bitnami/kafka/data/<_>/<_>.timeindex.deleted. (kafka.log.LogSegment)`,
- `[2024-05-07 10:55:<_>,<_>] INFO [LocalLog partition=<_>, dir=/bitnami/kafka/data] Rolled new log segment at offset <_> in <_> ms. (kafka.log.LocalLog)`,
- `[2024-05-07 10:55:<_>,<_>] INFO [ProducerStateManager partition=<_>] Wrote producer snapshot at offset <_> with 0 producer ids in <_> ms. (kafka.log.ProducerStateManager)`,
- `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segment LogSegment(baseOffset=<_>, size=<_>, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) due to retention size <_> breach. Log size after deletion will be <_>. (kafka.log.UnifiedLog)`,
- `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach: LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)),LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.UnifiedLog)`,
- `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach: LogSegment(baseOffset=<_>, size=<_>, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.UnifiedLog)`,
- `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Incremented log start offset to <_> due to leader offset increment (kafka.log.UnifiedLog)`,
- `[2024-05-07 10:55:<_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Incremented log start offset to <_> due to segment deletion (kafka.log.UnifiedLog)`,
+ `[2024-05-07 <_>,<_>] INFO Deleted log /bitnami/kafka/data/<_>/<_> (kafka.log.LogSegment)`,
+ `[2024-05-07 <_>,<_>] INFO Deleted offset index /bitnami/kafka/data/<_>/<_> (kafka.log.LogSegment)`,
+ `[2024-05-07 <_>,<_>] INFO Deleted producer state snapshot /bitnami/kafka/data/<_>/<_> (kafka.log.SnapshotFile)`,
+ `[2024-05-07 <_>,<_>] INFO Deleted time index /bitnami/kafka/data/<_>/<_> (kafka.log.LogSegment)`,
+ `[2024-05-07 <_>,<_>] INFO [LocalLog partition=<_>, dir=/bitnami/kafka/data] Rolled new log segment at offset <_> in <_> ms. (kafka.log.LocalLog)`,
+ `[2024-05-07 <_>,<_>] INFO [ProducerStateManager partition=<_>] Wrote producer snapshot at offset <_> with 0 producer ids in <_> ms. (kafka.log.ProducerStateManager)`,
+ `[2024-05-07 <_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segment LogSegment(baseOffset=<_>, size=<_>, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) due to retention size <_> breach. Log size after deletion will be <_> (kafka.log.UnifiedLog)`,
+ `[2024-05-07 <_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach: LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)),LogSegment(baseOffset=<_>, size=948, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.UnifiedLog)`,
+ `[2024-05-07 <_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Deleting segments due to log start offset <_> breach: LogSegment(baseOffset=<_>, size=<_>, lastModifiedTime=<_>, largestRecordTimestamp=Some(<_>)) (kafka.log.UnifiedLog)`,
+ `[2024-05-07 <_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Incremented log start offset to <_> due to leader offset increment (kafka.log.UnifiedLog)`,
+ `[2024-05-07 <_>,<_>] INFO [UnifiedLog partition=<_>, dir=/bitnami/kafka/data] Incremented log start offset to <_> due to segment deletion (kafka.log.UnifiedLog)`,
},
},
{
@@ -230,48 +241,51 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
format: FormatUnknown,
patterns: []string{
`I0507 12:02:27.947830 1 nodeutilization.go:274] "Evicting pods based on priority, if they have same priority, they'll be evicted based on QoS tiers"`,
- `I0507 12:02:27.<_> 1 defaultevictor.go:163] "pod does not fit on any other node because of nodeSelector(s), Taint(s), or nodes marked as unschedulable" pod="<_>/<_>"`,
- `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="<_>/<_>" checks="pod has local storage and descheduler is not configured with evictLocalStoragePods"`,
- `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="ge-logs/<_>" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="insight-logs/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="loki-dev-ssd/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="promtail-ops/<_>" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:02:27.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="pyroscope-ebpf/<_>" checks="pod is a DaemonSet pod"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, <_> <_><_> <_> <_><_> <_> <_>]"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, insufficient <_>, insufficient <_>]"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, insufficient <_>]"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_>, insufficient <_>]"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_>]"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="insufficient cpu"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:="<_>" error:="[insufficient <_>, insufficient <_>]"`,
- `I0507 12:02:27.<_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:="<_>" error:="pod node selector does not match the node label"`,
- `I0507 12:02:27.<_> 1 node.go:339] "no Pod antiaffinity rule found" pod="<_>/<_>"`,
+ `I0507 12:02:27.988834 1 defaultevictor.go:202] "Pod fails the following checks" pod="netfilter-exporter/netfilter-exporter-vsqft" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
`I0507 12:04:17.595169 1 descheduler.go:155] Building a pod evictor`,
`I0507 12:04:17.596431 1 nodeutilization.go:204] "Node is underutilized" node="gke-dev-eu-west-3-main-n2s8-1-1dd39c-d1c92061-4z2l" usage={"cpu":"984m","memory":"611Mi","pods":"16"} usagePercentage={"cpu":12.44,"memory":2.15,"pods":25}`,
`I0507 12:04:17.596484 1 highnodeutilization.go:107] "Criteria for a node below target utilization" CPU=50 Mem=50 Pods=100`,
`I0507 12:04:17.596504 1 highnodeutilization.go:108] "Number of underutilized nodes" totalNumber=1`,
`I0507 12:04:17.596528 1 nodeutilization.go:260] "Total capacity to be moved" CPU=5060 Mem=112216292800 Pods=163`,
`I0507 12:04:17.596651 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/metrics-server-v0.6.3-68f5b7c4d5-t5mz8" checks="[pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:04:17.596685 1 defaultevictor.go:202] "Pod fails the following checks" pod="agent-logs/agent-lmlhl" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 12:04:17.596722 1 defaultevictor.go:202] "Pod fails the following checks" pod="startup/startup-sjjws" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
`I0507 12:04:17.596803 1 defaultevictor.go:202] "Pod fails the following checks" pod="gadget/gadget-zjjts" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:04:17.<_> 1 nodeutilization.go:207] "Node is overutilized" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"} usagePercentage={"cpu":<_>.<_>,"memory":<_>.<_>,"pods":<_>.<_>}`,
- `I0507 12:04:17.<_> 1 nodeutilization.go:207] "Node is overutilized" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"} usagePercentage={"cpu":<_>.<_>,"memory":<_>.<_>,"pods":<_>}`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="agent-logs/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="conntrack-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="goldpinger/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a mirror pod, pod is a static pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="netfilter-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="node-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="promtail-ops/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_>:<_>.<_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="startup/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
- `I0507 12:<_>:<_>.<_> 1 descheduler.go:<_>] "Number of evicted pods" totalEvicted=<_>`,
- `I0507 12:<_>:<_>.<_> 1 nodeutilization.go:<_>] "Evicting pods from node" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"}`,
- `I0507 12:<_>:<_>.<_> 1 nodeutilization.go:<_>] "No removable pods on node, try next node" node="<_>"`,
- `I0507 12:<_>:<_>.<_> 1 nodeutilization.go:<_>] "Pods on node" node="<_>" allPods=<_> nonRemovablePods=<_> removablePods=<_>`,
- `I0507 12:<_>:<_>.<_> 1 profile.go:<_>] "Total number of pods evicted" extension point="Balance" evictedPods=<_>`,
- `I0507 12:<_>:<_>.<_> 1 reflector.go:<_>] k8s.io/client-go/informers/factory.go:<_>: Watch close - *v1.<_> total <_> items received`,
+ `I0507 12:04:17.596827 1 defaultevictor.go:202] "Pod fails the following checks" pod="netfilter-exporter/netfilter-exporter-jkrhn" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 <_>] "Evicting pods from node" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"}`,
+ `I0507 <_> 1 <_>] "No removable pods on node, try next node" node="<_>"`,
+ `I0507 <_> 1 <_>] "Number of evicted pods" totalEvicted=<_>`,
+ `I0507 <_> 1 <_>] "Pods on node" node="<_>" allPods=<_> nonRemovablePods=<_> removablePods=<_>`,
+ `I0507 <_> 1 <_>] "Total number of pods evicted" extension point="Balance" evictedPods=<_>`,
+ `I0507 <_> 1 <_>] k8s.io/client-go/informers/<_> Watch close - *<_> total <_> items received`,
+ `I0507 <_> 1 defaultevictor.go:163] "pod does not fit on any other node because of nodeSelector(s), Taint(s), or nodes marked as unschedulable" pod="<_>/<_>"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="<_>/<_>" checks="pod has local storage and descheduler is not configured with evictLocalStoragePods"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="agent-logs/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="conntrack-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="ge-logs/<_>" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="goldpinger/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="insight-logs/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a DaemonSet pod, pod has system critical priority, pod has higher priority than specified priority class threshold]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="kube-system/<_>" checks="[pod is a mirror pod, pod is a static pod, pod has system critical priority, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="loki-dev-ssd/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="netfilter-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="node-exporter/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="promtail-ops/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="promtail-ops/<_>" checks="[pod is a DaemonSet pod, pod has local storage and descheduler is not configured with evictLocalStoragePods]"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="pyroscope-ebpf/<_>" checks="pod is a DaemonSet pod"`,
+ `I0507 <_> 1 defaultevictor.go:202] "Pod fails the following checks" pod="startup/<_>" checks="[pod is a DaemonSet pod, pod has higher priority than specified priority class threshold]"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, <_> <_><_> <_> <_><_> <_> <_>]"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, insufficient <_>, insufficient <_>]"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, insufficient <_>]"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_>, insufficient <_>]"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="[pod node selector does not match the node label, pod does not tolerate taints on the node, insufficient <_>]"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="<_>/<_>" node:="<_>" error:="insufficient cpu"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:="<_>" error:="[insufficient <_>, insufficient <_>]"`,
+ `I0507 <_> 1 node.go:157] "Pod does not fit on any other node" pod:="loki-dev-005/querier-burst-6b5f6db455-5zvkm" node:="<_>" error:="pod node selector does not match the node label"`,
+ `I0507 <_> 1 node.go:339] "no Pod antiaffinity rule found" pod="<_>/<_>"`,
+ `I0507 <_> 1 nodeutilization.go:207] "Node is overutilized" node="<_>" usage={"cpu":"<_>","memory":"<_>","pods":"<_>"} usagePercentage={"cpu"<_>,"memory"<_>,"pods"<_>}`,
},
},
{
@@ -280,7 +294,7 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
format: FormatUnknown,
patterns: []string{
`2024-05-07T10:56:38.667Z [INFO] expiration: revoked lease: lease_id=auth/gcp/login/h4c031a99aa555040a0dd99864d828e946c6d4e31f4f5178757183def61f9d104`,
- `2024-05-07T10:<_>:<_>.<_> [INFO] expiration: revoked lease: lease_id=auth/kubernetes/<_>/login/<_>`,
+ `<_> [INFO] expiration: revoked lease: lease_id=auth/kubernetes/<_>/login/<_>`,
},
},
{
@@ -289,10 +303,10 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
format: FormatUnknown,
patterns: []string{
`2024-05-08 15:23:56.403 [DEBUG][615489] felix/table.go 699: Finished loading iptables state ipVersion=0x4 table="filter"`,
- `2024-05-08 15:23:56.403 [INFO][615489] felix/summary.go 100: Summarising 1 dataplane reconciliation loops over 600ms: avg=119ms longest=119ms (resync-filter-v4)`,
`2024-05-08 15:23:56.614 [DEBUG][76] felix/int_dataplane.go 1777: Refreshing routes`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/route_rule.go 179: Queueing a resync of routing rules. ipVersion=4`,
- `2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 480: Queueing a resync of routing table. ifaceRegex="<_>.<_>" ipVersion=0x4 tableIndex=<_>`,
+ `2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 480: Queueing a resync of routing table. ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 480: Queueing a resync of routing table. ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/route_table.go 533: Check interfaces matching regex`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/wireguard.go 605: Queueing a resync of wireguard configuration ipVersion=0x4`,
`2024-05-08 15:23:56.615 [DEBUG][76] felix/wireguard.go 654: Wireguard is not in-sync - verifying wireguard configuration is removed ipVersion=0x4`,
@@ -302,18 +316,8 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 661: Syncing interface routes ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
`2024-05-08 15:23:56.619 [DEBUG][76] felix/route_table.go 686: Reconcile against kernel programming ifaceName="*NoOIF*" ifaceRegex="^wireguard.cali$" ipVersion=0x4 tableIndex=1`,
`2024-05-08 15:23:56.624 [INFO][76] felix/summary.go 100: Summarising 1 dataplane reconciliation loops over 200ms: avg=10ms longest=10ms (resync-routes-v4,resync-routes-v4,resync-rules-v4,resync-wg)`,
- `2024-05-08 15:23:56.<_> [DEBUG][615489] felix/table.go 677: Skipping expected chain chainName="<_>" ipVersion=0x4 table="filter"`,
- `2024-05-08 15:23:56.<_> [DEBUG][615489] felix/table.go 677: Skipping expected chain chainName="<_>.<_>" ipVersion=0x4 table="filter"`,
- `2024-05-08 15:23:56.<_> [DEBUG][615489] felix/table.go 677: Skipping expected chain chainName="cali-pro-ksa.<_>.<_>" ipVersion=0x4 table="filter"`,
- `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 557: Resync: found calico-owned interface ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 614: Synchronised routes on interface ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 661: Syncing interface routes ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 686: Reconcile against kernel programming ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 880: Processing route: 254 <_> 10.68.10.<_>/32 ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
- `2024-05-08 15:23:56.<_> [DEBUG][76] felix/route_table.go 915: Route is correct dest=10.68.10.<_>/32 ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
`2024-05-08 15:23:57.942 [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "6gwbT8clXdHdC1b1"} chainName="PREROUTING" expectedRuleIDs=[]string{"6gwbT8clXdHdC1b1", "", "", "", ""} ipVersion=0x4 table="raw"`,
`2024-05-08 15:23:57.969 [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "tVnHkvAo15HuiPy0", "", "", "", "", ""} chainName="OUTPUT" expectedRuleIDs=[]string{"tVnHkvAo15HuiPy0", "", "", "", "", "", "", "", "", ""} ipVersion=0x4 table="filter"`,
- `2024-05-08 15:23:57.<_> [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "<_><_><_> "<_><_> "<_><_> "<_><_> "<_><_> "<_><_> "<_><_> "<_><_> <_>", "", ""<_> <_><_><_> <_>"<_> <_><_><_><_><_> <_><_><_><_><_><_><_><_>", "", "", "", "", "", "", "", "", ""<_>`,
`2024-05-08 15:23:58.169 [INFO][2333] felix/summary.go 100: Summarising 35 dataplane reconciliation loops over 1m2s: avg=12ms longest=46ms (resync-filter-v4,resync-filter-v6,resync-mangle-v4,resync-mangle-v6,update-filter-v4,update-filter-v6)`,
`2024-05-08 15:23:58.566 [DEBUG][3576126] felix/int_dataplane.go 957: Examining link for MTU calculation mtu=1500 name="eth0"`,
`2024-05-08 15:23:58.680 [DEBUG][216945] felix/int_dataplane.go 1785: Reschedule kick received`,
@@ -328,40 +332,58 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line="*nat" table="nat"`,
`2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 881: Not an append, skipping ipVersion=0x4 line="# Generated by iptables-nft-save v1.8.4 on Wed May 8 15:23:58 2024" table="nat"`,
`2024-05-08 15:23:58.716 [DEBUG][216945] felix/table.go 881: Not an append, skipping ipVersion=0x4 line="*nat" table="nat"`,
- `2024-05-08 15:23:58.<_> [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":<_> <_> [0:0]" table="nat"`,
- `2024-05-08 15:23:58.<_> [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="<_>" ipVersion=0x4 line=":<_> <_> [0:0]" table="nat"`,
- `2024-05-08 15:23:58.<_> [DEBUG][3576126] felix/int_dataplane.go 954: Skipping interface for MTU detection mtu=<_> name="<_>"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/endpoint_mgr.go 443: Reporting endpoint status. dirtyEndpoints=set.Set{}`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go 167: Health: <_>`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"<_>", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:<_>, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{wall:<_>, ext:<_>, loc:(*time.Location)(0x4ce3aa0)}}`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go 245: Calculated health summary healthResult=&health.HealthReport{Live:true, Ready:true, Detail:"+------------------+---------+----------------+-----------------+--------+\n| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL |\n+------------------+---------+----------------+-----------------+--------+\n| async_calc_graph | 20s | reporting live | reporting ready | |\n| felix-startup | 0s | reporting live | reporting ready | |\n| int_dataplane | 1m30s | reporting live | reporting ready | |\n+------------------+---------+----------------+-----------------+--------+"}`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/health.go <_>: GET /<_>`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/int_dataplane.go 1773: Refreshing IP sets state`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/int_dataplane.go 1807: Applying dataplane updates`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/int_dataplane.go 2080: Asked to reschedule. delay=<_>.<_>`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 234: Asked to resync with the dataplane on next update. family="inet"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 314: Resyncing ipsets with dataplane. family="inet"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 366: Finished IPSets resync family="inet" numInconsistenciesFound=0 resyncDuration=<_>.<_>`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 426: Parsing IP set. family="inet" setName="<_>"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 467: Found member in dataplane canon=<_>.<_>.<_>.<_> family="inet" member="<_>.<_>.<_>.<_>" setID="this-host"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 589: Whitelisting IP sets. ID="<_>" family="inet" mainName="<_>"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 607: Skipping expected Calico IP set. family="inet" setName="<_>"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/ipsets.go 643: No dirty IP sets. family="inet"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/sync_client.go 347: Ping received from Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/sync_client.go 356: Pong sent to Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/sync_client.go 434: New message from Typha. connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} envelope=syncproto.Envelope{Message:syncproto.MsgPing{Timestamp:time.Date(2024, time.May, 8, 15, 23, <_>, <_>, time.Local)}} type=""`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/table.go 1233: In nftables mode, restarting transaction between updates and deletions. ipVersion=0x4 table="<_>"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/table.go 1263: Update ended up being no-op, skipping call to ip(6)tables-restore. ipVersion=0x4 table="<_>"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/wireguard.go 652: Wireguard is not enabled, skipping sync ipVersion=0x4`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1004: Updating ipsetIDsToMembers cache. family=4`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1043: Processing pending diff state. cs=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}} family=4`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1270: Finished processing pending diff state. bpfActions=intdataplane.xdpBPFActions{CreateMap:set.Typed[string]{}, RemoveMap:set.Typed[string]{}, AddToMap:map[string]map[string]uint32{}, RemoveFromMap:map[string]map[string]uint32{}, InstallXDP:set.Typed[string]{}, UninstallXDP:set.Typed[string]{}, MembersToDrop:map[string]map[string]uint32{}, MembersToAdd:map[string]map[string]uint32{}} family=4 newCS=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}}`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1605: Getting member changes. family=4 oldMembers=map[string]set.Set[string]{}`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1798: Processing BPF actions. family="ipv4"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 1932: Finished processing BPF actions. family="ipv4"`,
- `2024-05-08 15:23:<_>.<_> [DEBUG][<_>] felix/xdp_state.go 968: Processing member updates. family=4`,
- `2024-05-08 15:23:<_>.<_> [INFO][<_>] felix/summary.go 100: Summarising <_> dataplane reconciliation loops over <_>.<_>: avg=<_> longest=<_> (<_>)`,
- "bird: Netlink: No route to host",
+ `2024-05-08 15:23:58.717 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":POSTROUTING ACCEPT [0:0]" table="nat"`,
+ `2024-05-08 15:23:58.717 [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="POSTROUTING" ipVersion=0x4 line=":POSTROUTING ACCEPT [0:0]" table="nat"`,
+ `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":OUTPUT ACCEPT [0:0]" table="nat"`,
+ `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line=":PREROUTING ACCEPT [0:0]" table="nat"`,
+ `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="OUTPUT" ipVersion=0x4 line=":OUTPUT ACCEPT [0:0]" table="nat"`,
+ `2024-05-08 15:23:58.718 [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="PREROUTING" ipVersion=0x4 line=":PREROUTING ACCEPT [0:0]" table="nat"`,
+ `2024-05-08 <_> [DEBUG][216945] felix/table.go 851: Parsing line ipVersion=0x4 line="<_> - [0:0]" table="nat"`,
+ `2024-05-08 <_> [DEBUG][216945] felix/table.go 870: Found forward-reference chainName="<_>" ipVersion=0x4 line="<_> - [0:0]" table="nat"`,
+ `2024-05-08 <_> [DEBUG][3576126] felix/int_dataplane.go 954: Skipping interface for MTU detection mtu=<_> name="<_>"`,
+ `2024-05-08 <_> [DEBUG][615489] felix/table.go 677: Skipping expected chain chainName="<_>" ipVersion=0x4 table="filter"`,
+ `2024-05-08 <_> [DEBUG][76] felix/route_table.go 557: Resync: found calico-owned interface ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 <_> [DEBUG][76] felix/route_table.go 614: Synchronised routes on interface ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 <_> [DEBUG][76] felix/route_table.go 661: Syncing interface routes ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 <_> [DEBUG][76] felix/route_table.go 686: Reconcile against kernel programming ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 <_> [DEBUG][76] felix/route_table.go 880: Processing route: 254 <_> <_>/32 ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 <_> [DEBUG][76] felix/route_table.go 915: Route is correct dest=<_>/32 ifaceName="<_>" ifaceRegex="^azv.*" ipVersion=0x4 tableIndex=0`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/endpoint_mgr.go 443: Reporting endpoint status. dirtyEndpoints=set.Set{}`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/health.go 167: Health: <_>`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"async_calc_graph", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:20000000000, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{<_>, <_>, loc:(*time.Location)(0x4ce3aa0)}}`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"felix-startup", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:0, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{<_>, <_>, loc:(*time.Location)(0x4ce3aa0)}}`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/health.go 196: Checking state of reporter reporter=&health.reporterState{name:"int_dataplane", reports:health.HealthReport{Live:true, Ready:true, Detail:""}, timeout:90000000000, latest:health.HealthReport{Live:true, Ready:true, Detail:""}, timestamp:time.Time{<_>, <_>, loc:(*time.Location)(0x4ce3aa0)}}`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/health.go 245: Calculated health summary healthResult=&health.HealthReport{Live:true, Ready:true, Detail:"+------------------+---------+----------------+-----------------+--------+\n| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL |\n+------------------+---------+----------------+-----------------+--------+\n| async_calc_graph | 20s | reporting live | reporting ready | |\n| felix-startup | 0s | reporting live | reporting ready | |\n| int_dataplane | 1m30s | reporting live | reporting ready | |\n+------------------+---------+----------------+-----------------+--------+"}`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/health.go <_> GET /<_>`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/int_dataplane.go 1773: Refreshing IP sets state`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/int_dataplane.go 1807: Applying dataplane updates`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/int_dataplane.go 2080: Asked to reschedule. delay=<_>`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 234: Asked to resync with the dataplane on next update. family="inet"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 314: Resyncing ipsets with dataplane. family="inet"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 366: Finished IPSets resync family="inet" numInconsistenciesFound=0 resyncDuration=<_>`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 426: Parsing IP set. family="inet" setName="<_>"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 467: Found member in dataplane canon=<_> family="inet" member="<_>" setID="this-host"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 589: Whitelisting IP sets. ID="all-ipam-pools" family="inet" mainName="cali40all-ipam-pools"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 589: Whitelisting IP sets. ID="masq-ipam-pools" family="inet" mainName="cali40masq-ipam-pools"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 589: Whitelisting IP sets. ID="this-host" family="inet" mainName="cali40this-host"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 607: Skipping expected Calico IP set. family="inet" setName="<_>"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/ipsets.go 643: No dirty IP sets. family="inet"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/sync_client.go 347: Ping received from Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/sync_client.go 356: Pong sent to Typha connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} type=""`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/sync_client.go 434: New message from Typha. connID=0x0 connection=&discovery.Typha{Addr:"", IP:"", NodeName:(*string)(nil)} envelope=syncproto.Envelope{Message:syncproto.MsgPing{Timestamp:time.Date(2024, time.May, 8, 15, 23, <_>, <_>, time.Local)}} type=""`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/table.go 1233: In nftables mode, restarting transaction between updates and deletions. ipVersion=0x4 table="<_>"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/table.go 1263: Update ended up being no-op, skipping call to ip(6)tables-restore. ipVersion=0x4 table="<_>"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/wireguard.go 652: Wireguard is not enabled, skipping sync ipVersion=0x4`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/xdp_state.go 1004: Updating ipsetIDsToMembers cache. family=4`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/xdp_state.go 1043: Processing pending diff state. cs=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}} family=4`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/xdp_state.go 1270: Finished processing pending diff state. bpfActions=intdataplane.xdpBPFActions{CreateMap:set.Typed[string]{}, RemoveMap:set.Typed[string]{}, AddToMap:map[string]map[string]uint32{}, RemoveFromMap:map[string]map[string]uint32{}, InstallXDP:set.Typed[string]{}, UninstallXDP:set.Typed[string]{}, MembersToDrop:map[string]map[string]uint32{}, MembersToAdd:map[string]map[string]uint32{}} family=4 newCS=&intdataplane.xdpSystemState{IfaceNameToData:map[string]intdataplane.xdpIfaceData{}, XDPEligiblePolicies:map[proto.PolicyID]intdataplane.xdpRules{}}`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/xdp_state.go 1605: Getting member changes. family=4 oldMembers=map[string]set.Set[string]{}`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/xdp_state.go 1798: Processing BPF actions. family="ipv4"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/xdp_state.go 1932: Finished processing BPF actions. family="ipv4"`,
+ `2024-05-08 <_> [DEBUG][<_>] felix/xdp_state.go 968: Processing member updates. family=4`,
+ `2024-05-08 <_> [INFO][<_>] felix/summary.go 100: Summarising <_> dataplane reconciliation loops over <_> avg=<_> longest=<_> (<_>)`,
+ `2024-05-08 <_> [WARNING][56] felix/table.go 654: Detected out-of-sync inserts, marking for resync actualRuleIDs=[]string{"", "", "", "", "<_><_><_> "<_><_> "<_><_> "<_><_> "<_><_> "<_><_> "<_><_> "<_><_> <_>", "", ""<_> <_><_><_> <_>"<_> <_><_><_><_><_> <_><_><_><_><_><_><_><_>", "", "", "", "", "", "", "", "", "", "", "", ""<_>`,
+ `bird: Netlink: No route to host`,
},
},
{
@@ -370,8 +392,8 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
format: FormatLogfmt,
patterns: []string{
`level=debug ts=2024-05-29T13:44:15.804597912Z caller=remote_instance_store.go:51 user=297794 slug=leanix msg="calling SaveAlertInstance"`,
- `level=debug ts=2024-05-29T13:44:15.<_> caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"`,
- `level=debug ts=2024-05-29T13:44:15.<_> caller=remote_instance_store.go:51 user=<_> slug=<_> msg="calling SaveAlertInstance"`,
+ `level=debug ts=<_> caller=remote_instance_store.go:51 user=396586 slug=opengov msg="calling SaveAlertInstance"`,
+ `level=debug ts=<_> caller=remote_instance_store.go:51 user=<_> slug=<_> msg="calling SaveAlertInstance"`,
`logger=ngalert.scheduler user=102553 slug=flownative version=1 fingerprint=4ad9e35be0f80ca3 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.79499903Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error:<nil> Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.794695854s EvaluationString:}]" duration=116.038803ms`,
`logger=ngalert.scheduler user=473762 slug=intentiq version=35 fingerprint=0bc4b6f46a852420 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.788200731Z level=debug msg="Alert rule evaluated" results="[{Instance:datasource_uid=grafanacloud-prom, ref_id=A State:NoData Error:<nil> Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.787878355s EvaluationString:}]" duration=15.345212ms`,
`logger=ngalert.scheduler user=70430 slug=dapperlabs version=1 fingerprint=65a68c433031b4e0 attempt=1 now=2024-05-29T13:44:10Z t=2024-05-29T13:44:15.790598463Z level=debug msg="Alert rule evaluated" results="[{Instance: State:Normal Error:<nil> Results:map[] Values:map[] EvaluatedAt:2024-05-29 13:44:10 +0000 UTC EvaluationDuration:5.78875161s EvaluationString:}]" duration=1.693079007s`,
@@ -387,9 +409,9 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`logger=ngalert.state.manager user=412141 slug=sharethrough instance="datasource_uid=pFBylkiVz, ref_id=Swap Usage for Alert" t=2024-05-29T13:44:15.792775073Z level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.799932951Z level=debug msg="Setting next state" handler=resultNormal`,
`logger=ngalert.state.manager user=430961 slug=solifi instance= t=2024-05-29T13:44:15.799945019Z level=debug msg="Keeping state" state=Normal`,
- `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.<_> level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData`,
- `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
- `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=2024-05-29T13:44:15.<_> level=debug msg="Setting next state" handler=resultNoData`,
+ `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=<_> level=debug msg="Execution no data state is Normal" handler=resultNormal previous_handler=resultNoData`,
+ `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=473762 slug=intentiq instance="datasource_uid=grafanacloud-prom, ref_id=A" t=<_> level=debug msg="Setting next state" handler=resultNoData`,
`logger=ngalert.state.manager user=473762 slug=intentiq t=2024-05-29T13:44:15.788261794Z level=debug msg="State manager processing evaluation results" resultCount=1`,
`logger=ngalert.state.manager user=630397 slug=tatin instance= t=2024-05-29T13:44:15.795542988Z level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=679029 slug=joveoprodaws instance="datasource_uid=grafanacloud-logs, ref_id=A" t=2024-05-29T13:44:15.800327814Z level=debug msg="Setting next state" handler=resultNoData`,
@@ -398,20 +420,20 @@ func TestDrain_TrainExtractsPatterns(t *testing.T) {
`logger=ngalert.state.manager user=692010 slug=mercariusprod instance="datasource_uid=gfds-prometheus-wrapper, ref_id=B" t=2024-05-29T13:44:15.791129917Z level=debug msg="Setting next state" handler=resultNoData`,
`logger=ngalert.state.manager user=84535 slug=arweave instance= t=2024-05-29T13:44:15.796640981Z level=debug msg="Setting next state" handler=resultNormal`,
`logger=ngalert.state.manager user=84535 slug=arweave t=2024-05-29T13:44:15.796542294Z level=debug msg="State manager processing evaluation results" resultCount=1`,
- `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=<_>, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=<_>, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Setting next state" handler=resultNormal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=<_>, instance=<_>, job=integrations/kubernetes/kube-state-metrics, namespace=<_>, pod=<_>, uid=<_>" t=<_> level=debug msg="Setting next state" handler=resultNormal`,
`logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=consul, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=wcs9-tds-devus-vault-con-74f6c575b8-6d879, uid=f5320297-1117-400f-9704-d4f43fa1127d" t=2024-05-29T13:44:15.78870732Z level=debug msg="Keeping state" state=Normal`,
- `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=crs-app, instance=<_>, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=<_> level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=frontend, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevcaauth-exo-frontend-5c569cbc88-fr7t4, uid=2b8456c8-297f-4763-8f00-f8076b542d7c" t=2024-05-29T13:44:15.790564871Z level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=node, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds-devops, pod=exo-devca-cicd-288-zcl2b-9ws4z-nzgt7, uid=ca99b6a7-f08f-475a-adf6-dcf8c8936eed" t=2024-05-29T13:44:15.791738618Z level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-master, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthsearch-app-master-65969fb8d5-c7nl4, uid=c4f14b2b-581a-4543-a848-af6e25ada58a" t=2024-05-29T13:44:15.79227249Z level=debug msg="Keeping state" state=Normal`,
- `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-repeater, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=search-app-repeater, instance=<_>, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=<_> level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsdevauthts-utils, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsdevauthts-utils-7f54f8d7b4-njddr, uid=352d7df2-7832-41f3-ad3e-cbe1a060c968" t=2024-05-29T13:44:15.793846886Z level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=tdsqalivets-utils, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqalivets-utils-75b748978f-r2vkj, uid=1d39d0d7-d483-427b-ba91-45d897674698" t=2024-05-29T13:44:15.794284465Z level=debug msg="Keeping state" state=Normal`,
- `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=172.30.<_>.<_>:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=2024-05-29T13:44:15.<_> level=debug msg="Keeping state" state=Normal`,
+ `logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-app, instance=<_>, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=<_>, uid=<_>" t=<_> level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager user=893151 slug=cmtdsnp instance="cluster=tds-np-cluster, container=ts-web, instance=172.30.43.160:8080, job=integrations/kubernetes/kube-state-metrics, namespace=tds, pod=tdsqaauthts-web-57f5b6f56b-bdmh9, uid=8f6b5224-94ce-4f5d-ba08-03f9fc2f572f" t=2024-05-29T13:44:15.795397351Z level=debug msg="Keeping state" state=Normal`,
`logger=ngalert.state.manager.persist user=14927 slug=rstsoftware t=2024-05-29T13:44:15.798496844Z level=debug msg="Saving alert states done" count=1 max_state_save_concurrency=1 duration=26.340653ms`,
`logger=ngalert.state.manager.persist user=20177 slug=paddledash t=2024-05-29T13:44:15.806655602Z level=debug msg="Saving alert states" count=1 max_state_save_concurrency=1`,
- `logger=ngalert.state.manager.persist user=<_> slug=<_> t=2024-05-29T13:44:15.<_> level=debug msg="Saving alert states" count=<_> max_state_save_concurrency=1`,
+ `logger=ngalert.state.manager.persist user=<_> slug=<_> t=<_> level=debug msg="Saving alert states" count=<_> max_state_save_concurrency=1`,
},
},
}
diff --git a/pkg/pattern/drain/line_tokenizer.go b/pkg/pattern/drain/line_tokenizer.go
index 9a366edc6fe89..f69b2a6ff2862 100644
--- a/pkg/pattern/drain/line_tokenizer.go
+++ b/pkg/pattern/drain/line_tokenizer.go
@@ -31,6 +31,8 @@ func newPunctuationTokenizer() *punctuationTokenizer {
included['='] = 1
excluded['_'] = 1
excluded['-'] = 1
+ excluded['.'] = 1
+ excluded[':'] = 1
return &punctuationTokenizer{
includeDelimiters: included,
excludeDelimiters: excluded,
diff --git a/pkg/pattern/drain/line_tokenizer_test.go b/pkg/pattern/drain/line_tokenizer_test.go
index 7aac061d28f38..0223bdd110172 100644
--- a/pkg/pattern/drain/line_tokenizer_test.go
+++ b/pkg/pattern/drain/line_tokenizer_test.go
@@ -29,7 +29,7 @@ var testCases = []TestCase{
name: "Test with colon",
line: "key1:value1 key2:value2",
want: map[string][]string{
- typePunctuation: {"key1", ":", "value1", "key2", ":", "value2"},
+ typePunctuation: {"key1:value1", "key2:value2"},
typeSplitting: {"key1:", "value1", "key2:", "value2"},
},
},
@@ -37,7 +37,7 @@ var testCases = []TestCase{
name: "Test with mixed delimiters, more = than :",
line: "key1=value1 key2:value2 key3=value3",
want: map[string][]string{
- typePunctuation: {"key1", "=", "value1", "key2", ":", "value2", "key3", "=", "value3"},
+ typePunctuation: {"key1", "=", "value1", "key2:value2", "key3", "=", "value3"},
typeSplitting: {"key1=", "value1", "key2:value2", "key3=", "value3"},
},
},
@@ -45,7 +45,7 @@ var testCases = []TestCase{
name: "Test with mixed delimiters, more : than =",
line: "key1:value1 key2:value2 key3=value3",
want: map[string][]string{
- typePunctuation: {"key1", ":", "value1", "key2", ":", "value2", "key3", "=", "value3"},
+ typePunctuation: {"key1:value1", "key2:value2", "key3", "=", "value3"},
typeSplitting: {"key1:", "value1", "key2:", "value2", "key3=value3"},
},
},
@@ -77,7 +77,7 @@ var testCases = []TestCase{
name: "longer line",
line: "09:17:38.033366 ▶ INFO route ops sending to dest https://graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics: service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_counter.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_Metric.action_is_drop.reason_is_queue_full 0 1717060658",
want: map[string][]string{
- typePunctuation: {`09`, `:`, `17`, `:`, `38`, `.`, `033366`, `▶`, `INFO`, `route`, `ops`, `sending`, `to`, `dest`, `https`, `:`, `/`, `/`, `graphite-cortex-ops-blocks-us-east4`, `.`, `grafana`, `.`, `net`, `/`, `graphite`, `/`, `metrics`, `:`, `service_is_carbon-relay-ng`, `.`, `instance_is_carbon-relay-ng-c665b7b-j2trk`, `.`, `mtype_is_counter`, `.`, `dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics`, `.`, `unit_is_Metric`, `.`, `action_is_drop`, `.`, `reason_is_queue_full`, `0`, `1717060658`},
+ typePunctuation: {`09:17:38.033366`, `▶`, `INFO`, `route`, `ops`, `sending`, `to`, `dest`, `https:`, `/`, `/`, `graphite-cortex-ops-blocks-us-east4.grafana.net`, `/`, `graphite`, `/`, `metrics:`, `service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_counter.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_Metric.action_is_drop.reason_is_queue_full`, `0`, `1717060658`},
typeSplitting: {`09:`, `17:`, `38.033366`, `▶`, `INFO`, ``, `route`, `ops`, `sending`, `to`, `dest`, `https:`, `//graphite-cortex-ops-blocks-us-east4.grafana.net/graphite/metrics:`, ``, `service_is_carbon-relay-ng.instance_is_carbon-relay-ng-c665b7b-j2trk.mtype_is_counter.dest_is_https_graphite-cortex-ops-blocks-us-east4_grafana_netgraphitemetrics.unit_is_Metric.action_is_drop.reason_is_queue_full`, `0`, `1717060658`},
},
},
@@ -85,7 +85,7 @@ var testCases = []TestCase{
name: "Consecutive splits points: equals followed by space",
line: `ts=2024-05-30T12:50:36.648377186Z caller=scheduler_processor.go:143 level=warn msg="error contacting scheduler" err="rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: EOF\"" addr=10.0.151.101:9095`,
want: map[string][]string{
- typePunctuation: {`ts`, `=`, `2024-05-30T12`, `:`, `50`, `:`, `36`, `.`, `648377186Z`, `caller`, `=`, `scheduler_processor`, `.`, `go`, `:`, `143`, `level`, `=`, `warn`, `msg`, `=`, `"`, `error`, `contacting`, `scheduler`, `"`, `err`, `=`, `"`, `rpc`, `error`, `:`, `code`, `=`, `Unavailable`, `desc`, `=`, `connection`, `error`, `:`, `desc`, `=`, `\`, `"`, `error`, `reading`, `server`, `preface`, `:`, `EOF`, `\`, `"`, `"`, `addr`, `=`, `10`, `.`, `0`, `.`, `151`, `.`, `101`, `:`, `9095`},
+ typePunctuation: {`ts`, `=`, `2024-05-30T12:50:36.648377186Z`, `caller`, `=`, `scheduler_processor.go:143`, `level`, `=`, `warn`, `msg`, `=`, `"`, `error`, `contacting`, `scheduler`, `"`, `err`, `=`, `"`, `rpc`, `error:`, `code`, `=`, `Unavailable`, `desc`, `=`, `connection`, `error:`, `desc`, `=`, `\`, `"`, `error`, `reading`, `server`, `preface:`, `EOF`, `\`, `"`, `"`, `addr`, `=`, `10.0.151.101:9095`},
typeSplitting: {"ts=", "2024-05-30T12:50:36.648377186Z", "caller=", "scheduler_processor.go:143", "level=", "warn", "msg=", "\"error", "contacting", "scheduler\"", "err=", "\"rpc", "error:", "code", "=", ``, "Unavailable", "desc", "=", ``, "connection", "error:", "desc", "=", ``, `\"error`, "reading", "server", "preface:", `EOF\""`, "addr=", "10.0.151.101:9095"},
},
},
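The hunk above adds `.` and `:` to the tokenizer's excluded-delimiter set, so timestamps, IP:port pairs, and dotted paths survive as single tokens instead of being shattered at every punctuation mark. A minimal standalone sketch of the include/exclude split (simplified; not the actual drain tokenizer, names hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// tokenize splits a line on whitespace, then breaks each word at
// "included" delimiters (emitted as their own tokens), while
// "excluded" delimiters such as '.' and ':' stay inside the word.
func tokenize(line string, included, excluded map[rune]bool) []string {
	var tokens []string
	for _, word := range strings.Fields(line) {
		start := 0
		for i, r := range word {
			if included[r] && !excluded[r] {
				if start < i {
					tokens = append(tokens, word[start:i])
				}
				tokens = append(tokens, string(r))
				start = i + len(string(r))
			}
		}
		if start < len(word) {
			tokens = append(tokens, word[start:])
		}
	}
	return tokens
}

func main() {
	included := map[rune]bool{'=': true, ':': true}
	excluded := map[rune]bool{'.': true, ':': true}
	// "10.0.0.1:9095" stays whole because '.' and ':' are excluded.
	fmt.Println(tokenize("ts=10.0.0.1:9095 level=warn", included, excluded))
}
```

With `.` and `:` excluded, only `=` produces a token boundary here, which mirrors why the expected test outputs above changed from many fragments to a few intact tokens.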
|
feat
|
exclude and from creating new tokens in patterns (#13395)
|
5f476a39e130a22dc5b641af84a0c902f83a4ad3
|
2025-01-09 15:55:03
|
jackyin
|
fix: Fix goroutine leak in queryrange downstreamer (#15665)
| false
|
diff --git a/pkg/querier/queryrange/downstreamer.go b/pkg/querier/queryrange/downstreamer.go
index c6fba0fbf49a1..b2aa50c292154 100644
--- a/pkg/querier/queryrange/downstreamer.go
+++ b/pkg/querier/queryrange/downstreamer.go
@@ -170,10 +170,12 @@ func (in instance) For(
go func() {
err := concurrency.ForEachJob(ctx, len(queries), in.parallelism, func(ctx context.Context, i int) error {
res, err := fn(queries[i])
+ if err != nil {
+ return err
+ }
response := logql.Resp{
I: i,
Res: res,
- Err: err,
}
// Feed the result into the channel unless the work has completed.
@@ -181,7 +183,7 @@ func (in instance) For(
case <-ctx.Done():
case ch <- response:
}
- return err
+ return nil
})
if err != nil {
ch <- logql.Resp{
@@ -192,15 +194,19 @@ func (in instance) For(
close(ch)
}()
+ var err error
for resp := range ch {
- if resp.Err != nil {
- return nil, resp.Err
+ if err != nil {
+ continue
}
- if err := acc.Accumulate(ctx, resp.Res, resp.I); err != nil {
- return nil, err
+ if resp.Err != nil {
+ err = resp.Err
+ continue
}
+ err = acc.Accumulate(ctx, resp.Res, resp.I)
}
- return acc.Result(), nil
+
+ return acc.Result(), err
}
// convert to matrix
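The leak fix above follows a standard Go pattern: the consumer must drain the channel to completion instead of returning on the first error, otherwise the producer goroutine blocks forever on `ch <- response`. A minimal sketch of that drain-until-closed pattern (hypothetical names, not the downstreamer code itself):

```go
package main

import "fmt"

// consume reads until ch is closed, remembering the first error
// rather than returning early. An early return would strand the
// producer on a blocked send — the goroutine leak being fixed.
func consume(ch <-chan int) (sum int, err error) {
	for v := range ch {
		if err != nil {
			continue // keep draining, but do no further work
		}
		if v < 0 {
			err = fmt.Errorf("negative value: %d", v)
			continue
		}
		sum += v
	}
	return sum, err
}

func main() {
	ch := make(chan int)
	go func() {
		defer close(ch)
		for _, v := range []int{1, 2, -1, 4} {
			ch <- v // unbuffered: blocks until the consumer reads
		}
	}()
	fmt.Println(consume(ch))
}
```

Because the `for range` loop always runs to channel close, the producer's sends always complete and the goroutine can exit, while the first error is still surfaced to the caller.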
|
fix
|
Fix goroutine leak in queryrange downstreamer (#15665)
|
d3615f7e9e7b349c504b0b90416303c9b8c4e60a
|
2022-11-30 14:44:21
|
George Tsilias
|
promtail: Handle nil error on target Details() call (#7771)
| false
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8b9f6a13f81ba..45b2e7b73a5ee 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -33,6 +33,7 @@
* [7602](https://github.com/grafana/loki/pull/7602) **vmax**: Add decolorize stage to Promtail to easily parse colored logs.
##### Fixes
+* [7771](https://github.com/grafana/loki/pull/7771) **GeorgeTsilias**: Handle nil error on target Details() call.
* [7461](https://github.com/grafana/loki/pull/7461) **MarNicGit**: Promtail: Fix collecting userdata field from Windows Event Log
diff --git a/clients/pkg/promtail/targets/cloudflare/target.go b/clients/pkg/promtail/targets/cloudflare/target.go
index 2cb4cea974937..d1bc2cd9af4fc 100644
--- a/clients/pkg/promtail/targets/cloudflare/target.go
+++ b/clients/pkg/promtail/targets/cloudflare/target.go
@@ -224,9 +224,13 @@ func (t *Target) Ready() bool {
func (t *Target) Details() interface{} {
fields, _ := Fields(FieldsType(t.config.FieldsType))
+ var errMsg string
+ if t.err != nil {
+ errMsg = t.err.Error()
+ }
return map[string]string{
"zone_id": t.config.ZoneID,
- "error": t.err.Error(),
+ "error": errMsg,
"position": t.positions.GetString(positions.CursorKey(t.config.ZoneID)),
"last_timestamp": t.to.String(),
"fields": strings.Join(fields, ","),
diff --git a/clients/pkg/promtail/targets/docker/target.go b/clients/pkg/promtail/targets/docker/target.go
index 71837e8b7dd2b..329827e5b61cc 100644
--- a/clients/pkg/promtail/targets/docker/target.go
+++ b/clients/pkg/promtail/targets/docker/target.go
@@ -251,9 +251,13 @@ func (t *Target) Labels() model.LabelSet {
// Details returns target-specific details.
func (t *Target) Details() interface{} {
+ var errMsg string
+ if t.err != nil {
+ errMsg = t.err.Error()
+ }
return map[string]string{
"id": t.containerName,
- "error": t.err.Error(),
+ "error": errMsg,
"position": t.positions.GetString(positions.CursorKey(t.containerName)),
"running": strconv.FormatBool(t.running.Load()),
}
|
promtail
|
Handle nil error on target Details() call (#7771)
|
93c35a7a13b39942cc6e77574d39f289345b7005
|
2023-02-23 11:50:54
|
Robert Jacob
|
operator: Refactor status update to reduce API calls (#8578)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 8a0a92fa6ab8c..300d5f134cc01 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [8578](https://github.com/grafana/loki/pull/8578) **xperimental**: Refactor status update to reduce API calls
- [8577](https://github.com/grafana/loki/pull/8577) **Red-GV**: Store gateway tenant information in secret instead of configmap
- [8397](https://github.com/grafana/loki/pull/8397) **periklis**: Update Loki operand to v2.7.3
- [8308](https://github.com/grafana/loki/pull/8308) **aminesnow**: operator: Cleanup ruler resources when disabled
diff --git a/operator/controllers/loki/lokistack_controller.go b/operator/controllers/loki/lokistack_controller.go
index c3cdf52e39930..f7d066221cea4 100644
--- a/operator/controllers/loki/lokistack_controller.go
+++ b/operator/controllers/loki/lokistack_controller.go
@@ -3,6 +3,7 @@ package controllers
import (
"context"
"errors"
+ "time"
"github.com/go-logr/logr"
"github.com/google/go-cmp/cmp"
@@ -158,7 +159,7 @@ func (r *LokiStackReconciler) Reconcile(ctx context.Context, req ctrl.Request) (
return res, derr
}
- err = status.Refresh(ctx, r.Client, req)
+ err = status.Refresh(ctx, r.Client, req, time.Now())
if err != nil {
return ctrl.Result{}, err
}
diff --git a/operator/internal/status/components.go b/operator/internal/status/components.go
index 032f6d3e71fe5..a1bf2853ecba3 100644
--- a/operator/internal/status/components.go
+++ b/operator/internal/status/components.go
@@ -9,64 +9,54 @@ import (
"github.com/grafana/loki/operator/internal/manifests"
corev1 "k8s.io/api/core/v1"
- apierrors "k8s.io/apimachinery/pkg/api/errors"
- ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
-// SetComponentsStatus updates the pod status map component
-func SetComponentsStatus(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- var s lokiv1.LokiStack
- if err := k.Get(ctx, req.NamespacedName, &s); err != nil {
- if apierrors.IsNotFound(err) {
- return nil
- }
- return kverrors.Wrap(err, "failed to lookup lokistack", "name", req.NamespacedName)
- }
-
+// generateComponentStatus updates the pod status map component
+func generateComponentStatus(ctx context.Context, k k8s.Client, s *lokiv1.LokiStack) (*lokiv1.LokiStackComponentStatus, error) {
var err error
- s.Status.Components = lokiv1.LokiStackComponentStatus{}
- s.Status.Components.Compactor, err = appendPodStatus(ctx, k, manifests.LabelCompactorComponent, s.Name, s.Namespace)
+ result := &lokiv1.LokiStackComponentStatus{}
+ result.Compactor, err = appendPodStatus(ctx, k, manifests.LabelCompactorComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelCompactorComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelCompactorComponent)
}
- s.Status.Components.Querier, err = appendPodStatus(ctx, k, manifests.LabelQuerierComponent, s.Name, s.Namespace)
+ result.Querier, err = appendPodStatus(ctx, k, manifests.LabelQuerierComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelQuerierComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelQuerierComponent)
}
- s.Status.Components.Distributor, err = appendPodStatus(ctx, k, manifests.LabelDistributorComponent, s.Name, s.Namespace)
+ result.Distributor, err = appendPodStatus(ctx, k, manifests.LabelDistributorComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelDistributorComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelDistributorComponent)
}
- s.Status.Components.QueryFrontend, err = appendPodStatus(ctx, k, manifests.LabelQueryFrontendComponent, s.Name, s.Namespace)
+ result.QueryFrontend, err = appendPodStatus(ctx, k, manifests.LabelQueryFrontendComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelQueryFrontendComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelQueryFrontendComponent)
}
- s.Status.Components.IndexGateway, err = appendPodStatus(ctx, k, manifests.LabelIndexGatewayComponent, s.Name, s.Namespace)
+ result.IndexGateway, err = appendPodStatus(ctx, k, manifests.LabelIndexGatewayComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelIngesterComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelIngesterComponent)
}
- s.Status.Components.Ingester, err = appendPodStatus(ctx, k, manifests.LabelIngesterComponent, s.Name, s.Namespace)
+ result.Ingester, err = appendPodStatus(ctx, k, manifests.LabelIngesterComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelIndexGatewayComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelIndexGatewayComponent)
}
- s.Status.Components.Gateway, err = appendPodStatus(ctx, k, manifests.LabelGatewayComponent, s.Name, s.Namespace)
+ result.Gateway, err = appendPodStatus(ctx, k, manifests.LabelGatewayComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelGatewayComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelGatewayComponent)
}
- s.Status.Components.Ruler, err = appendPodStatus(ctx, k, manifests.LabelRulerComponent, s.Name, s.Namespace)
+ result.Ruler, err = appendPodStatus(ctx, k, manifests.LabelRulerComponent, s.Name, s.Namespace)
if err != nil {
- return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelRulerComponent)
+ return nil, kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelRulerComponent)
}
- return k.Status().Update(ctx, &s, &client.UpdateOptions{})
+ return result, nil
}
func appendPodStatus(ctx context.Context, k k8s.Client, component, stack, ns string) (lokiv1.PodStatusMap, error) {
diff --git a/operator/internal/status/components_test.go b/operator/internal/status/components_test.go
index dd3f05c4e0107..3e0987fbc8717 100644
--- a/operator/internal/status/components_test.go
+++ b/operator/internal/status/components_test.go
@@ -1,316 +1,137 @@
-package status_test
+package status
import (
"context"
+ "fmt"
"testing"
lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
"github.com/grafana/loki/operator/internal/external/k8s/k8sfakes"
- "github.com/grafana/loki/operator/internal/status"
+ "github.com/grafana/loki/operator/internal/manifests"
"github.com/stretchr/testify/require"
-
- v1 "k8s.io/api/core/v1"
- apierrors "k8s.io/apimachinery/pkg/api/errors"
+ corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/runtime/schema"
- "k8s.io/apimachinery/pkg/types"
- ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
-func TestSetComponentsStatus_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewBadRequest("something wasn't found")
- }
-
- err := status.SetComponentsStatus(context.TODO(), k, r)
- require.Error(t, err)
-}
-
-func TestSetComponentsStatus_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
+func createPodList(baseName string, phases ...corev1.PodPhase) *corev1.PodList {
+ items := []corev1.Pod{}
+ for i, p := range phases {
+ items = append(items, corev1.Pod{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: fmt.Sprintf("%s-pod-%d", baseName, i),
+ },
+ Status: corev1.PodStatus{
+ Phase: p,
+ },
+ })
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ return &corev1.PodList{
+ Items: items,
}
-
- err := status.SetComponentsStatus(context.TODO(), k, r)
- require.NoError(t, err)
}
-func TestSetComponentsStatus_WhenListReturnError_ReturnError(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
+func setupListClient(t *testing.T, stack *lokiv1.LokiStack, componentPods map[string]*corev1.PodList) (*k8sfakes.FakeClient, *k8sfakes.FakeStatusWriter) {
+ k, sw := setupFakesNoError(t, stack)
+ k.ListStub = func(_ context.Context, list client.ObjectList, options ...client.ListOption) error {
+ componentLabel := ""
+ for _, o := range options {
+ if m, ok := o.(client.MatchingLabels); ok {
+ componentLabel = m["app.kubernetes.io/component"]
+ }
}
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- k.ListStub = func(_ context.Context, l client.ObjectList, opts ...client.ListOption) error {
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- err := status.SetComponentsStatus(context.TODO(), k, r)
- require.Error(t, err)
-}
-
-func TestSetComponentsStatus_WhenPodListExisting_SetPodStatusMap(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
- k.StatusStub = func() client.StatusWriter { return sw }
-
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
+ if componentLabel == "" {
+ t.Fatalf("no component label on list call: %s", options)
}
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
- k.ListStub = func(_ context.Context, l client.ObjectList, _ ...client.ListOption) error {
- pods := v1.PodList{
- Items: []v1.Pod{
- {
- ObjectMeta: metav1.ObjectMeta{
- Name: "pod-a",
- },
- Status: v1.PodStatus{
- Phase: v1.PodPending,
- },
- },
- {
- ObjectMeta: metav1.ObjectMeta{
- Name: "pod-b",
- },
- Status: v1.PodStatus{
- Phase: v1.PodRunning,
- },
- },
- },
+ podList, ok := componentPods[componentLabel]
+ if !ok {
+ t.Fatalf("no pods found for label: %s", componentLabel)
}
- k.SetClientObjectList(l, &pods)
- return nil
- }
-
- expected := lokiv1.PodStatusMap{
- "Pending": []string{"pod-a"},
- "Running": []string{"pod-b"},
- }
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- stack := obj.(*lokiv1.LokiStack)
- require.Equal(t, expected, stack.Status.Components.Compactor)
+ k.SetClientObjectList(list, podList)
return nil
}
- err := status.SetComponentsStatus(context.TODO(), k, r)
- require.NoError(t, err)
- require.NotZero(t, k.ListCallCount())
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
+ return k, sw
}
-func TestSetComponentsStatus_WhenRulerEnabled_SetPodStatusMap(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- Spec: lokiv1.LokiStackSpec{
- Rules: &lokiv1.RulesSpec{
- Enabled: true,
+func TestGenerateComponentStatus(t *testing.T) {
+ tt := []struct {
+ desc string
+ componentPods map[string]*corev1.PodList
+ wantComponentStatus *lokiv1.LokiStackComponentStatus
+ }{
+ {
+ desc: "no pods",
+ componentPods: map[string]*corev1.PodList{
+ manifests.LabelCompactorComponent: {},
+ manifests.LabelDistributorComponent: {},
+ manifests.LabelIngesterComponent: {},
+ manifests.LabelQuerierComponent: {},
+ manifests.LabelQueryFrontendComponent: {},
+ manifests.LabelIndexGatewayComponent: {},
+ manifests.LabelRulerComponent: {},
+ manifests.LabelGatewayComponent: {},
},
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- k.ListStub = func(_ context.Context, l client.ObjectList, _ ...client.ListOption) error {
- pods := v1.PodList{
- Items: []v1.Pod{
- {
- ObjectMeta: metav1.ObjectMeta{
- Name: "pod-a",
- },
- Status: v1.PodStatus{
- Phase: v1.PodPending,
- },
- },
- {
- ObjectMeta: metav1.ObjectMeta{
- Name: "pod-b",
- },
- Status: v1.PodStatus{
- Phase: v1.PodRunning,
- },
- },
+ wantComponentStatus: &lokiv1.LokiStackComponentStatus{
+ Compactor: map[corev1.PodPhase][]string{},
+ Distributor: map[corev1.PodPhase][]string{},
+ IndexGateway: map[corev1.PodPhase][]string{},
+ Ingester: map[corev1.PodPhase][]string{},
+ Querier: map[corev1.PodPhase][]string{},
+ QueryFrontend: map[corev1.PodPhase][]string{},
+ Gateway: map[corev1.PodPhase][]string{},
+ Ruler: map[corev1.PodPhase][]string{},
},
- }
- k.SetClientObjectList(l, &pods)
- return nil
- }
-
- expected := lokiv1.PodStatusMap{
- "Pending": []string{"pod-a"},
- "Running": []string{"pod-b"},
- }
-
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- stack := obj.(*lokiv1.LokiStack)
- require.Equal(t, expected, stack.Status.Components.Ruler)
- return nil
- }
-
- err := status.SetComponentsStatus(context.TODO(), k, r)
- require.NoError(t, err)
- require.NotZero(t, k.ListCallCount())
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
-}
-
-func TestSetComponentsStatus_WhenRulerNotEnabled_DoNothing(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
},
- Spec: lokiv1.LokiStackSpec{
- Rules: &lokiv1.RulesSpec{
- Enabled: false,
+ {
+ desc: "all one pod running",
+ componentPods: map[string]*corev1.PodList{
+ manifests.LabelCompactorComponent: createPodList(manifests.LabelCompactorComponent, corev1.PodRunning),
+ manifests.LabelDistributorComponent: createPodList(manifests.LabelDistributorComponent, corev1.PodRunning),
+ manifests.LabelIngesterComponent: createPodList(manifests.LabelIngesterComponent, corev1.PodRunning),
+ manifests.LabelQuerierComponent: createPodList(manifests.LabelQuerierComponent, corev1.PodRunning),
+ manifests.LabelQueryFrontendComponent: createPodList(manifests.LabelQueryFrontendComponent, corev1.PodRunning),
+ manifests.LabelIndexGatewayComponent: createPodList(manifests.LabelIndexGatewayComponent, corev1.PodRunning),
+ manifests.LabelRulerComponent: createPodList(manifests.LabelRulerComponent, corev1.PodRunning),
+ manifests.LabelGatewayComponent: createPodList(manifests.LabelGatewayComponent, corev1.PodRunning),
+ },
+ wantComponentStatus: &lokiv1.LokiStackComponentStatus{
+ Compactor: map[corev1.PodPhase][]string{corev1.PodRunning: {"compactor-pod-0"}},
+ Distributor: map[corev1.PodPhase][]string{corev1.PodRunning: {"distributor-pod-0"}},
+ IndexGateway: map[corev1.PodPhase][]string{corev1.PodRunning: {"index-gateway-pod-0"}},
+ Ingester: map[corev1.PodPhase][]string{corev1.PodRunning: {"ingester-pod-0"}},
+ Querier: map[corev1.PodPhase][]string{corev1.PodRunning: {"querier-pod-0"}},
+ QueryFrontend: map[corev1.PodPhase][]string{corev1.PodRunning: {"query-frontend-pod-0"}},
+ Gateway: map[corev1.PodPhase][]string{corev1.PodRunning: {"lokistack-gateway-pod-0"}},
+ Ruler: map[corev1.PodPhase][]string{corev1.PodRunning: {"ruler-pod-0"}},
},
},
}
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ for _, tc := range tt {
+ tc := tc
+ t.Run(tc.desc, func(t *testing.T) {
+ t.Parallel()
- k.ListStub = func(_ context.Context, l client.ObjectList, o ...client.ListOption) error {
- s := o[0].(client.MatchingLabels)
+ stack := &lokiv1.LokiStack{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "my-stack",
+ Namespace: "some-ns",
+ },
+ }
- c, ok := s["app.kubernetes.io/component"]
- if !ok || c == "ruler" {
- return nil
- }
+ k, _ := setupListClient(t, stack, tc.componentPods)
- pods := v1.PodList{
- Items: []v1.Pod{
- {
- ObjectMeta: metav1.ObjectMeta{
- Name: "pod-a",
- },
- Status: v1.PodStatus{
- Phase: v1.PodPending,
- },
- },
- {
- ObjectMeta: metav1.ObjectMeta{
- Name: "pod-b",
- },
- Status: v1.PodStatus{
- Phase: v1.PodRunning,
- },
- },
- },
- }
- k.SetClientObjectList(l, &pods)
- return nil
- }
+ componentStatus, err := generateComponentStatus(context.Background(), k, stack)
+ require.NoError(t, err)
+ require.Equal(t, tc.wantComponentStatus, componentStatus)
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- stack := obj.(*lokiv1.LokiStack)
- require.Equal(t, stack.Status.Components.Ruler, lokiv1.PodStatusMap{})
- return nil
+ // one list call for each component
+ require.Equal(t, 8, k.ListCallCount())
+ })
}
-
- err := status.SetComponentsStatus(context.TODO(), k, r)
- require.NoError(t, err)
- require.NotZero(t, k.ListCallCount())
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
}
diff --git a/operator/internal/status/lokistack.go b/operator/internal/status/lokistack.go
index f0d06133720dd..1a34d7865831d 100644
--- a/operator/internal/status/lokistack.go
+++ b/operator/internal/status/lokistack.go
@@ -7,6 +7,7 @@ import (
"github.com/ViaQ/logerr/v2/kverrors"
lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
"github.com/grafana/loki/operator/internal/external/k8s"
+ corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/util/retry"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@@ -20,51 +21,33 @@ const (
messagePending = "Some LokiStack components pending on dependencies"
)
-// DegradedError contains information about why the managed LokiStack has an invalid configuration.
-type DegradedError struct {
- Message string
- Reason lokiv1.LokiStackConditionReason
- Requeue bool
-}
-
-func (e *DegradedError) Error() string {
- return fmt.Sprintf("cluster degraded: %s", e.Message)
-}
-
-// SetReadyCondition updates or appends the condition Ready to the lokistack status conditions.
-// In addition it resets all other Status conditions to false.
-func SetReadyCondition(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- ready := metav1.Condition{
- Type: string(lokiv1.ConditionReady),
- Message: messageReady,
- Reason: string(lokiv1.ReasonReadyComponents),
- }
-
- return updateCondition(ctx, k, req, ready)
-}
-
-// SetFailedCondition updates or appends the condition Failed to the lokistack status conditions.
-// In addition it resets all other Status conditions to false.
-func SetFailedCondition(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- failed := metav1.Condition{
+var (
+ conditionFailed = metav1.Condition{
Type: string(lokiv1.ConditionFailed),
Message: messageFailed,
Reason: string(lokiv1.ReasonFailedComponents),
}
-
- return updateCondition(ctx, k, req, failed)
-}
-
-// SetPendingCondition updates or appends the condition Pending to the lokistack status conditions.
-// In addition it resets all other Status conditions to false.
-func SetPendingCondition(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- pending := metav1.Condition{
+ conditionPending = metav1.Condition{
Type: string(lokiv1.ConditionPending),
Message: messagePending,
Reason: string(lokiv1.ReasonPendingComponents),
}
+ conditionReady = metav1.Condition{
+ Type: string(lokiv1.ConditionReady),
+ Message: messageReady,
+ Reason: string(lokiv1.ReasonReadyComponents),
+ }
+)
+
+// DegradedError contains information about why the managed LokiStack has an invalid configuration.
+type DegradedError struct {
+ Message string
+ Reason lokiv1.LokiStackConditionReason
+ Requeue bool
+}
- return updateCondition(ctx, k, req, pending)
+func (e *DegradedError) Error() string {
+ return fmt.Sprintf("cluster degraded: %s", e.Message)
}
// SetDegradedCondition appends the condition Degraded to the lokistack status conditions.
@@ -78,6 +61,38 @@ func SetDegradedCondition(ctx context.Context, k k8s.Client, req ctrl.Request, m
return updateCondition(ctx, k, req, degraded)
}
+func generateCondition(cs *lokiv1.LokiStackComponentStatus) metav1.Condition {
+ // Check for failed pods first
+ failed := len(cs.Compactor[corev1.PodFailed]) +
+ len(cs.Distributor[corev1.PodFailed]) +
+ len(cs.Ingester[corev1.PodFailed]) +
+ len(cs.Querier[corev1.PodFailed]) +
+ len(cs.QueryFrontend[corev1.PodFailed]) +
+ len(cs.Gateway[corev1.PodFailed]) +
+ len(cs.IndexGateway[corev1.PodFailed]) +
+ len(cs.Ruler[corev1.PodFailed])
+
+ if failed != 0 {
+ return conditionFailed
+ }
+
+ // Check for pending pods
+ pending := len(cs.Compactor[corev1.PodPending]) +
+ len(cs.Distributor[corev1.PodPending]) +
+ len(cs.Ingester[corev1.PodPending]) +
+ len(cs.Querier[corev1.PodPending]) +
+ len(cs.QueryFrontend[corev1.PodPending]) +
+ len(cs.Gateway[corev1.PodPending]) +
+ len(cs.IndexGateway[corev1.PodPending]) +
+ len(cs.Ruler[corev1.PodPending])
+
+ if pending != 0 {
+ return conditionPending
+ }
+
+ return conditionReady
+}
+
func updateCondition(ctx context.Context, k k8s.Client, req ctrl.Request, condition metav1.Condition) error {
var stack lokiv1.LokiStack
if err := k.Get(ctx, req.NamespacedName, &stack); err != nil {
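The precedence encoded in `generateCondition` above — any failed pod wins, otherwise any pending pod, otherwise ready — can be sketched standalone. This is a simplified illustration (the names `conditionFor`, `podFailed`, and `podPending` are invented for the sketch; the real code sums the per-component maps of `LokiStackComponentStatus`):

```go
package main

import "fmt"

// Pod phases, mirroring the corev1.PodPhase string values.
const (
	podFailed  = "Failed"
	podPending = "Pending"
)

// conditionFor mirrors the precedence in generateCondition: any failed pod
// makes the stack Failed, otherwise any pending pod makes it Pending,
// otherwise it is Ready.
func conditionFor(components []map[string][]string) string {
	count := func(phase string) int {
		n := 0
		for _, c := range components {
			n += len(c[phase])
		}
		return n
	}
	if count(podFailed) != 0 {
		return "Failed"
	}
	if count(podPending) != 0 {
		return "Pending"
	}
	return "Ready"
}

func main() {
	ingester := map[string][]string{podPending: {"ingester-0"}}
	fmt.Println(conditionFor([]map[string][]string{ingester})) // Pending
	fmt.Println(conditionFor(nil))                             // Ready
}
```

Note that a single failed pod anywhere overrides any number of pending pods, which is why the refactored code checks the failed counts first.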
diff --git a/operator/internal/status/lokistack_test.go b/operator/internal/status/lokistack_test.go
index 4208cd9c2dea9..2e2c64a6a1aeb 100644
--- a/operator/internal/status/lokistack_test.go
+++ b/operator/internal/status/lokistack_test.go
@@ -6,9 +6,8 @@ import (
lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
"github.com/grafana/loki/operator/internal/external/k8s/k8sfakes"
-
"github.com/stretchr/testify/require"
-
+ corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -39,391 +38,6 @@ func setupFakesNoError(t *testing.T, stack *lokiv1.LokiStack) (*k8sfakes.FakeCli
return k, sw
}
-func TestSetReadyCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k := &k8sfakes.FakeClient{}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewBadRequest("something wasn't found")
- }
-
- err := SetReadyCondition(context.Background(), k, r)
- require.Error(t, err)
-}
-
-func TestSetReadyCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k := &k8sfakes.FakeClient{}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- err := SetReadyCondition(context.Background(), k, r)
- require.NoError(t, err)
-}
-
-func TestSetReadyCondition_WhenExisting_DoNothing(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- Status: lokiv1.LokiStackStatus{
- Conditions: []metav1.Condition{
- {
- Type: string(lokiv1.ConditionReady),
- Message: messageReady,
- Reason: string(lokiv1.ReasonReadyComponents),
- Status: metav1.ConditionTrue,
- },
- },
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, _ := setupFakesNoError(t, &s)
-
- err := SetReadyCondition(context.Background(), k, r)
- require.NoError(t, err)
- require.Zero(t, k.StatusCallCount())
-}
-
-func TestSetReadyCondition_WhenExisting_SetReadyConditionTrue(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- Status: lokiv1.LokiStackStatus{
- Conditions: []metav1.Condition{
- {
- Type: string(lokiv1.ConditionReady),
- Status: metav1.ConditionFalse,
- },
- },
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, sw := setupFakesNoError(t, &s)
-
- err := SetReadyCondition(context.Background(), k, r)
- require.NoError(t, err)
-
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
-}
-
-func TestSetReadyCondition_WhenNoneExisting_AppendReadyCondition(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, sw := setupFakesNoError(t, &s)
-
- err := SetReadyCondition(context.Background(), k, r)
- require.NoError(t, err)
-
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
-}
-
-func TestSetFailedCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k := &k8sfakes.FakeClient{}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewBadRequest("something wasn't found")
- }
-
- err := SetFailedCondition(context.Background(), k, r)
- require.Error(t, err)
-}
-
-func TestSetFailedCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k := &k8sfakes.FakeClient{}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- err := SetFailedCondition(context.Background(), k, r)
- require.NoError(t, err)
-}
-
-func TestSetFailedCondition_WhenExisting_DoNothing(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- Status: lokiv1.LokiStackStatus{
- Conditions: []metav1.Condition{
- {
- Type: string(lokiv1.ConditionFailed),
- Reason: string(lokiv1.ReasonFailedComponents),
- Message: messageFailed,
- Status: metav1.ConditionTrue,
- },
- },
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, _ := setupFakesNoError(t, &s)
-
- err := SetFailedCondition(context.Background(), k, r)
- require.NoError(t, err)
- require.Zero(t, k.StatusCallCount())
-}
-
-func TestSetFailedCondition_WhenExisting_SetFailedConditionTrue(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- Status: lokiv1.LokiStackStatus{
- Conditions: []metav1.Condition{
- {
- Type: string(lokiv1.ConditionFailed),
- Status: metav1.ConditionFalse,
- },
- },
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, sw := setupFakesNoError(t, &s)
-
- err := SetFailedCondition(context.Background(), k, r)
- require.NoError(t, err)
-
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
-}
-
-func TestSetFailedCondition_WhenNoneExisting_AppendFailedCondition(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, sw := setupFakesNoError(t, &s)
-
- err := SetFailedCondition(context.Background(), k, r)
- require.NoError(t, err)
-
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
-}
-
-func TestSetDegradedCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- msg := "tell me nothing"
- reason := lokiv1.ReasonMissingObjectStorageSecret
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k := &k8sfakes.FakeClient{}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewBadRequest("something wasn't found")
- }
-
- err := SetDegradedCondition(context.Background(), k, r, msg, reason)
- require.Error(t, err)
-}
-
-func TestSetPendingCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k := &k8sfakes.FakeClient{}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewBadRequest("something wasn't found")
- }
-
- err := SetPendingCondition(context.Background(), k, r)
- require.Error(t, err)
-}
-
-func TestSetPendingCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k := &k8sfakes.FakeClient{}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- err := SetPendingCondition(context.Background(), k, r)
- require.NoError(t, err)
-}
-
-func TestSetPendingCondition_WhenExisting_DoNothing(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- Status: lokiv1.LokiStackStatus{
- Conditions: []metav1.Condition{
- {
- Type: string(lokiv1.ConditionPending),
- Reason: string(lokiv1.ReasonPendingComponents),
- Message: messagePending,
- Status: metav1.ConditionTrue,
- },
- },
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, _ := setupFakesNoError(t, &s)
-
- err := SetPendingCondition(context.Background(), k, r)
- require.NoError(t, err)
- require.Zero(t, k.StatusCallCount())
-}
-
-func TestSetPendingCondition_WhenExisting_SetPendingConditionTrue(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- Status: lokiv1.LokiStackStatus{
- Conditions: []metav1.Condition{
- {
- Type: string(lokiv1.ConditionPending),
- Status: metav1.ConditionFalse,
- },
- },
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, sw := setupFakesNoError(t, &s)
-
- err := SetPendingCondition(context.Background(), k, r)
- require.NoError(t, err)
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
-}
-
-func TestSetPendingCondition_WhenNoneExisting_AppendPendingCondition(t *testing.T) {
- s := lokiv1.LokiStack{
- ObjectMeta: metav1.ObjectMeta{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- r := ctrl.Request{
- NamespacedName: types.NamespacedName{
- Name: "my-stack",
- Namespace: "some-ns",
- },
- }
-
- k, sw := setupFakesNoError(t, &s)
-
- err := SetPendingCondition(context.Background(), k, r)
- require.NoError(t, err)
-
- require.NotZero(t, k.StatusCallCount())
- require.NotZero(t, sw.UpdateCallCount())
-}
-
func TestSetDegradedCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
msg := "tell me nothing"
reason := lokiv1.ReasonMissingObjectStorageSecret
@@ -537,3 +151,49 @@ func TestSetDegradedCondition_WhenNoneExisting_AppendDegradedCondition(t *testin
require.NotZero(t, k.StatusCallCount())
require.NotZero(t, sw.UpdateCallCount())
}
+
+func TestGenerateConditions(t *testing.T) {
+ tt := []struct {
+ desc string
+ componentStatus *lokiv1.LokiStackComponentStatus
+ wantCondition metav1.Condition
+ }{
+ {
+ desc: "no error",
+ componentStatus: &lokiv1.LokiStackComponentStatus{},
+ wantCondition: conditionReady,
+ },
+ {
+ desc: "container pending",
+ componentStatus: &lokiv1.LokiStackComponentStatus{
+ Ingester: map[corev1.PodPhase][]string{
+ corev1.PodPending: {
+ "pod-0",
+ },
+ },
+ },
+ wantCondition: conditionPending,
+ },
+ {
+ desc: "container failed",
+ componentStatus: &lokiv1.LokiStackComponentStatus{
+ Ingester: map[corev1.PodPhase][]string{
+ corev1.PodFailed: {
+ "pod-0",
+ },
+ },
+ },
+ wantCondition: conditionFailed,
+ },
+ }
+
+ for _, tc := range tt {
+ tc := tc
+ t.Run(tc.desc, func(t *testing.T) {
+ t.Parallel()
+
+ condition := generateCondition(tc.componentStatus)
+ require.Equal(t, tc.wantCondition, condition)
+ })
+ }
+}
diff --git a/operator/internal/status/status.go b/operator/internal/status/status.go
index 247aea4e325a9..ca4e7c1bf301e 100644
--- a/operator/internal/status/status.go
+++ b/operator/internal/status/status.go
@@ -2,12 +2,14 @@ package status
import (
"context"
+ "time"
"github.com/ViaQ/logerr/v2/kverrors"
lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
"github.com/grafana/loki/operator/internal/external/k8s"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/util/retry"
- corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
ctrl "sigs.k8s.io/controller-runtime"
)
@@ -15,56 +17,66 @@ import (
// Refresh executes an aggregate update of the LokiStack Status struct, i.e.
// - It recreates the Status.Components pod status map per component.
// - It sets the appropriate Status.Condition to true that matches the pod status maps.
-func Refresh(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- if err := SetComponentsStatus(ctx, k, req); err != nil {
- return err
- }
-
- var s lokiv1.LokiStack
- if err := k.Get(ctx, req.NamespacedName, &s); err != nil {
+func Refresh(ctx context.Context, k k8s.Client, req ctrl.Request, now time.Time) error {
+ var stack lokiv1.LokiStack
+ if err := k.Get(ctx, req.NamespacedName, &stack); err != nil {
if apierrors.IsNotFound(err) {
return nil
}
return kverrors.Wrap(err, "failed to lookup lokistack", "name", req.NamespacedName)
}
- cs := s.Status.Components
+ cs, err := generateComponentStatus(ctx, k, &stack)
+ if err != nil {
+ return err
+ }
+
+ condition := generateCondition(cs)
- // Check for failed pods first
- failed := len(cs.Compactor[corev1.PodFailed]) +
- len(cs.Distributor[corev1.PodFailed]) +
- len(cs.Ingester[corev1.PodFailed]) +
- len(cs.Querier[corev1.PodFailed]) +
- len(cs.QueryFrontend[corev1.PodFailed]) +
- len(cs.Gateway[corev1.PodFailed]) +
- len(cs.IndexGateway[corev1.PodFailed]) +
- len(cs.Ruler[corev1.PodFailed])
+ condition.LastTransitionTime = metav1.NewTime(now)
+ condition.Status = metav1.ConditionTrue
- unknown := len(cs.Compactor[corev1.PodUnknown]) +
- len(cs.Distributor[corev1.PodUnknown]) +
- len(cs.Ingester[corev1.PodUnknown]) +
- len(cs.Querier[corev1.PodUnknown]) +
- len(cs.QueryFrontend[corev1.PodUnknown]) +
- len(cs.Gateway[corev1.PodUnknown]) +
- len(cs.IndexGateway[corev1.PodUnknown]) +
- len(cs.Ruler[corev1.PodUnknown])
+ statusUpdater := func(stack *lokiv1.LokiStack) {
+ stack.Status.Components = *cs
- if failed != 0 || unknown != 0 {
- return SetFailedCondition(ctx, k, req)
- }
+ index := -1
+ for i := range stack.Status.Conditions {
+ // Reset all other conditions first
+ stack.Status.Conditions[i].Status = metav1.ConditionFalse
+ stack.Status.Conditions[i].LastTransitionTime = metav1.NewTime(now)
- // Check for pending pods
- pending := len(cs.Compactor[corev1.PodPending]) +
- len(cs.Distributor[corev1.PodPending]) +
- len(cs.Ingester[corev1.PodPending]) +
- len(cs.Querier[corev1.PodPending]) +
- len(cs.QueryFrontend[corev1.PodPending]) +
- len(cs.Gateway[corev1.PodPending]) +
- len(cs.IndexGateway[corev1.PodPending]) +
- len(cs.Ruler[corev1.PodPending])
+ // Locate an existing condition of the generated type, if any
+ if stack.Status.Conditions[i].Type == condition.Type {
+ index = i
+ }
+ }
- if pending != 0 {
- return SetPendingCondition(ctx, k, req)
+ if index == -1 {
+ stack.Status.Conditions = append(stack.Status.Conditions, condition)
+ } else {
+ stack.Status.Conditions[index] = condition
+ }
+ }
+
+ statusUpdater(&stack)
+ err = k.Status().Update(ctx, &stack)
+ switch {
+ case err == nil:
+ return nil
+ case apierrors.IsConflict(err):
+ // break into retry-logic below on conflict
+ break
+ case err != nil:
+ // return non-conflict errors
+ return err
}
- return SetReadyCondition(ctx, k, req)
+
+ return retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ if err := k.Get(ctx, req.NamespacedName, &stack); err != nil {
+ return err
+ }
+
+ statusUpdater(&stack)
+ return k.Status().Update(ctx, &stack)
+ })
}
diff --git a/operator/internal/status/status_test.go b/operator/internal/status/status_test.go
new file mode 100644
index 0000000000000..81ecc15345a4d
--- /dev/null
+++ b/operator/internal/status/status_test.go
@@ -0,0 +1,83 @@
+package status
+
+import (
+ "context"
+ "testing"
+ "time"
+
+ lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
+ "github.com/grafana/loki/operator/internal/manifests"
+ "github.com/stretchr/testify/require"
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+ ctrl "sigs.k8s.io/controller-runtime"
+)
+
+func TestRefreshSuccess(t *testing.T) {
+ now := time.Now()
+ stack := &lokiv1.LokiStack{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "my-stack",
+ Namespace: "some-ns",
+ },
+ }
+
+ req := ctrl.Request{
+ NamespacedName: types.NamespacedName{
+ Name: "my-stack",
+ Namespace: "some-ns",
+ },
+ }
+
+ componentPods := map[string]*corev1.PodList{
+ manifests.LabelCompactorComponent: createPodList(manifests.LabelCompactorComponent, corev1.PodRunning),
+ manifests.LabelDistributorComponent: createPodList(manifests.LabelDistributorComponent, corev1.PodRunning),
+ manifests.LabelIngesterComponent: createPodList(manifests.LabelIngesterComponent, corev1.PodRunning),
+ manifests.LabelQuerierComponent: createPodList(manifests.LabelQuerierComponent, corev1.PodRunning),
+ manifests.LabelQueryFrontendComponent: createPodList(manifests.LabelQueryFrontendComponent, corev1.PodRunning),
+ manifests.LabelIndexGatewayComponent: createPodList(manifests.LabelIndexGatewayComponent, corev1.PodRunning),
+ manifests.LabelRulerComponent: createPodList(manifests.LabelRulerComponent, corev1.PodRunning),
+ manifests.LabelGatewayComponent: createPodList(manifests.LabelGatewayComponent, corev1.PodRunning),
+ }
+
+ wantStatus := lokiv1.LokiStackStatus{
+ Components: lokiv1.LokiStackComponentStatus{
+ Compactor: map[corev1.PodPhase][]string{corev1.PodRunning: {"compactor-pod-0"}},
+ Distributor: map[corev1.PodPhase][]string{corev1.PodRunning: {"distributor-pod-0"}},
+ IndexGateway: map[corev1.PodPhase][]string{corev1.PodRunning: {"index-gateway-pod-0"}},
+ Ingester: map[corev1.PodPhase][]string{corev1.PodRunning: {"ingester-pod-0"}},
+ Querier: map[corev1.PodPhase][]string{corev1.PodRunning: {"querier-pod-0"}},
+ QueryFrontend: map[corev1.PodPhase][]string{corev1.PodRunning: {"query-frontend-pod-0"}},
+ Gateway: map[corev1.PodPhase][]string{corev1.PodRunning: {"lokistack-gateway-pod-0"}},
+ Ruler: map[corev1.PodPhase][]string{corev1.PodRunning: {"ruler-pod-0"}},
+ },
+ Storage: lokiv1.LokiStackStorageStatus{},
+ Conditions: []metav1.Condition{
+ {
+ Type: string(lokiv1.ConditionReady),
+ Reason: string(lokiv1.ReasonReadyComponents),
+ Message: messageReady,
+ Status: metav1.ConditionTrue,
+ LastTransitionTime: metav1.NewTime(now),
+ },
+ },
+ }
+
+ k, sw := setupListClient(t, stack, componentPods)
+
+ err := Refresh(context.Background(), k, req, now)
+
+ require.NoError(t, err)
+ require.Equal(t, 1, k.GetCallCount())
+ require.Equal(t, 8, k.ListCallCount())
+
+ require.Equal(t, 1, sw.UpdateCallCount())
+ _, updated, _ := sw.UpdateArgsForCall(0)
+ updatedStack, ok := updated.(*lokiv1.LokiStack)
+ if !ok {
+ t.Fatalf("not a LokiStack: %T", updatedStack)
+ }
+
+ require.Equal(t, wantStatus, updatedStack.Status)
+}
type: operator | masked commit message: Refactor status update to reduce API calls (#8578)

hash: 098127390fa89b5aebaa52f5076a9b95bbe1083c | date: 2024-12-04 14:03:45 | author: Owen Diehl | commit message: feat(blockbuilder): priority queue for job dispatching (#15245) | is_merge: false
diff --git a/pkg/blockbuilder/scheduler/prioritiy_queue_test.go b/pkg/blockbuilder/scheduler/prioritiy_queue_test.go
new file mode 100644
index 0000000000000..b27d950aa04b0
--- /dev/null
+++ b/pkg/blockbuilder/scheduler/prioritiy_queue_test.go
@@ -0,0 +1,193 @@
+package scheduler
+
+import (
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+func TestPriorityQueue(t *testing.T) {
+ t.Run("operations", func(t *testing.T) {
+ tests := []struct {
+ name string
+ input []int
+ wantPops []int
+ }{
+ {
+ name: "empty queue",
+ input: []int{},
+ wantPops: []int{},
+ },
+ {
+ name: "single element",
+ input: []int{1},
+ wantPops: []int{1},
+ },
+ {
+ name: "multiple elements in order",
+ input: []int{1, 2, 3},
+ wantPops: []int{1, 2, 3},
+ },
+ {
+ name: "multiple elements out of order",
+ input: []int{3, 1, 2},
+ wantPops: []int{1, 2, 3},
+ },
+ {
+ name: "duplicate elements",
+ input: []int{2, 1, 2, 1},
+ wantPops: []int{1, 1, 2, 2},
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ pq := NewPriorityQueue[int](func(a, b int) bool { return a < b })
+ require.Equal(t, 0, pq.Len())
+
+ // Push all elements
+ for _, v := range tt.input {
+ pq.Push(v)
+ }
+ require.Equal(t, len(tt.input), pq.Len())
+
+ // Pop all elements and verify order
+ got := make([]int, 0, len(tt.input))
+ for range tt.input {
+ v, ok := pq.Pop()
+ require.True(t, ok)
+ got = append(got, v)
+ }
+ require.Equal(t, tt.wantPops, got)
+
+ // Verify empty queue behavior
+ v, ok := pq.Pop()
+ require.False(t, ok)
+ require.Zero(t, v)
+ require.Equal(t, 0, pq.Len())
+ })
+ }
+ })
+
+ t.Run("custom type", func(t *testing.T) {
+ type Job struct {
+ ID string
+ Priority int
+ }
+
+ pq := NewPriorityQueue[Job](func(a, b Job) bool {
+ return a.Priority < b.Priority
+ })
+
+ jobs := []Job{
+ {ID: "high", Priority: 3},
+ {ID: "low", Priority: 1},
+ {ID: "medium", Priority: 2},
+ }
+
+ // Push all jobs
+ for _, j := range jobs {
+ pq.Push(j)
+ }
+
+ // Verify they come out in priority order
+ want := []string{"low", "medium", "high"}
+ got := make([]string, 0, len(jobs))
+ for range jobs {
+ j, ok := pq.Pop()
+ require.True(t, ok)
+ got = append(got, j.ID)
+ }
+ require.Equal(t, want, got)
+ })
+
+ t.Run("mixed operations", func(t *testing.T) {
+ pq := NewPriorityQueue[int](func(a, b int) bool { return a < b })
+
+ // Push some elements
+ pq.Push(3)
+ pq.Push(1)
+ require.Equal(t, 2, pq.Len())
+
+ // Pop lowest
+ v, ok := pq.Pop()
+ require.True(t, ok)
+ require.Equal(t, 1, v)
+
+ // Push more elements
+ pq.Push(2)
+ pq.Push(4)
+
+ // Verify remaining elements come out in order
+ want := []int{2, 3, 4}
+ got := make([]int, 0, 3)
+ for range want {
+ v, ok := pq.Pop()
+ require.True(t, ok)
+ got = append(got, v)
+ }
+ require.Equal(t, want, got)
+ })
+}
+
+func TestCircularBuffer(t *testing.T) {
+ tests := []struct {
+ name string
+ capacity int
+ input []int
+ wantPops []int
+ }{
+ {
+ name: "empty buffer",
+ capacity: 5,
+ input: []int{},
+ wantPops: []int{},
+ },
+ {
+ name: "partial fill",
+ capacity: 5,
+ input: []int{1, 2, 3},
+ wantPops: []int{1, 2, 3},
+ },
+ {
+ name: "full buffer",
+ capacity: 3,
+ input: []int{1, 2, 3},
+ wantPops: []int{1, 2, 3},
+ },
+ {
+ name: "overflow buffer",
+ capacity: 3,
+ input: []int{1, 2, 3, 4, 5},
+ wantPops: []int{3, 4, 5},
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ cb := NewCircularBuffer[int](tt.capacity)
+ require.Equal(t, 0, cb.Len())
+
+ // Push all elements
+ for _, v := range tt.input {
+ cb.Push(v)
+ }
+ require.Equal(t, min(tt.capacity, len(tt.input)), cb.Len())
+
+ // Pop all elements and verify order
+ got := make([]int, 0, cb.Len())
+ for cb.Len() > 0 {
+ v, ok := cb.Pop()
+ require.True(t, ok)
+ got = append(got, v)
+ }
+ require.Equal(t, tt.wantPops, got)
+
+ // Verify empty buffer behavior
+ v, ok := cb.Pop()
+ require.False(t, ok)
+ require.Zero(t, v)
+ require.Equal(t, 0, cb.Len())
+ })
+ }
+}
diff --git a/pkg/blockbuilder/scheduler/priority_queue.go b/pkg/blockbuilder/scheduler/priority_queue.go
new file mode 100644
index 0000000000000..3b488716cabe8
--- /dev/null
+++ b/pkg/blockbuilder/scheduler/priority_queue.go
@@ -0,0 +1,126 @@
+package scheduler
+
+import (
+ "container/heap"
+)
+
+// PriorityQueue is a generic priority queue.
+type PriorityQueue[T any] struct {
+ h *priorityHeap[T]
+}
+
+// NewPriorityQueue creates a new priority queue.
+func NewPriorityQueue[T any](less func(T, T) bool) *PriorityQueue[T] {
+ h := &priorityHeap[T]{
+ less: less,
+ heap: make([]T, 0),
+ }
+ heap.Init(h)
+ return &PriorityQueue[T]{h: h}
+}
+
+// Push adds an element to the queue.
+func (pq *PriorityQueue[T]) Push(v T) {
+ heap.Push(pq.h, v)
+}
+
+// Pop removes and returns the element with the highest priority from the queue.
+func (pq *PriorityQueue[T]) Pop() (T, bool) {
+ if pq.Len() == 0 {
+ var zero T
+ return zero, false
+ }
+ return heap.Pop(pq.h).(T), true
+}
+
+// Len returns the number of elements in the queue.
+func (pq *PriorityQueue[T]) Len() int {
+ return pq.h.Len()
+}
+
+// priorityHeap is the internal heap implementation that satisfies heap.Interface.
+type priorityHeap[T any] struct {
+ less func(T, T) bool
+ heap []T
+}
+
+func (h *priorityHeap[T]) Len() int {
+ return len(h.heap)
+}
+
+func (h *priorityHeap[T]) Less(i, j int) bool {
+ return h.less(h.heap[i], h.heap[j])
+}
+
+func (h *priorityHeap[T]) Swap(i, j int) {
+ h.heap[i], h.heap[j] = h.heap[j], h.heap[i]
+}
+
+func (h *priorityHeap[T]) Push(x any) {
+ h.heap = append(h.heap, x.(T))
+}
+
+func (h *priorityHeap[T]) Pop() any {
+ old := h.heap
+ n := len(old)
+ x := old[n-1]
+ h.heap = old[0 : n-1]
+ return x
+}
+
+// CircularBuffer is a generic circular buffer.
+type CircularBuffer[T any] struct {
+ buffer []T
+ size int
+ head int
+ tail int
+}
+
+// NewCircularBuffer creates a new circular buffer with the given capacity.
+func NewCircularBuffer[T any](capacity int) *CircularBuffer[T] {
+ return &CircularBuffer[T]{
+ buffer: make([]T, capacity),
+ size: 0,
+ head: 0,
+ tail: 0,
+ }
+}
+
+// Push adds an element to the circular buffer and returns the evicted element if any
+func (b *CircularBuffer[T]) Push(v T) (T, bool) {
+ var evicted T
+ hasEvicted := false
+
+ if b.size == len(b.buffer) {
+ // If buffer is full, evict the oldest element (at head)
+ evicted = b.buffer[b.head]
+ hasEvicted = true
+ b.head = (b.head + 1) % len(b.buffer)
+ } else {
+ b.size++
+ }
+
+ b.buffer[b.tail] = v
+ b.tail = (b.tail + 1) % len(b.buffer)
+
+ return evicted, hasEvicted
+}
+
+// Pop removes and returns the oldest element from the buffer
+func (b *CircularBuffer[T]) Pop() (T, bool) {
+ if b.size == 0 {
+ var zero T
+ return zero, false
+ }
+
+ v := b.buffer[b.head]
+ b.head = (b.head + 1) % len(b.buffer)
+ b.size--
+
+ return v, true
+}
+
+// Len returns the number of elements in the buffer
+func (b *CircularBuffer[T]) Len() int {
+ return b.size
+}
diff --git a/pkg/blockbuilder/scheduler/queue.go b/pkg/blockbuilder/scheduler/queue.go
index e2f125ad70a07..dab46f164908d 100644
--- a/pkg/blockbuilder/scheduler/queue.go
+++ b/pkg/blockbuilder/scheduler/queue.go
@@ -3,30 +3,58 @@ package scheduler
import (
"fmt"
"sync"
+ "time"
"github.com/grafana/loki/v3/pkg/blockbuilder/types"
)
-// jobAssignment tracks a job and its assigned builder
-type jobAssignment struct {
+const (
+ defaultCompletedJobsCapacity = 100
+)
+
+// JobWithPriority wraps a job with a priority value
+type JobWithPriority[T comparable] struct {
+ Job *types.Job
+ Priority T
+}
+
+// NewJobWithPriority creates a new JobWithPriority instance
+func NewJobWithPriority[T comparable](job *types.Job, priority T) *JobWithPriority[T] {
+ return &JobWithPriority[T]{
+ Job: job,
+ Priority: priority,
+ }
+}
+
+// inProgressJob contains a job and its start time
+type inProgressJob struct {
job *types.Job
- builderID string
+ startTime time.Time
+}
+
+// Duration returns how long the job has been running
+func (j *inProgressJob) Duration() time.Duration {
+ return time.Since(j.startTime)
}
// JobQueue manages the queue of pending jobs and tracks their state.
type JobQueue struct {
- pending map[string]*types.Job // Jobs waiting to be processed, key is job ID
- inProgress map[string]*jobAssignment // job ID -> assignment info
- completed map[string]*types.Job // Completed jobs, key is job ID
+ pending *PriorityQueue[*JobWithPriority[int]] // Jobs waiting to be processed, ordered by priority
+ inProgress map[string]*inProgressJob // Jobs currently being processed, key is job ID
+ completed *CircularBuffer[*types.Job] // Last N completed jobs
+ statusMap map[string]types.JobStatus // Maps job ID to its current status
mu sync.RWMutex
}
// NewJobQueue creates a new job queue instance
func NewJobQueue() *JobQueue {
return &JobQueue{
- pending: make(map[string]*types.Job),
- inProgress: make(map[string]*jobAssignment),
- completed: make(map[string]*types.Job),
+ pending: NewPriorityQueue[*JobWithPriority[int]](func(a, b *JobWithPriority[int]) bool {
+ return a.Priority > b.Priority // Higher priority first
+ }),
+ inProgress: make(map[string]*inProgressJob),
+ completed: NewCircularBuffer[*types.Job](defaultCompletedJobsCapacity),
+ statusMap: make(map[string]types.JobStatus),
}
}
@@ -34,92 +62,81 @@ func (q *JobQueue) Exists(job *types.Job) (types.JobStatus, bool) {
q.mu.RLock()
defer q.mu.RUnlock()
- if _, ok := q.inProgress[job.ID]; ok {
- return types.JobStatusInProgress, true
- }
-
- if _, ok := q.pending[job.ID]; ok {
- return types.JobStatusPending, true
- }
-
- if _, ok := q.completed[job.ID]; ok {
- return types.JobStatusComplete, true
- }
-
- return -1, false
+ status, exists := q.statusMap[job.ID]
+ return status, exists
}
-// Enqueue adds a new job to the pending queue
-// This is a naive implementation, intended to be refactored
-func (q *JobQueue) Enqueue(job *types.Job) error {
+// Enqueue adds a new job to the pending queue with a priority
+func (q *JobQueue) Enqueue(job *types.Job, priority int) error {
q.mu.Lock()
defer q.mu.Unlock()
- if _, exists := q.pending[job.ID]; exists {
- return fmt.Errorf("job %s already exists in pending queue", job.ID)
- }
- if _, exists := q.inProgress[job.ID]; exists {
- return fmt.Errorf("job %s already exists in progress", job.ID)
- }
- if _, exists := q.completed[job.ID]; exists {
- return fmt.Errorf("job %s already completed", job.ID)
+ // Check if job already exists
+ if status, exists := q.statusMap[job.ID]; exists {
+ return fmt.Errorf("job %s already exists with status %v", job.ID, status)
}
- q.pending[job.ID] = job
+ jobWithPriority := NewJobWithPriority(job, priority)
+ q.pending.Push(jobWithPriority)
+ q.statusMap[job.ID] = types.JobStatusPending
return nil
}
// Dequeue gets the next available job and assigns it to a builder
-func (q *JobQueue) Dequeue(builderID string) (*types.Job, bool, error) {
+func (q *JobQueue) Dequeue(_ string) (*types.Job, bool, error) {
q.mu.Lock()
defer q.mu.Unlock()
- // Simple FIFO for now
- for id, job := range q.pending {
- delete(q.pending, id)
- q.inProgress[id] = &jobAssignment{
- job: job,
- builderID: builderID,
- }
- return job, true, nil
+ if q.pending.Len() == 0 {
+ return nil, false, nil
+ }
+
+ jobWithPriority, ok := q.pending.Pop()
+ if !ok {
+ return nil, false, nil
+ }
+
+ // Add to in-progress with current time
+ q.inProgress[jobWithPriority.Job.ID] = &inProgressJob{
+ job: jobWithPriority.Job,
+ startTime: time.Now(),
}
+ q.statusMap[jobWithPriority.Job.ID] = types.JobStatusInProgress
- return nil, false, nil
+ return jobWithPriority.Job, true, nil
}
// MarkComplete moves a job from in-progress to completed
-func (q *JobQueue) MarkComplete(jobID string, builderID string) error {
+func (q *JobQueue) MarkComplete(jobID string) {
q.mu.Lock()
defer q.mu.Unlock()
- assignment, exists := q.inProgress[jobID]
- if !exists {
- return fmt.Errorf("job %s not found in progress", jobID)
+	// Find job in in-progress map
+	inProgressJob, exists := q.inProgress[jobID]
+	// If it doesn't exist, it may have been removed already (duplicate job
+	// execution) or the scheduler may have restarted and lost the job state;
+	// record the status and return early to avoid dereferencing a nil entry.
+	if !exists {
+		q.statusMap[jobID] = types.JobStatusComplete
+		return
	}
-	if assignment.builderID != builderID {
-		return fmt.Errorf("job %s not assigned to builder %s", jobID, builderID)
+	// Remove from in-progress
+	delete(q.inProgress, jobID)
+	// Add to completed buffer and handle evicted job
+	if evictedJob, hasEvicted := q.completed.Push(inProgressJob.job); hasEvicted {
+		// Remove evicted job from status map
+		delete(q.statusMap, evictedJob.ID)
	}
-
- delete(q.inProgress, jobID)
- q.completed[jobID] = assignment.job
- return nil
+ q.statusMap[jobID] = types.JobStatusComplete
}
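`CircularBuffer[T].Push` is relied on above to report the evicted element so `MarkComplete` can also drop that job from `statusMap`. A fixed-capacity ring with that eviction-reporting contract might look like the following sketch (names are hypothetical, not Loki's API):

```go
package main

import "fmt"

// circularBuffer keeps the last N items; push reports the evicted item, if
// any, so callers can clean up bookkeeping (as MarkComplete does with the
// status map in the diff above).
type circularBuffer[T any] struct {
	items []T
	start int
	size  int
}

func newCircularBuffer[T any](capacity int) *circularBuffer[T] {
	return &circularBuffer[T]{items: make([]T, capacity)}
}

// push appends v; when full, the oldest item is overwritten and returned.
func (b *circularBuffer[T]) push(v T) (evicted T, hasEvicted bool) {
	if b.size == len(b.items) {
		evicted, hasEvicted = b.items[b.start], true
		b.items[b.start] = v
		b.start = (b.start + 1) % len(b.items)
		return evicted, hasEvicted
	}
	b.items[(b.start+b.size)%len(b.items)] = v
	b.size++
	return evicted, false
}

func main() {
	b := newCircularBuffer[int](2)
	for _, v := range []int{1, 2, 3} {
		if old, ok := b.push(v); ok {
			fmt.Println("evicted", old) // evicted 1
		}
	}
}
```

Reporting the evicted value from `push` lets both structures be updated inside one lock-held critical section, so `statusMap` never grows past the buffer capacity plus the pending and in-progress jobs.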
-// SyncJob updates the state of an in-progress job
-func (q *JobQueue) SyncJob(jobID string, builderID string, job *types.Job) error {
+// SyncJob registers a job as in-progress, used for restoring state after scheduler restarts
+func (q *JobQueue) SyncJob(jobID string, _ string, job *types.Job) {
q.mu.Lock()
defer q.mu.Unlock()
- assignment, exists := q.inProgress[jobID]
- if !exists {
- return fmt.Errorf("job %s not found in progress", jobID)
- }
-
- if assignment.builderID != builderID {
- return fmt.Errorf("job %s not assigned to builder %s", jobID, builderID)
+ // Add directly to in-progress
+ q.inProgress[jobID] = &inProgressJob{
+ job: job,
+ startTime: time.Now(),
}
-
- assignment.job = job
- return nil
+ q.statusMap[jobID] = types.JobStatusInProgress
}
diff --git a/pkg/blockbuilder/scheduler/scheduler.go b/pkg/blockbuilder/scheduler/scheduler.go
index dbf732742de39..96356515a921f 100644
--- a/pkg/blockbuilder/scheduler/scheduler.go
+++ b/pkg/blockbuilder/scheduler/scheduler.go
@@ -114,12 +114,12 @@ func (s *BlockScheduler) runOnce(ctx context.Context) error {
for _, job := range jobs {
// TODO: end offset keeps moving each time we plan jobs, maybe we should not use it as part of the job ID
- if status, ok := s.queue.Exists(&job); ok {
+ if status, ok := s.queue.Exists(job.Job); ok {
level.Debug(s.logger).Log("msg", "job already exists", "job", job, "status", status)
continue
}
- if err := s.queue.Enqueue(&job); err != nil {
+ if err := s.queue.Enqueue(job.Job, job.Priority); err != nil {
level.Error(s.logger).Log("msg", "failed to enqueue job", "job", job, "err", err)
}
}
@@ -144,13 +144,15 @@ func (s *BlockScheduler) HandleGetJob(ctx context.Context, builderID string) (*t
}
}
-func (s *BlockScheduler) HandleCompleteJob(_ context.Context, builderID string, job *types.Job) error {
+func (s *BlockScheduler) HandleCompleteJob(_ context.Context, _ string, job *types.Job) error {
// TODO: handle commits
- return s.queue.MarkComplete(job.ID, builderID)
+ s.queue.MarkComplete(job.ID)
+ return nil
}
func (s *BlockScheduler) HandleSyncJob(_ context.Context, builderID string, job *types.Job) error {
- return s.queue.SyncJob(job.ID, builderID, job)
+ s.queue.SyncJob(job.ID, builderID, job)
+ return nil
}
// unimplementedScheduler provides default implementations that panic.
diff --git a/pkg/blockbuilder/scheduler/scheduler_test.go b/pkg/blockbuilder/scheduler/scheduler_test.go
index 35e53ee255993..2d857d06a2fe9 100644
--- a/pkg/blockbuilder/scheduler/scheduler_test.go
+++ b/pkg/blockbuilder/scheduler/scheduler_test.go
@@ -39,7 +39,7 @@ func TestScheduleAndProcessJob(t *testing.T) {
// Create and enqueue a test job
job := types.NewJob(1, types.Offsets{Min: 100, Max: 200})
- err := env.queue.Enqueue(job)
+ err := env.queue.Enqueue(job, 100)
if err != nil {
t.Fatalf("failed to enqueue job: %v", err)
}
@@ -98,11 +98,11 @@ func TestMultipleBuilders(t *testing.T) {
job2 := types.NewJob(2, types.Offsets{Min: 300, Max: 400})
// Enqueue jobs
- err := env1.queue.Enqueue(job1)
+ err := env1.queue.Enqueue(job1, 100)
if err != nil {
t.Fatalf("failed to enqueue job1: %v", err)
}
- err = env1.queue.Enqueue(job2)
+ err = env1.queue.Enqueue(job2, 100)
if err != nil {
t.Fatalf("failed to enqueue job2: %v", err)
}
diff --git a/pkg/blockbuilder/scheduler/strategy.go b/pkg/blockbuilder/scheduler/strategy.go
index 5ea1fb6db2d9c..8824c16f510ea 100644
--- a/pkg/blockbuilder/scheduler/strategy.go
+++ b/pkg/blockbuilder/scheduler/strategy.go
@@ -2,6 +2,7 @@ package scheduler
import (
"context"
+ "sort"
"time"
"github.com/go-kit/log"
@@ -19,7 +20,7 @@ type OffsetReader interface {
type Planner interface {
Name() string
- Plan(ctx context.Context) ([]types.Job, error)
+ Plan(ctx context.Context) ([]*JobWithPriority[int], error)
}
const (
@@ -44,31 +45,35 @@ func (p *RecordCountPlanner) Name() string {
return RecordCountStrategy
}
-func (p *RecordCountPlanner) Plan(ctx context.Context) ([]types.Job, error) {
+func (p *RecordCountPlanner) Plan(ctx context.Context) ([]*JobWithPriority[int], error) {
offsets, err := p.offsetReader.GroupLag(ctx)
if err != nil {
level.Error(p.logger).Log("msg", "failed to get group lag", "err", err)
return nil, err
}
- jobs := make([]types.Job, 0, len(offsets))
- for _, partition := range offsets {
+ var jobs []*JobWithPriority[int]
+ for _, partitionOffset := range offsets {
// kadm.GroupMemberLag contains valid Commit.At even when consumer group never committed any offset.
// no additional validation is needed here
- startOffset := partition.Commit.At + 1
- endOffset := min(startOffset+p.targetRecordCount, partition.End.Offset)
+ startOffset := partitionOffset.Commit.At + 1
+ endOffset := min(startOffset+p.targetRecordCount, partitionOffset.End.Offset)
- job := types.Job{
- Partition: int(partition.Partition),
- Offsets: types.Offsets{
+ job := NewJobWithPriority(
+ types.NewJob(int(partitionOffset.Partition), types.Offsets{
Min: startOffset,
Max: endOffset,
- },
- }
+ }), int(partitionOffset.End.Offset-startOffset),
+ )
jobs = append(jobs, job)
}
+ // Sort jobs by partition number to ensure consistent ordering
+ sort.Slice(jobs, func(i, j int) bool {
+ return jobs[i].Job.Partition < jobs[j].Job.Partition
+ })
+
return jobs, nil
}
@@ -98,7 +103,7 @@ func (p *TimeRangePlanner) Name() string {
return TimeRangeStrategy
}
-func (p *TimeRangePlanner) Plan(ctx context.Context) ([]types.Job, error) {
+func (p *TimeRangePlanner) Plan(ctx context.Context) ([]*JobWithPriority[int], error) {
// truncate to the nearest Interval
consumeUptoTS := p.now().Add(-p.buffer).Truncate(p.targetPeriod)
@@ -115,7 +120,7 @@ func (p *TimeRangePlanner) Plan(ctx context.Context) ([]types.Job, error) {
return nil, err
}
- var jobs []types.Job
+ var jobs []*JobWithPriority[int]
for _, partitionOffset := range offsets {
startOffset := partitionOffset.Commit.At + 1
// TODO: we could further break down the work into Interval sized chunks if this partition has pending records spanning a long time range
@@ -129,14 +134,20 @@ func (p *TimeRangePlanner) Plan(ctx context.Context) ([]types.Job, error) {
continue
}
- jobs = append(jobs, types.Job{
- Partition: int(partitionOffset.Partition),
- Offsets: types.Offsets{
+ job := NewJobWithPriority(
+ types.NewJob(int(partitionOffset.Partition), types.Offsets{
Min: startOffset,
Max: endOffset,
- },
- })
+ }), int(endOffset-startOffset),
+ )
+
+ jobs = append(jobs, job)
}
+ // Sort jobs by partition number to ensure consistent ordering
+ sort.Slice(jobs, func(i, j int) bool {
+ return jobs[i].Job.Partition < jobs[j].Job.Partition
+ })
+
return jobs, nil
}
diff --git a/pkg/blockbuilder/scheduler/strategy_test.go b/pkg/blockbuilder/scheduler/strategy_test.go
index d777113433f35..30cbd1ee8a172 100644
--- a/pkg/blockbuilder/scheduler/strategy_test.go
+++ b/pkg/blockbuilder/scheduler/strategy_test.go
@@ -17,16 +17,11 @@ func TestTimeRangePlanner_Plan(t *testing.T) {
for _, tc := range []struct {
name string
now time.Time
- expectedJobs []types.Job
+ expectedJobs []*JobWithPriority[int]
groupLag map[int32]kadm.GroupMemberLag
consumeUpto map[int32]kadm.ListedOffset
}{
{
- // Interval 1
- // now: 00:42:00. consume until 00:15:00
- // last consumed offset 100 with record ts: 00:10:00
- // record offset with ts after 00:15:00 - offset 200
- // resulting jobs: [100, 200]
name: "normal case. schedule first interval",
now: time.Date(0, 0, 0, 0, 42, 0, 0, time.UTC), // 00:42:00
groupLag: map[int32]kadm.GroupMemberLag{
@@ -42,19 +37,14 @@ func TestTimeRangePlanner_Plan(t *testing.T) {
Offset: 200,
},
},
- expectedJobs: []types.Job{
- {
- Partition: 0,
- Offsets: types.Offsets{Min: 101, Max: 200},
- },
+ expectedJobs: []*JobWithPriority[int]{
+ NewJobWithPriority(
+ types.NewJob(0, types.Offsets{Min: 101, Max: 200}),
+ 99, // 200-101
+ ),
},
},
{
- // Interval 2
- // now: 00:46:00. consume until 00:30:00
- // last consumed offset 199 with record ts: 00:11:00
- // record offset with ts after 00:30:00 - offset 300
- // resulting jobs: [200, 300]
name: "normal case. schedule second interval",
now: time.Date(0, 0, 0, 0, 46, 0, 0, time.UTC), // 00:46:00
groupLag: map[int32]kadm.GroupMemberLag{
@@ -79,23 +69,18 @@ func TestTimeRangePlanner_Plan(t *testing.T) {
Offset: 123,
},
},
- expectedJobs: []types.Job{
- {
- Partition: 0,
- Offsets: types.Offsets{Min: 200, Max: 300},
- },
- {
- Partition: 1,
- Offsets: types.Offsets{Min: 12, Max: 123},
- },
+ expectedJobs: []*JobWithPriority[int]{
+ NewJobWithPriority(
+ types.NewJob(0, types.Offsets{Min: 200, Max: 300}),
+ 100, // 300-200
+ ),
+ NewJobWithPriority(
+ types.NewJob(1, types.Offsets{Min: 12, Max: 123}),
+ 111, // 123-12
+ ),
},
},
{
- // Interval 2 - run scheduling again
- // now: 00:48:00. consume until 00:30:00
- // last consumed offset 299
- // record offset with ts after 00:30:00 - offset 300
- // no jobs to schedule for partition 0
name: "no pending records to consume. schedule second interval once more time",
now: time.Date(0, 0, 0, 0, 48, 0, 0, time.UTC), // 00:48:00
groupLag: map[int32]kadm.GroupMemberLag{
@@ -116,16 +101,15 @@ func TestTimeRangePlanner_Plan(t *testing.T) {
0: {
Offset: 300,
},
- // still pending. assume no builder were assigned
1: {
Offset: 123,
},
},
- expectedJobs: []types.Job{
- {
- Partition: 1,
- Offsets: types.Offsets{Min: 12, Max: 123},
- },
+ expectedJobs: []*JobWithPriority[int]{
+ NewJobWithPriority(
+ types.NewJob(1, types.Offsets{Min: 12, Max: 123}),
+ 111, // 123-12
+ ),
},
},
} {
type: feat
masked_commit_message: priority queue for job dispatching (#15245)

hash: 4eddcc1c8c16dccb437447b337f48c9b4e921f3a
date: 2019-08-13 02:06:37
author: Ed
commit_message: chore(packaging): avoid running containers from Dockerfile
is_merge: false
diff --git a/cmd/docker-driver/Dockerfile b/cmd/docker-driver/Dockerfile
index b437d74b4e717..c4e92e7d0adc6 100644
--- a/cmd/docker-driver/Dockerfile
+++ b/cmd/docker-driver/Dockerfile
@@ -7,7 +7,7 @@ ARG BUILD_IMAGE=grafana/loki-build-image:latest
FROM $BUILD_IMAGE as build
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && make cmd/docker-driver/docker-driver
+RUN make clean && make BUILD_IN_CONTAINER=false cmd/docker-driver/docker-driver
FROM alpine:3.9
RUN apk add --update --no-cache ca-certificates
diff --git a/cmd/loki-canary/Dockerfile b/cmd/loki-canary/Dockerfile
index 7c6e264c824ed..f769def8895ea 100644
--- a/cmd/loki-canary/Dockerfile
+++ b/cmd/loki-canary/Dockerfile
@@ -10,7 +10,7 @@ FROM --platform=linux/amd64 $BUILD_IMAGE as build
COPY --from=goenv /goarch /goarm /
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make loki-canary
+RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false loki-canary
FROM alpine:3.9
RUN apk add --update --no-cache ca-certificates
diff --git a/cmd/loki/Dockerfile b/cmd/loki/Dockerfile
index c90e66ae2734d..213275263cb51 100644
--- a/cmd/loki/Dockerfile
+++ b/cmd/loki/Dockerfile
@@ -10,7 +10,7 @@ FROM --platform=linux/amd64 $BUILD_IMAGE as build
COPY --from=goenv /goarch /goarm /
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make loki
+RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false loki
FROM alpine:3.9
RUN apk add --update --no-cache ca-certificates
diff --git a/cmd/loki/Dockerfile.debug b/cmd/loki/Dockerfile.debug
index 58ff97d27b898..222d3012282d9 100644
--- a/cmd/loki/Dockerfile.debug
+++ b/cmd/loki/Dockerfile.debug
@@ -6,7 +6,7 @@ FROM grafana/loki-build-image as build
ARG GOARCH="amd64"
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && make loki-debug
+RUN make clean && make BUILD_IN_CONTAINER=false loki-debug
FROM alpine:3.9
RUN apk add --update --no-cache ca-certificates
diff --git a/cmd/promtail/Dockerfile b/cmd/promtail/Dockerfile
index 259f39ce3b043..606343265558c 100644
--- a/cmd/promtail/Dockerfile
+++ b/cmd/promtail/Dockerfile
@@ -10,7 +10,7 @@ FROM --platform=linux/amd64 $BUILD_IMAGE as build
COPY --from=goenv /goarch /goarm /
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make promtail
+RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false promtail
# Promtail requires debian as the base image to support systemd journal reading
FROM debian:stretch-slim
diff --git a/cmd/promtail/Dockerfile.debug b/cmd/promtail/Dockerfile.debug
index fc4d3fe35d778..f6c19aea1d9ac 100644
--- a/cmd/promtail/Dockerfile.debug
+++ b/cmd/promtail/Dockerfile.debug
@@ -6,7 +6,7 @@ FROM grafana/loki-build-image as build
ARG GOARCH="amd64"
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && make promtail-debug
+RUN make clean && make BUILD_IN_CONTAINER=false promtail-debug
FROM alpine:3.9
type: chore
masked_commit_message: avoid running containers from Dockerfile

hash: fac5997b18e3fb07f92c20f4fa429213574e49cf
date: 2024-02-20 15:39:06
author: Kaviraj Kanagaraj
commit_message: feat: Support split align and caching for instant metric query results (#11814)
is_merge: false
diff --git a/.gitignore b/.gitignore
index 66eb0a8cefeb2..83ab9c808d348 100644
--- a/.gitignore
+++ b/.gitignore
@@ -27,8 +27,8 @@ cmd/querytee/querytee
dlv
rootfs/
dist
-coverage.txt
-test_results.txt
+*coverage.txt
+*test_results.txt
.DS_Store
.aws-sam
.idea
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8abd9a846458b..fa8861228407f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,7 @@
##### Enhancements
+* [11814](https://github.com/grafana/loki/pull/11814) **kavirajk**: feat: Support split align and caching for instant metric query results
* [11851](https://github.com/grafana/loki/pull/11851) **elcomtik**: Helm: Allow the definition of resources for GrafanaAgent pods.
* [11819](https://github.com/grafana/loki/pull/11819) **jburnham**: Ruler: Add the ability to disable the `X-Scope-OrgId` tenant identification header in remote write requests.
* [11633](https://github.com/grafana/loki/pull/11633) **cyriltovena**: Add profiling integrations to tracing instrumentation.
@@ -70,7 +71,7 @@
* [11657](https://github.com/grafana/loki/pull/11657) **ashwanthgoli** Log results cache: compose empty response based on the request being served to avoid returning incorrect limit or direction.
* [11587](https://github.com/grafana/loki/pull/11587) **trevorwhitney** Fix semantics of label parsing logic of metrics and logs queries. Both only parse the first label if multiple extractions into the same label are requested.
* [11776](https://github.com/grafana/loki/pull/11776) **ashwanthgoli** Background Cache: Fixes a bug that is causing the background queue size to be incremented twice for each enqueued item.
-* [11921](https://github.com/grafana/loki/pull/11921) **paul1r**: Parsing: String array elements were not being parsed correctly in JSON processing
+* [11921](https://github.com/grafana/loki/pull/11921) **paul1r**: Parsing: String array elements were not being parsed correctly in JSON processing
##### Changes
diff --git a/cmd/loki/loki-local-with-memcached.yaml b/cmd/loki/loki-local-with-memcached.yaml
index d1b0ae1c2493c..a2f4336cdd484 100644
--- a/cmd/loki/loki-local-with-memcached.yaml
+++ b/cmd/loki/loki-local-with-memcached.yaml
@@ -22,6 +22,17 @@ query_range:
cache_results: true
cache_volume_results: true
cache_series_results: true
+ cache_instant_metric_results: true
+ instant_metric_query_split_align: true
+ instant_metric_results_cache:
+ cache:
+ default_validity: 12h
+ memcached_client:
+ consistent_hash: true
+ addresses: "dns+localhost:11211"
+ max_idle_conns: 16
+ timeout: 500ms
+ update_interval: 1m
series_results_cache:
cache:
default_validity: 12h
diff --git a/docs/sources/configure/_index.md b/docs/sources/configure/_index.md
index d3c5593b4da23..70891a0448419 100644
--- a/docs/sources/configure/_index.md
+++ b/docs/sources/configure/_index.md
@@ -886,6 +886,28 @@ volume_results_cache:
# CLI flag: -frontend.volume-results-cache.compression
[compression: <string> | default = ""]
+# Cache instant metric query results.
+# CLI flag: -querier.cache-instant-metric-results
+[cache_instant_metric_results: <boolean> | default = false]
+
+# If a cache config is not specified and cache_instant_metric_results is true,
+# the config for the results cache is used.
+instant_metric_results_cache:
+ # The cache block configures the cache backend.
+ # The CLI flags prefix for this block configuration is:
+ # frontend.instant-metric-results-cache
+ [cache: <cache_config>]
+
+ # Use compression in cache. The default is an empty value '', which disables
+ # compression. Supported values are: 'snappy' and ''.
+ # CLI flag: -frontend.instant-metric-results-cache.compression
+ [compression: <string> | default = ""]
+
+# Whether to align the splits of an instant metric query with splitByInterval
+# and the query's exec time. Useful when cache_instant_metric_results is enabled.
+# CLI flag: -querier.instant-metric-query-split-align
+[instant_metric_query_split_align: <boolean> | default = false]
+
# Cache series query results.
# CLI flag: -querier.cache-series-results
[cache_series_results: <boolean> | default = false]
@@ -2935,6 +2957,13 @@ The `limits_config` block configures global and per-tenant limits in Loki.
# CLI flag: -experimental.querier.recent-metadata-query-window
[recent_metadata_query_window: <duration> | default = 0s]
+# Split instant metric queries by a time interval and execute in parallel. The
+# value 0 disables splitting instant metric queries by time. This also
+# determines how cache keys are chosen when instant metric query result caching
+# is enabled.
+# CLI flag: -querier.split-instant-metric-queries-by-interval
+[split_instant_metric_queries_by_interval: <duration> | default = 1h]
+
# Interval to use for time-based splitting when a request is within the
# `query_ingesters_within` window; defaults to `split-queries-by-interval` by
# setting to 0.
@@ -4403,6 +4432,7 @@ The cache block configures the cache backend. The supported CLI flags `<prefix>`
- `bloom.metas-cache`
- `frontend`
- `frontend.index-stats-results-cache`
+- `frontend.instant-metric-results-cache`
- `frontend.label-results-cache`
- `frontend.series-results-cache`
- `frontend.volume-results-cache`
diff --git a/pkg/logql/downstream.go b/pkg/logql/downstream.go
index 33d945f11b923..6946c06e54a09 100644
--- a/pkg/logql/downstream.go
+++ b/pkg/logql/downstream.go
@@ -636,6 +636,10 @@ func NewResultStepEvaluator(res logqlmodel.Result, params Params) (StepEvaluator
step = params.Step()
)
+ if res.Data == nil {
+ return nil, fmt.Errorf("data in the passed result is nil (res.Data), cannot be processed by stepevaluator")
+ }
+
switch data := res.Data.(type) {
case promql.Vector:
return NewVectorStepEvaluator(start, data), nil
diff --git a/pkg/logql/metrics.go b/pkg/logql/metrics.go
index 40fbece82d87d..b55e9840a4758 100644
--- a/pkg/logql/metrics.go
+++ b/pkg/logql/metrics.go
@@ -94,7 +94,8 @@ func RecordRangeAndInstantQueryMetrics(
) {
var (
logger = fixLogger(ctx, log)
- rt = string(GetRangeType(p))
+ rangeType = GetRangeType(p)
+ rt = string(rangeType)
latencyType = latencyTypeFast
returnedLines = 0
)
@@ -103,6 +104,12 @@ func RecordRangeAndInstantQueryMetrics(
level.Warn(logger).Log("msg", "error parsing query type", "err", err)
}
+ resultCache := stats.Caches.Result
+
+ if queryType == QueryTypeMetric && rangeType == InstantType {
+ resultCache = stats.Caches.InstantMetricResult
+ }
+
// Tag throughput metric by latency type based on a threshold.
// Latency below the threshold is fast, above is slow.
if stats.Summary.ExecTime > slowQueryThresholdSecond {
@@ -162,10 +169,10 @@ func RecordRangeAndInstantQueryMetrics(
"cache_volume_results_req", stats.Caches.VolumeResult.EntriesRequested,
"cache_volume_results_hit", stats.Caches.VolumeResult.EntriesFound,
"cache_volume_results_download_time", stats.Caches.VolumeResult.CacheDownloadTime(),
- "cache_result_req", stats.Caches.Result.EntriesRequested,
- "cache_result_hit", stats.Caches.Result.EntriesFound,
- "cache_result_download_time", stats.Caches.Result.CacheDownloadTime(),
- "cache_result_query_length_served", stats.Caches.Result.CacheQueryLengthServed(),
+ "cache_result_req", resultCache.EntriesRequested,
+ "cache_result_hit", resultCache.EntriesFound,
+ "cache_result_download_time", resultCache.CacheDownloadTime(),
+ "cache_result_query_length_served", resultCache.CacheQueryLengthServed(),
}...)
logValues = append(logValues, tagsToKeyValues(queryTags)...)
diff --git a/pkg/logql/rangemapper.go b/pkg/logql/rangemapper.go
index 975f63f4c9523..14cf76f1475a5 100644
--- a/pkg/logql/rangemapper.go
+++ b/pkg/logql/rangemapper.go
@@ -57,6 +57,20 @@ type RangeMapper struct {
splitByInterval time.Duration
metrics *MapperMetrics
stats *MapperStats
+
+ splitAlignTs time.Time
+}
+
+// NewRangeMapperWithSplitAlign is similar to `NewRangeMapper` except that it accepts an additional `splitAlign`
+// argument, which is used to align the generated subqueries. See the `rangeSplitAlign` method for more information.
+func NewRangeMapperWithSplitAlign(interval time.Duration, splitAlign time.Time, metrics *MapperMetrics, stats *MapperStats) (RangeMapper, error) {
+ rm, err := NewRangeMapper(interval, metrics, stats)
+ if err != nil {
+ return RangeMapper{}, err
+ }
+ rm.splitAlignTs = splitAlign
+
+ return rm, nil
}
// NewRangeMapper creates a new RangeMapper instance with the given duration as
@@ -327,6 +341,77 @@ func (m RangeMapper) getOriginalOffset(expr syntax.SampleExpr) (offset time.Dura
// rangeInterval should be greater than m.splitByInterval, otherwise the resultant expression
// will have an unnecessary aggregation operation
func (m RangeMapper) mapConcatSampleExpr(expr syntax.SampleExpr, rangeInterval time.Duration, recorder *downstreamRecorder) syntax.SampleExpr {
+ if m.splitAlignTs.IsZero() {
+ return m.rangeSplit(expr, rangeInterval, recorder)
+ }
+ return m.rangeSplitAlign(expr, rangeInterval, recorder)
+}
+
+// rangeSplitAlign splits the given `rangeInterval` into units of `m.splitByInterval`, keeping as many of the
+// resulting subqueries as possible aligned to `m.splitByInterval` boundaries.
+// Consider the following example with a real use case.
+// Instant Query: `sum(rate({foo="bar"}[3h]))`
+// execTs: 12:34:00
+// splitBy: 1h
+// Given the above parameters, the query will be split as follows:
+// 1. sum(rate({foo="bar"}[34m]))
+// 2. sum(rate({foo="bar"}[1h] offset 34m))
+// 3. sum(rate({foo="bar"}[1h] offset 1h34m))
+// 4. sum(rate({foo="bar"}[26m] offset 2h34m))
+func (m RangeMapper) rangeSplitAlign(
+ expr syntax.SampleExpr, rangeInterval time.Duration, recorder *downstreamRecorder,
+) syntax.SampleExpr {
+ if rangeInterval <= m.splitByInterval {
+ return expr
+ }
+
+ originalOffset, err := m.getOriginalOffset(expr)
+ if err != nil {
+ return expr
+ }
+
+ align := m.splitAlignTs.Sub(m.splitAlignTs.Truncate(m.splitByInterval)) // say, 12:34:00 - 12:00:00(truncated) = 34m
+
+ if align == 0 {
+ return m.rangeSplit(expr, rangeInterval, recorder) // Don't have to align
+ }
+
+ var (
+ newRng = align
+
+		// TODO(kavi): If the originalOffset is non-zero, there may be an edge case where the generated subqueries won't be aligned correctly. Handle this edge case in a separate PR.
+ newOffset = originalOffset
+ downstreams *ConcatSampleExpr
+ pendingRangeInterval = rangeInterval
+ splits = 0
+ )
+
+ // first subquery
+ downstreams = appendDownstream(downstreams, expr, newRng, newOffset)
+ splits++
+
+ newOffset += align // e.g: offset 34m
+ pendingRangeInterval -= newRng
+ newRng = m.splitByInterval // [1h]
+
+ // Rest of the subqueries.
+ for pendingRangeInterval > 0 {
+ if pendingRangeInterval < m.splitByInterval {
+ newRng = pendingRangeInterval // last subquery
+ }
+ downstreams = appendDownstream(downstreams, expr, newRng, newOffset)
+ newOffset += m.splitByInterval
+ pendingRangeInterval -= newRng
+ splits++
+ }
+
+ // update stats and metrics
+ m.stats.AddSplitQueries(splits)
+ recorder.Add(splits, MetricsKey)
+
+ return downstreams
+}
+
+func (m RangeMapper) rangeSplit(expr syntax.SampleExpr, rangeInterval time.Duration, recorder *downstreamRecorder) syntax.SampleExpr {
splitCount := int(math.Ceil(float64(rangeInterval) / float64(m.splitByInterval)))
if splitCount <= 1 {
return expr
diff --git a/pkg/logql/rangemapper_test.go b/pkg/logql/rangemapper_test.go
index 562ac0cd168e9..5e95486a8c8e2 100644
--- a/pkg/logql/rangemapper_test.go
+++ b/pkg/logql/rangemapper_test.go
@@ -93,6 +93,84 @@ func Test_SplitRangeInterval(t *testing.T) {
}
}
+func Test_RangeMapperSplitAlign(t *testing.T) {
+ cases := []struct {
+ name string
+ expr string
+ queryTime time.Time
+ splityByInterval time.Duration
+ expected string
+ expectedSplits int
+ }{
+ {
+ name: "query_time_aligned_with_split_by",
+ expr: `bytes_over_time({app="foo"}[3m])`,
+ expected: `sum without() (
+ downstream<bytes_over_time({app="foo"}[1m] offset 2m0s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1m] offset 1m0s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1m]), shard=<nil>>
+ )`,
+ queryTime: time.Unix(60, 0), // 1970 00:01:00
+ splityByInterval: 1 * time.Minute,
+ expectedSplits: 3,
+ },
+ {
+ name: "query_time_aligned_with_split_by_with_original_offset",
+ expr: `bytes_over_time({app="foo"}[3m] offset 20m10s)`, // NOTE: original query has offset, which should be considered in all the splits subquery
+ expected: `sum without() (
+ downstream<bytes_over_time({app="foo"}[1m] offset 22m10s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1m] offset 21m10s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1m] offset 20m10s), shard=<nil>>
+ )`,
+ queryTime: time.Unix(60, 0), // 1970 00:01:00
+ splityByInterval: 1 * time.Minute,
+ expectedSplits: 3,
+ },
+ {
+ name: "query_time_not_aligned_with_split_by",
+ expr: `bytes_over_time({app="foo"}[3h])`,
+ expected: `sum without() (
+ downstream<bytes_over_time({app="foo"}[6m] offset 2h54m0s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1h] offset 1h54m0s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1h] offset 54m0s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[54m]), shard=<nil>>
+ )`,
+ queryTime: time.Date(0, 0, 0, 12, 54, 0, 0, time.UTC), // 1970 12:54:00
+ splityByInterval: 1 * time.Hour,
+ expectedSplits: 4,
+ },
+ {
+ name: "query_time_not_aligned_with_split_by_with_original_offset",
+ expr: `bytes_over_time({app="foo"}[3h] offset 1h2m20s)`, // NOTE: original query has offset, which should be considered in all the splits subquery
+ expected: `sum without() (
+ downstream<bytes_over_time({app="foo"}[6m] offset 3h56m20s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1h] offset 2h56m20s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[1h] offset 1h56m20s), shard=<nil>>
+ ++ downstream<bytes_over_time({app="foo"}[54m] offset 1h2m20s), shard=<nil>>
+ )`,
+ queryTime: time.Date(0, 0, 0, 12, 54, 0, 0, time.UTC), // 1970 12:54:00
+ splityByInterval: 1 * time.Hour,
+ expectedSplits: 4,
+ },
+ }
+
+ for _, tc := range cases {
+ t.Run(tc.name, func(t *testing.T) {
+ mapperStats := NewMapperStats()
+ rvm, err := NewRangeMapperWithSplitAlign(tc.splityByInterval, tc.queryTime, nilShardMetrics, mapperStats)
+ require.NoError(t, err)
+
+ noop, mappedExpr, err := rvm.Parse(syntax.MustParseExpr(tc.expr))
+ require.NoError(t, err)
+
+ require.Equal(t, removeWhiteSpace(tc.expected), removeWhiteSpace(mappedExpr.String()))
+ require.Equal(t, tc.expectedSplits, mapperStats.GetSplitQueries())
+ require.False(t, noop)
+
+ })
+ }
+}
+
func Test_SplitRangeVectorMapping(t *testing.T) {
for _, tc := range []struct {
expr string
@@ -1675,7 +1753,7 @@ func Test_SplitRangeVectorMapping(t *testing.T) {
// Non-splittable vector aggregators - should go deeper in the AST
{
`topk(2, count_over_time({app="foo"}[3m]))`,
- `topk(2,
+ `topk(2,
sum without () (
downstream<count_over_time({app="foo"}[1m] offset 2m0s), shard=<nil>>
++ downstream<count_over_time({app="foo"}[1m] offset 1m0s), shard=<nil>>
@@ -1713,7 +1791,7 @@ func Test_SplitRangeVectorMapping(t *testing.T) {
++ downstream<sum by (baz) (count_over_time({app="foo"} [1m] offset 1m0s)), shard=<nil>>
++ downstream<sum by (baz) (count_over_time({app="foo"} [1m])), shard=<nil>>
)
- ),
+ ),
"x", "$1", "a", "(.*)"
)`,
3,
@@ -1727,7 +1805,7 @@ func Test_SplitRangeVectorMapping(t *testing.T) {
++ downstream<count_over_time({job="api-server",service="a:c"} |= "err" [1m] offset 1m0s), shard=<nil>>
++ downstream<count_over_time({job="api-server",service="a:c"} |= "err" [1m]), shard=<nil>>
)
- / 180),
+ / 180),
"foo", "$1", "service", "(.*):.*"
)`,
3,
diff --git a/pkg/logqlmodel/stats/context.go b/pkg/logqlmodel/stats/context.go
index 4fbddc790b8b2..41a96ca24c75a 100644
--- a/pkg/logqlmodel/stats/context.go
+++ b/pkg/logqlmodel/stats/context.go
@@ -55,17 +55,18 @@ type Context struct {
type CacheType string
const (
- ChunkCache CacheType = "chunk" //nolint:staticcheck
- IndexCache CacheType = "index" //nolint:staticcheck
- ResultCache CacheType = "result" //nolint:staticcheck
- StatsResultCache CacheType = "stats-result" //nolint:staticcheck
- VolumeResultCache CacheType = "volume-result" //nolint:staticcheck
- WriteDedupeCache CacheType = "write-dedupe" //nolint:staticcheck
- SeriesResultCache CacheType = "series-result" //nolint:staticcheck
- LabelResultCache CacheType = "label-result" //nolint:staticcheck
- BloomFilterCache CacheType = "bloom-filter" //nolint:staticcheck
- BloomBlocksCache CacheType = "bloom-blocks" //nolint:staticcheck
- BloomMetasCache CacheType = "bloom-metas" //nolint:staticcheck
+ ChunkCache CacheType = "chunk" //nolint:staticcheck
+ IndexCache CacheType = "index" //nolint:staticcheck
+ ResultCache CacheType = "result" //nolint:staticcheck
+ StatsResultCache CacheType = "stats-result" //nolint:staticcheck
+ VolumeResultCache CacheType = "volume-result" //nolint:staticcheck
+	InstantMetricResultsCache CacheType = "instant-metric-result" //nolint:staticcheck
+ WriteDedupeCache CacheType = "write-dedupe" //nolint:staticcheck
+ SeriesResultCache CacheType = "series-result" //nolint:staticcheck
+ LabelResultCache CacheType = "label-result" //nolint:staticcheck
+ BloomFilterCache CacheType = "bloom-filter" //nolint:staticcheck
+ BloomBlocksCache CacheType = "bloom-blocks" //nolint:staticcheck
+ BloomMetasCache CacheType = "bloom-metas" //nolint:staticcheck
)
// NewContext creates a new statistics context
@@ -98,13 +99,14 @@ func (c *Context) Ingester() Ingester {
// Caches returns the cache statistics accumulated so far.
func (c *Context) Caches() Caches {
return Caches{
- Chunk: c.caches.Chunk,
- Index: c.caches.Index,
- Result: c.caches.Result,
- StatsResult: c.caches.StatsResult,
- VolumeResult: c.caches.VolumeResult,
- SeriesResult: c.caches.SeriesResult,
- LabelResult: c.caches.LabelResult,
+ Chunk: c.caches.Chunk,
+ Index: c.caches.Index,
+ Result: c.caches.Result,
+ StatsResult: c.caches.StatsResult,
+ VolumeResult: c.caches.VolumeResult,
+ SeriesResult: c.caches.SeriesResult,
+ LabelResult: c.caches.LabelResult,
+ InstantMetricResult: c.caches.InstantMetricResult,
}
}
@@ -222,6 +224,7 @@ func (c *Caches) Merge(m Caches) {
c.VolumeResult.Merge(m.VolumeResult)
c.SeriesResult.Merge(m.SeriesResult)
c.LabelResult.Merge(m.LabelResult)
+ c.InstantMetricResult.Merge(m.InstantMetricResult)
}
func (c *Cache) Merge(m Cache) {
@@ -470,6 +473,8 @@ func (c *Context) getCacheStatsByType(t CacheType) *Cache {
stats = &c.caches.SeriesResult
case LabelResultCache:
stats = &c.caches.LabelResult
+ case InstantMetricResultsCache:
+ stats = &c.caches.InstantMetricResult
default:
return nil
}
@@ -571,6 +576,12 @@ func (c Caches) Log(log log.Logger) {
"Cache.Result.EntriesStored", c.Result.EntriesStored,
"Cache.Result.BytesSent", humanize.Bytes(uint64(c.Result.BytesSent)),
"Cache.Result.BytesReceived", humanize.Bytes(uint64(c.Result.BytesReceived)),
- "Cache.Result.DownloadTime", c.Result.CacheDownloadTime(),
+ "Cache.InstantMetricResult.Requests", c.InstantMetricResult.Requests,
+ "Cache.InstantMetricResult.EntriesRequested", c.InstantMetricResult.EntriesRequested,
+ "Cache.InstantMetricResult.EntriesFound", c.InstantMetricResult.EntriesFound,
+ "Cache.InstantMetricResult.EntriesStored", c.InstantMetricResult.EntriesStored,
+ "Cache.InstantMetricResult.BytesSent", humanize.Bytes(uint64(c.InstantMetricResult.BytesSent)),
+ "Cache.InstantMetricResult.BytesReceived", humanize.Bytes(uint64(c.InstantMetricResult.BytesReceived)),
+ "Cache.InstantMetricResult.DownloadTime", c.InstantMetricResult.CacheDownloadTime(),
)
}
diff --git a/pkg/logqlmodel/stats/stats.pb.go b/pkg/logqlmodel/stats/stats.pb.go
index 75be704020c97..65f8f0f642381 100644
--- a/pkg/logqlmodel/stats/stats.pb.go
+++ b/pkg/logqlmodel/stats/stats.pb.go
@@ -95,13 +95,14 @@ func (m *Result) GetCaches() Caches {
}
type Caches struct {
- Chunk Cache `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk"`
- Index Cache `protobuf:"bytes,2,opt,name=index,proto3" json:"index"`
- Result Cache `protobuf:"bytes,3,opt,name=result,proto3" json:"result"`
- StatsResult Cache `protobuf:"bytes,4,opt,name=statsResult,proto3" json:"statsResult"`
- VolumeResult Cache `protobuf:"bytes,5,opt,name=volumeResult,proto3" json:"volumeResult"`
- SeriesResult Cache `protobuf:"bytes,6,opt,name=seriesResult,proto3" json:"seriesResult"`
- LabelResult Cache `protobuf:"bytes,7,opt,name=labelResult,proto3" json:"labelResult"`
+ Chunk Cache `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk"`
+ Index Cache `protobuf:"bytes,2,opt,name=index,proto3" json:"index"`
+ Result Cache `protobuf:"bytes,3,opt,name=result,proto3" json:"result"`
+ StatsResult Cache `protobuf:"bytes,4,opt,name=statsResult,proto3" json:"statsResult"`
+ VolumeResult Cache `protobuf:"bytes,5,opt,name=volumeResult,proto3" json:"volumeResult"`
+ SeriesResult Cache `protobuf:"bytes,6,opt,name=seriesResult,proto3" json:"seriesResult"`
+ LabelResult Cache `protobuf:"bytes,7,opt,name=labelResult,proto3" json:"labelResult"`
+ InstantMetricResult Cache `protobuf:"bytes,8,opt,name=instantMetricResult,proto3" json:"instantMetricResult"`
}
func (m *Caches) Reset() { *m = Caches{} }
@@ -185,6 +186,13 @@ func (m *Caches) GetLabelResult() Cache {
return Cache{}
}
+func (m *Caches) GetInstantMetricResult() Cache {
+ if m != nil {
+ return m.InstantMetricResult
+ }
+ return Cache{}
+}
+
// Summary is the summary of a query statistics.
type Summary struct {
// Total bytes processed per second.
@@ -773,83 +781,85 @@ func init() {
func init() { proto.RegisterFile("pkg/logqlmodel/stats/stats.proto", fileDescriptor_6cdfe5d2aea33ebb) }
var fileDescriptor_6cdfe5d2aea33ebb = []byte{
- // 1215 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x57, 0x4d, 0x6f, 0xe3, 0x54,
- 0x17, 0x8e, 0x27, 0xaf, 0x93, 0xce, 0xed, 0xe7, 0xdc, 0x76, 0xde, 0xc9, 0x80, 0x64, 0x97, 0xc0,
- 0x88, 0x22, 0x50, 0x23, 0x3e, 0x24, 0x04, 0x62, 0x24, 0xe4, 0x0e, 0x95, 0x2a, 0x75, 0x44, 0x39,
- 0x81, 0x0d, 0x3b, 0xc7, 0xbe, 0x4d, 0xa2, 0x3a, 0x76, 0x6a, 0x5f, 0x97, 0xe9, 0x0a, 0x7e, 0x02,
- 0x3f, 0x83, 0x0d, 0x2b, 0x56, 0x48, 0x88, 0x0d, 0x9b, 0x59, 0x76, 0x39, 0x2b, 0x8b, 0xa6, 0x1b,
- 0xe4, 0xd5, 0x48, 0xfc, 0x01, 0x74, 0xcf, 0xbd, 0xf1, 0x57, 0x9c, 0x99, 0x6e, 0xe2, 0x7b, 0x9e,
- 0xf3, 0x3c, 0xe7, 0x7e, 0x9e, 0x73, 0x6f, 0xc8, 0xee, 0xf4, 0x6c, 0xd8, 0xf3, 0x82, 0xe1, 0xb9,
- 0x37, 0x09, 0x5c, 0xe6, 0xf5, 0x22, 0x6e, 0xf3, 0x48, 0xfe, 0xee, 0x4f, 0xc3, 0x80, 0x07, 0x54,
- 0x47, 0xe3, 0x8d, 0x9d, 0x61, 0x30, 0x0c, 0x10, 0xe9, 0x89, 0x96, 0x74, 0x76, 0xff, 0xd5, 0x48,
- 0x0b, 0x58, 0x14, 0x7b, 0x9c, 0x7e, 0x46, 0xda, 0x51, 0x3c, 0x99, 0xd8, 0xe1, 0x65, 0x47, 0xdb,
- 0xd5, 0xf6, 0x56, 0x3f, 0xda, 0xd8, 0x97, 0x61, 0xfa, 0x12, 0xb5, 0x36, 0x9f, 0x27, 0x66, 0x23,
- 0x4d, 0xcc, 0x39, 0x0d, 0xe6, 0x0d, 0x21, 0x3d, 0x8f, 0x59, 0x38, 0x66, 0x61, 0xe7, 0x4e, 0x49,
- 0xfa, 0x8d, 0x44, 0x73, 0xa9, 0xa2, 0xc1, 0xbc, 0x41, 0x1f, 0x93, 0x95, 0xb1, 0x3f, 0x64, 0x11,
- 0x67, 0x61, 0xa7, 0x89, 0xda, 0x4d, 0xa5, 0x3d, 0x52, 0xb0, 0xb5, 0xa5, 0xc4, 0x19, 0x11, 0xb2,
- 0x16, 0xfd, 0x84, 0xb4, 0x1c, 0xdb, 0x19, 0xb1, 0xa8, 0xf3, 0x3f, 0x14, 0xaf, 0x2b, 0xf1, 0x01,
- 0x82, 0xd6, 0xba, 0x92, 0xea, 0x48, 0x02, 0xc5, 0xed, 0xfe, 0xd9, 0x24, 0x2d, 0xc9, 0xa0, 0x1f,
- 0x12, 0xdd, 0x19, 0xc5, 0xfe, 0x99, 0x9a, 0xf3, 0x5a, 0x51, 0x5f, 0x90, 0x0b, 0x0a, 0xc8, 0x8f,
- 0x90, 0x8c, 0x7d, 0x97, 0x3d, 0x53, 0x73, 0x5d, 0x22, 0x41, 0x0a, 0xc8, 0x8f, 0x18, 0x66, 0x88,
- 0xab, 0xac, 0xe6, 0x58, 0xd6, 0x6c, 0x28, 0x8d, 0xe2, 0x80, 0xfa, 0xd2, 0x03, 0xb2, 0x8a, 0x34,
- 0xb9, 0x41, 0x6a, 0x86, 0x65, 0xe9, 0xb6, 0x92, 0x16, 0x89, 0x50, 0x34, 0xe8, 0x21, 0x59, 0xbb,
- 0x08, 0xbc, 0x78, 0xc2, 0x54, 0x14, 0xbd, 0x26, 0xca, 0x8e, 0x8a, 0x52, 0x62, 0x42, 0xc9, 0x12,
- 0x71, 0x22, 0xb1, 0x65, 0xf3, 0xd1, 0xb4, 0x5e, 0x15, 0xa7, 0xc8, 0x84, 0x92, 0x25, 0x26, 0xe5,
- 0xd9, 0x03, 0xe6, 0xa9, 0x30, 0xed, 0x57, 0x4d, 0xaa, 0x40, 0x84, 0xa2, 0xd1, 0xfd, 0xbd, 0x45,
- 0xda, 0xea, 0x58, 0xd2, 0xef, 0xc8, 0x83, 0xc1, 0x25, 0x67, 0xd1, 0x49, 0x18, 0x38, 0x2c, 0x8a,
- 0x98, 0x7b, 0xc2, 0xc2, 0x3e, 0x73, 0x02, 0xdf, 0xc5, 0x3d, 0x6d, 0x5a, 0x6f, 0xa6, 0x89, 0xb9,
- 0x8c, 0x02, 0xcb, 0x1c, 0x22, 0xac, 0x37, 0xf6, 0x6b, 0xc3, 0xde, 0xc9, 0xc3, 0x2e, 0xa1, 0xc0,
- 0x32, 0x07, 0x3d, 0x22, 0xdb, 0x3c, 0xe0, 0xb6, 0x67, 0x95, 0xba, 0xc5, 0x63, 0xd1, 0xb4, 0x1e,
- 0xa4, 0x89, 0x59, 0xe7, 0x86, 0x3a, 0x30, 0x0b, 0x75, 0x5c, 0xea, 0x0a, 0x8f, 0x49, 0x31, 0x54,
- 0xd9, 0x0d, 0x75, 0x20, 0xdd, 0x23, 0x2b, 0xec, 0x19, 0x73, 0xbe, 0x1d, 0x4f, 0x18, 0x1e, 0x10,
- 0xcd, 0x5a, 0x13, 0x09, 0x37, 0xc7, 0x20, 0x6b, 0xd1, 0xf7, 0xc9, 0xdd, 0xf3, 0x98, 0xc5, 0x0c,
- 0xa9, 0x2d, 0xa4, 0xae, 0xa7, 0x89, 0x99, 0x83, 0x90, 0x37, 0xe9, 0x3e, 0x21, 0x51, 0x3c, 0x90,
- 0xa9, 0x1e, 0xe1, 0x56, 0x37, 0xad, 0x8d, 0x34, 0x31, 0x0b, 0x28, 0x14, 0xda, 0xf4, 0x98, 0xec,
- 0xe0, 0xe8, 0xbe, 0xf2, 0xb9, 0x3c, 0x31, 0x3c, 0x0e, 0x7d, 0xe6, 0x76, 0x56, 0x50, 0xd9, 0x49,
- 0x13, 0xb3, 0xd6, 0x0f, 0xb5, 0x28, 0xed, 0x92, 0x56, 0x34, 0xf5, 0xc6, 0x3c, 0xea, 0xdc, 0x45,
- 0x3d, 0x11, 0x29, 0x26, 0x11, 0x50, 0x5f, 0xe4, 0x8c, 0xec, 0xd0, 0x8d, 0x3a, 0xa4, 0xc0, 0x41,
- 0x04, 0xd4, 0x37, 0x1b, 0xd5, 0x49, 0x10, 0xf1, 0xc3, 0xb1, 0xc7, 0x59, 0x88, 0xab, 0xd7, 0x59,
- 0xad, 0x8c, 0xaa, 0xe2, 0x87, 0x5a, 0x94, 0xfe, 0x48, 0x1e, 0x21, 0xde, 0xe7, 0x61, 0xec, 0xf0,
- 0x38, 0x64, 0xee, 0x53, 0xc6, 0x6d, 0xd7, 0xe6, 0x76, 0xe5, 0x48, 0xac, 0x61, 0xf8, 0xf7, 0xd2,
- 0xc4, 0xbc, 0x9d, 0x00, 0x6e, 0x47, 0xeb, 0x7e, 0x41, 0xda, 0xaa, 0x2c, 0x8b, 0x4a, 0x16, 0xf1,
- 0x20, 0x64, 0x95, 0xe2, 0xd7, 0x17, 0x58, 0x5e, 0xc9, 0x90, 0x02, 0xf2, 0xd3, 0xfd, 0xf5, 0x0e,
- 0x59, 0x39, 0xca, 0xab, 0xef, 0x1a, 0xf6, 0x09, 0x4c, 0xe4, 0xad, 0xcc, 0x37, 0xdd, 0xda, 0x12,
- 0x15, 0xa0, 0x88, 0x43, 0xc9, 0xa2, 0x87, 0x84, 0xa2, 0x7d, 0x20, 0xaa, 0x69, 0xf4, 0xd4, 0xe6,
- 0xa8, 0x95, 0x49, 0xf5, 0xff, 0x34, 0x31, 0x6b, 0xbc, 0x50, 0x83, 0x65, 0xbd, 0x5b, 0x68, 0x47,
- 0x2a, 0x87, 0xf2, 0xde, 0x15, 0x0e, 0x25, 0x8b, 0x7e, 0x4e, 0x36, 0xf2, 0x0c, 0xe8, 0x33, 0x9f,
- 0xab, 0x84, 0xa1, 0x69, 0x62, 0x56, 0x3c, 0x50, 0xb1, 0xf3, 0xf5, 0xd2, 0x6f, 0xbd, 0x5e, 0x7f,
- 0x34, 0x89, 0x8e, 0xfe, 0xac, 0x63, 0x39, 0x09, 0x60, 0xa7, 0xaa, 0x3c, 0xe5, 0x1d, 0x67, 0x1e,
- 0xa8, 0xd8, 0xf4, 0x6b, 0x72, 0xbf, 0x80, 0x3c, 0x09, 0x7e, 0xf0, 0xbd, 0xc0, 0x76, 0xb3, 0x55,
- 0x7b, 0x98, 0x26, 0x66, 0x3d, 0x01, 0xea, 0x61, 0xb1, 0x07, 0x4e, 0x09, 0xc3, 0x7c, 0x6e, 0xe6,
- 0x7b, 0xb0, 0xe8, 0x85, 0x1a, 0x8c, 0x3a, 0xe4, 0xa1, 0x48, 0xde, 0x4b, 0x60, 0xa7, 0x2c, 0x64,
- 0xbe, 0xc3, 0xdc, 0xfc, 0xfc, 0x75, 0xd6, 0x77, 0xb5, 0xbd, 0x15, 0xeb, 0x51, 0x9a, 0x98, 0x6f,
- 0x2d, 0x25, 0xcd, 0x0f, 0x29, 0x2c, 0x8f, 0x93, 0xdf, 0xd1, 0x95, 0x1b, 0x50, 0x60, 0x4b, 0xee,
- 0xe8, 0xf9, 0xfc, 0x80, 0x9d, 0x46, 0x87, 0x8c, 0x3b, 0xa3, 0xac, 0xb4, 0x15, 0xe7, 0x57, 0xf2,
- 0x42, 0x0d, 0xd6, 0xfd, 0x4d, 0x27, 0x3a, 0xf6, 0x23, 0xb6, 0x6f, 0xc4, 0x6c, 0x57, 0x76, 0x2a,
- 0x32, 0xaa, 0x78, 0x6e, 0xca, 0x1e, 0xa8, 0xd8, 0x25, 0xad, 0xac, 0x1d, 0x7a, 0x8d, 0x56, 0x56,
- 0x8d, 0x8a, 0x4d, 0x0f, 0xc8, 0x3d, 0x97, 0x39, 0xc1, 0x64, 0x1a, 0x62, 0xfa, 0xca, 0xae, 0x5b,
- 0x28, 0xbf, 0x9f, 0x26, 0xe6, 0xa2, 0x13, 0x16, 0xa1, 0x6a, 0x10, 0x39, 0x86, 0x76, 0x7d, 0x10,
- 0x39, 0x8c, 0x45, 0x88, 0x3e, 0x26, 0x9b, 0xd5, 0x71, 0xc8, 0xc2, 0xbc, 0x9d, 0x26, 0x66, 0xd5,
- 0x05, 0x55, 0x40, 0xc8, 0xf1, 0x2c, 0x3e, 0x89, 0xa7, 0xde, 0xd8, 0xb1, 0x85, 0xfc, 0x6e, 0x2e,
- 0xaf, 0xb8, 0xa0, 0x0a, 0x08, 0xf9, 0xb4, 0x52, 0x80, 0x49, 0x2e, 0xaf, 0xb8, 0xa0, 0x0a, 0xd0,
- 0x29, 0xd9, 0xcd, 0x16, 0x76, 0x49, 0x89, 0x54, 0x05, 0xfd, 0x9d, 0x34, 0x31, 0x5f, 0xcb, 0x85,
- 0xd7, 0x32, 0xe8, 0x25, 0x79, 0xbb, 0xb8, 0x86, 0xcb, 0x3a, 0x95, 0x65, 0xfe, 0xdd, 0x34, 0x31,
- 0x6f, 0x43, 0x87, 0xdb, 0x90, 0xba, 0x7f, 0x35, 0x89, 0x8e, 0x4f, 0x29, 0x51, 0x23, 0x99, 0xbc,
- 0x16, 0x0f, 0x83, 0xd8, 0x2f, 0x55, 0xe8, 0x22, 0x0e, 0x25, 0x8b, 0x7e, 0x49, 0xb6, 0xd8, 0xfc,
- 0x32, 0x3d, 0x8f, 0x45, 0xad, 0x97, 0x95, 0x46, 0xb7, 0x76, 0xd2, 0xc4, 0x5c, 0xf0, 0xc1, 0x02,
- 0x42, 0x3f, 0x25, 0xeb, 0x0a, 0xc3, 0xe2, 0x27, 0x1f, 0x38, 0xba, 0x75, 0x2f, 0x4d, 0xcc, 0xb2,
- 0x03, 0xca, 0xa6, 0x10, 0xe2, 0x8b, 0x0c, 0x98, 0xc3, 0xc6, 0x17, 0xd9, 0x73, 0x06, 0x85, 0x25,
- 0x07, 0x94, 0x4d, 0xf1, 0x30, 0x41, 0x00, 0x4b, 0xba, 0x4c, 0x2f, 0x7c, 0x98, 0x64, 0x20, 0xe4,
- 0x4d, 0xf1, 0xde, 0x09, 0xe5, 0x58, 0x65, 0x2e, 0xe9, 0xf2, 0xbd, 0x33, 0xc7, 0x20, 0x6b, 0x89,
- 0x05, 0x74, 0x8b, 0x25, 0xb2, 0x9d, 0x5f, 0x32, 0x45, 0x1c, 0x4a, 0x96, 0xc8, 0x37, 0x2c, 0x67,
- 0xc7, 0xcc, 0x1f, 0xf2, 0x51, 0x9f, 0x85, 0x17, 0xd9, 0x2b, 0x06, 0xf3, 0x6d, 0xc1, 0x09, 0x8b,
- 0x90, 0x35, 0xb8, 0xba, 0x36, 0x1a, 0x2f, 0xae, 0x8d, 0xc6, 0xcb, 0x6b, 0x43, 0xfb, 0x69, 0x66,
- 0x68, 0xbf, 0xcc, 0x0c, 0xed, 0xf9, 0xcc, 0xd0, 0xae, 0x66, 0x86, 0xf6, 0xf7, 0xcc, 0xd0, 0xfe,
- 0x99, 0x19, 0x8d, 0x97, 0x33, 0x43, 0xfb, 0xf9, 0xc6, 0x68, 0x5c, 0xdd, 0x18, 0x8d, 0x17, 0x37,
- 0x46, 0xe3, 0xfb, 0x0f, 0x86, 0x63, 0x3e, 0x8a, 0x07, 0xfb, 0x4e, 0x30, 0xe9, 0x0d, 0x43, 0xfb,
- 0xd4, 0xf6, 0xed, 0x9e, 0x17, 0x9c, 0x8d, 0x7b, 0x75, 0x7f, 0x14, 0x07, 0x2d, 0xfc, 0x1b, 0xf8,
- 0xf1, 0x7f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa8, 0xe8, 0xef, 0xe7, 0x47, 0x0e, 0x00, 0x00,
+ // 1241 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x57, 0x4b, 0x6f, 0xe3, 0x54,
+ 0x14, 0x8e, 0x27, 0xe3, 0xa4, 0xbd, 0x7d, 0xce, 0x6d, 0x87, 0xc9, 0x30, 0x92, 0x5d, 0x02, 0x23,
+ 0x8a, 0x40, 0x8d, 0x78, 0x48, 0x08, 0xc4, 0x48, 0xc8, 0x1d, 0x2a, 0x55, 0x6a, 0x45, 0x39, 0x81,
+ 0x0d, 0xac, 0x1c, 0xfb, 0x36, 0xb1, 0xea, 0xd8, 0xa9, 0x7d, 0x5d, 0xa6, 0x2b, 0xf8, 0x09, 0xec,
+ 0xf9, 0x03, 0x6c, 0x58, 0xb1, 0x42, 0x62, 0xc7, 0x66, 0x96, 0x5d, 0xce, 0xca, 0xa2, 0xe9, 0x06,
+ 0x79, 0x35, 0x12, 0x7f, 0x00, 0xdd, 0x47, 0x6c, 0x5f, 0xc7, 0x99, 0xe9, 0x26, 0xbe, 0xe7, 0x3b,
+ 0xdf, 0x77, 0xee, 0xc3, 0xe7, 0x1c, 0xdf, 0xa0, 0x9d, 0xc9, 0xd9, 0xb0, 0xe7, 0x87, 0xc3, 0x73,
+ 0x7f, 0x1c, 0xba, 0xc4, 0xef, 0xc5, 0xd4, 0xa6, 0xb1, 0xf8, 0xdd, 0x9b, 0x44, 0x21, 0x0d, 0xb1,
+ 0xce, 0x8d, 0x37, 0xb7, 0x87, 0xe1, 0x30, 0xe4, 0x48, 0x8f, 0x8d, 0x84, 0xb3, 0xfb, 0x9f, 0x86,
+ 0x5a, 0x40, 0xe2, 0xc4, 0xa7, 0xf8, 0x33, 0xd4, 0x8e, 0x93, 0xf1, 0xd8, 0x8e, 0x2e, 0x3b, 0xda,
+ 0x8e, 0xb6, 0xbb, 0xf2, 0xd1, 0xfa, 0x9e, 0x08, 0xd3, 0x17, 0xa8, 0xb5, 0xf1, 0x3c, 0x35, 0x1b,
+ 0x59, 0x6a, 0xce, 0x68, 0x30, 0x1b, 0x30, 0xe9, 0x79, 0x42, 0x22, 0x8f, 0x44, 0x9d, 0x3b, 0x8a,
+ 0xf4, 0x1b, 0x81, 0x16, 0x52, 0x49, 0x83, 0xd9, 0x00, 0x3f, 0x41, 0x4b, 0x5e, 0x30, 0x24, 0x31,
+ 0x25, 0x51, 0xa7, 0xc9, 0xb5, 0x1b, 0x52, 0x7b, 0x28, 0x61, 0x6b, 0x53, 0x8a, 0x73, 0x22, 0xe4,
+ 0x23, 0xfc, 0x09, 0x6a, 0x39, 0xb6, 0x33, 0x22, 0x71, 0xe7, 0x2e, 0x17, 0xaf, 0x49, 0xf1, 0x3e,
+ 0x07, 0xad, 0x35, 0x29, 0xd5, 0x39, 0x09, 0x24, 0xb7, 0xfb, 0xeb, 0x5d, 0xd4, 0x12, 0x0c, 0xfc,
+ 0x21, 0xd2, 0x9d, 0x51, 0x12, 0x9c, 0xc9, 0x3d, 0xaf, 0x96, 0xf5, 0x25, 0x39, 0xa3, 0x80, 0x78,
+ 0x30, 0x89, 0x17, 0xb8, 0xe4, 0x99, 0xdc, 0xeb, 0x02, 0x09, 0xa7, 0x80, 0x78, 0xb0, 0x65, 0x46,
+ 0xfc, 0x94, 0xe5, 0x1e, 0x55, 0xcd, 0xba, 0xd4, 0x48, 0x0e, 0xc8, 0x27, 0xde, 0x47, 0x2b, 0x9c,
+ 0x26, 0x5e, 0x90, 0xdc, 0xa1, 0x2a, 0xdd, 0x92, 0xd2, 0x32, 0x11, 0xca, 0x06, 0x3e, 0x40, 0xab,
+ 0x17, 0xa1, 0x9f, 0x8c, 0x89, 0x8c, 0xa2, 0xd7, 0x44, 0xd9, 0x96, 0x51, 0x14, 0x26, 0x28, 0x16,
+ 0x8b, 0x13, 0xb3, 0x57, 0x36, 0x5b, 0x4d, 0xeb, 0x55, 0x71, 0xca, 0x4c, 0x50, 0x2c, 0xb6, 0x29,
+ 0xdf, 0x1e, 0x10, 0x5f, 0x86, 0x69, 0xbf, 0x6a, 0x53, 0x25, 0x22, 0x94, 0x0d, 0xfc, 0x03, 0xda,
+ 0xf2, 0x82, 0x98, 0xda, 0x01, 0x3d, 0x26, 0x34, 0xf2, 0x1c, 0x19, 0x6c, 0xa9, 0x26, 0xd8, 0x23,
+ 0x19, 0xac, 0x4e, 0x00, 0x75, 0x60, 0xf7, 0xcf, 0x16, 0x6a, 0xcb, 0x9c, 0xc7, 0xdf, 0xa1, 0x07,
+ 0x83, 0x4b, 0x4a, 0xe2, 0x93, 0x28, 0x74, 0x48, 0x1c, 0x13, 0xf7, 0x84, 0x44, 0x7d, 0xe2, 0x84,
+ 0x81, 0xcb, 0x13, 0xa6, 0x69, 0x3d, 0xca, 0x52, 0x73, 0x11, 0x05, 0x16, 0x39, 0x58, 0x58, 0xdf,
+ 0x0b, 0x6a, 0xc3, 0xde, 0x29, 0xc2, 0x2e, 0xa0, 0xc0, 0x22, 0x07, 0x3e, 0x44, 0x5b, 0x34, 0xa4,
+ 0xb6, 0x6f, 0x29, 0xd3, 0xf2, 0x9c, 0x6b, 0x5a, 0x0f, 0xd8, 0x21, 0xd4, 0xb8, 0xa1, 0x0e, 0xcc,
+ 0x43, 0x1d, 0x29, 0x53, 0xf1, 0x1c, 0x2c, 0x87, 0x52, 0xdd, 0x50, 0x07, 0xe2, 0x5d, 0xb4, 0x44,
+ 0x9e, 0x11, 0xe7, 0x5b, 0x6f, 0x4c, 0x78, 0xf6, 0x69, 0xd6, 0x2a, 0xab, 0xe6, 0x19, 0x06, 0xf9,
+ 0x08, 0xbf, 0x8f, 0x96, 0xcf, 0x13, 0x92, 0x10, 0x4e, 0x6d, 0x71, 0xea, 0x5a, 0x96, 0x9a, 0x05,
+ 0x08, 0xc5, 0x10, 0xef, 0x21, 0x14, 0x27, 0x03, 0xd1, 0x47, 0x62, 0x9e, 0x47, 0x4d, 0x6b, 0x3d,
+ 0x4b, 0xcd, 0x12, 0x0a, 0xa5, 0x31, 0x3e, 0x42, 0xdb, 0x7c, 0x75, 0x5f, 0x05, 0x54, 0xa4, 0x23,
+ 0x4d, 0xa2, 0x80, 0xb8, 0x3c, 0x69, 0x9a, 0x56, 0x27, 0x4b, 0xcd, 0x5a, 0x3f, 0xd4, 0xa2, 0xb8,
+ 0x8b, 0x5a, 0xf1, 0xc4, 0xf7, 0x68, 0xdc, 0x59, 0xe6, 0x7a, 0xc4, 0xea, 0x57, 0x20, 0x20, 0x9f,
+ 0x9c, 0x33, 0xb2, 0x23, 0x37, 0xee, 0xa0, 0x12, 0x87, 0x23, 0x20, 0x9f, 0xf9, 0xaa, 0x4e, 0xc2,
+ 0x98, 0x1e, 0x78, 0x3e, 0x25, 0x11, 0x3f, 0xbd, 0xce, 0x4a, 0x65, 0x55, 0x15, 0x3f, 0xd4, 0xa2,
+ 0xf8, 0x27, 0xf4, 0x98, 0xe3, 0x7d, 0x1a, 0x25, 0x0e, 0x4d, 0x22, 0xe2, 0x1e, 0x13, 0x6a, 0xbb,
+ 0x36, 0xb5, 0x2b, 0x29, 0xb1, 0xca, 0xc3, 0xbf, 0x97, 0xa5, 0xe6, 0xed, 0x04, 0x70, 0x3b, 0x5a,
+ 0xf7, 0x0b, 0xd4, 0x96, 0x3d, 0x9f, 0xb5, 0xc9, 0x98, 0x86, 0x11, 0xa9, 0x74, 0xd6, 0x3e, 0xc3,
+ 0x8a, 0x36, 0xc9, 0x29, 0x20, 0x1e, 0xdd, 0xdf, 0xef, 0xa0, 0xa5, 0xc3, 0xa2, 0xb5, 0xaf, 0xf2,
+ 0x39, 0x81, 0xb0, 0x3a, 0x16, 0xf5, 0xa6, 0x5b, 0x9b, 0xac, 0xbd, 0x94, 0x71, 0x50, 0x2c, 0x7c,
+ 0x80, 0x30, 0xb7, 0xf7, 0x59, 0xab, 0x8e, 0x8f, 0x6d, 0xca, 0xb5, 0xa2, 0xa8, 0xde, 0xc8, 0x52,
+ 0xb3, 0xc6, 0x0b, 0x35, 0x58, 0x3e, 0xbb, 0xc5, 0xed, 0x58, 0xd6, 0x50, 0x31, 0xbb, 0xc4, 0x41,
+ 0xb1, 0xf0, 0xe7, 0x68, 0xbd, 0xa8, 0x80, 0x3e, 0x09, 0xa8, 0x2c, 0x18, 0x9c, 0xa5, 0x66, 0xc5,
+ 0x03, 0x15, 0xbb, 0x38, 0x2f, 0xfd, 0xd6, 0xe7, 0xf5, 0x57, 0x13, 0xe9, 0xdc, 0x9f, 0x4f, 0x2c,
+ 0x36, 0x01, 0xe4, 0x54, 0xb6, 0xa7, 0x62, 0xe2, 0xdc, 0x03, 0x15, 0x1b, 0x7f, 0x8d, 0xee, 0x97,
+ 0x90, 0xa7, 0xe1, 0x8f, 0x81, 0x1f, 0xda, 0x6e, 0x7e, 0x6a, 0x0f, 0xb3, 0xd4, 0xac, 0x27, 0x40,
+ 0x3d, 0xcc, 0xde, 0x81, 0xa3, 0x60, 0xbc, 0x9e, 0x9b, 0xc5, 0x3b, 0x98, 0xf7, 0x42, 0x0d, 0x86,
+ 0x1d, 0xf4, 0x90, 0x15, 0xef, 0x25, 0x90, 0x53, 0x12, 0x91, 0xc0, 0x21, 0x6e, 0x91, 0x7f, 0x9d,
+ 0xb5, 0x1d, 0x6d, 0x77, 0xc9, 0x7a, 0x9c, 0xa5, 0xe6, 0x5b, 0x0b, 0x49, 0xb3, 0x24, 0x85, 0xc5,
+ 0x71, 0x8a, 0x0b, 0x40, 0xe5, 0xf3, 0xca, 0xb0, 0x05, 0x17, 0x80, 0xd9, 0xfe, 0x80, 0x9c, 0xc6,
+ 0x07, 0x84, 0x3a, 0xa3, 0xbc, 0xb5, 0x95, 0xf7, 0xa7, 0x78, 0xa1, 0x06, 0xeb, 0xfe, 0xa1, 0x23,
+ 0x9d, 0xcf, 0xc3, 0x5e, 0xdf, 0x88, 0xd8, 0xae, 0x98, 0x94, 0x55, 0x54, 0x39, 0x6f, 0x54, 0x0f,
+ 0x54, 0x6c, 0x45, 0x2b, 0x7a, 0x87, 0x5e, 0xa3, 0x15, 0x5d, 0xa3, 0x62, 0xe3, 0x7d, 0x74, 0xcf,
+ 0x25, 0x4e, 0x38, 0x9e, 0x44, 0xbc, 0x7c, 0xc5, 0xd4, 0x2d, 0x2e, 0xbf, 0x9f, 0xa5, 0xe6, 0xbc,
+ 0x13, 0xe6, 0xa1, 0x6a, 0x10, 0xb1, 0x86, 0x76, 0x7d, 0x10, 0xb1, 0x8c, 0x79, 0x08, 0x3f, 0x41,
+ 0x1b, 0xd5, 0x75, 0x88, 0xc6, 0xbc, 0x95, 0xa5, 0x66, 0xd5, 0x05, 0x55, 0x80, 0xc9, 0x79, 0x2e,
+ 0x3e, 0x4d, 0x26, 0xbe, 0xe7, 0xd8, 0x4c, 0xbe, 0x5c, 0xc8, 0x2b, 0x2e, 0xa8, 0x02, 0x4c, 0x3e,
+ 0xa9, 0x34, 0x60, 0x54, 0xc8, 0x2b, 0x2e, 0xa8, 0x02, 0x78, 0x82, 0x76, 0xf2, 0x83, 0x5d, 0xd0,
+ 0x22, 0x65, 0x43, 0x7f, 0x27, 0x4b, 0xcd, 0xd7, 0x72, 0xe1, 0xb5, 0x0c, 0x7c, 0x89, 0xde, 0x2e,
+ 0x9f, 0xe1, 0xa2, 0x49, 0x45, 0x9b, 0x7f, 0x37, 0x4b, 0xcd, 0xdb, 0xd0, 0xe1, 0x36, 0xa4, 0xee,
+ 0xdf, 0x4d, 0xa4, 0xf3, 0xab, 0x15, 0xeb, 0x91, 0x44, 0x7c, 0x16, 0x0f, 0xc2, 0x24, 0x50, 0x3a,
+ 0x74, 0x19, 0x07, 0xc5, 0xc2, 0x5f, 0xa2, 0x4d, 0x32, 0xfb, 0x98, 0x9e, 0x27, 0xac, 0xd7, 0x8b,
+ 0x4e, 0xa3, 0x5b, 0xdb, 0x59, 0x6a, 0xce, 0xf9, 0x60, 0x0e, 0xc1, 0x9f, 0xa2, 0x35, 0x89, 0xf1,
+ 0xe6, 0x27, 0x2e, 0x38, 0xba, 0x75, 0x2f, 0x4b, 0x4d, 0xd5, 0x01, 0xaa, 0xc9, 0x84, 0xfc, 0x46,
+ 0x06, 0xc4, 0x21, 0xde, 0x45, 0x7e, 0x9d, 0xe1, 0x42, 0xc5, 0x01, 0xaa, 0xc9, 0x2e, 0x26, 0x1c,
+ 0xe0, 0x2d, 0x5d, 0x94, 0x17, 0xbf, 0x98, 0xe4, 0x20, 0x14, 0x43, 0x76, 0xdf, 0x89, 0xc4, 0x5a,
+ 0x45, 0x2d, 0xe9, 0xe2, 0xbe, 0x33, 0xc3, 0x20, 0x1f, 0xb1, 0x03, 0x74, 0xcb, 0x2d, 0xb2, 0x5d,
+ 0x7c, 0x64, 0xca, 0x38, 0x28, 0x16, 0xab, 0x37, 0xde, 0xce, 0x8e, 0x48, 0x30, 0xa4, 0xa3, 0x3e,
+ 0x89, 0x2e, 0xf2, 0x5b, 0x0c, 0xaf, 0xb7, 0x39, 0x27, 0xcc, 0x43, 0xd6, 0xe0, 0xea, 0xda, 0x68,
+ 0xbc, 0xb8, 0x36, 0x1a, 0x2f, 0xaf, 0x0d, 0xed, 0xe7, 0xa9, 0xa1, 0xfd, 0x36, 0x35, 0xb4, 0xe7,
+ 0x53, 0x43, 0xbb, 0x9a, 0x1a, 0xda, 0x3f, 0x53, 0x43, 0xfb, 0x77, 0x6a, 0x34, 0x5e, 0x4e, 0x0d,
+ 0xed, 0x97, 0x1b, 0xa3, 0x71, 0x75, 0x63, 0x34, 0x5e, 0xdc, 0x18, 0x8d, 0xef, 0x3f, 0x18, 0x7a,
+ 0x74, 0x94, 0x0c, 0xf6, 0x9c, 0x70, 0xdc, 0x1b, 0x46, 0xf6, 0xa9, 0x1d, 0xd8, 0x3d, 0x3f, 0x3c,
+ 0xf3, 0x7a, 0x75, 0xff, 0x42, 0x07, 0x2d, 0xfe, 0x1f, 0xf3, 0xe3, 0xff, 0x03, 0x00, 0x00, 0xff,
+ 0xff, 0x38, 0x60, 0xd8, 0x7d, 0xa4, 0x0e, 0x00, 0x00,
}
func (this *Result) Equal(that interface{}) bool {
@@ -925,6 +935,9 @@ func (this *Caches) Equal(that interface{}) bool {
if !this.LabelResult.Equal(&that1.LabelResult) {
return false
}
+ if !this.InstantMetricResult.Equal(&that1.InstantMetricResult) {
+ return false
+ }
return true
}
func (this *Summary) Equal(that interface{}) bool {
@@ -1193,7 +1206,7 @@ func (this *Caches) GoString() string {
if this == nil {
return "nil"
}
- s := make([]string, 0, 11)
+ s := make([]string, 0, 12)
s = append(s, "&stats.Caches{")
s = append(s, "Chunk: "+strings.Replace(this.Chunk.GoString(), `&`, ``, 1)+",\n")
s = append(s, "Index: "+strings.Replace(this.Index.GoString(), `&`, ``, 1)+",\n")
@@ -1202,6 +1215,7 @@ func (this *Caches) GoString() string {
s = append(s, "VolumeResult: "+strings.Replace(this.VolumeResult.GoString(), `&`, ``, 1)+",\n")
s = append(s, "SeriesResult: "+strings.Replace(this.SeriesResult.GoString(), `&`, ``, 1)+",\n")
s = append(s, "LabelResult: "+strings.Replace(this.LabelResult.GoString(), `&`, ``, 1)+",\n")
+ s = append(s, "InstantMetricResult: "+strings.Replace(this.InstantMetricResult.GoString(), `&`, ``, 1)+",\n")
s = append(s, "}")
return strings.Join(s, "")
}
@@ -1391,6 +1405,16 @@ func (m *Caches) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ {
+ size, err := m.InstantMetricResult.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintStats(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x42
{
size, err := m.LabelResult.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
@@ -1877,6 +1901,8 @@ func (m *Caches) Size() (n int) {
n += 1 + l + sovStats(uint64(l))
l = m.LabelResult.Size()
n += 1 + l + sovStats(uint64(l))
+ l = m.InstantMetricResult.Size()
+ n += 1 + l + sovStats(uint64(l))
return n
}
@@ -2085,6 +2111,7 @@ func (this *Caches) String() string {
`VolumeResult:` + strings.Replace(strings.Replace(this.VolumeResult.String(), "Cache", "Cache", 1), `&`, ``, 1) + `,`,
`SeriesResult:` + strings.Replace(strings.Replace(this.SeriesResult.String(), "Cache", "Cache", 1), `&`, ``, 1) + `,`,
`LabelResult:` + strings.Replace(strings.Replace(this.LabelResult.String(), "Cache", "Cache", 1), `&`, ``, 1) + `,`,
+ `InstantMetricResult:` + strings.Replace(strings.Replace(this.InstantMetricResult.String(), "Cache", "Cache", 1), `&`, ``, 1) + `,`,
`}`,
}, "")
return s
@@ -2637,6 +2664,39 @@ func (m *Caches) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InstantMetricResult", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowStats
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthStats
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthStats
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.InstantMetricResult.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipStats(dAtA[iNdEx:])
diff --git a/pkg/logqlmodel/stats/stats.proto b/pkg/logqlmodel/stats/stats.proto
index 8db5b474a7906..d36b8e557d984 100644
--- a/pkg/logqlmodel/stats/stats.proto
+++ b/pkg/logqlmodel/stats/stats.proto
@@ -57,6 +57,10 @@ message Caches {
(gogoproto.nullable) = false,
(gogoproto.jsontag) = "labelResult"
];
+ Cache instantMetricResult = 8 [
+ (gogoproto.nullable) = false,
+ (gogoproto.jsontag) = "instantMetricResult"
+ ];
}
// Summary is the summary of a query statistics.
diff --git a/pkg/loki/config_wrapper.go b/pkg/loki/config_wrapper.go
index 9817c04afdc5e..1914c8ab3edfc 100644
--- a/pkg/loki/config_wrapper.go
+++ b/pkg/loki/config_wrapper.go
@@ -646,6 +646,13 @@ func applyEmbeddedCacheConfig(r *ConfigWrapper) {
r.QueryRange.LabelsCacheConfig.CacheConfig = r.QueryRange.ResultsCacheConfig.CacheConfig
r.QueryRange.LabelsCacheConfig.CacheConfig.Prefix = prefix
}
+
+ instantMetricCacheConfig := r.QueryRange.InstantMetricCacheConfig.CacheConfig
+ if !cache.IsCacheConfigured(instantMetricCacheConfig) {
+ prefix := instantMetricCacheConfig.Prefix
+ r.QueryRange.InstantMetricCacheConfig.CacheConfig = r.QueryRange.ResultsCacheConfig.CacheConfig
+ r.QueryRange.InstantMetricCacheConfig.CacheConfig.Prefix = prefix
+ }
}
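The fallback in `applyEmbeddedCacheConfig` above can be sketched as follows. This is a minimal, self-contained illustration: `cacheConfig` and `applyFallback` are hypothetical stand-ins (the real check, `cache.IsCacheConfigured`, inspects the concrete Redis/Memcached/embedded-cache backends rather than a string field). The point it shows is that an unconfigured cache inherits the shared results-cache settings but keeps its own key prefix, so entries stay namespaced per cache type.

```go
package main

import "fmt"

// cacheConfig is a hypothetical stand-in for Loki's cache.Config.
type cacheConfig struct {
	Backend string
	Prefix  string
}

// applyFallback mirrors the rule above: a cache with no backend of its own
// inherits the shared config but retains its own prefix.
func applyFallback(specific, shared cacheConfig) cacheConfig {
	if specific.Backend != "" {
		return specific // already configured directly; leave untouched
	}
	prefix := specific.Prefix
	specific = shared
	specific.Prefix = prefix
	return specific
}

func main() {
	got := applyFallback(
		cacheConfig{Prefix: "frontend.instant-metric-results-cache."},
		cacheConfig{Backend: "memcached", Prefix: "frontend."},
	)
	fmt.Println(got.Backend, got.Prefix)
}
```

Running this prints the inherited backend together with the cache's own prefix, matching the expectations asserted in `config_wrapper_test.go` above.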
func applyIngesterFinalSleep(cfg *ConfigWrapper) {
diff --git a/pkg/loki/config_wrapper_test.go b/pkg/loki/config_wrapper_test.go
index 866079b71f60f..3b1237dad4d1d 100644
--- a/pkg/loki/config_wrapper_test.go
+++ b/pkg/loki/config_wrapper_test.go
@@ -1055,6 +1055,49 @@ query_range:
})
})
+ t.Run("for the instant-metric results cache config", func(t *testing.T) {
+ t.Run("no embedded cache enabled by default if Redis is set", func(t *testing.T) {
+ configFileString := `---
+query_range:
+ instant_metric_results_cache:
+ cache:
+ redis:
+ endpoint: endpoint.redis.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "endpoint.redis.org", config.QueryRange.InstantMetricCacheConfig.CacheConfig.Redis.Endpoint)
+ assert.EqualValues(t, "frontend.instant-metric-results-cache.", config.QueryRange.InstantMetricCacheConfig.CacheConfig.Prefix)
+ assert.False(t, config.QueryRange.InstantMetricCacheConfig.CacheConfig.EmbeddedCache.Enabled)
+ })
+
+ t.Run("no embedded cache enabled by default if Memcache is set", func(t *testing.T) {
+ configFileString := `---
+query_range:
+ instant_metric_results_cache:
+ cache:
+ memcached_client:
+ host: memcached.host.org`
+
+ config, _, _ := configWrapperFromYAML(t, configFileString, nil)
+ assert.EqualValues(t, "memcached.host.org", config.QueryRange.InstantMetricCacheConfig.CacheConfig.MemcacheClient.Host)
+ assert.EqualValues(t, "frontend.instant-metric-results-cache.", config.QueryRange.InstantMetricCacheConfig.CacheConfig.Prefix)
+ assert.False(t, config.QueryRange.InstantMetricCacheConfig.CacheConfig.EmbeddedCache.Enabled)
+ })
+
+ t.Run("embedded cache is enabled by default if no other cache is set", func(t *testing.T) {
+ config, _, _ := configWrapperFromYAML(t, minimalConfig, nil)
+ assert.True(t, config.QueryRange.InstantMetricCacheConfig.CacheConfig.EmbeddedCache.Enabled)
+ assert.EqualValues(t, "frontend.instant-metric-results-cache.", config.QueryRange.InstantMetricCacheConfig.CacheConfig.Prefix)
+ })
+
+ t.Run("gets results cache config if not configured directly", func(t *testing.T) {
+ config, _, _ := configWrapperFromYAML(t, defaultResulsCacheString, nil)
+ assert.EqualValues(t, "memcached.host.org", config.QueryRange.InstantMetricCacheConfig.CacheConfig.MemcacheClient.Host)
+ assert.EqualValues(t, "frontend.instant-metric-results-cache.", config.QueryRange.InstantMetricCacheConfig.CacheConfig.Prefix)
+ assert.False(t, config.QueryRange.InstantMetricCacheConfig.CacheConfig.EmbeddedCache.Enabled)
+ })
+ })
+
t.Run("for the labels results cache config", func(t *testing.T) {
t.Run("no embedded cache enabled by default if Redis is set", func(t *testing.T) {
configFileString := `---
diff --git a/pkg/querier/queryrange/codec_test.go b/pkg/querier/queryrange/codec_test.go
index 976665df95b99..52e3cc8551b7f 100644
--- a/pkg/querier/queryrange/codec_test.go
+++ b/pkg/querier/queryrange/codec_test.go
@@ -427,10 +427,12 @@ func Test_codec_DecodeResponse(t *testing.T) {
func Test_codec_DecodeProtobufResponseParity(t *testing.T) {
// test fixtures from pkg/util/marshal_test
var queryTests = []struct {
+ name string
actual parser.Value
expected string
}{
{
+ "basic",
logqlmodel.Streams{
logproto.Stream{
Entries: []logproto.Entry{
@@ -462,6 +464,7 @@ func Test_codec_DecodeProtobufResponseParity(t *testing.T) {
},
// vector test
{
+ "vector",
promql.Vector{
{
T: 1568404331324,
@@ -524,6 +527,7 @@ func Test_codec_DecodeProtobufResponseParity(t *testing.T) {
},
// matrix test
{
+ "matrix",
promql.Matrix{
{
Floats: []promql.FPoint{
@@ -607,50 +611,53 @@ func Test_codec_DecodeProtobufResponseParity(t *testing.T) {
}
codec := RequestProtobufCodec{}
for i, queryTest := range queryTests {
- params := url.Values{
- "query": []string{`{app="foo"}`},
- }
- u := &url.URL{
- Path: "/loki/api/v1/query_range",
- RawQuery: params.Encode(),
- }
- httpReq := &http.Request{
- Method: "GET",
- RequestURI: u.String(),
- URL: u,
- }
- req, err := codec.DecodeRequest(context.TODO(), httpReq, nil)
- require.NoError(t, err)
+ i := i
+ t.Run(queryTest.name, func(t *testing.T) {
+ params := url.Values{
+ "query": []string{`{app="foo"}`},
+ }
+ u := &url.URL{
+ Path: "/loki/api/v1/query_range",
+ RawQuery: params.Encode(),
+ }
+ httpReq := &http.Request{
+ Method: "GET",
+ RequestURI: u.String(),
+ URL: u,
+ }
+ req, err := codec.DecodeRequest(context.TODO(), httpReq, nil)
+ require.NoError(t, err)
- // parser.Value -> queryrange.QueryResponse
- var b bytes.Buffer
- result := logqlmodel.Result{
- Data: queryTest.actual,
- Statistics: statsResult,
- }
- err = WriteQueryResponseProtobuf(&logql.LiteralParams{}, result, &b)
- require.NoError(t, err)
+ // parser.Value -> queryrange.QueryResponse
+ var b bytes.Buffer
+ result := logqlmodel.Result{
+ Data: queryTest.actual,
+ Statistics: statsResult,
+ }
+ err = WriteQueryResponseProtobuf(&logql.LiteralParams{}, result, &b)
+ require.NoError(t, err)
- // queryrange.QueryResponse -> queryrangebase.Response
- querierResp := &http.Response{
- StatusCode: 200,
- Body: io.NopCloser(&b),
- Header: http.Header{
- "Content-Type": []string{ProtobufType},
- },
- }
- resp, err := codec.DecodeResponse(context.TODO(), querierResp, req)
- require.NoError(t, err)
+ // queryrange.QueryResponse -> queryrangebase.Response
+ querierResp := &http.Response{
+ StatusCode: 200,
+ Body: io.NopCloser(&b),
+ Header: http.Header{
+ "Content-Type": []string{ProtobufType},
+ },
+ }
+ resp, err := codec.DecodeResponse(context.TODO(), querierResp, req)
+ require.NoError(t, err)
- // queryrange.Response -> JSON
- ctx := user.InjectOrgID(context.Background(), "1")
- httpResp, err := codec.EncodeResponse(ctx, httpReq, resp)
- require.NoError(t, err)
+ // queryrange.Response -> JSON
+ ctx := user.InjectOrgID(context.Background(), "1")
+ httpResp, err := codec.EncodeResponse(ctx, httpReq, resp)
+ require.NoError(t, err)
- body, _ := io.ReadAll(httpResp.Body)
- require.JSONEqf(t, queryTest.expected, string(body), "Protobuf Decode Query Test %d failed", i)
+ body, err := io.ReadAll(httpResp.Body)
+ require.NoError(t, err)
+ require.JSONEqf(t, queryTest.expected, string(body), "Protobuf Decode Query Test %d failed", i)
+ })
}
-
}
func Test_codec_EncodeRequest(t *testing.T) {
@@ -1645,6 +1652,16 @@ var (
"downloadTime": 0,
"queryLengthServed": 0
},
+ "instantMetricResult": {
+ "entriesFound": 0,
+ "entriesRequested": 0,
+ "entriesStored": 0,
+ "bytesReceived": 0,
+ "bytesSent": 0,
+ "requests": 0,
+ "downloadTime": 0,
+ "queryLengthServed": 0
+ },
"result": {
"entriesFound": 0,
"entriesRequested": 0,
@@ -2027,13 +2044,14 @@ var (
},
Caches: stats.Caches{
- Chunk: stats.Cache{},
- Index: stats.Cache{},
- StatsResult: stats.Cache{},
- VolumeResult: stats.Cache{},
- SeriesResult: stats.Cache{},
- LabelResult: stats.Cache{},
- Result: stats.Cache{},
+ Chunk: stats.Cache{},
+ Index: stats.Cache{},
+ StatsResult: stats.Cache{},
+ VolumeResult: stats.Cache{},
+ SeriesResult: stats.Cache{},
+ LabelResult: stats.Cache{},
+ Result: stats.Cache{},
+ InstantMetricResult: stats.Cache{},
},
}
)
diff --git a/pkg/querier/queryrange/downstreamer.go b/pkg/querier/queryrange/downstreamer.go
index 31f8997ed767e..4db8034291f64 100644
--- a/pkg/querier/queryrange/downstreamer.go
+++ b/pkg/querier/queryrange/downstreamer.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"reflect"
+ "time"
"github.com/go-kit/log/level"
"github.com/grafana/dskit/concurrency"
@@ -14,6 +15,7 @@ import (
"github.com/prometheus/prometheus/promql/parser"
"github.com/grafana/loki/pkg/logql"
+ "github.com/grafana/loki/pkg/logql/syntax"
"github.com/grafana/loki/pkg/logqlmodel"
"github.com/grafana/loki/pkg/querier/plan"
"github.com/grafana/loki/pkg/querier/queryrange/queryrangebase"
@@ -27,6 +29,8 @@ const (
type DownstreamHandler struct {
limits Limits
next queryrangebase.Handler
+
+ splitAlign bool
}
func ParamsToLokiRequest(params logql.Params) queryrangebase.Request {
@@ -86,6 +90,7 @@ func (h DownstreamHandler) Downstreamer(ctx context.Context) logql.Downstreamer
parallelism: p,
locks: locks,
handler: h.next,
+ splitAlign: h.splitAlign,
}
}
@@ -94,16 +99,50 @@ type instance struct {
parallelism int
locks chan struct{}
handler queryrangebase.Handler
+
+ splitAlign bool
+}
+
+// withoutOffset returns the given query string with offsets removed and the start/end timestamps adjusted accordingly. If no offset is present in the original query, it is returned as is.
+func withoutOffset(query logql.DownstreamQuery) (string, time.Time, time.Time) {
+ expr := query.Params.GetExpression()
+
+ var (
+ newStart = query.Params.Start()
+ newEnd = query.Params.End()
+ )
+ expr.Walk(func(e syntax.Expr) {
+ switch rng := e.(type) {
+ case *syntax.RangeAggregationExpr:
+ off := rng.Left.Offset
+
+ if off != 0 {
+ rng.Left.Offset = 0 // remove offset
+
+ // adjust start and end time
+ newEnd = newEnd.Add(-off)
+ newStart = newStart.Add(-off)
+
+ }
+ }
+ })
+ return expr.String(), newStart, newEnd
}
func (in instance) Downstream(ctx context.Context, queries []logql.DownstreamQuery, acc logql.Accumulator) ([]logqlmodel.Result, error) {
return in.For(ctx, queries, acc, func(qry logql.DownstreamQuery) (logqlmodel.Result, error) {
- req := ParamsToLokiRequest(qry.Params).WithQuery(qry.Params.GetExpression().String())
+ var req queryrangebase.Request
+ if in.splitAlign {
+ qs, newStart, newEnd := withoutOffset(qry)
+ req = ParamsToLokiRequest(qry.Params).WithQuery(qs).WithStartEnd(newStart, newEnd)
+ } else {
+ req = ParamsToLokiRequest(qry.Params).WithQuery(qry.Params.GetExpression().String())
+ }
sp, ctx := opentracing.StartSpanFromContext(ctx, "DownstreamHandler.instance")
defer sp.Finish()
logger := spanlogger.FromContext(ctx)
defer logger.Finish()
- level.Debug(logger).Log("shards", fmt.Sprintf("%+v", qry.Params.Shards()), "query", req.GetQuery(), "step", req.GetStep(), "handler", reflect.TypeOf(in.handler))
+ level.Debug(logger).Log("shards", fmt.Sprintf("%+v", qry.Params.Shards()), "query", req.GetQuery(), "step", req.GetStep(), "handler", reflect.TypeOf(in.handler), "engine", "downstream")
res, err := in.handler.Do(ctx, req)
if err != nil {
diff --git a/pkg/querier/queryrange/downstreamer_test.go b/pkg/querier/queryrange/downstreamer_test.go
index a23f2a381b007..cadfceeee20e3 100644
--- a/pkg/querier/queryrange/downstreamer_test.go
+++ b/pkg/querier/queryrange/downstreamer_test.go
@@ -3,6 +3,7 @@ package queryrange
import (
"context"
"errors"
+ "fmt"
"strconv"
"strings"
"sync"
@@ -12,6 +13,7 @@ import (
"github.com/grafana/dskit/user"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/promql"
+ "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/atomic"
@@ -325,71 +327,142 @@ func TestInstanceFor(t *testing.T) {
}
func TestInstanceDownstream(t *testing.T) {
- params, err := logql.NewLiteralParams(
- `{foo="bar"}`,
- time.Now(),
- time.Now(),
- 0,
- 0,
- logproto.BACKWARD,
- 1000,
- nil,
- )
- require.NoError(t, err)
- expr, err := syntax.ParseExpr(`{foo="bar"}`)
- require.NoError(t, err)
-
- expectedResp := func() *LokiResponse {
- return &LokiResponse{
- Data: LokiData{
- Result: []logproto.Stream{{
- Labels: `{foo="bar"}`,
- Entries: []logproto.Entry{
- {Timestamp: time.Unix(0, 0), Line: "foo"},
- },
- }},
+ t.Run("Downstream simple query", func(t *testing.T) {
+ ts := time.Unix(1, 0)
+
+ params, err := logql.NewLiteralParams(
+ `{foo="bar"}`,
+ ts,
+ ts,
+ 0,
+ 0,
+ logproto.BACKWARD,
+ 1000,
+ nil,
+ )
+ require.NoError(t, err)
+ expr, err := syntax.ParseExpr(`{foo="bar"}`)
+ require.NoError(t, err)
+
+ expectedResp := func() *LokiResponse {
+ return &LokiResponse{
+ Data: LokiData{
+ Result: []logproto.Stream{{
+ Labels: `{foo="bar"}`,
+ Entries: []logproto.Entry{
+ {Timestamp: time.Unix(0, 0), Line: "foo"},
+ },
+ }},
+ },
+ Statistics: stats.Result{
+ Summary: stats.Summary{QueueTime: 1, ExecTime: 2},
+ },
+ }
+ }
+
+ queries := []logql.DownstreamQuery{
+ {
+ Params: logql.ParamsWithShardsOverride{
+ Params: logql.ParamsWithExpressionOverride{Params: params, ExpressionOverride: expr},
+ ShardsOverride: logql.Shards{{Shard: 0, Of: 2}}.Encode(),
+ },
},
- Statistics: stats.Result{
- Summary: stats.Summary{QueueTime: 1, ExecTime: 2},
+ }
+
+ var got queryrangebase.Request
+ var want queryrangebase.Request
+ handler := queryrangebase.HandlerFunc(
+ func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ // for some reason these seemingly can't be checked in their own goroutines,
+ // so we assign them to scoped variables for later comparison.
+ got = req
+ want = ParamsToLokiRequest(queries[0].Params).WithQuery(expr.String())
+
+ return expectedResp(), nil
},
+ )
+
+ expected, err := ResponseToResult(expectedResp())
+ require.Nil(t, err)
+
+ results, err := DownstreamHandler{
+ limits: fakeLimits{},
+ next: handler,
+ }.Downstreamer(context.Background()).Downstream(context.Background(), queries, logql.NewBufferedAccumulator(len(queries)))
+
+ fmt.Println("want", want.GetEnd(), want.GetStart(), "got", got.GetEnd(), got.GetStart())
+ require.Equal(t, want, got)
+ require.Nil(t, err)
+ require.Equal(t, 1, len(results))
+ require.Equal(t, expected.Data, results[0].Data)
+ })
+
+ t.Run("Downstream with offset removed", func(t *testing.T) {
+ ts := time.Unix(1, 0)
+
+ params, err := logql.NewLiteralParams(
+ `sum(rate({foo="bar"}[2h] offset 1h))`,
+ ts,
+ ts,
+ 0,
+ 0,
+ logproto.BACKWARD,
+ 1000,
+ nil,
+ )
+ require.NoError(t, err)
+
+ expectedResp := func() *LokiResponse {
+ return &LokiResponse{
+ Data: LokiData{
+ Result: []logproto.Stream{{
+ Labels: `{foo="bar"}`,
+ Entries: []logproto.Entry{
+ {Timestamp: time.Unix(0, 0), Line: "foo"},
+ },
+ }},
+ },
+ Statistics: stats.Result{
+ Summary: stats.Summary{QueueTime: 1, ExecTime: 2},
+ },
+ }
}
- }
- queries := []logql.DownstreamQuery{
- {
- Params: logql.ParamsWithShardsOverride{
- Params: logql.ParamsWithExpressionOverride{Params: params, ExpressionOverride: expr},
- ShardsOverride: logql.Shards{{Shard: 0, Of: 2}}.Encode(),
+ queries := []logql.DownstreamQuery{
+ {
+ Params: params,
},
- },
- }
+ }
- var got queryrangebase.Request
- var want queryrangebase.Request
- handler := queryrangebase.HandlerFunc(
- func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
- // for some reason these seemingly can't be checked in their own goroutines,
- // so we assign them to scoped variables for later comparison.
- got = req
- want = ParamsToLokiRequest(queries[0].Params).WithQuery(expr.String())
+ var got queryrangebase.Request
+ var want queryrangebase.Request
+ handler := queryrangebase.HandlerFunc(
+ func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ // for some reason these seemingly can't be checked in their own goroutines,
+ // so we assign them to scoped variables for later comparison.
+ got = req
+ want = ParamsToLokiRequest(params).WithQuery(`sum(rate({foo="bar"}[2h]))`).WithStartEnd(ts.Add(-1*time.Hour), ts.Add(-1*time.Hour)) // without offset and start, end adjusted for instant query
- return expectedResp(), nil
- },
- )
+ return expectedResp(), nil
+ },
+ )
- expected, err := ResponseToResult(expectedResp())
- require.Nil(t, err)
+ expected, err := ResponseToResult(expectedResp())
+ require.NoError(t, err)
- results, err := DownstreamHandler{
- limits: fakeLimits{},
- next: handler,
- }.Downstreamer(context.Background()).Downstream(context.Background(), queries, logql.NewBufferedAccumulator(len(queries)))
+ results, err := DownstreamHandler{
+ limits: fakeLimits{},
+ next: handler,
+ splitAlign: true,
+ }.Downstreamer(context.Background()).Downstream(context.Background(), queries, logql.NewBufferedAccumulator(len(queries)))
- require.Equal(t, want, got)
+ assert.Equal(t, want, got)
- require.Nil(t, err)
- require.Equal(t, 1, len(results))
- require.Equal(t, expected.Data, results[0].Data)
+ require.Nil(t, err)
+ require.Equal(t, 1, len(results))
+ require.Equal(t, expected.Data, results[0].Data)
+
+ })
}
func TestCancelWhileWaitingResponse(t *testing.T) {
diff --git a/pkg/querier/queryrange/instant_metric_cache.go b/pkg/querier/queryrange/instant_metric_cache.go
new file mode 100644
index 0000000000000..ef1083e6cd229
--- /dev/null
+++ b/pkg/querier/queryrange/instant_metric_cache.go
@@ -0,0 +1,85 @@
+package queryrange
+
+import (
+ "context"
+ "flag"
+ "fmt"
+ "time"
+
+ "github.com/go-kit/log"
+
+ "github.com/grafana/loki/pkg/querier/queryrange/queryrangebase"
+ "github.com/grafana/loki/pkg/storage/chunk/cache"
+ "github.com/grafana/loki/pkg/storage/chunk/cache/resultscache"
+)
+
+type InstantMetricSplitter struct {
+ Limits
+ transformer UserIDTransformer
+}
+
+// GenerateCacheKey generates a cache key based on the userID, Request and interval.
+func (i InstantMetricSplitter) GenerateCacheKey(ctx context.Context, userID string, r resultscache.Request) string {
+ split := i.InstantMetricQuerySplitDuration(userID)
+
+ var currentInterval int64
+ if denominator := int64(split / time.Millisecond); denominator > 0 {
+ currentInterval = r.GetStart().UnixMilli() / denominator
+ }
+
+ if i.transformer != nil {
+ userID = i.transformer(ctx, userID)
+ }
+
+ // include both the currentInterval and the split duration in key to ensure
+ // a cache key can't be reused when an interval changes
+ return fmt.Sprintf("instant-metric:%s:%s:%d:%d", userID, r.GetQuery(), currentInterval, split)
+}
+
+type InstantMetricCacheConfig struct {
+ queryrangebase.ResultsCacheConfig `yaml:",inline"`
+}
+
+// RegisterFlags registers flags.
+func (cfg *InstantMetricCacheConfig) RegisterFlags(f *flag.FlagSet) {
+ cfg.RegisterFlagsWithPrefix(f, "frontend.instant-metric-results-cache.")
+}
+
+func (cfg *InstantMetricCacheConfig) Validate() error {
+ return cfg.ResultsCacheConfig.Validate()
+}
+
+type instantMetricExtractor struct{}
+
+func NewInstantMetricCacheMiddleware(
+ log log.Logger,
+ limits Limits,
+ merger queryrangebase.Merger,
+ c cache.Cache,
+ cacheGenNumberLoader queryrangebase.CacheGenNumberLoader,
+ shouldCache queryrangebase.ShouldCacheFn,
+ parallelismForReq queryrangebase.ParallelismForReqFn,
+ retentionEnabled bool,
+ transformer UserIDTransformer,
+ metrics *queryrangebase.ResultsCacheMetrics,
+) (queryrangebase.Middleware, error) {
+ return queryrangebase.NewResultsCacheMiddleware(
+ log,
+ c,
+ InstantMetricSplitter{limits, transformer},
+ limits,
+ merger,
+ PrometheusExtractor{},
+ cacheGenNumberLoader,
+ func(ctx context.Context, r queryrangebase.Request) bool {
+ if shouldCache != nil && !shouldCache(ctx, r) {
+ return false
+ }
+ return true
+ },
+ parallelismForReq,
+ retentionEnabled,
+ false,
+ metrics,
+ )
+}
diff --git a/pkg/querier/queryrange/limits.go b/pkg/querier/queryrange/limits.go
index 2d14531909695..ab7818460738f 100644
--- a/pkg/querier/queryrange/limits.go
+++ b/pkg/querier/queryrange/limits.go
@@ -68,6 +68,15 @@ func (l limits) QuerySplitDuration(user string) time.Duration {
return *l.splitDuration
}
+func (l limits) InstantMetricQuerySplitDuration(user string) time.Duration {
+ // NOTE: It returns `splitDuration` for both instant and range queries.
+	// No need to have separate limits for now.
+ if l.splitDuration == nil {
+ return l.Limits.QuerySplitDuration(user)
+ }
+ return *l.splitDuration
+}
+
func (l limits) TSDBMaxQueryParallelism(ctx context.Context, user string) int {
if l.maxQueryParallelism == nil {
return l.Limits.TSDBMaxQueryParallelism(ctx, user)
diff --git a/pkg/querier/queryrange/limits/definitions.go b/pkg/querier/queryrange/limits/definitions.go
index 3e78b34420760..9e1232b750797 100644
--- a/pkg/querier/queryrange/limits/definitions.go
+++ b/pkg/querier/queryrange/limits/definitions.go
@@ -14,6 +14,7 @@ type Limits interface {
queryrangebase.Limits
logql.Limits
QuerySplitDuration(string) time.Duration
+ InstantMetricQuerySplitDuration(string) time.Duration
MetadataQuerySplitDuration(string) time.Duration
RecentMetadataQuerySplitDuration(string) time.Duration
RecentMetadataQueryWindow(string) time.Duration
diff --git a/pkg/querier/queryrange/prometheus_test.go b/pkg/querier/queryrange/prometheus_test.go
index a8e09b378bb2c..4ec798b534a73 100644
--- a/pkg/querier/queryrange/prometheus_test.go
+++ b/pkg/querier/queryrange/prometheus_test.go
@@ -118,6 +118,16 @@ var emptyStats = `"stats": {
"downloadTime": 0,
"queryLengthServed": 0
},
+ "instantMetricResult": {
+ "entriesFound": 0,
+ "entriesRequested": 0,
+ "entriesStored": 0,
+ "bytesReceived": 0,
+ "bytesSent": 0,
+ "requests": 0,
+ "downloadTime": 0,
+ "queryLengthServed": 0
+ },
"result": {
"entriesFound": 0,
"entriesRequested": 0,
diff --git a/pkg/querier/queryrange/roundtrip.go b/pkg/querier/queryrange/roundtrip.go
index 10246f4d8277e..5532eab989c1e 100644
--- a/pkg/querier/queryrange/roundtrip.go
+++ b/pkg/querier/queryrange/roundtrip.go
@@ -44,16 +44,19 @@ const (
// Config is the configuration for the queryrange tripperware
type Config struct {
- base.Config `yaml:",inline"`
- Transformer UserIDTransformer `yaml:"-"`
- CacheIndexStatsResults bool `yaml:"cache_index_stats_results"`
- StatsCacheConfig IndexStatsCacheConfig `yaml:"index_stats_results_cache" doc:"description=If a cache config is not specified and cache_index_stats_results is true, the config for the results cache is used."`
- CacheVolumeResults bool `yaml:"cache_volume_results"`
- VolumeCacheConfig VolumeCacheConfig `yaml:"volume_results_cache" doc:"description=If a cache config is not specified and cache_volume_results is true, the config for the results cache is used."`
- CacheSeriesResults bool `yaml:"cache_series_results"`
- SeriesCacheConfig SeriesCacheConfig `yaml:"series_results_cache" doc:"description=If series_results_cache is not configured and cache_series_results is true, the config for the results cache is used."`
- CacheLabelResults bool `yaml:"cache_label_results"`
- LabelsCacheConfig LabelsCacheConfig `yaml:"label_results_cache" doc:"description=If label_results_cache is not configured and cache_label_results is true, the config for the results cache is used."`
+ base.Config `yaml:",inline"`
+ Transformer UserIDTransformer `yaml:"-"`
+ CacheIndexStatsResults bool `yaml:"cache_index_stats_results"`
+ StatsCacheConfig IndexStatsCacheConfig `yaml:"index_stats_results_cache" doc:"description=If a cache config is not specified and cache_index_stats_results is true, the config for the results cache is used."`
+ CacheVolumeResults bool `yaml:"cache_volume_results"`
+ VolumeCacheConfig VolumeCacheConfig `yaml:"volume_results_cache" doc:"description=If a cache config is not specified and cache_volume_results is true, the config for the results cache is used."`
+ CacheInstantMetricResults bool `yaml:"cache_instant_metric_results"`
+ InstantMetricCacheConfig InstantMetricCacheConfig `yaml:"instant_metric_results_cache" doc:"description=If a cache config is not specified and cache_instant_metric_results is true, the config for the results cache is used."`
+	InstantMetricQuerySplitAlign bool `yaml:"instant_metric_query_split_align" doc:"description=Whether to align the splits of instant metric queries with splitByInterval and the query's exec time. Useful when instant_metric_cache is enabled."`
+ CacheSeriesResults bool `yaml:"cache_series_results"`
+ SeriesCacheConfig SeriesCacheConfig `yaml:"series_results_cache" doc:"description=If series_results_cache is not configured and cache_series_results is true, the config for the results cache is used."`
+ CacheLabelResults bool `yaml:"cache_label_results"`
+ LabelsCacheConfig LabelsCacheConfig `yaml:"label_results_cache" doc:"description=If label_results_cache is not configured and cache_label_results is true, the config for the results cache is used."`
}
// RegisterFlags adds the flags required to configure this flag set.
@@ -63,6 +66,9 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
cfg.StatsCacheConfig.RegisterFlags(f)
f.BoolVar(&cfg.CacheVolumeResults, "querier.cache-volume-results", false, "Cache volume query results.")
cfg.VolumeCacheConfig.RegisterFlags(f)
+ f.BoolVar(&cfg.CacheInstantMetricResults, "querier.cache-instant-metric-results", false, "Cache instant metric query results.")
+ cfg.InstantMetricCacheConfig.RegisterFlags(f)
+	f.BoolVar(&cfg.InstantMetricQuerySplitAlign, "querier.instant-metric-query-split-align", false, "Align the instant metric query splits with splitByInterval and the query's exec time.")
f.BoolVar(&cfg.CacheSeriesResults, "querier.cache-series-results", false, "Cache series query results.")
cfg.SeriesCacheConfig.RegisterFlags(f)
f.BoolVar(&cfg.CacheLabelResults, "querier.cache-label-results", false, "Cache label query results.")
@@ -132,12 +138,13 @@ func NewMiddleware(
metrics := NewMetrics(registerer, metricsNamespace)
var (
- resultsCache cache.Cache
- statsCache cache.Cache
- volumeCache cache.Cache
- seriesCache cache.Cache
- labelsCache cache.Cache
- err error
+ resultsCache cache.Cache
+ statsCache cache.Cache
+ volumeCache cache.Cache
+ instantMetricCache cache.Cache
+ seriesCache cache.Cache
+ labelsCache cache.Cache
+ err error
)
if cfg.CacheResults {
@@ -161,6 +168,13 @@ func NewMiddleware(
}
}
+ if cfg.CacheInstantMetricResults {
+ instantMetricCache, err = newResultsCacheFromConfig(cfg.InstantMetricCacheConfig.ResultsCacheConfig, registerer, log, stats.InstantMetricResultsCache)
+ if err != nil {
+ return nil, nil, err
+ }
+ }
+
if cfg.CacheSeriesResults {
seriesCache, err = newResultsCacheFromConfig(cfg.SeriesCacheConfig.ResultsCacheConfig, registerer, log, stats.SeriesResultCache)
if err != nil {
@@ -211,7 +225,7 @@ func NewMiddleware(
return nil, nil, err
}
- instantMetricTripperware, err := NewInstantMetricTripperware(cfg, engineOpts, log, limits, schema, metrics, indexStatsTripperware, metricsNamespace)
+ instantMetricTripperware, err := NewInstantMetricTripperware(cfg, engineOpts, log, limits, schema, metrics, codec, instantMetricCache, cacheGenNumLoader, retentionEnabled, indexStatsTripperware, metricsNamespace)
if err != nil {
return nil, nil, err
}
@@ -761,7 +775,51 @@ func NewMetricTripperware(cfg Config, engineOpts logql.EngineOpts, log log.Logge
}
// NewInstantMetricTripperware creates a new frontend tripperware responsible for handling metric queries
-func NewInstantMetricTripperware(cfg Config, engineOpts logql.EngineOpts, log log.Logger, limits Limits, schema config.SchemaConfig, metrics *Metrics, indexStatsTripperware base.Middleware, metricsNamespace string) (base.Middleware, error) {
+func NewInstantMetricTripperware(
+ cfg Config,
+ engineOpts logql.EngineOpts,
+ log log.Logger,
+ limits Limits,
+ schema config.SchemaConfig,
+ metrics *Metrics,
+ merger base.Merger,
+ c cache.Cache,
+ cacheGenNumLoader base.CacheGenNumberLoader,
+ retentionEnabled bool,
+ indexStatsTripperware base.Middleware,
+ metricsNamespace string,
+) (base.Middleware, error) {
+ var cacheMiddleware base.Middleware
+ if cfg.CacheInstantMetricResults {
+ var err error
+ cacheMiddleware, err = NewInstantMetricCacheMiddleware(
+ log,
+ limits,
+ merger,
+ c,
+ cacheGenNumLoader,
+ func(_ context.Context, r base.Request) bool {
+ return !r.GetCachingOptions().Disabled
+ },
+ func(ctx context.Context, tenantIDs []string, r base.Request) int {
+ return MinWeightedParallelism(
+ ctx,
+ tenantIDs,
+ schema.Configs,
+ limits,
+ model.Time(r.GetStart().UnixMilli()),
+ model.Time(r.GetEnd().UnixMilli()),
+ )
+ },
+ retentionEnabled,
+ cfg.Transformer,
+ metrics.ResultsCacheMetrics,
+ )
+ if err != nil {
+ return nil, err
+ }
+ }
+
return base.MiddlewareFunc(func(next base.Handler) base.Handler {
statsHandler := indexStatsTripperware.Wrap(next)
@@ -769,11 +827,19 @@ func NewInstantMetricTripperware(cfg Config, engineOpts logql.EngineOpts, log lo
StatsCollectorMiddleware(),
NewLimitsMiddleware(limits),
NewQuerySizeLimiterMiddleware(schema.Configs, engineOpts, log, limits, statsHandler),
+ NewSplitByRangeMiddleware(log, engineOpts, limits, cfg.InstantMetricQuerySplitAlign, metrics.MiddlewareMapperMetrics.rangeMapper),
+ }
+
+ if cfg.CacheInstantMetricResults {
+ queryRangeMiddleware = append(
+ queryRangeMiddleware,
+ base.InstrumentMiddleware("instant_metric_results_cache", metrics.InstrumentMiddlewareMetrics),
+ cacheMiddleware,
+ )
}
if cfg.ShardedQueries {
queryRangeMiddleware = append(queryRangeMiddleware,
- NewSplitByRangeMiddleware(log, engineOpts, limits, metrics.MiddlewareMapperMetrics.rangeMapper),
NewQueryShardMiddleware(
log,
schema.Configs,
diff --git a/pkg/querier/queryrange/roundtrip_test.go b/pkg/querier/queryrange/roundtrip_test.go
index 7d74b0dd615c8..206822a50f6e8 100644
--- a/pkg/querier/queryrange/roundtrip_test.go
+++ b/pkg/querier/queryrange/roundtrip_test.go
@@ -1247,6 +1247,7 @@ type fakeLimits struct {
metadataSplitDuration map[string]time.Duration
recentMetadataSplitDuration map[string]time.Duration
recentMetadataQueryWindow map[string]time.Duration
+ instantMetricSplitDuration map[string]time.Duration
ingesterSplitDuration map[string]time.Duration
minShardingLookback time.Duration
queryTimeout time.Duration
@@ -1266,6 +1267,13 @@ func (f fakeLimits) QuerySplitDuration(key string) time.Duration {
return f.splitDuration[key]
}
+func (f fakeLimits) InstantMetricQuerySplitDuration(key string) time.Duration {
+ if f.instantMetricSplitDuration == nil {
+ return 0
+ }
+ return f.instantMetricSplitDuration[key]
+}
+
func (f fakeLimits) MetadataQuerySplitDuration(key string) time.Duration {
if f.metadataSplitDuration == nil {
return 0
diff --git a/pkg/querier/queryrange/split_by_range.go b/pkg/querier/queryrange/split_by_range.go
index 6845846d4deaa..16076cd948596 100644
--- a/pkg/querier/queryrange/split_by_range.go
+++ b/pkg/querier/queryrange/split_by_range.go
@@ -26,20 +26,25 @@ type splitByRange struct {
limits Limits
ng *logql.DownstreamEngine
metrics *logql.MapperMetrics
+
+	// Whether to align the rangeInterval to splitByInterval in the subqueries.
+ splitAlign bool
}
// NewSplitByRangeMiddleware creates a new Middleware that splits log requests by the range interval.
-func NewSplitByRangeMiddleware(logger log.Logger, engineOpts logql.EngineOpts, limits Limits, metrics *logql.MapperMetrics) queryrangebase.Middleware {
+func NewSplitByRangeMiddleware(logger log.Logger, engineOpts logql.EngineOpts, limits Limits, splitAlign bool, metrics *logql.MapperMetrics) queryrangebase.Middleware {
return queryrangebase.MiddlewareFunc(func(next queryrangebase.Handler) queryrangebase.Handler {
return &splitByRange{
logger: log.With(logger, "middleware", "InstantQuery.splitByRangeVector"),
next: next,
limits: limits,
ng: logql.NewDownstreamEngine(engineOpts, DownstreamHandler{
- limits: limits,
- next: next,
+ limits: limits,
+ next: next,
+ splitAlign: splitAlign,
}, limits, logger),
- metrics: metrics,
+ metrics: metrics,
+ splitAlign: splitAlign,
}
})
}
@@ -57,14 +62,26 @@ func (s *splitByRange) Do(ctx context.Context, request queryrangebase.Request) (
return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
}
- interval := validation.SmallestPositiveNonZeroDurationPerTenant(tenants, s.limits.QuerySplitDuration)
+ interval := validation.SmallestPositiveNonZeroDurationPerTenant(tenants, s.limits.InstantMetricQuerySplitDuration)
// if no interval configured, continue to the next middleware
if interval == 0 {
return s.next.Do(ctx, request)
}
mapperStats := logql.NewMapperStats()
- mapper, err := logql.NewRangeMapper(interval, s.metrics, mapperStats)
+
+ ir, ok := request.(*LokiInstantRequest)
+ if !ok {
+ return nil, fmt.Errorf("expected *LokiInstantRequest, got %T", request)
+ }
+
+ var mapper logql.RangeMapper
+
+ if s.splitAlign {
+ mapper, err = logql.NewRangeMapperWithSplitAlign(interval, ir.TimeTs, s.metrics, mapperStats)
+ } else {
+ mapper, err = logql.NewRangeMapper(interval, s.metrics, mapperStats)
+ }
if err != nil {
return nil, err
}
@@ -85,10 +102,6 @@ func (s *splitByRange) Do(ctx context.Context, request queryrangebase.Request) (
queryStatsCtx := stats.FromContext(ctx)
queryStatsCtx.AddSplitQueries(int64(mapperStats.GetSplitQueries()))
- if _, ok := request.(*LokiInstantRequest); !ok {
- return nil, fmt.Errorf("expected *LokiInstantRequest, got %T", request)
- }
-
query := s.ng.Query(ctx, logql.ParamsWithExpressionOverride{Params: params, ExpressionOverride: parsed})
res, err := query.Exec(ctx)
diff --git a/pkg/querier/queryrange/split_by_range_test.go b/pkg/querier/queryrange/split_by_range_test.go
index b1687611abc1d..af66c10a2f08a 100644
--- a/pkg/querier/queryrange/split_by_range_test.go
+++ b/pkg/querier/queryrange/split_by_range_test.go
@@ -8,6 +8,7 @@ import (
"github.com/go-kit/log"
"github.com/grafana/dskit/user"
+ "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/grafana/loki/pkg/loghttp"
@@ -17,14 +18,291 @@ import (
"github.com/grafana/loki/pkg/querier/queryrange/queryrangebase"
)
+func Test_RangeVectorSplitAlign(t *testing.T) {
+ var (
+ twelve34 = time.Date(1970, 1, 1, 12, 34, 0, 0, time.UTC) // 1970 12:34:00 UTC
+ twelve = time.Date(1970, 1, 1, 12, 00, 0, 0, time.UTC) // 1970 12:00:00 UTC
+ eleven = twelve.Add(-1 * time.Hour) // 1970 11:00:00 UTC
+ ten = eleven.Add(-1 * time.Hour) // 1970 10:00:00 UTC
+ )
+
+ for _, tc := range []struct {
+ name string
+ in queryrangebase.Request
+ subQueries []queryrangebase.RequestResponse
+ expected queryrangebase.Response
+ splitByInterval time.Duration
+ }{
+ {
+ name: "sum_splitBy_aligned_with_query_time",
+ splitByInterval: 1 * time.Minute,
+ in: &LokiInstantRequest{
+ Query: `sum(bytes_over_time({app="foo"}[3m]))`,
+ TimeTs: time.Unix(180, 0),
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum(bytes_over_time({app="foo"}[3m]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum(bytes_over_time({app="foo"}[1m]))`, 1, time.Unix(60, 0)),
+ subQueryRequestResponseWithQueryTime(`sum(bytes_over_time({app="foo"}[1m]))`, 2, time.Unix(120, 0)),
+ subQueryRequestResponseWithQueryTime(`sum(bytes_over_time({app="foo"}[1m]))`, 3, time.Unix(180, 0)),
+ },
+ expected: expectedMergedResponseWithTime(1+2+3, time.Unix(180, 0)), // original `TimeTs` of the query.
+ },
+ {
+ name: "sum_splitBy_not_aligned_query_time",
+ splitByInterval: 1 * time.Hour,
+ in: &LokiInstantRequest{
+ Query: `sum(bytes_over_time({app="foo"}[3h]))`,
+ TimeTs: twelve34,
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum(bytes_over_time({app="foo"}[3h]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum(bytes_over_time({app="foo"}[34m]))`, 1, twelve34),
+ subQueryRequestResponseWithQueryTime(`sum(bytes_over_time({app="foo"}[1h]))`, 2, twelve),
+ subQueryRequestResponseWithQueryTime(`sum(bytes_over_time({app="foo"}[1h]))`, 3, eleven),
+ subQueryRequestResponseWithQueryTime(`sum(bytes_over_time({app="foo"}[26m]))`, 4, ten),
+ },
+ expected: expectedMergedResponseWithTime(1+2+3+4, twelve34), // original `TimeTs` of the query.
+ },
+ {
+ name: "sum_aggregation_splitBy_aligned_with_query_time",
+ splitByInterval: 1 * time.Minute,
+ in: &LokiInstantRequest{
+ Query: `sum by (bar) (bytes_over_time({app="foo"}[3m]))`,
+ TimeTs: time.Unix(180, 0),
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum by (bar) (bytes_over_time({app="foo"}[3m]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(bytes_over_time({app="foo"}[1m]))`, 10, time.Unix(60, 0)),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(bytes_over_time({app="foo"}[1m]))`, 20, time.Unix(120, 0)),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(bytes_over_time({app="foo"}[1m]))`, 30, time.Unix(180, 0)),
+ },
+ expected: expectedMergedResponseWithTime(10+20+30, time.Unix(180, 0)),
+ },
+ {
+ name: "sum_aggregation_splitBy_not_aligned_with_query_time",
+ splitByInterval: 1 * time.Hour,
+ in: &LokiInstantRequest{
+ Query: `sum by (bar) (bytes_over_time({app="foo"}[3h]))`,
+ TimeTs: twelve34,
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum by (bar) (bytes_over_time({app="foo"}[3h]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(bytes_over_time({app="foo"}[34m]))`, 10, twelve34), // 12:34:00
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(bytes_over_time({app="foo"}[1h]))`, 20, twelve), // 12:00:00 aligned
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(bytes_over_time({app="foo"}[1h]))`, 30, eleven), // 11:00:00 aligned
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(bytes_over_time({app="foo"}[26m]))`, 40, ten), // 10:00:00
+ },
+ expected: expectedMergedResponseWithTime(10+20+30+40, twelve34),
+ },
+ {
+ name: "count_over_time_aligned_with_query_time",
+ splitByInterval: 1 * time.Minute,
+ in: &LokiInstantRequest{
+ Query: `sum(count_over_time({app="foo"}[3m]))`,
+ TimeTs: time.Unix(180, 0),
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum(count_over_time({app="foo"}[3m]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum(count_over_time({app="foo"}[1m]))`, 1, time.Unix(60, 0)),
+ subQueryRequestResponseWithQueryTime(`sum(count_over_time({app="foo"}[1m]))`, 1, time.Unix(120, 0)),
+ subQueryRequestResponseWithQueryTime(`sum(count_over_time({app="foo"}[1m]))`, 1, time.Unix(180, 0)),
+ },
+ expected: expectedMergedResponseWithTime(1+1+1, time.Unix(180, 0)),
+ },
+ {
+ name: "count_over_time_not_aligned_with_query_time",
+ splitByInterval: 1 * time.Hour,
+ in: &LokiInstantRequest{
+ Query: `sum(count_over_time({app="foo"}[3h]))`,
+ TimeTs: twelve34,
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum(count_over_time({app="foo"}[3h]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum(count_over_time({app="foo"}[34m]))`, 1, twelve34),
+ subQueryRequestResponseWithQueryTime(`sum(count_over_time({app="foo"}[1h]))`, 1, twelve),
+ subQueryRequestResponseWithQueryTime(`sum(count_over_time({app="foo"}[1h]))`, 1, eleven),
+ subQueryRequestResponseWithQueryTime(`sum(count_over_time({app="foo"}[26m]))`, 1, ten),
+ },
+ expected: expectedMergedResponseWithTime(1+1+1+1, twelve34),
+ },
+ {
+ name: "sum_agg_count_over_time_align_with_query_time",
+ splitByInterval: 1 * time.Minute,
+ in: &LokiInstantRequest{
+ Query: `sum by (bar) (count_over_time({app="foo"}[3m]))`,
+ TimeTs: time.Unix(180, 0),
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum by (bar) (count_over_time({app="foo"}[3m]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(count_over_time({app="foo"}[1m]))`, 0, time.Unix(60, 0)),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(count_over_time({app="foo"}[1m]))`, 0, time.Unix(120, 0)),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(count_over_time({app="foo"}[1m]))`, 0, time.Unix(180, 0)),
+ },
+ expected: expectedMergedResponseWithTime(0+0+0, time.Unix(180, 0)),
+ },
+ {
+ name: "sum_agg_count_over_time_not_align_with_query_time",
+ splitByInterval: 1 * time.Hour,
+ in: &LokiInstantRequest{
+ Query: `sum by (bar) (count_over_time({app="foo"}[3h]))`,
+ TimeTs: twelve34,
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum by (bar) (count_over_time({app="foo"}[3h]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(count_over_time({app="foo"}[34m]))`, 0, twelve34),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(count_over_time({app="foo"}[1h]))`, 0, twelve),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(count_over_time({app="foo"}[1h]))`, 0, eleven),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(count_over_time({app="foo"}[26m]))`, 0, ten),
+ },
+ expected: expectedMergedResponseWithTime(0+0+0+0, twelve34),
+ },
+ {
+ name: "sum_over_time_aligned_with_query_time",
+ splitByInterval: 1 * time.Minute,
+ in: &LokiInstantRequest{
+ Query: `sum(sum_over_time({app="foo"} | unwrap bar [3m]))`,
+ TimeTs: time.Unix(180, 0),
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum(sum_over_time({app="foo"} | unwrap bar [3m]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum(sum_over_time({app="foo"} | unwrap bar[1m]))`, 1, time.Unix(60, 0)),
+ subQueryRequestResponseWithQueryTime(`sum(sum_over_time({app="foo"} | unwrap bar[1m]))`, 2, time.Unix(120, 0)),
+ subQueryRequestResponseWithQueryTime(`sum(sum_over_time({app="foo"} | unwrap bar[1m]))`, 3, time.Unix(180, 0)),
+ },
+ expected: expectedMergedResponseWithTime(1+2+3, time.Unix(180, 0)),
+ },
+ {
+ name: "sum_over_time_not_aligned_with_query_time",
+ splitByInterval: 1 * time.Hour,
+ in: &LokiInstantRequest{
+ Query: `sum(sum_over_time({app="foo"} | unwrap bar [3h]))`,
+ TimeTs: twelve34,
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum(sum_over_time({app="foo"} | unwrap bar [3h]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum(sum_over_time({app="foo"} | unwrap bar[34m]))`, 1, twelve34),
+ subQueryRequestResponseWithQueryTime(`sum(sum_over_time({app="foo"} | unwrap bar[1h]))`, 2, twelve),
+ subQueryRequestResponseWithQueryTime(`sum(sum_over_time({app="foo"} | unwrap bar[1h]))`, 3, eleven),
+ subQueryRequestResponseWithQueryTime(`sum(sum_over_time({app="foo"} | unwrap bar[26m]))`, 4, ten),
+ },
+ expected: expectedMergedResponseWithTime(1+2+3+4, twelve34),
+ },
+ {
+ name: "sum_agg_sum_over_time_aligned_with_query_time",
+ splitByInterval: 1 * time.Minute,
+ in: &LokiInstantRequest{
+ Query: `sum by (bar) (sum_over_time({app="foo"} | unwrap bar [3m]))`,
+ TimeTs: time.Unix(180, 0),
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum by (bar) (sum_over_time({app="foo"} | unwrap bar [3m]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(sum_over_time({app="foo"} | unwrap bar[1m]))`, 1, time.Unix(60, 0)),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(sum_over_time({app="foo"} | unwrap bar[1m]))`, 2, time.Unix(120, 0)),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(sum_over_time({app="foo"} | unwrap bar[1m]))`, 3, time.Unix(180, 0)),
+ },
+ expected: expectedMergedResponseWithTime(1+2+3, time.Unix(180, 0)),
+ },
+ {
+ name: "sum_agg_sum_over_time_not_aligned_with_query_time",
+ splitByInterval: 1 * time.Hour,
+ in: &LokiInstantRequest{
+ Query: `sum by (bar) (sum_over_time({app="foo"} | unwrap bar [3h]))`,
+ TimeTs: twelve34,
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(`sum by (bar) (sum_over_time({app="foo"} | unwrap bar [3h]))`),
+ },
+ },
+ subQueries: []queryrangebase.RequestResponse{
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(sum_over_time({app="foo"} | unwrap bar[34m]))`, 1, twelve34),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(sum_over_time({app="foo"} | unwrap bar[1h]))`, 2, twelve),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(sum_over_time({app="foo"} | unwrap bar[1h]))`, 3, eleven),
+ subQueryRequestResponseWithQueryTime(`sum by (bar)(sum_over_time({app="foo"} | unwrap bar[26m]))`, 4, ten),
+ },
+ expected: expectedMergedResponseWithTime(1+2+3+4, twelve34),
+ },
+ } {
+ tc := tc
+ t.Run(tc.name, func(t *testing.T) {
+ srm := NewSplitByRangeMiddleware(log.NewNopLogger(), testEngineOpts, fakeLimits{
+ maxSeries: 10000,
+ queryTimeout: time.Second,
+ instantMetricSplitDuration: map[string]time.Duration{
+ "tenant": tc.splitByInterval,
+ },
+ }, true, nilShardingMetrics) // enable splitAlign
+
+ ctx := user.InjectOrgID(context.TODO(), "tenant")
+
+ byTimeTs := make(map[int64]queryrangebase.RequestResponse)
+ for _, v := range tc.subQueries {
+ key := v.Request.(*LokiInstantRequest).TimeTs.UnixNano()
+ byTimeTs[key] = v
+ }
+
+ resp, err := srm.Wrap(queryrangebase.HandlerFunc(
+ func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ // req should match with one of the subqueries.
+ ts := req.(*LokiInstantRequest).TimeTs
+ subq, ok := byTimeTs[ts.UnixNano()]
+ if !ok { // every req **should** match with one of the subqueries
+ return nil, fmt.Errorf("subquery request '%s-%s' not found", req.GetQuery(), ts)
+ }
+
+ // Assert subquery request
+ assert.Equal(t, subq.Request.GetQuery(), req.GetQuery())
+ assert.Equal(t, subq.Request, req)
+ return subq.Response, nil
+
+ })).Do(ctx, tc.in)
+ require.NoError(t, err)
+ assert.Equal(t, tc.expected, resp.(*LokiPromResponse).Response)
+ })
+ }
+}
+
func Test_RangeVectorSplit(t *testing.T) {
srm := NewSplitByRangeMiddleware(log.NewNopLogger(), testEngineOpts, fakeLimits{
maxSeries: 10000,
queryTimeout: time.Second,
- splitDuration: map[string]time.Duration{
+ instantMetricSplitDuration: map[string]time.Duration{
"tenant": time.Minute,
},
- }, nilShardingMetrics)
+ }, false, nilShardingMetrics)
ctx := user.InjectOrgID(context.TODO(), "tenant")
@@ -151,6 +429,39 @@ func Test_RangeVectorSplit(t *testing.T) {
}
}
+// subQueryRequestResponseWithQueryTime returns a RequestResponse containing the expected subQuery instant request
+// and a response containing a sample value, evaluated at the given query time
+func subQueryRequestResponseWithQueryTime(expectedSubQuery string, sampleValue float64, exec time.Time) queryrangebase.RequestResponse {
+ return queryrangebase.RequestResponse{
+ Request: &LokiInstantRequest{
+ Query: expectedSubQuery,
+ TimeTs: exec,
+ Path: "/loki/api/v1/query",
+ Plan: &plan.QueryPlan{
+ AST: syntax.MustParseExpr(expectedSubQuery),
+ },
+ },
+ Response: &LokiPromResponse{
+ Response: &queryrangebase.PrometheusResponse{
+ Status: loghttp.QueryStatusSuccess,
+ Data: queryrangebase.PrometheusData{
+ ResultType: loghttp.ResultTypeVector,
+ Result: []queryrangebase.SampleStream{
+ {
+ Labels: []logproto.LabelAdapter{
+ {Name: "app", Value: "foo"},
+ },
+ Samples: []logproto.LegacySample{
+ {TimestampMs: 1000, Value: sampleValue},
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+}
+
// subQueryRequestResponse returns a RequestResponse containing the expected subQuery instant request
// and a response containing a sample value returned from the following wrapper
func subQueryRequestResponse(expectedSubQuery string, sampleValue float64) queryrangebase.RequestResponse {
@@ -202,3 +513,20 @@ func expectedMergedResponse(expectedSampleValue float64) *queryrangebase.Prometh
},
}
}
+
+func expectedMergedResponseWithTime(expectedSampleValue float64, exec time.Time) *queryrangebase.PrometheusResponse {
+ return &queryrangebase.PrometheusResponse{
+ Status: loghttp.QueryStatusSuccess,
+ Data: queryrangebase.PrometheusData{
+ ResultType: loghttp.ResultTypeVector,
+ Result: []queryrangebase.SampleStream{
+ {
+ Labels: []logproto.LabelAdapter{},
+ Samples: []logproto.LegacySample{
+ {TimestampMs: exec.UnixMilli(), Value: expectedSampleValue},
+ },
+ },
+ },
+ },
+ }
+}
diff --git a/pkg/util/marshal/legacy/marshal_test.go b/pkg/util/marshal/legacy/marshal_test.go
index 6e07d84615928..a3dca73ac299f 100644
--- a/pkg/util/marshal/legacy/marshal_test.go
+++ b/pkg/util/marshal/legacy/marshal_test.go
@@ -161,6 +161,16 @@ var queryTests = []struct {
"downloadTime": 0,
"queryLengthServed": 0
},
+ "instantMetricResult": {
+ "entriesFound": 0,
+ "entriesRequested": 0,
+ "entriesStored": 0,
+ "bytesReceived": 0,
+ "bytesSent": 0,
+ "requests": 0,
+ "downloadTime": 0,
+ "queryLengthServed": 0
+ },
"result": {
"entriesFound": 0,
"entriesRequested": 0,
@@ -180,7 +190,7 @@ var queryTests = []struct {
"shards": 0,
"splits": 0,
"subqueries": 0,
- "totalBytesProcessed": 0,
+ "totalBytesProcessed": 0,
"totalEntriesReturned": 0,
"totalLinesProcessed": 0,
"totalStructuredMetadataBytesProcessed": 0,
diff --git a/pkg/util/marshal/marshal_test.go b/pkg/util/marshal/marshal_test.go
index d5336298c37c8..ce7a49f97e76c 100644
--- a/pkg/util/marshal/marshal_test.go
+++ b/pkg/util/marshal/marshal_test.go
@@ -129,6 +129,16 @@ const emptyStats = `{
"downloadTime": 0,
"queryLengthServed": 0
},
+ "instantMetricResult": {
+ "entriesFound": 0,
+ "entriesRequested": 0,
+ "entriesStored": 0,
+ "bytesReceived": 0,
+ "bytesSent": 0,
+ "requests": 0,
+ "downloadTime": 0,
+ "queryLengthServed": 0
+ },
"result": {
"entriesFound": 0,
"entriesRequested": 0,
@@ -208,13 +218,13 @@ var queryTestWithEncodingFlags = []struct {
[ "123456789012346", "super line with labels", {
"structuredMetadata": {
"foo": "a",
- "bar": "b"
- }
+ "bar": "b"
+ }
}],
[ "123456789012347", "super line with labels msg=text", {
"structuredMetadata": {
"foo": "a",
- "bar": "b"
+ "bar": "b"
},
"parsed": {
"msg": "text"
@@ -549,13 +559,13 @@ var tailTestWithEncodingFlags = []struct {
[ "123456789012346", "super line with labels", {
"structuredMetadata": {
"foo": "a",
- "bar": "b"
- }
+ "bar": "b"
+ }
}],
[ "123456789012347", "super line with labels msg=text", {
"structuredMetadata": {
"foo": "a",
- "bar": "b"
+ "bar": "b"
},
"parsed": {
"msg": "text"
diff --git a/pkg/validation/limits.go b/pkg/validation/limits.go
index 00ee2e152144a..ab845380f9682 100644
--- a/pkg/validation/limits.go
+++ b/pkg/validation/limits.go
@@ -111,6 +111,7 @@ type Limits struct {
MetadataQuerySplitDuration model.Duration `yaml:"split_metadata_queries_by_interval" json:"split_metadata_queries_by_interval"`
RecentMetadataQuerySplitDuration model.Duration `yaml:"split_recent_metadata_queries_by_interval" json:"split_recent_metadata_queries_by_interval"`
RecentMetadataQueryWindow model.Duration `yaml:"recent_metadata_query_window" json:"recent_metadata_query_window"`
+ InstantMetricQuerySplitDuration model.Duration `yaml:"split_instant_metric_queries_by_interval" json:"split_instant_metric_queries_by_interval"`
IngesterQuerySplitDuration model.Duration `yaml:"split_ingester_queries_by_interval" json:"split_ingester_queries_by_interval"`
MinShardingLookback model.Duration `yaml:"min_sharding_lookback" json:"min_sharding_lookback"`
MaxQueryBytesRead flagext.ByteSize `yaml:"max_query_bytes_read" json:"max_query_bytes_read"`
@@ -307,6 +308,8 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
_ = l.QuerySplitDuration.Set("1h")
f.Var(&l.QuerySplitDuration, "querier.split-queries-by-interval", "Split queries by a time interval and execute in parallel. The value 0 disables splitting by time. This also determines how cache keys are chosen when result caching is enabled.")
+ _ = l.InstantMetricQuerySplitDuration.Set("1h")
+ f.Var(&l.InstantMetricQuerySplitDuration, "querier.split-instant-metric-queries-by-interval", "Split instant metric queries by a time interval and execute in parallel. The value 0 disables splitting instant metric queries by time. This also determines how cache keys are chosen when instant metric query result caching is enabled.")
_ = l.MetadataQuerySplitDuration.Set("24h")
f.Var(&l.MetadataQuerySplitDuration, "querier.split-metadata-queries-by-interval", "Split metadata queries by a time interval and execute in parallel. The value 0 disables splitting metadata queries by time. This also determines how cache keys are chosen when label/series result caching is enabled.")
@@ -601,6 +604,11 @@ func (o *Overrides) QuerySplitDuration(userID string) time.Duration {
return time.Duration(o.getOverridesForUser(userID).QuerySplitDuration)
}
+// InstantMetricQuerySplitDuration returns the tenant specific instant metric query split interval applied in the query frontend.
+func (o *Overrides) InstantMetricQuerySplitDuration(userID string) time.Duration {
+ return time.Duration(o.getOverridesForUser(userID).InstantMetricQuerySplitDuration)
+}
+
// MetadataQuerySplitDuration returns the tenant specific metadata query split interval applied in the query frontend.
func (o *Overrides) MetadataQuerySplitDuration(userID string) time.Duration {
return time.Duration(o.getOverridesForUser(userID).MetadataQuerySplitDuration)
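The change above registers `split_instant_metric_queries_by_interval` as a per-tenant limit; set as an override, it might look like the following sketch (values illustrative):

```yaml
limits_config:
  # Split instant metric queries into 1h sub-queries and execute them in
  # parallel; 0 disables splitting.
  split_instant_metric_queries_by_interval: 1h
```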
|
feat
|
Support split align and caching for instant metric query results (#11814)
|
90c4821dd3377087980cde515fe5a5895c7bf34c
|
2021-11-06 00:17:50
|
Dylan Guedes
|
docs: Document the common ring section (#4664)
| false
|
diff --git a/docs/sources/configuration/_index.md b/docs/sources/configuration/_index.md
index ac9894e973633..2bd4a53f8c487 100644
--- a/docs/sources/configuration/_index.md
+++ b/docs/sources/configuration/_index.md
@@ -314,83 +314,8 @@ The `query_scheduler_config` block configures the Loki query scheduler.
[use_scheduler_ring: <boolean> | default = false]
# The hash ring configuration. This option is required only if use_scheduler_ring is true
-scheduler_ring:
- # The key-value store used to share the hash ring across multiple instances.
- kvstore:
- # Backend storage to use for the ring. Supported values are: consul, etcd,
- # inmemory, memberlist, multi.
- # CLI flag: -scheduler.ring.store
- [store: <string> | default = "memberlist"]
-
- # The prefix for the keys in the store. Should end with a /.
- # CLI flag: -scheduler.ring.prefix
- [prefix: <string> | default = "schedulers/"]
-
- # The consul_config configures the consul client.
- # The CLI flags prefix for this block config is: scheduler.ring
- [consul: <consul_config>]
-
- # The etcd_config configures the etcd client.
- # The CLI flags prefix for this block config is: scheduler.ring
- [etcd: <etcd_config>]
-
- multi:
- # Primary backend storage used by multi-client.
- # CLI flag: -scheduler.ring.multi.primary
- [primary: <string> | default = ""]
-
- # Secondary backend storage used by multi-client.
- # CLI flag: -scheduler.ring.multi.secondary
- [secondary: <string> | default = ""]
-
- # Mirror writes to secondary store.
- # CLI flag: -scheduler.ring.multi.mirror-enabled
- [mirror_enabled: <boolean> | default = false]
-
- # Timeout for storing value to secondary store.
- # CLI flag: -scheduler.ring.multi.mirror-timeout
- [mirror_timeout: <duration> | default = 2s]
-
- # Interval between heartbeats sent to the ring. 0 = disabled.
- # CLI flag: -scheduler.ring.heartbeat-period
- [heartbeat_period: <duration> | default = 15s]
-
- # The heartbeat timeout after which store gateways are considered unhealthy
- # within the ring. 0 = never (timeout disabled). This option needs be set both
- # on the store-gateway and querier when running in microservices mode.
- # CLI flag: -scheduler.ring.heartbeat-timeout
- [heartbeat_timeout: <duration> | default = 1m]
-
- # File path where tokens are stored. If empty, tokens are neither stored at
- # shutdown nor restored at startup.
- # CLI flag: -scheduler.ring.tokens-file-path
- [tokens_file_path: <string> | default = ""]
-
- # True to enable zone-awareness and replicate blocks across different
- # availability zones.
- # CLI flag: -scheduler.ring.zone-awareness-enabled
- [zone_awareness_enabled: <boolean> | default = false]
-
- # Name of network interface to read addresses from.
- # CLI flag: -scheduler.ring.instance-interface-names
- [instance_interface_names: <list of string> | default = [eth0 en0]]
-
- # IP address to advertise in the ring.
- # CLI flag: -scheduler.ring.instance-addr
- [instance_addr: <list of string> | default = first from instance_interface_names]
-
- # Port to advertise in the ring
- # CLI flag: -scheduler.ring.instance-port
- [instance_port: <list of string> | default = server.grpc-listen-port]
-
- # Instance ID to register in the ring.
- # CLI flag: -scheduler.ring.instance-id
- [instance_id: <list of string> | default = os.Hostname()]
-
- # The availability zone where this instance is running. Required if
- # zone-awareness is enabled.
- # CLI flag: -scheduler.ring.instance-availability-zone
- [instance_availability_zone: <string> | default = ""]
+# The CLI flags prefix for this block config is scheduler.ring
+[scheduler_ring: <ring_config>]
```
## query_frontend_config
@@ -717,62 +642,9 @@ remote_write:
# CLI flag: -ruler.search-pending-for
[search_pending_for: <duration> | default = 5m]
-ring:
- kvstore:
- # Backend storage to use for the ring. Supported values are: consul, etcd,
- # inmemory, memberlist, multi.
- # CLI flag: -ruler.ring.store
- [store: <string> | default = "consul"]
-
- # The prefix for the keys in the store. Should end with a /.
- # CLI flag: -ruler.ring.prefix
- [prefix: <string> | default = "rulers/"]
-
- # The consul_config configures the consul client.
- # The CLI flags prefix for this block config is: ruler.ring
- [consul: <consul_config>]
-
- # The etcd_config configures the etcd client.
- # The CLI flags prefix for this block config is: ruler.ring
- [etcd: <etcd_config>]
-
- multi:
- # Primary backend storage used by multi-client.
- # CLI flag: -ruler.ring.multi.primary
- [primary: <string> | default = ""]
-
- # Secondary backend storage used by multi-client.
- # CLI flag: -ruler.ring.multi.secondary
- [secondary: <string> | default = ""]
-
- # Mirror writes to secondary store.
- # CLI flag: -ruler.ring.multi.mirror-enabled
- [mirror_enabled: <boolean> | default = false]
-
- # Timeout for storing value to secondary store.
- # CLI flag: -ruler.ring.multi.mirror-timeout
- [mirror_timeout: <duration> | default = 2s]
-
- # Period at which to heartbeat to the ring.
- # CLI flag: -ruler.ring.heartbeat-period
- [heartbeat_period: <duration> | default = 5s]
-
- # The heartbeat timeout after which rulers are considered unhealthy within the
- # ring.
- # CLI flag: -ruler.ring.heartbeat-timeout
- [heartbeat_timeout: <duration> | default = 1m]
-
- # Number of tokens for each ingester.
- # CLI flag: -ruler.ring.num-tokens
- [num_tokens: <int> | default = 128]
-
-# Period with which to attempt to flush rule groups.
-# CLI flag: -ruler.flush-period
-[flush_period: <duration> | default = 1m]
-
-# Enable the Ruler API.
-# CLI flag: -ruler.enable-api
-[enable_api: <boolean> | default = false]
+# Ring used by Loki ruler.
+# The CLI flags prefix for this block config is ruler.ring
+[ring: <ring_config>]
```
## azure_storage_config
@@ -2017,83 +1889,8 @@ compacts index shards to more performant forms.
[max_compaction_parallelism: <int> | default = 1]
# The hash ring configuration used by compactors to elect a single instance for running compactions
-compactor_ring:
- # The key-value store used to share the hash ring across multiple instances.
- kvstore:
- # Backend storage to use for the ring. Supported values are: consul, etcd,
- # inmemory, memberlist, multi.
- # CLI flag: -boltdb.shipper.compactor.ring.store
- [store: <string> | default = "memberlist"]
-
- # The prefix for the keys in the store. Should end with a /.
- # CLI flag: -boltdb.shipper.compactor.ring.prefix
- [prefix: <string> | default = "compactors/"]
-
- # The consul_config configures the consul client.
- # The CLI flags prefix for this block config is: boltdb.shipper.compactor.ring
- [consul: <consul_config>]
-
- # The etcd_config configures the etcd client.
- # The CLI flags prefix for this block config is: boltdb.shipper.compactor.ring
- [etcd: <etcd_config>]
-
- multi:
- # Primary backend storage used by multi-client.
- # CLI flag: -boltdb.shipper.compactor.ring.multi.primary
- [primary: <string> | default = ""]
-
- # Secondary backend storage used by multi-client.
- # CLI flag: -boltdb.shipper.compactor.ring.multi.secondary
- [secondary: <string> | default = ""]
-
- # Mirror writes to secondary store.
- # CLI flag: -boltdb.shipper.compactor.ring.multi.mirror-enabled
- [mirror_enabled: <boolean> | default = false]
-
- # Timeout for storing value to secondary store.
- # CLI flag: -boltdb.shipper.compactor.ring.multi.mirror-timeout
- [mirror_timeout: <duration> | default = 2s]
-
- # Interval between heartbeats sent to the ring. 0 = disabled.
- # CLI flag: -boltdb.shipper.compactor.ring.heartbeat-period
- [heartbeat_period: <duration> | default = 15s]
-
- # The heartbeat timeout after which store gateways are considered unhealthy
- # within the ring. 0 = never (timeout disabled). This option needs be set both
- # on the store-gateway and querier when running in microservices mode.
- # CLI flag: -boltdb.shipper.compactor.ring.heartbeat-timeout
- [heartbeat_timeout: <duration> | default = 1m]
-
- # File path where tokens are stored. If empty, tokens are neither stored at
- # shutdown nor restored at startup.
- # CLI flag: -boltdb.shipper.compactor.ring.tokens-file-path
- [tokens_file_path: <string> | default = ""]
-
- # True to enable zone-awareness and replicate blocks across different
- # availability zones.
- # CLI flag: -boltdb.shipper.compactor.ring.zone-awareness-enabled
- [zone_awareness_enabled: <boolean> | default = false]
-
- # Name of network interface to read addresses from.
- # CLI flag: -boltdb.shipper.compactor.ring.instance-interface-names
- [instance_interface_names: <list of string> | default = [eth0 en0]]
-
- # IP address to advertise in the ring.
- # CLI flag: -boltdb.shipper.compactor.ring.instance-addr
- [instance_addr: <list of string> | default = first from instance_interface_names]
-
- # Port to advertise in the ring
- # CLI flag: -boltdb.shipper.compactor.ring.instance-port
- [instance_port: <list of string> | default = server.grpc-listen-port]
-
- # Instance ID to register in the ring.
- # CLI flag: -boltdb.shipper.compactor.ring.instance-id
- [instance_id: <list of string> | default = os.Hostname()]
-
- # The availability zone where this instance is running. Required if
- # zone-awareness is enabled.
- # CLI flag: -boltdb.shipper.compactor.ring.instance-availability-zone
- [instance_availability_zone: <string> | default = ""]
+# The CLI flags prefix for this block config is: boltdb.shipper.compactor.ring
+[compactor_ring: <ring_config>]
```
## limits_config
@@ -2523,6 +2320,12 @@ This way, one doesn't have to replicate configs in multiple places.
# When true, the ingester, compactor and query_scheduler ring tokens will be saved to files in the path_prefix directory
# Loki will error if you set this to true and path_prefix is empty.
[persist_tokens: <boolean>: default = false]
+
+# A common ring config to be used by all Loki rings.
+# If a common ring is given, its values are used to define any undefined ring values. For instance,
+# you can expect the `heartbeat_period` defined in the common section to be used by the distributor's ring,
+# but only if the distributor's ring itself doesn't have a `heartbeat_period` set.
+[ring: <ring_config>]
```
### common_storage_config
@@ -2547,6 +2350,87 @@ If any specific configs for an object storage client have been provided elsewher
[filesystem: <local_storage_config>]
```
+### ring_config
+
+The `ring_config` block defines a ring configuration used by a Loki component.
+
+```yaml
+# The key-value store used to share the hash ring across multiple instances.
+kvstore:
+ # Backend storage to use for the ring. Supported values are: consul, etcd,
+ # inmemory, memberlist, multi.
+ # CLI flag: -<prefix>.store
+ [store: <string> | default = "memberlist"]
+
+ # The prefix for the keys in the store. Should end with a /.
+ # CLI flag: -<prefix>.prefix
+ [prefix: <string> | default = "schedulers/"]
+
+ # The consul_config configures the consul client.
+ [consul: <consul_config>]
+
+ # The etcd_config configures the etcd client.
+ [etcd: <etcd_config>]
+
+ multi:
+ # Primary backend storage used by multi-client.
+ # CLI flag: -<prefix>.multi.primary
+ [primary: <string> | default = ""]
+
+ # Secondary backend storage used by multi-client.
+ # CLI flag: -<prefix>.multi.secondary
+ [secondary: <string> | default = ""]
+
+ # Mirror writes to secondary store.
+ # CLI flag: -<prefix>.multi.mirror-enabled
+ [mirror_enabled: <boolean> | default = false]
+
+ # Timeout for storing value to secondary store.
+ # CLI flag: -<prefix>.multi.mirror-timeout
+ [mirror_timeout: <duration> | default = 2s]
+
+# Interval between heartbeats sent to the ring. 0 = disabled.
+# CLI flag: -<prefix>.heartbeat-period
+[heartbeat_period: <duration> | default = 15s]
+
+# The heartbeat timeout after which store gateways are considered unhealthy
+# within the ring. 0 = never (timeout disabled). This option needs be set both
+# on the store-gateway and querier when running in microservices mode.
+# CLI flag: -<prefix>.heartbeat-timeout
+[heartbeat_timeout: <duration> | default = 1m]
+
+# File path where tokens are stored. If empty, tokens are neither stored at
+# shutdown nor restored at startup.
+# CLI flag: -<prefix>.tokens-file-path
+[tokens_file_path: <string> | default = ""]
+
+# True to enable zone-awareness and replicate blocks across different
+# availability zones.
+# CLI flag: -<prefix>.zone-awareness-enabled
+[zone_awareness_enabled: <boolean> | default = false]
+
+# Name of network interface to read addresses from.
+# CLI flag: -<prefix>.instance-interface-names
+[instance_interface_names: <list of string> | default = [eth0 en0]]
+
+# IP address to advertise in the ring.
+# CLI flag: -<prefix>.instance-addr
+[instance_addr: <list of string> | default = first from instance_interface_names]
+
+# Port to advertise in the ring
+# CLI flag: -<prefix>.instance-port
+[instance_port: <list of string> | default = server.grpc-listen-port]
+
+# Instance ID to register in the ring.
+# CLI flag: -<prefix>.instance-id
+[instance_id: <list of string> | default = os.Hostname()]
+
+# The availability zone where this instance is running. Required if
+# zone-awareness is enabled.
+# CLI flag: -<prefix>.instance-availability-zone
+[instance_availability_zone: <string> | default = ""]
+```
+
## Runtime Configuration file
Loki has a concept of a "runtime config" file, which is simply a file that is reloaded while Loki is running. It is used by some Loki components to allow operators to change some aspects of Loki configuration without restarting it. The file is specified with the `-runtime-config.file=<filename>` flag, and the reload period (which defaults to 10 seconds) can be changed with the `-runtime-config.reload-period=<duration>` flag. Previously this mechanism was only used for limits overrides, and the flags were called `-limits.per-user-override-config=<filename>` and `-limits.per-user-override-period=10s` respectively. These are still used if `-runtime-config.file=<filename>` is not specified.
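Put together, the common ring block documented above lets a single kvstore definition back every ring, with per-ring overrides taking precedence. A sketch (field names as in the `ring_config` block; values illustrative):

```yaml
common:
  ring:
    kvstore:
      store: memberlist
    heartbeat_timeout: 1m

query_scheduler:
  use_scheduler_ring: true
  scheduler_ring:
    # Overrides the common value; kvstore is still inherited from common.ring.
    heartbeat_period: 10s
```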
|
docs
|
Document the common ring section (#4664)
|
c4fb17f16e2a2235d4312005a75bb869db008364
|
2020-08-07 06:17:51
|
Jeroen Op 't Eynde
|
refactor: use $.core.v1.envVar (#2460)
| false
|
diff --git a/production/ksonnet/loki-canary/loki-canary.libsonnet b/production/ksonnet/loki-canary/loki-canary.libsonnet
index 72e20766f219b..5b46ae3c80d6a 100644
--- a/production/ksonnet/loki-canary/loki-canary.libsonnet
+++ b/production/ksonnet/loki-canary/loki-canary.libsonnet
@@ -16,8 +16,8 @@ k + config {
container.withPorts($.core.v1.containerPort.new(name='http-metrics', port=80)) +
container.withArgsMixin($.util.mapToFlags($.loki_canary_args)) +
container.withEnv([
- container.envType.fromFieldPath('HOSTNAME', 'spec.nodeName'),
- container.envType.fromFieldPath('POD_NAME', 'metadata.name'),
+ $.core.v1.envVar.fromFieldPath('HOSTNAME', 'spec.nodeName'),
+ $.core.v1.envVar.fromFieldPath('POD_NAME', 'metadata.name'),
]),
local daemonSet = $.apps.v1.daemonSet,
diff --git a/production/ksonnet/promtail/promtail.libsonnet b/production/ksonnet/promtail/promtail.libsonnet
index e1180251f25fe..6d6c7c70a8427 100644
--- a/production/ksonnet/promtail/promtail.libsonnet
+++ b/production/ksonnet/promtail/promtail.libsonnet
@@ -49,7 +49,7 @@ k + config + scrape_config {
container.withPorts($.core.v1.containerPort.new(name='http-metrics', port=80)) +
container.withArgsMixin($.util.mapToFlags($.promtail_args)) +
container.withEnv([
- container.envType.fromFieldPath('HOSTNAME', 'spec.nodeName'),
+ $.core.v1.envVar.fromFieldPath('HOSTNAME', 'spec.nodeName'),
]) +
container.mixin.readinessProbe.httpGet.withPath('/ready') +
container.mixin.readinessProbe.httpGet.withPort(80) +
|
refactor
|
use $.core.v1.envVar (#2460)
|
786f882f41d04a5e5e10668213c70a1dc44231dd
|
2025-03-17 20:50:57
|
loki-gh-app[bot]
|
chore( operator): community release 0.8.0 (#14859)
| false
|
diff --git a/.release-please-manifest.json b/.release-please-manifest.json
index 896a4090883a9..5c5ebfa64575c 100644
--- a/.release-please-manifest.json
+++ b/.release-please-manifest.json
@@ -1,4 +1,4 @@
{
".": "3.4.2",
- "operator": "0.7.1"
+ "operator": "0.8.0"
}
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index fbb9a89f57eae..afeab67065b1e 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,28 @@
## Main
+## [0.8.0](https://github.com/grafana/loki/compare/operator/v0.7.1...operator/v0.8.0) (2025-03-17)
+
+
+### ⚠ BREAKING CHANGES
+
+* **operator:** Add configuration option for dropping OTLP attributes ([#15857](https://github.com/grafana/loki/issues/15857))
+
+### Features
+
+* **operator:** Add configuration option for dropping OTLP attributes ([#15857](https://github.com/grafana/loki/issues/15857)) ([bd1ea23](https://github.com/grafana/loki/commit/bd1ea2313220b9aa187ff5b252f55512434c1865))
+* **operator:** Add support for Swift TLS CA configuration ([#15260](https://github.com/grafana/loki/issues/15260)) ([62a72f6](https://github.com/grafana/loki/commit/62a72f6405d5a5cbb0814fab8010c215b1782c93))
+* **operator:** Enable time-based stream-sharding ([#16390](https://github.com/grafana/loki/issues/16390)) ([1b4f1f5](https://github.com/grafana/loki/commit/1b4f1f57fa6ceac405f5ab5d0314de9d97e309d3))
+* **operator:** Update Loki operand to v3.4.2 ([#16360](https://github.com/grafana/loki/issues/16360)) ([42f87d3](https://github.com/grafana/loki/commit/42f87d3064b60438df09d1eb799fd50e5753d1f8))
+
+
+### Bug Fixes
+
+* **operator:** Fix minimum available ingesters for 1x.pico size ([#16035](https://github.com/grafana/loki/issues/16035)) ([40cf074](https://github.com/grafana/loki/commit/40cf074fba0ed0016a8ca64bed554f3d628e7ec6))
+* **operator:** Select non-zero delete worker count for all sizes ([#16492](https://github.com/grafana/loki/issues/16492)) ([1e5579a](https://github.com/grafana/loki/commit/1e5579abef02ed03f9dc87cf7d09f52f53768152))
+* **operator:** Update maximum OpenShift version ([#16443](https://github.com/grafana/loki/issues/16443)) ([ddf3cfb](https://github.com/grafana/loki/commit/ddf3cfbba7a6529a6902036c486b523b588818e3))
+* **operator:** Update OTLP user guide to reflect change in LokiStack ([#16057](https://github.com/grafana/loki/issues/16057)) ([14e2c87](https://github.com/grafana/loki/commit/14e2c875d2bc5d6678f964f1477cfafe6f37e496))
+* **operator:** Update skipRange in OpenShift variant ([#15984](https://github.com/grafana/loki/issues/15984)) ([dfbe00c](https://github.com/grafana/loki/commit/dfbe00c88a2f17da11b726a3461c11324c21fcca))
+
## [0.7.1](https://github.com/grafana/loki/compare/operator/v0.7.0...operator/v0.7.1) (2024-11-11)
|
chore
|
community release 0.8.0 (#14859)
|
67f711c2abc002e12c16a56e9c2ddff13c319954
|
2024-04-11 23:39:19
|
J Stickler
|
docs: Update 3.0 Release Notes (#12565)
| false
|
diff --git a/docs/sources/release-notes/_index.md b/docs/sources/release-notes/_index.md
index 40dc71c9c672f..db74b50c16d76 100644
--- a/docs/sources/release-notes/_index.md
+++ b/docs/sources/release-notes/_index.md
@@ -8,12 +8,13 @@ weight: 100
Release notes for Loki are in the CHANGELOG for the release and
listed here by version number.
-- [V2.9 release notes]({{< relref "./v2-9" >}})
-- [V2.8 release notes]({{< relref "./v2-8" >}})
-- [V2.7 release notes]({{< relref "./v2-7" >}})
-- [V2.6 release notes]({{< relref "./v2-6" >}})
-- [V2.5 release notes]({{< relref "./v2-5" >}})
-- [V2.4 release notes]({{< relref "./v2-4" >}})
-- [V2.3 release notes]({{< relref "./v2-3" >}})
+- [V3.0 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v3.0/)
+- [V2.9 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v2-9/)
+- [V2.8 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v2-8/)
+- [V2.7 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v2-7/)
+- [V2.6 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v2-6/)
+- [V2.5 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v2-5/)
+- [V2.4 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v2-4/)
+- [V2.3 release notes](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/v2-3/)
-The details about our release cadence are documented [here]({{< relref "./cadence" >}}).
+The details about our release cadence are documented [here](https://grafana.com/docs/loki/<LOKI_VERSION>/release-notes/cadence/).
diff --git a/docs/sources/release-notes/v3.0.md b/docs/sources/release-notes/v3.0.md
index b89a68a5af1a8..a44483d57d2f4 100644
--- a/docs/sources/release-notes/v3.0.md
+++ b/docs/sources/release-notes/v3.0.md
@@ -1,6 +1,6 @@
---
title: v3.0
-description: Version 3.0 release notes.
+description: Version 3.0 release notes.
weight: 30
---
@@ -8,10 +8,14 @@ weight: 30
Grafana Labs and the Loki team are excited to announce the release of Loki 3.0. Here's a summary of new enhancements and important fixes.
-For a full list of all changes and fixes, refer to the [CHANGELOG](https://github.com/grafana/loki/blob/release-3.0.0-rc1/CHANGELOG.md).
+For a full list of all changes and fixes, refer to the [CHANGELOG](https://github.com/grafana/loki/blob/release-3.0.x/CHANGELOG.md).
## Features and enhancements
+{{< admonition type="note" >}}
+Note that Loki 3.0 defaults to using the v13 schema. All of the latest features are built against TSDB and the v13 Schema. This version of the schema is compatible with both Loki 2.9.x and Loki 3.0. The main change is to add support for Structured Metadata which is used by the new OTLP native endpoint and is enabled by default.
+{{< /admonition >}}
+
Key features in Loki 3.0.0 include the following:
- **Query acceleration with Bloom filters** (experimental): This is designed to speed up filter queries, with best results for queries that are looking for a specific text string like an error message or UUID. For more information, refer to [Query acceleration with Blooms](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/query-acceleration-blooms/).
@@ -20,13 +24,19 @@ Key features in Loki 3.0.0 include the following:
- **Helm charts**: A major upgrade to the Loki helm chart introduces support for `Distributed` mode (also known as [microservices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode) mode), includes memcached by default, and includes several updates to configurations to improve Loki operations.
-- **Lambda/Promtail:** support dropping labels ([#10755](https://github.com/grafana/loki/issues/10755)) ([ec54c72](https://github.com/grafana/loki/commit/ec54c723ebbeeda88000dde188d539ecfe05dad8)).
+- **Pattern match filter**: LogQL now supports two new [pattern match filter operators](https://grafana.com/docs/loki/<LOKI_VERSION>/query/#pattern-match-filter-operators). You can match any word with just one control character and it is simpler and 10x faster than using regex.
+
+- **Caching updates**: This release includes multiple updates to caching to improve performance, add new configuration options and support for new features, deprecate features no longer needed, and add automatic background checks.
+
+- **Lambda/Promtail:** Support dropping labels ([#10755](https://github.com/grafana/loki/issues/10755)) ([ec54c72](https://github.com/grafana/loki/commit/ec54c723ebbeeda88000dde188d539ecfe05dad8)).
-- **Docs improvements**: All the Getting Started topics have been revised, including a new [Quickstart](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/quick-start/) to help new users get up and running with Loki faster.The Storage, Configuration Reference, and API documentation have been updated to reflect deprecated and removed code, configuration options, and API endpoints.
+- **Profiling integration**: Added profiling integrations to tracing instrumentation to allow getting a profile for a single request.
+
+- **Docs improvements**: All the Getting Started topics have been revised, including a new [Quickstart](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/quick-start/) to help new users get up and running with Loki faster. The Storage, Configuration Reference, and API documentation have been updated to reflect deprecated and removed code, configuration options, and API endpoints.
## Deprecations
-One of the focuses of Loki 3.0 was cleaning up unused code and old features that had been previously deprecated but not removed. Loki 3.0 removes a number of previous deprecations and introduces some new deprecations. Some of the main areas with changes include:
+One of the focuses of Loki 3.0 was cleaning up unused code and old features that had been previously deprecated but not removed. Loki 3.0 removes a number of previous deprecations and introduces some new deprecations. Some of the main areas with changes include:
- [Deprecated storage options](https://grafana.com/docs/loki/<LOKI_VERSION>/storage/) including the deprecation of the BoltDB store.
@@ -38,24 +48,35 @@ To learn more about breaking changes in this release, refer to the [Upgrade guid
## Upgrade Considerations
-The path from 2.9 to 3.0 includes several breaking changes. For important upgrade guidance, refer to the [Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) and the separate [Helm Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/upgrade-to-6x/).
+The path from 2.9 to 3.0 includes several breaking changes. For important upgrade guidance, refer to the [Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) and the separate [Helm Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/upgrade-to-6x/).
## Bug fixes
### 3.0.0 (2024-04-08)
-- All lifecycler cfgs ref a valid IPv6 addr and port combination ([#11121](https://github.com/grafana/loki/issues/11121)) ([6385b19](https://github.com/grafana/loki/commit/6385b195739bd7d4e9706faddd0de663d8e5331a))
-- **deps:** update github.com/c2h5oh/datasize digest to 859f65c (main) ([#10820](https://github.com/grafana/loki/issues/10820)) ([c66ffd1](https://github.com/grafana/loki/commit/c66ffd125cd89f5845a75a1751186fa46d003f70))
-- **deps:** update github.com/docker/go-plugins-helpers digest to 6eecb7b (main) ([#10826](https://github.com/grafana/loki/issues/10826)) ([fb9c496](https://github.com/grafana/loki/commit/fb9c496b21be62f56866ae0f92440085e7860a2a))
-- **deps:** update github.com/grafana/gomemcache digest to 6947259 (main) ([#10836](https://github.com/grafana/loki/issues/10836)) ([2327789](https://github.com/grafana/loki/commit/2327789b5506d0ccc00d931195da17a2d47bf236))
-- **deps:** update github.com/grafana/loki/pkg/push digest to 583aa28 (main) ([#10842](https://github.com/grafana/loki/issues/10842)) ([02d9418](https://github.com/grafana/loki/commit/02d9418270f4e615c1f78b0def635da7c0572ca4))
-- **deps:** update github.com/grafana/loki/pkg/push digest to cfc4f0e (main) ([#10946](https://github.com/grafana/loki/issues/10946)) ([d27c4d2](https://github.com/grafana/loki/commit/d27c4d297dc6cce93ada98f16b962380ec933c6a))
-- **deps:** update github.com/grafana/loki/pkg/push digest to e523809 (main) ([#11107](https://github.com/grafana/loki/issues/11107)) ([09cb9ae](https://github.com/grafana/loki/commit/09cb9ae76f4aef7dea477961c0c5424d7243bf2a))
-- **deps:** update github.com/joncrlsn/dque digest to c2ef48c (main) ([#10947](https://github.com/grafana/loki/issues/10947)) ([1fe4885](https://github.com/grafana/loki/commit/1fe48858ae15b33646eedb85b05d6773a8bc5020))
-- **deps:** update module google.golang.org/grpc [security] (main) ([#11031](https://github.com/grafana/loki/issues/11031)) ([0695424](https://github.com/grafana/loki/commit/0695424f7dd62435df3a9981276b40f3c5ef5641))
-- **helm:** bump nginx-unprivilege to fix CVE ([#10754](https://github.com/grafana/loki/issues/10754)) ([dbf7dd4](https://github.com/grafana/loki/commit/dbf7dd4bac112a538a59907a8c6092504e7f4a91))
-- **Parse JSON String arrays properly so string elements can be retrieved**: [PR #11921](https://github.com/grafana/loki/pull/11921)]
-- **promtail:** correctly parse list of drop stage sources from YAML ([#10848](https://github.com/grafana/loki/issues/10848)) ([f51ee84](https://github.com/grafana/loki/commit/f51ee849b03c5f6b79f3e93cb7fd7811636bede2))
-- **promtail:** prevent panic due to duplicate metric registration after reloaded ([#10798](https://github.com/grafana/loki/issues/10798)) ([47e2c58](https://github.com/grafana/loki/commit/47e2c5884f443667e64764f3fc3948f8f11abbb8))
-- respect query matcher in ingester when getting label values ([#10375](https://github.com/grafana/loki/issues/10375)) ([85e2e52](https://github.com/grafana/loki/commit/85e2e52279ecac6dc111d5c113c54d6054d2c922))
-- Sidecar configuration for Backend ([#10603](https://github.com/grafana/loki/issues/10603)) ([c29ba97](https://github.com/grafana/loki/commit/c29ba973a0b5b7b59613d210b741d5a547ea0e83))
-- **tools/lambda-promtail:** Do not evaluate empty string for drop_labels ([#11074](https://github.com/grafana/loki/issues/11074)) ([94169a0](https://github.com/grafana/loki/commit/94169a0e6b5bf96426ad21e40f9583b721f35d6c))
+
+- All lifecycler configurations reference a valid IPv6 address and port combination ([#11121](https://github.com/grafana/loki/issues/11121)) ([6385b19](https://github.com/grafana/loki/commit/6385b195739bd7d4e9706faddd0de663d8e5331a)).
+- **deps:** Update github.com/c2h5oh/datasize digest to 859f65c (main) ([#10820](https://github.com/grafana/loki/issues/10820)) ([c66ffd1](https://github.com/grafana/loki/commit/c66ffd125cd89f5845a75a1751186fa46d003f70)).
+- **deps:** Update github.com/docker/go-plugins-helpers digest to 6eecb7b (main) ([#10826](https://github.com/grafana/loki/issues/10826)) ([fb9c496](https://github.com/grafana/loki/commit/fb9c496b21be62f56866ae0f92440085e7860a2a)).
+- **deps:** Update github.com/grafana/gomemcache digest to 6947259 (main) ([#10836](https://github.com/grafana/loki/issues/10836)) ([2327789](https://github.com/grafana/loki/commit/2327789b5506d0ccc00d931195da17a2d47bf236)).
+- **deps:** Update github.com/grafana/loki/pkg/push digest to 583aa28 (main) ([#10842](https://github.com/grafana/loki/issues/10842)) ([02d9418](https://github.com/grafana/loki/commit/02d9418270f4e615c1f78b0def635da7c0572ca4)).
+- **deps:** Update github.com/grafana/loki/pkg/push digest to cfc4f0e (main) ([#10946](https://github.com/grafana/loki/issues/10946)) ([d27c4d2](https://github.com/grafana/loki/commit/d27c4d297dc6cce93ada98f16b962380ec933c6a)).
+- **deps:** Update github.com/grafana/loki/pkg/push digest to e523809 (main) ([#11107](https://github.com/grafana/loki/issues/11107)) ([09cb9ae](https://github.com/grafana/loki/commit/09cb9ae76f4aef7dea477961c0c5424d7243bf2a)).
+- **deps:** Update github.com/joncrlsn/dque digest to c2ef48c (main) ([#10947](https://github.com/grafana/loki/issues/10947)) ([1fe4885](https://github.com/grafana/loki/commit/1fe48858ae15b33646eedb85b05d6773a8bc5020)).
+- **deps:** Update module google.golang.org/grpc [security] (main) ([#11031](https://github.com/grafana/loki/issues/11031)) ([0695424](https://github.com/grafana/loki/commit/0695424f7dd62435df3a9981276b40f3c5ef5641)).
+- **helm:** Bump nginx-unprivilege to fix CVE ([#10754](https://github.com/grafana/loki/issues/10754)) ([dbf7dd4](https://github.com/grafana/loki/commit/dbf7dd4bac112a538a59907a8c6092504e7f4a91)).
+- **helm:** Sidecar configuration for Backend ([#10603](https://github.com/grafana/loki/issues/10603)) ([c29ba97](https://github.com/grafana/loki/commit/c29ba973a0b5b7b59613d210b741d5a547ea0e83)).
+- **lambda-promtail** Fix panic in lambda-promtail due to mishandling of empty DROP_LABELS env var. ([#11074](https://github.com/grafana/loki/pull/11074)).
+- **loki:** Respect query matcher in ingester when getting label values ([#10375](https://github.com/grafana/loki/issues/10375)) ([85e2e52](https://github.com/grafana/loki/commit/85e2e52279ecac6dc111d5c113c54d6054d2c922)).
+- **loki** Generate tsdb_shipper storage_config even if using_boltdb_shipper is false ([#11195](https://github.com/grafana/loki/pull/11195)).
+- **loki** Do not reflect label names in request metrics' "route" label. ([11551](https://github.com/grafana/loki/pull/11551)).
+- **loki** Fix duplicate logs from docker containers. ([#11563](https://github.com/grafana/loki/pull/11563)).
+- **loki** Ruler: Fixed a panic that can be caused by concurrent read-write access of tenant configs when there are a large amount of rules. ([#11601](https://github.com/grafana/loki/pull/11601)).
+- **loki** Fixed regression adding newlines to HTTP error response bodies which may break client integrations. ([#11606](https://github.com/grafana/loki/pull/11606)).
+- **loki** Log results cache: compose empty response based on the request being served to avoid returning incorrect limit or direction. ([#11657](https://github.com/grafana/loki/pull/11657)).
+- **loki** Fix semantics of label parsing logic of metrics and logs queries. Both only parse the first label if multiple extractions into the same label are requested. ([#11587](https://github.com/grafana/loki/pull/11587)).
+- **loki** Background Cache: Fixes a bug that is causing the background queue size to be incremented twice for each enqueued item. ([#11776](https://github.com/grafana/loki/pull/11776)).
+- **loki**: Parsing: String array elements were not being parsed correctly in JSON processing ([#11921](https://github.com/grafana/loki/pull/11921)).
+- **promtail:** Correctly parse list of drop stage sources from YAML ([#10848](https://github.com/grafana/loki/issues/10848)) ([f51ee84](https://github.com/grafana/loki/commit/f51ee849b03c5f6b79f3e93cb7fd7811636bede2)).
+- **promtail:** Prevent panic due to duplicate metric registration after reloaded ([#10798](https://github.com/grafana/loki/issues/10798)) ([47e2c58](https://github.com/grafana/loki/commit/47e2c5884f443667e64764f3fc3948f8f11abbb8)).
+- **promtail**: Fix Promtail excludepath not evaluated on newly added files. ([#9831](https://github.com/grafana/loki/pull/9831)).
+- **tools/lambda-promtail:** Do not evaluate empty string for drop_labels ([#11074](https://github.com/grafana/loki/issues/11074)) ([94169a0](https://github.com/grafana/loki/commit/94169a0e6b5bf96426ad21e40f9583b721f35d6c)).
|
docs
|
Update 3.0 Release Notes (#12565)
|
6e1680b9d1f077f4ea9cb8d39b361ea2c82b7dae
|
2024-04-29 21:03:51
|
Lars Falk-Petersen
|
docs: Fix typo in structured-metadata.md (#12818)
| false
|
diff --git a/docs/sources/get-started/labels/structured-metadata.md b/docs/sources/get-started/labels/structured-metadata.md
index 319af3886d97c..99f46f7087925 100644
--- a/docs/sources/get-started/labels/structured-metadata.md
+++ b/docs/sources/get-started/labels/structured-metadata.md
@@ -57,7 +57,7 @@ You can use labels of structured metadata to filter log line using a [label filt
For example, if you have a label `pod` attached to some of your log lines as structured metadata, you can filter log lines using:
```logql
-{job="example"} | pod="myservice-abc1234-56789"`
+{job="example"} | pod="myservice-abc1234-56789"
```
Of course, you can filter by multiple labels of structured metadata at the same time:
|
docs
|
Fix typo in structured-metadata.md (#12818)
|
0a0b7c83eab70eaa83f6082bebee569e4db394c7
|
2024-01-02 22:36:10
|
Christian Haudum
|
bloomshipper: Use `model.Time` in `MetaRef` and `BlockRef` (#11566)
| false
|
diff --git a/pkg/bloomcompactor/bloomcompactor.go b/pkg/bloomcompactor/bloomcompactor.go
index 4f2a965e3dd27..d9a83ec5b7100 100644
--- a/pkg/bloomcompactor/bloomcompactor.go
+++ b/pkg/bloomcompactor/bloomcompactor.go
@@ -472,10 +472,10 @@ func (c *Compactor) runCompact(ctx context.Context, logger log.Logger, job Job,
}
metaSearchParams := bloomshipper.MetaSearchParams{
TenantID: job.tenantID,
- MinFingerprint: uint64(job.minFp),
- MaxFingerprint: uint64(job.maxFp),
- StartTimestamp: int64(job.from),
- EndTimestamp: int64(job.through),
+ MinFingerprint: job.minFp,
+ MaxFingerprint: job.maxFp,
+ StartTimestamp: job.from,
+ EndTimestamp: job.through,
}
var metas []bloomshipper.Meta
//TODO Configure pool for these to avoid allocations
diff --git a/pkg/bloomcompactor/chunkcompactor.go b/pkg/bloomcompactor/chunkcompactor.go
index a949f26452d9d..744a38b1ad5aa 100644
--- a/pkg/bloomcompactor/chunkcompactor.go
+++ b/pkg/bloomcompactor/chunkcompactor.go
@@ -135,8 +135,8 @@ func buildBlockFromBlooms(
TableName: job.tableName,
MinFingerprint: uint64(job.minFp),
MaxFingerprint: uint64(job.maxFp),
- StartTimestamp: int64(job.from),
- EndTimestamp: int64(job.through),
+ StartTimestamp: job.from,
+ EndTimestamp: job.through,
Checksum: checksum,
},
IndexPath: job.indexPath,
@@ -148,7 +148,7 @@ func buildBlockFromBlooms(
}
func createLocalDirName(workingDir string, job Job) string {
- dir := fmt.Sprintf("bloomBlock-%s-%s-%s-%s-%s-%s", job.tableName, job.tenantID, job.minFp, job.maxFp, job.from, job.through)
+ dir := fmt.Sprintf("bloomBlock-%s-%s-%s-%s-%d-%d", job.tableName, job.tenantID, job.minFp, job.maxFp, job.from, job.through)
return filepath.Join(workingDir, dir)
}
diff --git a/pkg/bloomcompactor/chunkcompactor_test.go b/pkg/bloomcompactor/chunkcompactor_test.go
index 4d19f24417d47..a89e4e967a1d9 100644
--- a/pkg/bloomcompactor/chunkcompactor_test.go
+++ b/pkg/bloomcompactor/chunkcompactor_test.go
@@ -121,8 +121,8 @@ func TestChunkCompactor_CompactNewChunks(t *testing.T) {
require.Equal(t, job.tableName, compactedBlock.TableName)
require.Equal(t, uint64(fp1), compactedBlock.MinFingerprint)
require.Equal(t, uint64(fp2), compactedBlock.MaxFingerprint)
- require.Equal(t, chunkRef1.MinTime, compactedBlock.StartTimestamp)
- require.Equal(t, chunkRef2.MaxTime, compactedBlock.EndTimestamp)
+ require.Equal(t, model.Time(chunkRef1.MinTime), compactedBlock.StartTimestamp)
+ require.Equal(t, model.Time(chunkRef2.MaxTime), compactedBlock.EndTimestamp)
require.Equal(t, indexPath, compactedBlock.IndexPath)
}
diff --git a/pkg/bloomcompactor/mergecompactor.go b/pkg/bloomcompactor/mergecompactor.go
index 94682579ac9e2..0cf55cef86a7c 100644
--- a/pkg/bloomcompactor/mergecompactor.go
+++ b/pkg/bloomcompactor/mergecompactor.go
@@ -137,8 +137,8 @@ func mergeCompactChunks(logger log.Logger,
TableName: job.tableName,
MinFingerprint: uint64(job.minFp),
MaxFingerprint: uint64(job.maxFp),
- StartTimestamp: int64(job.from),
- EndTimestamp: int64(job.through),
+ StartTimestamp: job.from,
+ EndTimestamp: job.through,
Checksum: checksum,
},
IndexPath: job.indexPath,
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index f24b8bc8a4e22..b34e3d55852a5 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -75,7 +75,13 @@ func TestBloomGateway_StartStopService(t *testing.T) {
t.Cleanup(cm.Unregister)
p := config.PeriodConfig{
- From: parseDayTime("2023-09-01"),
+ From: parseDayTime("2023-09-01"),
+ IndexTables: config.IndexPeriodicTableConfig{
+ PeriodicTableConfig: config.PeriodicTableConfig{
+ Prefix: "index_",
+ Period: 24 * time.Hour,
+ },
+ },
IndexType: config.TSDBType,
ObjectType: config.StorageTypeFileSystem,
Schema: "v13",
@@ -137,7 +143,13 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
t.Cleanup(cm.Unregister)
p := config.PeriodConfig{
- From: parseDayTime("2023-09-01"),
+ From: parseDayTime("2023-09-01"),
+ IndexTables: config.IndexPeriodicTableConfig{
+ PeriodicTableConfig: config.PeriodicTableConfig{
+ Prefix: "index_",
+ Period: 24 * time.Hour,
+ },
+ },
IndexType: config.TSDBType,
ObjectType: config.StorageTypeFileSystem,
Schema: "v13",
diff --git a/pkg/storage/stores/shipper/bloomshipper/client.go b/pkg/storage/stores/shipper/bloomshipper/client.go
index 5636d1916f183..50b26d57a3a78 100644
--- a/pkg/storage/stores/shipper/bloomshipper/client.go
+++ b/pkg/storage/stores/shipper/bloomshipper/client.go
@@ -33,15 +33,15 @@ type Ref struct {
TenantID string
TableName string
MinFingerprint, MaxFingerprint uint64
- StartTimestamp, EndTimestamp int64
+ StartTimestamp, EndTimestamp model.Time
Checksum uint32
}
// Cmp returns the fingerprint's position relative to the bounds
-func (b Ref) Cmp(fp uint64) v1.BoundsCheck {
- if fp < b.MinFingerprint {
+func (r Ref) Cmp(fp uint64) v1.BoundsCheck {
+ if fp < r.MinFingerprint {
return v1.Before
- } else if fp > b.MaxFingerprint {
+ } else if fp > r.MaxFingerprint {
return v1.After
}
return v1.Overlap
@@ -67,11 +67,9 @@ type Meta struct {
}
type MetaSearchParams struct {
- TenantID string
- MinFingerprint uint64
- MaxFingerprint uint64
- StartTimestamp int64
- EndTimestamp int64
+ TenantID string
+ MinFingerprint, MaxFingerprint model.Fingerprint
+ StartTimestamp, EndTimestamp model.Time
}
type MetaClient interface {
@@ -128,9 +126,7 @@ type BloomClient struct {
}
func (b *BloomClient) GetMetas(ctx context.Context, params MetaSearchParams) ([]Meta, error) {
- start := model.TimeFromUnix(params.StartTimestamp)
- end := model.TimeFromUnix(params.EndTimestamp)
- tablesByPeriod := tablesByPeriod(b.periodicConfigs, start, end)
+ tablesByPeriod := tablesByPeriod(b.periodicConfigs, params.StartTimestamp, params.EndTimestamp)
var metas []Meta
for periodFrom, tables := range tablesByPeriod {
@@ -146,8 +142,8 @@ func (b *BloomClient) GetMetas(ctx context.Context, params MetaSearchParams) ([]
if err != nil {
return nil, err
}
- if metaRef.MaxFingerprint < params.MinFingerprint || params.MaxFingerprint < metaRef.MinFingerprint ||
- metaRef.StartTimestamp < params.StartTimestamp || params.EndTimestamp < metaRef.EndTimestamp {
+ if metaRef.MaxFingerprint < uint64(params.MinFingerprint) || uint64(params.MaxFingerprint) < metaRef.MinFingerprint ||
+ metaRef.StartTimestamp.Before(params.StartTimestamp) || metaRef.EndTimestamp.After(params.EndTimestamp) {
continue
}
meta, err := b.downloadMeta(ctx, metaRef, periodClient)
@@ -176,24 +172,23 @@ func (b *BloomClient) PutMeta(ctx context.Context, meta Meta) error {
func createBlockObjectKey(meta Ref) string {
blockParentFolder := fmt.Sprintf("%x-%x", meta.MinFingerprint, meta.MaxFingerprint)
- filename := fmt.Sprintf("%v-%v-%x", meta.StartTimestamp, meta.EndTimestamp, meta.Checksum)
+ filename := fmt.Sprintf("%d-%d-%x", meta.StartTimestamp, meta.EndTimestamp, meta.Checksum)
return strings.Join([]string{rootFolder, meta.TableName, meta.TenantID, bloomsFolder, blockParentFolder, filename}, delimiter)
}
func createMetaObjectKey(meta Ref) string {
- filename := fmt.Sprintf("%x-%x-%v-%v-%x", meta.MinFingerprint, meta.MaxFingerprint, meta.StartTimestamp, meta.EndTimestamp, meta.Checksum)
+ filename := fmt.Sprintf("%x-%x-%d-%d-%x", meta.MinFingerprint, meta.MaxFingerprint, meta.StartTimestamp, meta.EndTimestamp, meta.Checksum)
return strings.Join([]string{rootFolder, meta.TableName, meta.TenantID, metasFolder, filename}, delimiter)
}
-func findPeriod(configs []config.PeriodConfig, timestamp int64) (config.DayTime, error) {
- ts := model.TimeFromUnix(timestamp)
+func findPeriod(configs []config.PeriodConfig, ts model.Time) (config.DayTime, error) {
for i := len(configs) - 1; i >= 0; i-- {
periodConfig := configs[i]
if periodConfig.From.Before(ts) || periodConfig.From.Equal(ts) {
return periodConfig.From, nil
}
}
- return config.DayTime{}, fmt.Errorf("can not find period for timestamp %d", timestamp)
+ return config.DayTime{}, fmt.Errorf("can not find period for timestamp %d", ts)
}
func (b *BloomClient) DeleteMeta(ctx context.Context, meta Meta) error {
@@ -289,7 +284,6 @@ func (b *BloomClient) downloadMeta(ctx context.Context, metaRef MetaRef, client
return meta, nil
}
-// todo cover with tests
func createMetaRef(objectKey string, tenantID string, tableName string) (MetaRef, error) {
fileName := objectKey[strings.LastIndex(objectKey, delimiter)+1:]
parts := strings.Split(fileName, fileNamePartDelimiter)
@@ -323,8 +317,8 @@ func createMetaRef(objectKey string, tenantID string, tableName string) (MetaRef
TableName: tableName,
MinFingerprint: minFingerprint,
MaxFingerprint: maxFingerprint,
- StartTimestamp: startTimestamp,
- EndTimestamp: endTimestamp,
+ StartTimestamp: model.Time(startTimestamp),
+ EndTimestamp: model.Time(endTimestamp),
Checksum: uint32(checksum),
},
FilePath: objectKey,
@@ -354,9 +348,9 @@ func tablesByPeriod(periodicConfigs []config.PeriodConfig, start, end model.Time
func tablesForRange(periodConfig config.PeriodConfig, from, to int64) []string {
interval := periodConfig.IndexTables.Period
- intervalSeconds := interval.Seconds()
- lower := from / int64(intervalSeconds)
- upper := to / int64(intervalSeconds)
+ step := int64(interval.Seconds())
+ lower := from / step
+ upper := to / step
tables := make([]string, 0, 1+upper-lower)
prefix := periodConfig.IndexTables.Prefix
for i := lower; i <= upper; i++ {
diff --git a/pkg/storage/stores/shipper/bloomshipper/client_test.go b/pkg/storage/stores/shipper/bloomshipper/client_test.go
index 7267856a43155..d6043febb48c9 100644
--- a/pkg/storage/stores/shipper/bloomshipper/client_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/client_test.go
@@ -13,7 +13,7 @@ import (
"testing"
"time"
- aws_io "github.com/aws/smithy-go/io"
+ awsio "github.com/aws/smithy-go/io"
"github.com/google/uuid"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
@@ -28,9 +28,24 @@ const (
var (
// table 19627
- fixedDay = model.TimeFromUnix(time.Date(2023, time.September, 27, 0, 0, 0, 0, time.UTC).Unix())
+ fixedDay = Date(2023, time.September, 27, 0, 0, 0)
)
+func Date(year int, month time.Month, day, hour, min, sec int) model.Time {
+ date := time.Date(year, month, day, hour, min, sec, 0, time.UTC)
+ return model.TimeFromUnixNano(date.UnixNano())
+}
+
+func parseDayTime(s string) config.DayTime {
+ t, err := time.Parse("2006-01-02", s)
+ if err != nil {
+ panic(err)
+ }
+ return config.DayTime{
+ Time: model.TimeFromUnix(t.Unix()),
+ }
+}
+
func Test_BloomClient_GetMetas(t *testing.T) {
shipper := createClient(t)
@@ -57,8 +72,8 @@ func Test_BloomClient_GetMetas(t *testing.T) {
TenantID: "tenantA",
MinFingerprint: 50,
MaxFingerprint: 150,
- StartTimestamp: fixedDay.Add(-6 * day).Unix(),
- EndTimestamp: fixedDay.Add(-1*day - 1*time.Hour).Unix(),
+ StartTimestamp: fixedDay.Add(-6 * day),
+ EndTimestamp: fixedDay.Add(-1*day - 1*time.Hour),
})
require.NoError(t, err)
require.ElementsMatch(t, expected, actual)
@@ -75,26 +90,26 @@ func Test_BloomClient_PutMeta(t *testing.T) {
"first-period-19621",
0xff,
0xfff,
- time.Date(2023, time.September, 21, 5, 0, 0, 0, time.UTC).Unix(),
- time.Date(2023, time.September, 21, 6, 0, 0, 0, time.UTC).Unix(),
+ Date(2023, time.September, 21, 5, 0, 0),
+ Date(2023, time.September, 21, 6, 0, 0),
0xaaa,
"ignored-file-path-during-uploading",
),
expectedStorage: "folder-1",
- expectedFilePath: "bloom/first-period-19621/tenantA/metas/ff-fff-1695272400-1695276000-aaa",
+ expectedFilePath: "bloom/first-period-19621/tenantA/metas/ff-fff-1695272400000-1695276000000-aaa",
},
"expected meta to be uploaded to the second folder": {
source: createMetaEntity("tenantA",
"second-period-19625",
200,
300,
- time.Date(2023, time.September, 25, 0, 0, 0, 0, time.UTC).Unix(),
- time.Date(2023, time.September, 25, 1, 0, 0, 0, time.UTC).Unix(),
+ Date(2023, time.September, 25, 0, 0, 0),
+ Date(2023, time.September, 25, 1, 0, 0),
0xbbb,
"ignored-file-path-during-uploading",
),
expectedStorage: "folder-2",
- expectedFilePath: "bloom/second-period-19625/tenantA/metas/c8-12c-1695600000-1695603600-bbb",
+ expectedFilePath: "bloom/second-period-19625/tenantA/metas/c8-12c-1695600000000-1695603600000-bbb",
},
}
for name, data := range tests {
@@ -131,26 +146,26 @@ func Test_BloomClient_DeleteMeta(t *testing.T) {
"first-period-19621",
0xff,
0xfff,
- time.Date(2023, time.September, 21, 5, 0, 0, 0, time.UTC).Unix(),
- time.Date(2023, time.September, 21, 6, 0, 0, 0, time.UTC).Unix(),
+ Date(2023, time.September, 21, 5, 0, 0),
+ Date(2023, time.September, 21, 6, 0, 0),
0xaaa,
"ignored-file-path-during-uploading",
),
expectedStorage: "folder-1",
- expectedFilePath: "bloom/first-period-19621/tenantA/metas/ff-fff-1695272400-1695276000-aaa",
+ expectedFilePath: "bloom/first-period-19621/tenantA/metas/ff-fff-1695272400000-1695276000000-aaa",
},
"expected meta to be delete from the second folder": {
source: createMetaEntity("tenantA",
"second-period-19625",
200,
300,
- time.Date(2023, time.September, 25, 0, 0, 0, 0, time.UTC).Unix(),
- time.Date(2023, time.September, 25, 1, 0, 0, 0, time.UTC).Unix(),
+ Date(2023, time.September, 25, 0, 0, 0),
+ Date(2023, time.September, 25, 1, 0, 0),
0xbbb,
"ignored-file-path-during-uploading",
),
expectedStorage: "folder-2",
- expectedFilePath: "bloom/second-period-19625/tenantA/metas/c8-12c-1695600000-1695603600-bbb",
+ expectedFilePath: "bloom/second-period-19625/tenantA/metas/c8-12c-1695600000000-1695603600000-bbb",
},
}
for name, data := range tests {
@@ -175,10 +190,10 @@ func Test_BloomClient_DeleteMeta(t *testing.T) {
func Test_BloomClient_GetBlocks(t *testing.T) {
bloomClient := createClient(t)
fsNamedStores := bloomClient.storageConfig.NamedStores.Filesystem
- firstBlockPath := "bloom/first-period-19621/tenantA/blooms/eeee-ffff/1695272400-1695276000-1"
+ firstBlockPath := "bloom/first-period-19621/tenantA/blooms/eeee-ffff/1695272400000-1695276000000-1"
firstBlockFullPath := filepath.Join(fsNamedStores["folder-1"].Directory, firstBlockPath)
firstBlockData := createBlockFile(t, firstBlockFullPath)
- secondBlockPath := "bloom/second-period-19624/tenantA/blooms/aaaa-bbbb/1695531600-1695535200-2"
+ secondBlockPath := "bloom/second-period-19624/tenantA/blooms/aaaa-bbbb/1695531600000-1695535200000-2"
secondBlockFullPath := filepath.Join(fsNamedStores["folder-2"].Directory, secondBlockPath)
secondBlockData := createBlockFile(t, secondBlockFullPath)
require.FileExists(t, firstBlockFullPath)
@@ -190,8 +205,8 @@ func Test_BloomClient_GetBlocks(t *testing.T) {
TableName: "first-period-19621",
MinFingerprint: 0xeeee,
MaxFingerprint: 0xffff,
- StartTimestamp: time.Date(2023, time.September, 21, 5, 0, 0, 0, time.UTC).Unix(),
- EndTimestamp: time.Date(2023, time.September, 21, 6, 0, 0, 0, time.UTC).Unix(),
+ StartTimestamp: Date(2023, time.September, 21, 5, 0, 0),
+ EndTimestamp: Date(2023, time.September, 21, 6, 0, 0),
Checksum: 1,
},
BlockPath: firstBlockPath,
@@ -202,8 +217,8 @@ func Test_BloomClient_GetBlocks(t *testing.T) {
TableName: "second-period-19624",
MinFingerprint: 0xaaaa,
MaxFingerprint: 0xbbbb,
- StartTimestamp: time.Date(2023, time.September, 24, 5, 0, 0, 0, time.UTC).Unix(),
- EndTimestamp: time.Date(2023, time.September, 24, 6, 0, 0, 0, time.UTC).Unix(),
+ StartTimestamp: Date(2023, time.September, 24, 5, 0, 0),
+ EndTimestamp: Date(2023, time.September, 24, 6, 0, 0),
Checksum: 2,
},
BlockPath: secondBlockPath,
@@ -232,13 +247,13 @@ func Test_BloomClient_PutBlocks(t *testing.T) {
TableName: "first-period-19621",
MinFingerprint: 0xeeee,
MaxFingerprint: 0xffff,
- StartTimestamp: time.Date(2023, time.September, 21, 5, 0, 0, 0, time.UTC).Unix(),
- EndTimestamp: time.Date(2023, time.September, 21, 6, 0, 0, 0, time.UTC).Unix(),
+ StartTimestamp: Date(2023, time.September, 21, 5, 0, 0),
+ EndTimestamp: Date(2023, time.September, 21, 6, 0, 0),
Checksum: 1,
},
IndexPath: uuid.New().String(),
},
- Data: aws_io.ReadSeekNopCloser{ReadSeeker: bytes.NewReader([]byte(blockForFirstFolderData))},
+ Data: awsio.ReadSeekNopCloser{ReadSeeker: bytes.NewReader([]byte(blockForFirstFolderData))},
}
blockForSecondFolderData := "data2"
@@ -249,13 +264,13 @@ func Test_BloomClient_PutBlocks(t *testing.T) {
TableName: "second-period-19624",
MinFingerprint: 0xaaaa,
MaxFingerprint: 0xbbbb,
- StartTimestamp: time.Date(2023, time.September, 24, 5, 0, 0, 0, time.UTC).Unix(),
- EndTimestamp: time.Date(2023, time.September, 24, 6, 0, 0, 0, time.UTC).Unix(),
+ StartTimestamp: Date(2023, time.September, 24, 5, 0, 0),
+ EndTimestamp: Date(2023, time.September, 24, 6, 0, 0),
Checksum: 2,
},
IndexPath: uuid.New().String(),
},
- Data: aws_io.ReadSeekNopCloser{ReadSeeker: bytes.NewReader([]byte(blockForSecondFolderData))},
+ Data: awsio.ReadSeekNopCloser{ReadSeeker: bytes.NewReader([]byte(blockForSecondFolderData))},
}
results, err := bloomClient.PutBlocks(context.Background(), []Block{blockForFirstFolder, blockForSecondFolder})
@@ -263,7 +278,7 @@ func Test_BloomClient_PutBlocks(t *testing.T) {
require.Len(t, results, 2)
firstResultBlock := results[0]
path := firstResultBlock.BlockPath
- require.Equal(t, "bloom/first-period-19621/tenantA/blooms/eeee-ffff/1695272400-1695276000-1", path)
+ require.Equal(t, "bloom/first-period-19621/tenantA/blooms/eeee-ffff/1695272400000-1695276000000-1", path)
require.Equal(t, blockForFirstFolder.TenantID, firstResultBlock.TenantID)
require.Equal(t, blockForFirstFolder.TableName, firstResultBlock.TableName)
require.Equal(t, blockForFirstFolder.MinFingerprint, firstResultBlock.MinFingerprint)
@@ -281,7 +296,7 @@ func Test_BloomClient_PutBlocks(t *testing.T) {
secondResultBlock := results[1]
path = secondResultBlock.BlockPath
- require.Equal(t, "bloom/second-period-19624/tenantA/blooms/aaaa-bbbb/1695531600-1695535200-2", path)
+ require.Equal(t, "bloom/second-period-19624/tenantA/blooms/aaaa-bbbb/1695531600000-1695535200000-2", path)
require.Equal(t, blockForSecondFolder.TenantID, secondResultBlock.TenantID)
require.Equal(t, blockForSecondFolder.TableName, secondResultBlock.TableName)
require.Equal(t, blockForSecondFolder.MinFingerprint, secondResultBlock.MinFingerprint)
@@ -302,9 +317,9 @@ func Test_BloomClient_PutBlocks(t *testing.T) {
func Test_BloomClient_DeleteBlocks(t *testing.T) {
bloomClient := createClient(t)
fsNamedStores := bloomClient.storageConfig.NamedStores.Filesystem
- block1Path := filepath.Join(fsNamedStores["folder-1"].Directory, "bloom/first-period-19621/tenantA/blooms/eeee-ffff/1695272400-1695276000-1")
+ block1Path := filepath.Join(fsNamedStores["folder-1"].Directory, "bloom/first-period-19621/tenantA/blooms/eeee-ffff/1695272400000-1695276000000-1")
createBlockFile(t, block1Path)
- block2Path := filepath.Join(fsNamedStores["folder-2"].Directory, "bloom/second-period-19624/tenantA/blooms/aaaa-bbbb/1695531600-1695535200-2")
+ block2Path := filepath.Join(fsNamedStores["folder-2"].Directory, "bloom/second-period-19624/tenantA/blooms/aaaa-bbbb/1695531600000-1695535200000-2")
createBlockFile(t, block2Path)
require.FileExists(t, block1Path)
require.FileExists(t, block2Path)
@@ -316,8 +331,8 @@ func Test_BloomClient_DeleteBlocks(t *testing.T) {
TableName: "second-period-19624",
MinFingerprint: 0xaaaa,
MaxFingerprint: 0xbbbb,
- StartTimestamp: time.Date(2023, time.September, 24, 5, 0, 0, 0, time.UTC).Unix(),
- EndTimestamp: time.Date(2023, time.September, 24, 6, 0, 0, 0, time.UTC).Unix(),
+ StartTimestamp: Date(2023, time.September, 24, 5, 0, 0),
+ EndTimestamp: Date(2023, time.September, 24, 6, 0, 0),
Checksum: 2,
},
IndexPath: uuid.New().String(),
@@ -328,8 +343,8 @@ func Test_BloomClient_DeleteBlocks(t *testing.T) {
TableName: "first-period-19621",
MinFingerprint: 0xeeee,
MaxFingerprint: 0xffff,
- StartTimestamp: time.Date(2023, time.September, 21, 5, 0, 0, 0, time.UTC).Unix(),
- EndTimestamp: time.Date(2023, time.September, 21, 6, 0, 0, 0, time.UTC).Unix(),
+ StartTimestamp: Date(2023, time.September, 21, 5, 0, 0),
+ EndTimestamp: Date(2023, time.September, 21, 6, 0, 0),
Checksum: 1,
},
IndexPath: uuid.New().String(),
@@ -500,7 +515,7 @@ func createPeriodConfigs() []config.PeriodConfig {
{
ObjectType: "folder-1",
// from 2023-09-20: table range [19620:19623]
- From: config.DayTime{Time: model.TimeFromUnix(time.Date(2023, time.September, 20, 0, 0, 0, 0, time.UTC).Unix())},
+ From: parseDayTime("2023-09-20"),
IndexTables: config.IndexPeriodicTableConfig{
PeriodicTableConfig: config.PeriodicTableConfig{
Period: day,
@@ -510,7 +525,7 @@ func createPeriodConfigs() []config.PeriodConfig {
{
ObjectType: "folder-2",
// from 2023-09-24: table range [19624:19627]
- From: config.DayTime{Time: model.TimeFromUnix(time.Date(2023, time.September, 24, 0, 0, 0, 0, time.UTC).Unix())},
+ From: parseDayTime("2023-09-24"),
IndexTables: config.IndexPeriodicTableConfig{
PeriodicTableConfig: config.PeriodicTableConfig{
Period: day,
@@ -522,15 +537,15 @@ func createPeriodConfigs() []config.PeriodConfig {
}
func createMetaInStorage(t *testing.T, folder string, tableName string, tenant string, minFingerprint uint64, maxFingerprint uint64, start model.Time) Meta {
- startTimestamp := start.Unix()
- endTimestamp := start.Add(12 * time.Hour).Unix()
+ end := start.Add(12 * time.Hour)
metaChecksum := rand.Uint32()
- metaFileName := fmt.Sprintf("%x-%x-%v-%v-%x", minFingerprint, maxFingerprint, startTimestamp, endTimestamp, metaChecksum)
+ // make sure this is equal to the createMetaObjectKey()
+ metaFileName := fmt.Sprintf("%x-%x-%d-%d-%x", minFingerprint, maxFingerprint, start, end, metaChecksum)
metaFilePath := filepath.Join(rootFolder, tableName, tenant, metasFolder, metaFileName)
err := os.MkdirAll(filepath.Join(folder, metaFilePath[:strings.LastIndex(metaFilePath, delimiter)]), 0700)
require.NoError(t, err)
- meta := createMetaEntity(tenant, tableName, minFingerprint, maxFingerprint, startTimestamp, endTimestamp, metaChecksum, metaFilePath)
+ meta := createMetaEntity(tenant, tableName, minFingerprint, maxFingerprint, start, end, metaChecksum, metaFilePath)
metaFileContent, err := json.Marshal(meta)
require.NoError(t, err)
@@ -544,8 +559,8 @@ func createMetaEntity(
tableName string,
minFingerprint uint64,
maxFingerprint uint64,
- startTimestamp int64,
- endTimestamp int64,
+ startTimestamp model.Time,
+ endTimestamp model.Time,
metaChecksum uint32,
metaFilePath string) Meta {
return Meta{
diff --git a/pkg/storage/stores/shipper/bloomshipper/shipper.go b/pkg/storage/stores/shipper/bloomshipper/shipper.go
index ee0665c4f6c30..d7038fc13761c 100644
--- a/pkg/storage/stores/shipper/bloomshipper/shipper.go
+++ b/pkg/storage/stores/shipper/bloomshipper/shipper.go
@@ -4,11 +4,12 @@ import (
"cmp"
"context"
"fmt"
- "time"
+ "math"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/common/model"
"golang.org/x/exp/slices"
"github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper/config"
@@ -39,10 +40,10 @@ func NewShipper(client Client, config config.Config, limits Limits, logger log.L
}, nil
}
-func (s *Shipper) GetBlockRefs(ctx context.Context, tenantID string, from, through time.Time) ([]BlockRef, error) {
+func (s *Shipper) GetBlockRefs(ctx context.Context, tenantID string, from, through model.Time) ([]BlockRef, error) {
level.Debug(s.logger).Log("msg", "GetBlockRefs", "tenant", tenantID, "from", from, "through", through)
- blockRefs, err := s.getActiveBlockRefs(ctx, tenantID, from.UnixNano(), through.UnixNano(), nil)
+ blockRefs, err := s.getActiveBlockRefs(ctx, tenantID, from, through, []uint64{0, math.MaxUint64})
if err != nil {
return nil, fmt.Errorf("error fetching active block references : %w", err)
}
@@ -85,10 +86,10 @@ func runCallback(callback ForEachBlockCallback, block blockWithQuerier) error {
return nil
}
-func (s *Shipper) ForEachBlock(ctx context.Context, tenantID string, from, through time.Time, fingerprints []uint64, callback ForEachBlockCallback) error {
+func (s *Shipper) ForEachBlock(ctx context.Context, tenantID string, from, through model.Time, fingerprints []uint64, callback ForEachBlockCallback) error {
level.Debug(s.logger).Log("msg", "ForEachBlock", "tenant", tenantID, "from", from, "through", through, "fingerprints", len(fingerprints))
- blockRefs, err := s.getActiveBlockRefs(ctx, tenantID, from.UnixNano(), through.UnixNano(), fingerprints)
+ blockRefs, err := s.getActiveBlockRefs(ctx, tenantID, from, through, fingerprints)
if err != nil {
return fmt.Errorf("error fetching active block references : %w", err)
}
@@ -111,12 +112,12 @@ func getFirstLast[T any](s []T) (T, T) {
return s[0], s[len(s)-1]
}
-func (s *Shipper) getActiveBlockRefs(ctx context.Context, tenantID string, from, through int64, fingerprints []uint64) ([]BlockRef, error) {
+func (s *Shipper) getActiveBlockRefs(ctx context.Context, tenantID string, from, through model.Time, fingerprints []uint64) ([]BlockRef, error) {
minFingerprint, maxFingerprint := getFirstLast(fingerprints)
metas, err := s.client.GetMetas(ctx, MetaSearchParams{
TenantID: tenantID,
- MinFingerprint: minFingerprint,
- MaxFingerprint: maxFingerprint,
+ MinFingerprint: model.Fingerprint(minFingerprint),
+ MaxFingerprint: model.Fingerprint(maxFingerprint),
StartTimestamp: from,
EndTimestamp: through,
})
@@ -137,7 +138,7 @@ func (s *Shipper) getActiveBlockRefs(ctx context.Context, tenantID string, from,
return activeBlocks, nil
}
-func (s *Shipper) findBlocks(metas []Meta, startTimestamp, endTimestamp int64, fingerprints []uint64) []BlockRef {
+func (s *Shipper) findBlocks(metas []Meta, startTimestamp, endTimestamp model.Time, fingerprints []uint64) []BlockRef {
outdatedBlocks := make(map[string]interface{})
for _, meta := range metas {
for _, tombstone := range meta.Tombstones {
@@ -175,7 +176,7 @@ func getPosition[S ~[]E, E cmp.Ordered](s S, v E) int {
return len(s)
}
-func isOutsideRange(b *BlockRef, startTimestamp, endTimestamp int64, fingerprints []uint64) bool {
+func isOutsideRange(b *BlockRef, startTimestamp, endTimestamp model.Time, fingerprints []uint64) bool {
// First, check time range
if b.EndTimestamp < startTimestamp || b.StartTimestamp > endTimestamp {
return true
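
The time-range check above follows the standard interval-overlap rule: a block is outside the query range exactly when it ends before the range starts or starts after it ends. A minimal sketch — the `blockRef` type here is an illustrative stand-in, not the real `BlockRef`:

```go
package main

import "fmt"

// blockRef is a simplified stand-in for the shipper's BlockRef,
// carrying only the timestamps the range check needs.
type blockRef struct {
	StartTimestamp, EndTimestamp int64
}

// isOutsideRange mirrors the check in the diff: a block can be skipped
// when its interval does not overlap [start, end] at all.
func isOutsideRange(b blockRef, start, end int64) bool {
	return b.EndTimestamp < start || b.StartTimestamp > end
}

func main() {
	// query range [1000, 2000]
	fmt.Println(isOutsideRange(blockRef{0, 500}, 1000, 2000))    // true: ends before the range
	fmt.Println(isOutsideRange(blockRef{1500, 2500}, 1000, 2000)) // false: overlaps the range
}
```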
diff --git a/pkg/storage/stores/shipper/bloomshipper/shipper_test.go b/pkg/storage/stores/shipper/bloomshipper/shipper_test.go
index 17f21793680ca..83c9379cd44c6 100644
--- a/pkg/storage/stores/shipper/bloomshipper/shipper_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/shipper_test.go
@@ -5,6 +5,7 @@ import (
"math"
"testing"
+ "github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
)
@@ -190,8 +191,8 @@ func createBlockRef(
TableName: "16600",
MinFingerprint: minFingerprint,
MaxFingerprint: maxFingerprint,
- StartTimestamp: startTimestamp,
- EndTimestamp: endTimestamp,
+ StartTimestamp: model.Time(startTimestamp),
+ EndTimestamp: model.Time(endTimestamp),
Checksum: 0,
},
// block path is unique, and it's used to distinguish the blocks so the rest of the fields might be skipped in this test
diff --git a/pkg/storage/stores/shipper/bloomshipper/store.go b/pkg/storage/stores/shipper/bloomshipper/store.go
index e24d7e35c412a..06e1d7a4675bf 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store.go
@@ -13,8 +13,8 @@ import (
type ForEachBlockCallback func(bq *v1.BlockQuerier, minFp, maxFp uint64) error
type ReadShipper interface {
- GetBlockRefs(ctx context.Context, tenant string, from, through time.Time) ([]BlockRef, error)
- ForEachBlock(ctx context.Context, tenant string, from, through time.Time, fingerprints []uint64, callback ForEachBlockCallback) error
+ GetBlockRefs(ctx context.Context, tenant string, from, through model.Time) ([]BlockRef, error)
+ ForEachBlock(ctx context.Context, tenant string, from, through model.Time, fingerprints []uint64, callback ForEachBlockCallback) error
Fetch(ctx context.Context, tenant string, blocks []BlockRef, callback ForEachBlockCallback) error
}
@@ -52,7 +52,7 @@ func (bs *BloomStore) Stop() {
// GetBlockRefs implements Store
func (bs *BloomStore) GetBlockRefs(ctx context.Context, tenant string, from, through time.Time) ([]BlockRef, error) {
- return bs.shipper.GetBlockRefs(ctx, tenant, from, through)
+ return bs.shipper.GetBlockRefs(ctx, tenant, toModelTime(from), toModelTime(through))
}
// ForEach implements Store
@@ -80,7 +80,7 @@ func (bs *BloomStore) GetBlockQueriersForBlockRefs(ctx context.Context, tenant s
// BlockQueriers implements Store
func (bs *BloomStore) GetBlockQueriers(ctx context.Context, tenant string, from, through time.Time, fingerprints []uint64) ([]BlockQuerierWithFingerprintRange, error) {
bqs := make([]BlockQuerierWithFingerprintRange, 0, 32)
- err := bs.shipper.ForEachBlock(ctx, tenant, from, through, fingerprints, func(bq *v1.BlockQuerier, minFp uint64, maxFp uint64) error {
+ err := bs.shipper.ForEachBlock(ctx, tenant, toModelTime(from), toModelTime(through), fingerprints, func(bq *v1.BlockQuerier, minFp uint64, maxFp uint64) error {
bqs = append(bqs, BlockQuerierWithFingerprintRange{
BlockQuerier: bq,
MinFp: model.Fingerprint(minFp),
@@ -93,3 +93,7 @@ func (bs *BloomStore) GetBlockQueriers(ctx context.Context, tenant string, from,
})
return bqs, err
}
+
+func toModelTime(t time.Time) model.Time {
+ return model.TimeFromUnixNano(t.UnixNano())
+}
|
bloomshipper
|
Use `model.Time` in `MetaRef` and `BlockRef` (#11566)
|
7b2fde3e580958aa235fa9df2619cb18a5ce054a
|
2019-12-06 03:43:21
|
Johannes Staffans
|
fluentd: guard against nil values when sanitizing labels (#1376)
| false
|
diff --git a/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec b/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec
index dc15c6ed4379c..601b5c317e87e 100644
--- a/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec
+++ b/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec
@@ -4,7 +4,7 @@ $LOAD_PATH.push File.expand_path('lib', __dir__)
Gem::Specification.new do |spec|
spec.name = 'fluent-plugin-grafana-loki'
- spec.version = '1.2.4'
+ spec.version = '1.2.5'
spec.authors = %w[woodsaj briangann cyriltovena]
spec.email = ['[email protected]', '[email protected]' , '[email protected]']
diff --git a/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb b/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb
index a5033729f9355..f92ab515043f8 100644
--- a/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb
+++ b/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb
@@ -195,7 +195,7 @@ def format_labels(data_labels)
data_labels = {} if data_labels.nil?
data_labels = data_labels.merge(@extra_labels)
# sanitize label values
- data_labels.each { |k, v| formatted_labels[k] = v.gsub('"', '\\"') }
+ data_labels.each { |k, v| formatted_labels[k] = v.gsub('"', '\\"') if v }
formatted_labels
end
|
fluentd
|
guard against nil values when sanitizing labels (#1376)
|
4db80b250585f93d9b1f3fb0ff19779ec69f037d
|
2019-08-01 21:43:18
|
sh0rez
|
chore(packaging): add muslc to build-image (#834)
| false
|
diff --git a/loki-build-image/Dockerfile b/loki-build-image/Dockerfile
index 46e6764e7a55d..9b149fda87d6d 100644
--- a/loki-build-image/Dockerfile
+++ b/loki-build-image/Dockerfile
@@ -17,6 +17,7 @@ RUN apk add --no-cache docker-cli
FROM golang:1.11.4-stretch
RUN apt-get update && \
apt-get install -qy \
+ musl \
file unzip jq \
protobuf-compiler libprotobuf-dev \
libsystemd-dev && \
|
chore
|
add muslc to build-image (#834)
|
a88a0d3f6ceaba0082c557ab773b7fd45537ac64
|
2024-07-17 19:13:49
|
Jack Baldry
|
feat: Update doc-validator version (#13558)
| false
|
diff --git a/.github/workflows/doc-validator.yml b/.github/workflows/doc-validator.yml
index bb074949e29ef..a496861e6e731 100644
--- a/.github/workflows/doc-validator.yml
+++ b/.github/workflows/doc-validator.yml
@@ -7,7 +7,7 @@ jobs:
doc-validator:
runs-on: "ubuntu-latest"
container:
- image: "grafana/doc-validator:v5.0.0"
+ image: "grafana/doc-validator:v5.1.0"
steps:
- name: "Checkout code"
uses: "actions/checkout@v4"
|
feat
|
Update doc-validator version (#13558)
|
0dc9d677b6ed5c4440346ab54e9776185900be38
|
2025-01-21 22:10:26
|
Jackson Coelho
|
ci: fix helm diff in case of forks (#15818)
| false
|
diff --git a/.github/workflows/helm-diff-ci.yml b/.github/workflows/helm-diff-ci.yml
index 64e966140cbe1..2bacfd2d25dd6 100644
--- a/.github/workflows/helm-diff-ci.yml
+++ b/.github/workflows/helm-diff-ci.yml
@@ -3,8 +3,9 @@ name: Helm Loki Diff CI
on:
pull_request:
paths:
- - 'production/helm/loki/**'
+ - "production/helm/loki/**"
+# These permissions are needed to assume roles from GitHub's OIDC.
permissions:
contents: read
pull-requests: write
@@ -273,6 +274,7 @@ jobs:
summary-diff-outputs:
name: Summary Diffs
runs-on: ubuntu-latest
+ if: github.event.pull_request.head.repo.fork == false
needs:
- single-binary-diff
- default-values-diff
@@ -283,6 +285,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
+ with:
+ persist-credentials: false
- uses: actions/download-artifact@v4
with:
diff --git a/production/helm/loki/scenarios/README.md b/production/helm/loki/scenarios/README.md
index b84c186e23684..496286bb2009d 100644
--- a/production/helm/loki/scenarios/README.md
+++ b/production/helm/loki/scenarios/README.md
@@ -61,3 +61,9 @@ As the last step you need to run a diff between both files:
```shell
diff current-manifest.yaml release-manifest.yaml
```
+
+### Known Issues
+
+* The GitHub Action can't post the diff comment when the PR comes from a fork, because a workflow run triggered from a fork does not have permission to write to the PR.
+
+  In this case, to review the output we recommend downloading the artifacts from the workflow run and checking the outputs.
diff --git a/production/helm/loki/scenarios/images/added.png b/production/helm/loki/scenarios/images/added.png
deleted file mode 100644
index ced9f9554a8f8..0000000000000
Binary files a/production/helm/loki/scenarios/images/added.png and /dev/null differ
diff --git a/production/helm/loki/scenarios/images/img.png b/production/helm/loki/scenarios/images/img.png
deleted file mode 100644
index 81ba701da26a0..0000000000000
Binary files a/production/helm/loki/scenarios/images/img.png and /dev/null differ
diff --git a/production/helm/loki/scenarios/images/modified.png b/production/helm/loki/scenarios/images/modified.png
deleted file mode 100644
index 39a25bae35b20..0000000000000
Binary files a/production/helm/loki/scenarios/images/modified.png and /dev/null differ
diff --git a/production/helm/loki/scenarios/images/removed.png b/production/helm/loki/scenarios/images/removed.png
deleted file mode 100644
index 219d64c32c983..0000000000000
Binary files a/production/helm/loki/scenarios/images/removed.png and /dev/null differ
|
ci
|
fix helm diff in case of forks (#15818)
|
1ea49e31ea58fbe381f0a300738cc4708dbc0e96
|
2024-11-25 10:23:48
|
Callum Styan
|
chore: remove minor dead code that was missed in a PR review (#15094)
| false
|
diff --git a/pkg/logql/log/pipeline.go b/pkg/logql/log/pipeline.go
index a205039dd7715..181947fc07435 100644
--- a/pkg/logql/log/pipeline.go
+++ b/pkg/logql/log/pipeline.go
@@ -68,7 +68,7 @@ func (n *noopPipeline) ForStream(labels labels.Labels) StreamPipeline {
}
n.mu.RUnlock()
- sp := &noopStreamPipeline{n.baseBuilder.ForLabels(labels, h), make([]int, 0, 10)}
+ sp := &noopStreamPipeline{n.baseBuilder.ForLabels(labels, h)}
n.mu.Lock()
defer n.mu.Unlock()
@@ -93,8 +93,7 @@ func IsNoopPipeline(p Pipeline) bool {
}
type noopStreamPipeline struct {
- builder *LabelsBuilder
- offsetsBuf []int
+ builder *LabelsBuilder
}
func (n noopStreamPipeline) ReferencedStructuredMetadata() bool {
@@ -181,13 +180,12 @@ func NewPipeline(stages []Stage) Pipeline {
}
type streamPipeline struct {
- stages []Stage
- builder *LabelsBuilder
- offsetsBuf []int
+ stages []Stage
+ builder *LabelsBuilder
}
func NewStreamPipeline(stages []Stage, labelsBuilder *LabelsBuilder) StreamPipeline {
- return &streamPipeline{stages, labelsBuilder, make([]int, 0, 10)}
+ return &streamPipeline{stages, labelsBuilder}
}
func (p *pipeline) ForStream(labels labels.Labels) StreamPipeline {
|
chore
|
remove minor dead code that was missed in a PR review (#15094)
|
1b860e2e959b0a9046477a6535c1acab3295b7ef
|
2024-11-12 21:34:49
|
Matt Veitas
|
docs: Update reference to the default tsdb-max-query-parallelism value to be 128 (#14837)
| false
|
diff --git a/docs/sources/operations/storage/tsdb.md b/docs/sources/operations/storage/tsdb.md
index 8f640f83f3bdd..26daffe730b94 100644
--- a/docs/sources/operations/storage/tsdb.md
+++ b/docs/sources/operations/storage/tsdb.md
@@ -69,7 +69,7 @@ querier:
### Limits
-We've added a user per-tenant limit called `tsdb_max_query_parallelism` in the `limits_config`. This functions the same as the prior `max_query_parallelism` configuration but applies to tsdb queries instead. Since the TSDB index will create many more smaller queries compared to the other index types before it, we've added a separate configuration so they can coexist. This is helpful when transitioning between index types. The default parallelism is `512` which should work well for most cases, but you can extend it globally in the `limits_config` or per-tenant in the `overrides` file as needed.
+We've added a user per-tenant limit called `tsdb_max_query_parallelism` in the `limits_config`. This functions the same as the prior `max_query_parallelism` configuration but applies to tsdb queries instead. Since the TSDB index will create many more smaller queries compared to the other index types before it, we've added a separate configuration so they can coexist. This is helpful when transitioning between index types. The default parallelism is `128` which should work well for most cases, but you can extend it globally in the `limits_config` or per-tenant in the `overrides` file as needed.
### Dynamic Query Sharding
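
The per-tenant limit described above lives under `limits_config`; a minimal fragment making the (default) value explicit might look like this — the surrounding keys are illustrative:

```yaml
limits_config:
  # Applies only to TSDB-index queries; max_query_parallelism still
  # governs the other index types. The default is 128.
  tsdb_max_query_parallelism: 128
```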
|
docs
|
Update reference to the default tsdb-max-query-parallelism value to be 128 (#14837)
|
bd20171975e913e429048a0a30328811fc4c8a87
|
2024-07-25 21:36:07
|
洪阿南
|
fix(blooms): Improve error wrap to make ignoreNotFound work when fetching blocks (#13656)
| false
|
diff --git a/pkg/storage/stores/shipper/bloomshipper/client.go b/pkg/storage/stores/shipper/bloomshipper/client.go
index f6da2168ae91f..2ce0e0a149ee3 100644
--- a/pkg/storage/stores/shipper/bloomshipper/client.go
+++ b/pkg/storage/stores/shipper/bloomshipper/client.go
@@ -316,7 +316,7 @@ func (b *BloomClient) GetBlock(ctx context.Context, ref BlockRef) (BlockDirector
rc, _, err := b.client.GetObject(ctx, key)
if err != nil {
- return BlockDirectory{}, fmt.Errorf("failed to get block file %s: %w", key, err)
+ return BlockDirectory{}, errors.Wrap(err, fmt.Sprintf("failed to get block file %s", key))
}
defer rc.Close()
|
fix
|
Improve error wrap to make ignoreNotFound work when fetching blocks (#13656)
|
45e5b427b1d5a794de7034ddcf249d09c2730ca8
|
2024-01-31 22:46:19
|
Meng Ye
|
docs: fix row_shards doc (#11795)
| false
|
diff --git a/docs/sources/configure/_index.md b/docs/sources/configure/_index.md
index 283a2c9dd59a9..25e4f70f987c3 100644
--- a/docs/sources/configure/_index.md
+++ b/docs/sources/configure/_index.md
@@ -4577,7 +4577,7 @@ chunks:
[tags: <map of string to string>]
# How many shards will be created. Only used if schema is v10 or greater.
-[row_shards: <int>]
+[row_shards: <int> | default = 16]
```
### aws_storage_config
diff --git a/pkg/storage/config/schema_config.go b/pkg/storage/config/schema_config.go
index d4b5902516d20..9cdda249ea520 100644
--- a/pkg/storage/config/schema_config.go
+++ b/pkg/storage/config/schema_config.go
@@ -164,7 +164,7 @@ type PeriodConfig struct {
Schema string `yaml:"schema" doc:"description=The schema version to use, current recommended schema is v12."`
IndexTables IndexPeriodicTableConfig `yaml:"index" doc:"description=Configures how the index is updated and stored."`
ChunkTables PeriodicTableConfig `yaml:"chunks" doc:"description=Configured how the chunks are updated and stored."`
- RowShards uint32 `yaml:"row_shards" doc:"description=How many shards will be created. Only used if schema is v10 or greater."`
+ RowShards uint32 `yaml:"row_shards" doc:"default=16|description=How many shards will be created. Only used if schema is v10 or greater."`
// Integer representation of schema used for hot path calculation. Populated on unmarshaling.
schemaInt *int `yaml:"-"`
|
docs
|
fix row_shards doc (#11795)
|
a8bd3a88bf6fcf7f2ae0977bf91ae4d7b0e82aa4
|
2025-01-27 20:47:26
|
renovate[bot]
|
fix(deps): update module github.com/bmatcuk/doublestar/v4 to v4.8.1 (main) (#15973)
| false
|
diff --git a/.github/renovate.json5 b/.github/renovate.json5
index 362b8537f8f23..adf3b06bf08e4 100644
--- a/.github/renovate.json5
+++ b/.github/renovate.json5
@@ -89,7 +89,7 @@
},
"osvVulnerabilityAlerts": true,
"prConcurrentLimit": 10,
- "rebaseWhen": "conflicted",
+ "rebaseWhen": "auto",
"branchPrefix": "deps-update/",
"postUpdateOptions": [
"gomodTidy"
diff --git a/go.mod b/go.mod
index 0c46a0aaba238..78252a4e3cdb0 100644
--- a/go.mod
+++ b/go.mod
@@ -22,7 +22,7 @@ require (
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
github.com/aws/aws-sdk-go v1.55.6
github.com/baidubce/bce-sdk-go v0.9.215
- github.com/bmatcuk/doublestar/v4 v4.8.0
+ github.com/bmatcuk/doublestar/v4 v4.8.1
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500
github.com/cespare/xxhash/v2 v2.3.0
github.com/containerd/fifo v1.1.0
diff --git a/go.sum b/go.sum
index 196e5be89f696..fe4e54ea6f18f 100644
--- a/go.sum
+++ b/go.sum
@@ -238,8 +238,8 @@ github.com/bitly/go-hostpool v0.1.0 h1:XKmsF6k5el6xHG3WPJ8U0Ku/ye7njX7W81Ng7O2io
github.com/bitly/go-hostpool v0.1.0/go.mod h1:4gOCgp6+NZnVqlKyZ/iBZFTAJKembaVENUpMkpg42fw=
github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
-github.com/bmatcuk/doublestar/v4 v4.8.0 h1:DSXtrypQddoug1459viM9X9D3dp1Z7993fw36I2kNcQ=
-github.com/bmatcuk/doublestar/v4 v4.8.0/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
+github.com/bmatcuk/doublestar/v4 v4.8.1 h1:54Bopc5c2cAvhLRAzqOGCYHYyhcDHsFF4wWIR5wKP38=
+github.com/bmatcuk/doublestar/v4 v4.8.1/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
diff --git a/vendor/github.com/bmatcuk/doublestar/v4/README.md b/vendor/github.com/bmatcuk/doublestar/v4/README.md
index 2e88266effe4f..b417a2c453429 100644
--- a/vendor/github.com/bmatcuk/doublestar/v4/README.md
+++ b/vendor/github.com/bmatcuk/doublestar/v4/README.md
@@ -376,8 +376,9 @@ Character classes support the following:
Class | Meaning
---------- | -------
-`[abc]` | matches any single character within the set
-`[a-z]` | matches any single character in the range
+`[abc123]` | matches any single character within the set
+`[a-z0-9]` | matches any single character in the range a-z or 0-9
+`[125-79]` | matches any single character within the set 129, or the range 5-7
`[^class]` | matches any single character which does *not* match the class
`[!class]` | same as `^`: negates the class
diff --git a/vendor/github.com/bmatcuk/doublestar/v4/match.go b/vendor/github.com/bmatcuk/doublestar/v4/match.go
index c0f20afa438df..a21259db46bbe 100644
--- a/vendor/github.com/bmatcuk/doublestar/v4/match.go
+++ b/vendor/github.com/bmatcuk/doublestar/v4/match.go
@@ -319,10 +319,10 @@ MATCH:
// we've reached the end of `name`; we've successfully matched if we've also
// reached the end of `pattern`, or if the rest of `pattern` can match a
// zero-length string
- return isZeroLengthPattern(pattern[patIdx:], separator)
+ return isZeroLengthPattern(pattern[patIdx:], separator, validate)
}
-func isZeroLengthPattern(pattern string, separator rune) (ret bool, err error) {
+func isZeroLengthPattern(pattern string, separator rune, validate bool) (ret bool, err error) {
// `/**`, `**/`, and `/**/` are special cases - a pattern such as `path/to/a/**` or `path/to/a/**/`
// *should* match `path/to/a` because `a` might be a directory
if pattern == "" ||
@@ -350,18 +350,18 @@ func isZeroLengthPattern(pattern string, separator rune) (ret bool, err error) {
}
commaIdx += patIdx
- ret, err = isZeroLengthPattern(pattern[patIdx:commaIdx]+pattern[closingIdx+1:], separator)
+ ret, err = isZeroLengthPattern(pattern[patIdx:commaIdx]+pattern[closingIdx+1:], separator, validate)
if ret || err != nil {
return
}
patIdx = commaIdx + 1
}
- return isZeroLengthPattern(pattern[patIdx:closingIdx]+pattern[closingIdx+1:], separator)
+ return isZeroLengthPattern(pattern[patIdx:closingIdx]+pattern[closingIdx+1:], separator, validate)
}
// no luck - validate the rest of the pattern
- if !doValidatePattern(pattern, separator) {
+ if validate && !doValidatePattern(pattern, separator) {
return false, ErrBadPattern
}
return false, nil
diff --git a/vendor/modules.txt b/vendor/modules.txt
index b961906c11701..1398c1b28a6f1 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -507,7 +507,7 @@ github.com/bboreham/go-loser
# github.com/beorn7/perks v1.0.1
## explicit; go 1.11
github.com/beorn7/perks/quantile
-# github.com/bmatcuk/doublestar/v4 v4.8.0
+# github.com/bmatcuk/doublestar/v4 v4.8.1
## explicit; go 1.16
github.com/bmatcuk/doublestar/v4
# github.com/buger/jsonparser v1.1.1
|
fix
|
update module github.com/bmatcuk/doublestar/v4 to v4.8.1 (main) (#15973)
|
27431b7e7efbdf729610025e2f1ecf7dc7dc1f06
|
2025-03-06 00:40:21
|
Paul Rogers
|
chore: Linting update for new golangci (#16572)
| false
|
diff --git a/clients/pkg/logentry/stages/labelallow.go b/clients/pkg/logentry/stages/labelallow.go
index 8d7c6276d7daf..0b8cbfe8b8f00 100644
--- a/clients/pkg/logentry/stages/labelallow.go
+++ b/clients/pkg/logentry/stages/labelallow.go
@@ -17,7 +17,7 @@ const (
type LabelAllowConfig []string
func validateLabelAllowConfig(c LabelAllowConfig) error {
- if c == nil || len(c) < 1 {
+ if len(c) < 1 {
return errors.New(ErrEmptyLabelAllowStageConfig)
}
diff --git a/clients/pkg/logentry/stages/labeldrop.go b/clients/pkg/logentry/stages/labeldrop.go
index e36d67c67f34f..03710cf9fb03d 100644
--- a/clients/pkg/logentry/stages/labeldrop.go
+++ b/clients/pkg/logentry/stages/labeldrop.go
@@ -17,7 +17,7 @@ const (
type LabelDropConfig []string
func validateLabelDropConfig(c LabelDropConfig) error {
- if c == nil || len(c) < 1 {
+ if len(c) < 1 {
return errors.New(ErrEmptyLabelDropStageConfig)
}
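
The lint fix above relies on a Go guarantee: `len` of a nil slice is 0, so the explicit `c == nil` check was redundant. A quick demonstration:

```go
package main

import "fmt"

func main() {
	var c []string // nil slice, never assigned
	// len() is defined for nil slices, so a separate nil check
	// before len(c) < 1 adds nothing.
	fmt.Println(c == nil)   // true
	fmt.Println(len(c))     // 0
	fmt.Println(len(c) < 1) // true
}
```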
diff --git a/clients/pkg/promtail/client/client_writeto_test.go b/clients/pkg/promtail/client/client_writeto_test.go
index 4044d1641fb12..d0c63228013d2 100644
--- a/clients/pkg/promtail/client/client_writeto_test.go
+++ b/clients/pkg/promtail/client/client_writeto_test.go
@@ -222,9 +222,9 @@ func bench(numWriters, totalLines int, b *testing.B) {
// 4. After all are written, call a SeriesReset. This will block the entire series map and will hopefully block
// some other writing routine.
func startWriter(segmentNum, seriesToReset int, target *clientWriteTo, lines int, series record.RefSeries, maxInitialSleep time.Duration) {
- randomSleepMax := func(max time.Duration) {
+ randomSleepMax := func(maxVal time.Duration) {
// random sleep to add some jitter
- s := int64(rand.Uint64()) % int64(max)
+ s := int64(rand.Uint64()) % int64(maxVal)
time.Sleep(time.Duration(s))
}
// random sleep to add some jitter
diff --git a/clients/pkg/promtail/wal/timer.go b/clients/pkg/promtail/wal/timer.go
index bd646cc94b0d4..50d658be4cdf2 100644
--- a/clients/pkg/promtail/wal/timer.go
+++ b/clients/pkg/promtail/wal/timer.go
@@ -5,34 +5,34 @@ import "time"
// backoffTimer is a time.Timer that allows one to move between a minimum and maximum interval, using an exponential backoff
// strategy. It safely re-uses just one time.Timer instance internally.
type backoffTimer struct {
- timer *time.Timer
- curr, min, max time.Duration
- C <-chan time.Time
+ timer *time.Timer
+ curr, minVal, maxVal time.Duration
+ C <-chan time.Time
}
-func newBackoffTimer(min, max time.Duration) *backoffTimer {
+func newBackoffTimer(minVal, maxVal time.Duration) *backoffTimer {
// note that the first timer created will be stopped without ever consuming it, since it's once we can omit it
// since the timer is recycled, we can keep the channel
- t := time.NewTimer(min)
+ t := time.NewTimer(minVal)
return &backoffTimer{
- timer: t,
- min: min,
- max: max,
- curr: min,
- C: t.C,
+ timer: t,
+ minVal: minVal,
+ maxVal: maxVal,
+ curr: minVal,
+ C: t.C,
}
}
func (bt *backoffTimer) backoff() {
bt.curr = bt.curr * 2
- if bt.curr > bt.max {
- bt.curr = bt.max
+ if bt.curr > bt.maxVal {
+ bt.curr = bt.maxVal
}
bt.recycle()
}
func (bt *backoffTimer) reset() {
- bt.curr = bt.min
+ bt.curr = bt.minVal
bt.recycle()
}
diff --git a/clients/pkg/promtail/wal/timer_test.go b/clients/pkg/promtail/wal/timer_test.go
index 71c114559fbdc..d866d67925445 100644
--- a/clients/pkg/promtail/wal/timer_test.go
+++ b/clients/pkg/promtail/wal/timer_test.go
@@ -12,23 +12,23 @@ const (
)
func TestBackoffTimer(t *testing.T) {
- var min = time.Millisecond * 300
- var max = time.Second
- timer := newBackoffTimer(min, max)
+ var minVal = time.Millisecond * 300
+ var maxVal = time.Second
+ timer := newBackoffTimer(minVal, maxVal)
now := time.Now()
<-timer.C
- require.WithinDuration(t, now.Add(min), time.Now(), delta, "expected backing off timer to fire in the minimum")
+ require.WithinDuration(t, now.Add(minVal), time.Now(), delta, "expected backing off timer to fire in the minimum")
// backoff, and expect it will take twice the time
now = time.Now()
timer.backoff()
<-timer.C
- require.WithinDuration(t, now.Add(min*2), time.Now(), delta, "expected backing off timer to fire in the twice the minimum")
+ require.WithinDuration(t, now.Add(minVal*2), time.Now(), delta, "expected backing off timer to fire in the twice the minimum")
// backoff capped, backoff will actually be 1200ms, but capped at 1000
now = time.Now()
timer.backoff()
<-timer.C
- require.WithinDuration(t, now.Add(max), time.Now(), delta, "expected backing off timer to fire in the max")
+ require.WithinDuration(t, now.Add(maxVal), time.Now(), delta, "expected backing off timer to fire in the max")
}
diff --git a/pkg/analytics/seed.go b/pkg/analytics/seed.go
index dab97993ccece..927e75c4e655d 100644
--- a/pkg/analytics/seed.go
+++ b/pkg/analytics/seed.go
@@ -20,7 +20,7 @@ type ClusterSeed struct {
// Merge implements the memberlist.Mergeable interface.
// It allow to merge the content of two different seeds.
-func (c *ClusterSeed) Merge(mergeable memberlist.Mergeable, _ bool) (change memberlist.Mergeable, error error) {
+func (c *ClusterSeed) Merge(mergeable memberlist.Mergeable, _ bool) (change memberlist.Mergeable, err error) {
if mergeable == nil {
return nil, nil
}
diff --git a/pkg/analytics/stats.go b/pkg/analytics/stats.go
index 664c2b2dd2e05..03301bb049ef5 100644
--- a/pkg/analytics/stats.go
+++ b/pkg/analytics/stats.go
@@ -312,17 +312,17 @@ func (s *Statistics) String() string {
func (s *Statistics) Value() map[string]interface{} {
stdvar := s.value.Load() / float64(s.count.Load())
stddev := math.Sqrt(stdvar)
- min := s.min.Load()
- max := s.max.Load()
+ minVal := s.min.Load()
+ maxVal := s.max.Load()
result := map[string]interface{}{
"avg": s.avg.Load(),
"count": s.count.Load(),
}
- if !math.IsInf(min, 0) {
- result["min"] = min
+ if !math.IsInf(minVal, 0) {
+ result["min"] = minVal
}
- if !math.IsInf(max, 0) {
- result["max"] = s.max.Load()
+ if !math.IsInf(maxVal, 0) {
+ result["max"] = maxVal
}
if !math.IsNaN(stddev) {
result["stddev"] = stddev
@@ -335,20 +335,20 @@ func (s *Statistics) Value() map[string]interface{} {
func (s *Statistics) Record(v float64) {
for {
- min := s.min.Load()
- if min <= v {
+ minVal := s.min.Load()
+ if minVal <= v {
break
}
- if s.min.CompareAndSwap(min, v) {
+ if s.min.CompareAndSwap(minVal, v) {
break
}
}
for {
- max := s.max.Load()
- if max >= v {
+ maxVal := s.max.Load()
+ if maxVal >= v {
break
}
- if s.max.CompareAndSwap(max, v) {
+ if s.max.CompareAndSwap(maxVal, v) {
break
}
}
diff --git a/pkg/blockbuilder/scheduler/status.go b/pkg/blockbuilder/scheduler/status.go
index c2c8dabb5f23c..20f74d04bf519 100644
--- a/pkg/blockbuilder/scheduler/status.go
+++ b/pkg/blockbuilder/scheduler/status.go
@@ -15,7 +15,7 @@ import (
var defaultPageContent string
var defaultPageTemplate = template.Must(template.New("webpage").Funcs(template.FuncMap{
"durationSince": func(t time.Time) string { return time.Since(t).Truncate(time.Second).String() },
- "offsetsLen": func(min, max int64) int64 { return max - min },
+ "offsetsLen": func(minVal, maxVal int64) int64 { return maxVal - minVal },
"humanize": humanize.Comma,
}).Parse(defaultPageContent))
diff --git a/pkg/bloombuild/builder/batch_test.go b/pkg/bloombuild/builder/batch_test.go
index cedba1480e2f6..f8ee57073b71f 100644
--- a/pkg/bloombuild/builder/batch_test.go
+++ b/pkg/bloombuild/builder/batch_test.go
@@ -209,8 +209,8 @@ func TestOverlappingBlocksIter(t *testing.T) {
}
}
-func genBlockRef(min, max model.Fingerprint) bloomshipper.BlockRef {
- bounds := v1.NewBounds(min, max)
+func genBlockRef(minVal, maxVal model.Fingerprint) bloomshipper.BlockRef {
+ bounds := v1.NewBounds(minVal, maxVal)
return bloomshipper.BlockRef{
Ref: bloomshipper.Ref{
Bounds: bounds,
diff --git a/pkg/bloombuild/planner/plannertest/utils.go b/pkg/bloombuild/planner/plannertest/utils.go
index 706e0abdf00a7..ccf40adb35f45 100644
--- a/pkg/bloombuild/planner/plannertest/utils.go
+++ b/pkg/bloombuild/planner/plannertest/utils.go
@@ -24,13 +24,13 @@ func TsdbID(n int) tsdb.SingleTenantTSDBIdentifier {
}
}
-func GenMeta(min, max model.Fingerprint, sources []int, blocks []bloomshipper.BlockRef) bloomshipper.Meta {
+func GenMeta(minVal, maxVal model.Fingerprint, sources []int, blocks []bloomshipper.BlockRef) bloomshipper.Meta {
m := bloomshipper.Meta{
MetaRef: bloomshipper.MetaRef{
Ref: bloomshipper.Ref{
TenantID: "fakeTenant",
TableName: TestTable.Addr(),
- Bounds: v1.NewBounds(min, max),
+ Bounds: v1.NewBounds(minVal, maxVal),
},
},
Blocks: blocks,
@@ -41,13 +41,13 @@ func GenMeta(min, max model.Fingerprint, sources []int, blocks []bloomshipper.Bl
return m
}
-func GenBlockRef(min, max model.Fingerprint) bloomshipper.BlockRef {
+func GenBlockRef(minVal, maxVal model.Fingerprint) bloomshipper.BlockRef {
startTS, endTS := TestDay.Bounds()
return bloomshipper.BlockRef{
Ref: bloomshipper.Ref{
TenantID: "fakeTenant",
TableName: TestTable.Addr(),
- Bounds: v1.NewBounds(min, max),
+ Bounds: v1.NewBounds(minVal, maxVal),
StartTimestamp: startTS,
EndTimestamp: endTS,
Checksum: 0,
diff --git a/pkg/bloombuild/planner/tableIterator.go b/pkg/bloombuild/planner/tableIterator.go
index c17458a04806c..a94f84c2fb4a2 100644
--- a/pkg/bloombuild/planner/tableIterator.go
+++ b/pkg/bloombuild/planner/tableIterator.go
@@ -13,8 +13,8 @@ type dayRangeIterator struct {
err error
}
-func newDayRangeIterator(min, max config.DayTime, schemaCfg config.SchemaConfig) *dayRangeIterator {
- return &dayRangeIterator{min: min, max: max, cur: min.Dec(), schemaCfg: schemaCfg}
+func newDayRangeIterator(minVal, maxVal config.DayTime, schemaCfg config.SchemaConfig) *dayRangeIterator {
+ return &dayRangeIterator{min: minVal, max: maxVal, cur: minVal.Dec(), schemaCfg: schemaCfg}
}
func (r *dayRangeIterator) TotalDays() int {
diff --git a/pkg/bloombuild/planner/versioned_range_test.go b/pkg/bloombuild/planner/versioned_range_test.go
index 3eb2df160c36b..04ba22cc4a5d0 100644
--- a/pkg/bloombuild/planner/versioned_range_test.go
+++ b/pkg/bloombuild/planner/versioned_range_test.go
@@ -20,8 +20,8 @@ func Test_TsdbTokenRange(t *testing.T) {
added bool
err bool
}
- mk := func(version int, min, max model.Fingerprint) addition {
- return addition{version, v1.FingerprintBounds{Min: min, Max: max}}
+ mk := func(version int, minVal, maxVal model.Fingerprint) addition {
+ return addition{version, v1.FingerprintBounds{Min: minVal, Max: maxVal}}
}
tok := func(version int, through model.Fingerprint) tsdbToken {
return tsdbToken{version: version, through: through}
diff --git a/pkg/bloomgateway/util.go b/pkg/bloomgateway/util.go
index 21803d9c84dab..1aa1783640e06 100644
--- a/pkg/bloomgateway/util.go
+++ b/pkg/bloomgateway/util.go
@@ -57,20 +57,20 @@ func partitionTasksByBlock(tasks []Task, blocks []bloomshipper.BlockRef) []block
for _, task := range tasks {
refs := task.series
- min := sort.Search(len(refs), func(i int) bool {
+ minVal := sort.Search(len(refs), func(i int) bool {
return block.Cmp(refs[i].Fingerprint) > v1.Before
})
- max := sort.Search(len(refs), func(i int) bool {
+ maxVal := sort.Search(len(refs), func(i int) bool {
return block.Cmp(refs[i].Fingerprint) == v1.After
})
// All fingerprints fall outside of the consumer's range
- if min == len(refs) || max == 0 || min == max {
+ if minVal == len(refs) || maxVal == 0 || minVal == maxVal {
continue
}
- bounded.tasks = append(bounded.tasks, task.Copy(refs[min:max]))
+ bounded.tasks = append(bounded.tasks, task.Copy(refs[minVal:maxVal]))
}
if len(bounded.tasks) > 0 {
diff --git a/pkg/chunkenc/util_test.go b/pkg/chunkenc/util_test.go
index bcbe9cc1e8be0..789de74136428 100644
--- a/pkg/chunkenc/util_test.go
+++ b/pkg/chunkenc/util_test.go
@@ -48,7 +48,7 @@ func fillChunk(c Chunk) int64 {
return fillChunkClose(c, true)
}
-func fillChunkClose(c Chunk, close bool) int64 {
+func fillChunkClose(c Chunk, doClose bool) int64 {
i := int64(0)
inserted := int64(0)
entry := &logproto.Entry{
@@ -73,13 +73,13 @@ func fillChunkClose(c Chunk, close bool) int64 {
entry.Line = testdata.LogString(i)
}
- if close {
+ if doClose {
_ = c.Close()
}
return inserted
}
-func fillChunkRandomOrder(c Chunk, close bool) {
+func fillChunkRandomOrder(c Chunk, doClose bool) {
ub := int64(1 << 30)
i := int64(0)
random := rand.New(rand.NewSource(42))
@@ -98,7 +98,7 @@ func fillChunkRandomOrder(c Chunk, close bool) {
entry.Line = testdata.LogString(i)
}
- if close {
+ if doClose {
_ = c.Close()
}
}
diff --git a/pkg/compactor/generationnumber/gennumber_loader.go b/pkg/compactor/generationnumber/gennumber_loader.go
index c2edb62dc1664..5ac92c18981e9 100644
--- a/pkg/compactor/generationnumber/gennumber_loader.go
+++ b/pkg/compactor/generationnumber/gennumber_loader.go
@@ -100,7 +100,7 @@ func (l *GenNumberLoader) GetResultsCacheGenNumber(tenantIDs []string) string {
}
func (l *GenNumberLoader) getCacheGenNumbersPerTenants(tenantIDs []string) string {
- var max int
+ var maxVal int
for _, tenantID := range tenantIDs {
genNumber := l.getCacheGenNumber(tenantID)
if genNumber == "" {
@@ -112,15 +112,15 @@ func (l *GenNumberLoader) getCacheGenNumbersPerTenants(tenantIDs []string) strin
level.Error(log.Logger).Log("msg", "error parsing resultsCacheGenNumber", "user", tenantID, "err", err)
}
- if number > max {
- max = number
+ if number > maxVal {
+ maxVal = number
}
}
- if max == 0 {
+ if maxVal == 0 {
return ""
}
- return fmt.Sprint(max)
+ return fmt.Sprint(maxVal)
}
func (l *GenNumberLoader) getCacheGenNumber(userID string) string {
diff --git a/pkg/dataobj/internal/dataset/column_test.go b/pkg/dataobj/internal/dataset/column_test.go
index ed78defca329f..4b599fd6f245f 100644
--- a/pkg/dataobj/internal/dataset/column_test.go
+++ b/pkg/dataobj/internal/dataset/column_test.go
@@ -198,7 +198,7 @@ func TestColumnBuilder_Cardinality(t *testing.T) {
require.Equal(t, uint64(3), col.Info.Statistics.CardinalityCount)
}
-func getMinMax(t *testing.T, stats *datasetmd.Statistics) (min, max Value) {
+func getMinMax(t *testing.T, stats *datasetmd.Statistics) (minVal, maxVal Value) {
t.Helper()
require.NotNil(t, stats)
diff --git a/pkg/distributor/tee.go b/pkg/distributor/tee.go
index 04acb1e22c0df..1680cb84f0ba9 100644
--- a/pkg/distributor/tee.go
+++ b/pkg/distributor/tee.go
@@ -6,14 +6,14 @@ type Tee interface {
}
// WrapTee wraps a new Tee around an existing Tee.
-func WrapTee(existing, new Tee) Tee {
+func WrapTee(existing, newTee Tee) Tee {
if existing == nil {
- return new
+ return newTee
}
if multi, ok := existing.(*multiTee); ok {
- return &multiTee{append(multi.tees, new)}
+ return &multiTee{append(multi.tees, newTee)}
}
- return &multiTee{tees: []Tee{existing, new}}
+ return &multiTee{tees: []Tee{existing, newTee}}
}
type multiTee struct {
diff --git a/pkg/ingester/checkpoint_test.go b/pkg/ingester/checkpoint_test.go
index 317bcb7ce4f50..de731b9846f71 100644
--- a/pkg/ingester/checkpoint_test.go
+++ b/pkg/ingester/checkpoint_test.go
@@ -332,18 +332,18 @@ func TestIngesterWALBackpressureCheckpoint(t *testing.T) {
require.Nil(t, services.StartAndAwaitRunning(context.Background(), i))
}
-func expectCheckpoint(t *testing.T, walDir string, shouldExist bool, max time.Duration) {
+func expectCheckpoint(t *testing.T, walDir string, shouldExist bool, maxVal time.Duration) {
once := make(chan struct{}, 1)
once <- struct{}{}
- deadline := time.After(max)
+ deadline := time.After(maxVal)
for {
select {
case <-deadline:
require.Fail(t, "timeout while waiting for checkpoint existence:", shouldExist)
case <-once: // Trick to ensure we check immediately before deferring to ticker.
default:
- <-time.After(max / 10) // check 10x over the duration
+ <-time.After(maxVal / 10) // check 10x over the duration
}
fs, err := os.ReadDir(walDir)
diff --git a/pkg/ingester/owned_streams.go b/pkg/ingester/owned_streams.go
index 56c5a77fa768e..6747348747719 100644
--- a/pkg/ingester/owned_streams.go
+++ b/pkg/ingester/owned_streams.go
@@ -43,7 +43,7 @@ func (s *ownedStreamService) getOwnedStreamCount() int {
return int(s.ownedStreamCount.Load())
}
-func (s *ownedStreamService) updateFixedLimit() (old, new int32) {
+func (s *ownedStreamService) updateFixedLimit() (old, newVal int32) {
newLimit, _, _, _ := s.limiter.GetStreamCountLimit(s.tenantID)
return s.fixedLimit.Swap(int32(newLimit)), int32(newLimit)
diff --git a/pkg/iter/cache.go b/pkg/iter/cache.go
index 3066bdbb67b29..b6d3208e9691a 100644
--- a/pkg/iter/cache.go
+++ b/pkg/iter/cache.go
@@ -23,10 +23,10 @@ type cachedIterator struct {
// NewCachedIterator creates an iterator that cache iteration result and can be iterated again
// after closing it without re-using the underlaying iterator `it`.
-func NewCachedIterator(it EntryIterator, cap int) CacheEntryIterator {
+func NewCachedIterator(it EntryIterator, capacity int) CacheEntryIterator {
c := &cachedIterator{
wrapped: it,
- cache: make([]entryWithLabels, 0, cap),
+ cache: make([]entryWithLabels, 0, capacity),
curr: -1,
}
return c
@@ -120,10 +120,10 @@ type cachedSampleIterator struct {
// NewCachedSampleIterator creates an iterator that cache iteration result and can be iterated again
// after closing it without re-using the underlaying iterator `it`.
-func NewCachedSampleIterator(it SampleIterator, cap int) CacheSampleIterator {
+func NewCachedSampleIterator(it SampleIterator, capacity int) CacheSampleIterator {
c := &cachedSampleIterator{
wrapped: it,
- cache: make([]sampleWithLabels, 0, cap),
+ cache: make([]sampleWithLabels, 0, capacity),
curr: -1,
}
return c
diff --git a/pkg/iter/v2/iter.go b/pkg/iter/v2/iter.go
index 506b1a4f35091..4a2301d9d224b 100644
--- a/pkg/iter/v2/iter.go
+++ b/pkg/iter/v2/iter.go
@@ -227,10 +227,10 @@ func (it *CounterIter[T]) Count() int {
return it.count
}
-func WithClose[T any](itr Iterator[T], close func() bool) *CloseIter[T] {
+func WithClose[T any](itr Iterator[T], closeFunc func() bool) *CloseIter[T] {
return &CloseIter[T]{
Iterator: itr,
- close: close,
+ close: closeFunc,
}
}
diff --git a/pkg/loghttp/push/push.go b/pkg/loghttp/push/push.go
index dccbe75ce8645..f96ff520d267b 100644
--- a/pkg/loghttp/push/push.go
+++ b/pkg/loghttp/push/push.go
@@ -95,7 +95,7 @@ func (EmptyLimits) PolicyFor(_ string, _ labels.Labels) string {
}
// StreamResolver is a request-scoped interface that provides retention period and policy for a given stream.
-// The values returned by the resolver will not chance throught the handling of the request
+// The values returned by the resolver will not chance thought the handling of the request
type StreamResolver interface {
RetentionPeriodFor(lbs labels.Labels) time.Duration
RetentionHoursFor(lbs labels.Labels) string
@@ -105,7 +105,7 @@ type StreamResolver interface {
type (
RequestParser func(userID string, r *http.Request, limits Limits, tracker UsageTracker, streamResolver StreamResolver, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error)
RequestParserWrapper func(inner RequestParser) RequestParser
- ErrorWriter func(w http.ResponseWriter, error string, code int, logger log.Logger)
+ ErrorWriter func(w http.ResponseWriter, errorStr string, code int, logger log.Logger)
)
type PolicyWithRetentionWithBytes map[string]map[time.Duration]int64
@@ -376,7 +376,7 @@ func RetentionPeriodToString(retentionPeriod time.Duration) string {
// > 503 Service Unavailable
// > 504 Gateway Timeout
// In loki, we expect clients to retry on 500 errors, so we map 500 errors to 503.
-func OTLPError(w http.ResponseWriter, error string, code int, logger log.Logger) {
+func OTLPError(w http.ResponseWriter, errorStr string, code int, logger log.Logger) {
// Map 500 errors to 503. 500 errors are never retried on the client side, but 503 are.
if code == http.StatusInternalServerError {
code = http.StatusServiceUnavailable
@@ -386,7 +386,7 @@ func OTLPError(w http.ResponseWriter, error string, code int, logger log.Logger)
w.WriteHeader(code)
// Status 0 because we omit the Status.code field.
- status := grpcstatus.New(0, error).Proto()
+ status := grpcstatus.New(0, errorStr).Proto()
respBytes, err := proto.Marshal(status)
if err != nil {
level.Error(logger).Log("msg", "failed to marshal error response", "error", err)
@@ -411,8 +411,8 @@ func OTLPError(w http.ResponseWriter, error string, code int, logger log.Logger)
var _ ErrorWriter = OTLPError
-func HTTPError(w http.ResponseWriter, error string, code int, _ log.Logger) {
- http.Error(w, error, code)
+func HTTPError(w http.ResponseWriter, errorStr string, code int, _ log.Logger) {
+ http.Error(w, errorStr, code)
}
var _ ErrorWriter = HTTPError
diff --git a/pkg/logql/log/filter.go b/pkg/logql/log/filter.go
index e6a93ff744cba..4d4842baa7244 100644
--- a/pkg/logql/log/filter.go
+++ b/pkg/logql/log/filter.go
@@ -322,16 +322,16 @@ func newOrFilter(left MatcherFilterer, right MatcherFilterer) MatcherFilterer {
}
// ChainOrMatcherFilterer is a syntax sugar to chain multiple `or` filters. (1 or many)
-func ChainOrMatcherFilterer(curr, new MatcherFilterer) MatcherFilterer {
+func ChainOrMatcherFilterer(curr, newFilterer MatcherFilterer) MatcherFilterer {
if curr == nil {
- return new
+ return newFilterer
}
- return newOrFilter(curr, new)
+ return newOrFilter(curr, newFilterer)
}
// ChainOrFilter is a syntax sugar to chain multiple `or` filters. (1 or many)
-func ChainOrFilter(curr, new Filterer) Filterer {
- return ChainOrMatcherFilterer(WrapFilterer(curr), WrapFilterer(new))
+func ChainOrFilter(curr, newFilterer Filterer) Filterer {
+ return ChainOrMatcherFilterer(WrapFilterer(curr), WrapFilterer(newFilterer))
}
func (a orFilter) Filter(line []byte) bool {
diff --git a/pkg/logql/range_vector.go b/pkg/logql/range_vector.go
index 4f5e5d7aca5e1..e7865e5703d3a 100644
--- a/pkg/logql/range_vector.go
+++ b/pkg/logql/range_vector.go
@@ -403,23 +403,23 @@ func avgOverTime(samples []promql.FPoint) float64 {
}
func maxOverTime(samples []promql.FPoint) float64 {
- max := samples[0].F
+ maxVal := samples[0].F
for _, v := range samples {
- if v.F > max || math.IsNaN(max) {
- max = v.F
+ if v.F > maxVal || math.IsNaN(maxVal) {
+ maxVal = v.F
}
}
- return max
+ return maxVal
}
func minOverTime(samples []promql.FPoint) float64 {
- min := samples[0].F
+ minVal := samples[0].F
for _, v := range samples {
- if v.F < min || math.IsNaN(min) {
- min = v.F
+ if v.F < minVal || math.IsNaN(minVal) {
+ minVal = v.F
}
}
- return min
+ return minVal
}
// stdvarOverTime calculates the variance using Welford's online algorithm.
diff --git a/pkg/logql/sketch/cms.go b/pkg/logql/sketch/cms.go
index 67f72be976c19..2b24823257e89 100644
--- a/pkg/logql/sketch/cms.go
+++ b/pkg/logql/sketch/cms.go
@@ -72,26 +72,26 @@ func (s *CountMinSketch) Increment(event []byte) {
func (s *CountMinSketch) ConservativeAdd(event []byte, count float64) (float64, uint32, uint32) {
s.HyperLogLog.Insert(event)
- min := float64(math.MaxUint64)
+ minVal := float64(math.MaxUint64)
h1, h2 := hashn(event)
// inline Count to save time/memory
var pos uint32
for i := uint32(0); i < s.Depth; i++ {
pos = s.getPos(h1, h2, i)
- if s.Counters[i][pos] < min {
- min = s.Counters[i][pos]
+ if s.Counters[i][pos] < minVal {
+ minVal = s.Counters[i][pos]
}
}
- min += count
+ minVal += count
for i := uint32(0); i < s.Depth; i++ {
pos = s.getPos(h1, h2, i)
v := s.Counters[i][pos]
- if v < min {
- s.Counters[i][pos] = min
+ if v < minVal {
+ s.Counters[i][pos] = minVal
}
}
- return min, h1, h2
+ return minVal, h1, h2
}
func (s *CountMinSketch) ConservativeIncrement(event []byte) (float64, uint32, uint32) {
@@ -100,17 +100,17 @@ func (s *CountMinSketch) ConservativeIncrement(event []byte) (float64, uint32, u
// Count returns the approximate min count for the given input.
func (s *CountMinSketch) Count(event []byte) float64 {
- min := float64(math.MaxUint64)
+ minVal := float64(math.MaxUint64)
h1, h2 := hashn(event)
var pos uint32
for i := uint32(0); i < s.Depth; i++ {
pos = s.getPos(h1, h2, i)
- if s.Counters[i][pos] < min {
- min = s.Counters[i][pos]
+ if s.Counters[i][pos] < minVal {
+ minVal = s.Counters[i][pos]
}
}
- return min
+ return minVal
}
// Merge the given sketch into this one.
diff --git a/pkg/logql/sketch/cms_test.go b/pkg/logql/sketch/cms_test.go
index fe439da10da01..a9a89d856393f 100644
--- a/pkg/logql/sketch/cms_test.go
+++ b/pkg/logql/sketch/cms_test.go
@@ -18,13 +18,13 @@ func TestCMS(_ *testing.T) {
numStreams := 10
maxPerStream := 100
events := make([]event, 0)
- max := int64(0)
+ maxVal := int64(0)
for j := 0; j < numStreams-k; j++ {
num := int64(maxPerStream)
n := rand.Int63n(num) + 1
- if n > max {
- max = n
+ if n > maxVal {
+ maxVal = n
}
for z := 0; z < int(n); z++ {
events = append(events, event{name: strconv.Itoa(j), count: 1})
@@ -32,7 +32,7 @@ func TestCMS(_ *testing.T) {
}
// then another set of things more than the max of the previous entries
for z := numStreams - k; z < numStreams; z++ {
- n := rand.Int63n(int64(maxPerStream)) + 1 + max
+ n := rand.Int63n(int64(maxPerStream)) + 1 + maxVal
for x := 0; x < int(n); x++ {
events = append(events, event{name: strconv.Itoa(z), count: 1})
}
diff --git a/pkg/logql/sketch/topk_test.go b/pkg/logql/sketch/topk_test.go
index d375d159e8c60..cb8b3fdf9e843 100644
--- a/pkg/logql/sketch/topk_test.go
+++ b/pkg/logql/sketch/topk_test.go
@@ -21,20 +21,20 @@ type event struct {
}
func TestTopkCardinality(t *testing.T) {
- max := 1000000
+ maxVal := 1000000
topk, err := newCMSTopK(100, 10, 10)
assert.NoError(t, err)
- for i := 0; i < max; i++ {
+ for i := 0; i < maxVal; i++ {
topk.Observe(strconv.Itoa(i))
}
c, bigEnough := topk.Cardinality()
// hll has a typical error accuracy of 2%
- assert.True(t, (c >= uint64(float64(max)*0.98)) && (c <= uint64(float64(max)*1.02)))
+ assert.True(t, (c >= uint64(float64(maxVal)*0.98)) && (c <= uint64(float64(maxVal)*1.02)))
assert.False(t, bigEnough)
- topk, err = NewCMSTopkForCardinality(nil, 100, max)
+ topk, err = NewCMSTopkForCardinality(nil, 100, maxVal)
assert.NoError(t, err)
- for i := 0; i < max; i++ {
+ for i := 0; i < maxVal; i++ {
topk.Observe(strconv.Itoa(i))
}
c, bigEnough = topk.Cardinality()
@@ -47,14 +47,14 @@ func TestTopK_Merge(t *testing.T) {
k := 1
maxPerStream := 1000
events := make([]event, 0)
- max := int64(0)
+ maxVal := int64(0)
r := rand.New(rand.NewSource(99))
for i := 0; i < nStreams-k; i++ {
num := int64(maxPerStream)
n := r.Int63n(num) + 1
- if n > max {
- max = n
+ if n > maxVal {
+ maxVal = n
}
for j := 0; j < int(n); j++ {
events = append(events, event{name: strconv.Itoa(i), count: 1})
@@ -62,7 +62,7 @@ func TestTopK_Merge(t *testing.T) {
}
// then another set of things more than the max of the previous entries
for i := nStreams - k; i < nStreams; i++ {
- n := rand.Int63n(int64(maxPerStream)) + 1 + max
+ n := rand.Int63n(int64(maxPerStream)) + 1 + maxVal
for j := 0; j < int(n); j++ {
events = append(events, event{name: strconv.Itoa(i), count: 1})
}
diff --git a/pkg/logql/syntax/query_scanner.go b/pkg/logql/syntax/query_scanner.go
index 23f79104591b7..eeaa0151de3e1 100644
--- a/pkg/logql/syntax/query_scanner.go
+++ b/pkg/logql/syntax/query_scanner.go
@@ -364,12 +364,12 @@ func isHex(ch rune) bool { return '0' <= ch && ch <= '9' || 'a' <= lower(ch)
func (s *Scanner) digits(ch0 rune, base int, invalid *rune) (ch rune, digsep int) {
ch = ch0
if base <= 10 {
- max := rune('0' + base)
+ maxVal := rune('0' + base)
for isDecimal(ch) || ch == '_' {
ds := 1
if ch == '_' {
ds = 2
- } else if ch >= max && *invalid == 0 {
+ } else if ch >= maxVal && *invalid == 0 {
*invalid = ch
}
digsep |= ds
diff --git a/pkg/pattern/iter/merge.go b/pkg/pattern/iter/merge.go
index cebae37c80260..f2ec33c3ee69e 100644
--- a/pkg/pattern/iter/merge.go
+++ b/pkg/pattern/iter/merge.go
@@ -19,13 +19,13 @@ type patternSample struct {
sample logproto.PatternSample
}
-var max = patternSample{
+var maxSample = patternSample{
pattern: "",
sample: logproto.PatternSample{Timestamp: math.MaxInt64},
}
func NewMerge(iters ...Iterator) Iterator {
- tree := loser.New(iters, max, func(s Iterator) patternSample {
+ tree := loser.New(iters, maxSample, func(s Iterator) patternSample {
return patternSample{
pattern: s.Pattern(),
sample: s.At(),
diff --git a/pkg/querier/queryrange/limits.go b/pkg/querier/queryrange/limits.go
index 36ab4e350c9a4..c3c28bdd53e9f 100644
--- a/pkg/querier/queryrange/limits.go
+++ b/pkg/querier/queryrange/limits.go
@@ -460,9 +460,9 @@ type SemaphoreWithTiming struct {
sem *semaphore.Weighted
}
-func NewSemaphoreWithTiming(max int64) *SemaphoreWithTiming {
+func NewSemaphoreWithTiming(maxVal int64) *SemaphoreWithTiming {
return &SemaphoreWithTiming{
- sem: semaphore.NewWeighted(max),
+ sem: semaphore.NewWeighted(maxVal),
}
}
diff --git a/pkg/querier/queryrange/limits_test.go b/pkg/querier/queryrange/limits_test.go
index ca8757468eee2..13ded9c9f3ce2 100644
--- a/pkg/querier/queryrange/limits_test.go
+++ b/pkg/querier/queryrange/limits_test.go
@@ -231,11 +231,11 @@ func Test_MaxQueryParallelism(t *testing.T) {
maxQueryParallelism := 2
var count atomic.Int32
- var max atomic.Int32
+ var maxVal atomic.Int32
h := base.HandlerFunc(func(_ context.Context, _ base.Request) (base.Response, error) {
cur := count.Inc()
- if cur > max.Load() {
- max.Store(cur)
+ if cur > maxVal.Load() {
+ maxVal.Store(cur)
}
defer count.Dec()
// simulate some work
@@ -261,7 +261,7 @@ func Test_MaxQueryParallelism(t *testing.T) {
})
}),
).Do(ctx, &LokiRequest{})
- maxFound := int(max.Load())
+ maxFound := int(maxVal.Load())
require.LessOrEqual(t, maxFound, maxQueryParallelism, "max query parallelism: ", maxFound, " went over the configured one:", maxQueryParallelism)
}
diff --git a/pkg/storage/bloom/v1/bounds.go b/pkg/storage/bloom/v1/bounds.go
index aebcece85138f..20ec172ae469b 100644
--- a/pkg/storage/bloom/v1/bounds.go
+++ b/pkg/storage/bloom/v1/bounds.go
@@ -59,8 +59,8 @@ func ParseBoundsFromParts(a, b string) (FingerprintBounds, error) {
return NewBounds(minFingerprint, maxFingerprint), nil
}
-func NewBounds(min, max model.Fingerprint) FingerprintBounds {
- return FingerprintBounds{Min: min, Max: max}
+func NewBounds(minVal, maxVal model.Fingerprint) FingerprintBounds {
+ return FingerprintBounds{Min: minVal, Max: maxVal}
}
func (b FingerprintBounds) Hash(h hash.Hash32) error {
@@ -120,8 +120,8 @@ func (b FingerprintBounds) Bounds() (model.Fingerprint, model.Fingerprint) {
}
// Slice returns a new fingerprint bounds clipped to the target bounds or nil if there is no overlap
-func (b FingerprintBounds) Slice(min, max model.Fingerprint) *FingerprintBounds {
- return b.Intersection(FingerprintBounds{Min: min, Max: max})
+func (b FingerprintBounds) Slice(minVal, maxVal model.Fingerprint) *FingerprintBounds {
+ return b.Intersection(FingerprintBounds{Min: minVal, Max: maxVal})
}
// Within returns whether the fingerprint is fully within the target bounds
diff --git a/pkg/storage/bloom/v1/builder_test.go b/pkg/storage/bloom/v1/builder_test.go
index fa8ccbc87a3d9..76d4168c2b399 100644
--- a/pkg/storage/bloom/v1/builder_test.go
+++ b/pkg/storage/bloom/v1/builder_test.go
@@ -221,10 +221,10 @@ func TestMergeBuilder(t *testing.T) {
indexBuf := bytes.NewBuffer(nil)
bloomsBuf := bytes.NewBuffer(nil)
- min := i * numSeries / nBlocks
- max := (i + 2) * numSeries / nBlocks // allow some overlap
- if max > len(data) {
- max = len(data)
+ minVal := i * numSeries / nBlocks
+ maxVal := (i + 2) * numSeries / nBlocks // allow some overlap
+ if maxVal > len(data) {
+ maxVal = len(data)
}
writer := NewMemoryBlockWriter(indexBuf, bloomsBuf)
@@ -236,7 +236,7 @@ func TestMergeBuilder(t *testing.T) {
)
require.Nil(t, err)
- itr := iter.NewSliceIter(data[min:max])
+ itr := iter.NewSliceIter(data[minVal:maxVal])
_, err = builder.BuildFrom(itr)
require.Nil(t, err)
blocks = append(blocks, iter.NewPeekIter(NewBlockQuerier(NewBlock(reader, NewMetrics(nil)), &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()))
diff --git a/pkg/storage/bloom/v1/filter/buckets_test.go b/pkg/storage/bloom/v1/filter/buckets_test.go
index 31fc4f828897d..55d5f31e6a635 100644
--- a/pkg/storage/bloom/v1/filter/buckets_test.go
+++ b/pkg/storage/bloom/v1/filter/buckets_test.go
@@ -22,8 +22,8 @@ import (
func TestMaxBucketValue(t *testing.T) {
b := NewBuckets(10, 2)
- if max := b.MaxBucketValue(); max != 3 {
- t.Errorf("Expected 3, got %d", max)
+ if maxVal := b.MaxBucketValue(); maxVal != 3 {
+ t.Errorf("Expected 3, got %d", maxVal)
}
}
diff --git a/pkg/storage/chunk/client/aws/dynamodb_storage_client.go b/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
index a97321fa15ee0..10516e4f87513 100644
--- a/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
+++ b/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
@@ -691,12 +691,12 @@ func (b dynamoDBWriteBatch) Delete(tableName, hashValue string, rangeValue []byt
})
}
-// Fill 'b' with WriteRequests from 'from' until 'b' has at most max requests. Remove those requests from 'from'.
-func (b dynamoDBWriteBatch) TakeReqs(from dynamoDBWriteBatch, max int) {
+// Fill 'b' with WriteRequests from 'from' until 'b' has at most maxVal requests. Remove those requests from 'from'.
+func (b dynamoDBWriteBatch) TakeReqs(from dynamoDBWriteBatch, maxVal int) {
outLen, inLen := b.Len(), from.Len()
toFill := inLen
- if max > 0 {
- toFill = min(inLen, max-outLen)
+ if maxVal > 0 {
+ toFill = min(inLen, maxVal-outLen)
}
for toFill > 0 {
for tableName, fromReqs := range from {
@@ -738,12 +738,12 @@ func (b dynamoDBReadRequest) Add(tableName, hashValue string, rangeValue []byte)
})
}
-// Fill 'b' with ReadRequests from 'from' until 'b' has at most max requests. Remove those requests from 'from'.
-func (b dynamoDBReadRequest) TakeReqs(from dynamoDBReadRequest, max int) {
+// Fill 'b' with ReadRequests from 'from' until 'b' has at most maxVal requests. Remove those requests from 'from'.
+func (b dynamoDBReadRequest) TakeReqs(from dynamoDBReadRequest, maxVal int) {
outLen, inLen := b.Len(), from.Len()
toFill := inLen
- if max > 0 {
- toFill = min(inLen, max-outLen)
+ if maxVal > 0 {
+ toFill = min(inLen, maxVal-outLen)
}
for toFill > 0 {
for tableName, fromReqs := range from {
diff --git a/pkg/storage/chunk/client/hedging/hedging.go b/pkg/storage/chunk/client/hedging/hedging.go
index 879c102acc464..d88e07e954a69 100644
--- a/pkg/storage/chunk/client/hedging/hedging.go
+++ b/pkg/storage/chunk/client/hedging/hedging.go
@@ -118,10 +118,10 @@ type limitedHedgingRoundTripper struct {
limiter *rate.Limiter
}
-func newLimitedHedgingRoundTripper(max int, next http.RoundTripper) *limitedHedgingRoundTripper {
+func newLimitedHedgingRoundTripper(maxVal int, next http.RoundTripper) *limitedHedgingRoundTripper {
return &limitedHedgingRoundTripper{
next: next,
- limiter: rate.NewLimiter(rate.Limit(max), max),
+ limiter: rate.NewLimiter(rate.Limit(maxVal), maxVal),
}
}
diff --git a/pkg/storage/stores/shipper/bloomshipper/cache.go b/pkg/storage/stores/shipper/bloomshipper/cache.go
index 838866e1dee81..ef7dea43cc213 100644
--- a/pkg/storage/stores/shipper/bloomshipper/cache.go
+++ b/pkg/storage/stores/shipper/bloomshipper/cache.go
@@ -167,13 +167,13 @@ func (b *BlockDirectory) resolveSize() error {
// The passed function `close` is called when the the returned querier is closed.
func (b BlockDirectory) BlockQuerier(
alloc mempool.Allocator,
- close func() error,
+ closeFunc func() error,
maxPageSize int,
metrics *v1.Metrics,
) *CloseableBlockQuerier {
return &CloseableBlockQuerier{
BlockQuerier: v1.NewBlockQuerier(b.Block(metrics), alloc, maxPageSize),
BlockRef: b.BlockRef,
- close: close,
+ close: closeFunc,
}
}
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/index/postings.go b/pkg/storage/stores/shipper/indexshipper/tsdb/index/postings.go
index 7c2dd99023b7d..268a72a93c268 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/index/postings.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/index/postings.go
@@ -853,11 +853,11 @@ type ShardedPostings struct {
// ---[shard0]--- # Shard membership
// -[--shard0--]- # Series returned by shardedPostings
func NewShardedPostings(p Postings, fpFilter FingerprintFilter, offsets FingerprintOffsets) *ShardedPostings {
- min, max := offsets.Range(fpFilter)
+ minVal, maxVal := offsets.Range(fpFilter)
return &ShardedPostings{
p: p,
- minOffset: min,
- maxOffset: max,
+ minOffset: minVal,
+ maxOffset: maxVal,
}
}
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats.go b/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats.go
index 5e5880720ac19..02119c057309d 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats.go
@@ -31,10 +31,10 @@ type maxHeap struct {
Items []Stat
}
-func (m *maxHeap) init(len int) {
- m.maxLength = len
+func (m *maxHeap) init(lenVal int) {
+ m.maxLength = lenVal
m.minValue = math.MaxUint64
- m.Items = make([]Stat, 0, len)
+ m.Items = make([]Stat, 0, lenVal)
}
func (m *maxHeap) push(item Stat) {
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats_test.go b/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats_test.go
index 7ce51c795fbc0..19c2b3e7f209d 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/index/postingsstats_test.go
@@ -20,10 +20,10 @@ import (
func TestPostingsStats(t *testing.T) {
stats := &maxHeap{}
- max := 3000000
+ maxVal := 3000000
heapLength := 10
stats.init(heapLength)
- for i := 0; i < max; i++ {
+ for i := 0; i < maxVal; i++ {
item := Stat{
Name: "Label-da",
Count: uint64(i),
@@ -35,7 +35,7 @@ func TestPostingsStats(t *testing.T) {
data := stats.get()
require.Equal(t, 10, len(data))
for i := 0; i < heapLength; i++ {
- require.Equal(t, uint64(max-i), data[i].Count)
+ require.Equal(t, uint64(maxVal-i), data[i].Count)
}
}
@@ -57,12 +57,12 @@ func TestPostingsStats2(t *testing.T) {
func BenchmarkPostingStatsMaxHep(b *testing.B) {
stats := &maxHeap{}
- max := 9000000
+ maxVal := 9000000
heapLength := 10
b.ResetTimer()
for n := 0; n < b.N; n++ {
stats.init(heapLength)
- for i := 0; i < max; i++ {
+ for i := 0; i < maxVal; i++ {
item := Stat{
Name: "Label-da",
Count: uint64(i),
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/sharding/sharding_test.go b/pkg/storage/stores/shipper/indexshipper/tsdb/sharding/sharding_test.go
index fc476223848ae..833dbc9db5f4f 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/sharding/sharding_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/sharding/sharding_test.go
@@ -36,11 +36,11 @@ func TestSizedFPs_Sort(t *testing.T) {
}
func TestSizedFPs_ShardsFor(t *testing.T) {
- mkShard := func(min, max model.Fingerprint, streams, chks, entries, bytes uint64) logproto.Shard {
+ mkShard := func(minVal, maxVal model.Fingerprint, streams, chks, entries, bytes uint64) logproto.Shard {
return logproto.Shard{
Bounds: logproto.FPBounds{
- Min: min,
- Max: max,
+ Min: minVal,
+ Max: maxVal,
},
Stats: &stats.Stats{
Streams: streams,
diff --git a/pkg/tool/rules/compare.go b/pkg/tool/rules/compare.go
index 2d64c534e88d1..78e105c347625 100644
--- a/pkg/tool/rules/compare.go
+++ b/pkg/tool/rules/compare.go
@@ -131,9 +131,9 @@ func rulesEqual(a, b *rulefmt.RuleNode) bool {
// CompareNamespaces returns the differences between the two provided
// namespaces
-func CompareNamespaces(original, new RuleNamespace) NamespaceChange {
+func CompareNamespaces(original, newNamespace RuleNamespace) NamespaceChange {
result := NamespaceChange{
- Namespace: new.Namespace,
+ Namespace: newNamespace.Namespace,
State: Unchanged,
GroupsUpdated: []UpdatedRuleGroup{},
GroupsCreated: []rwrulefmt.RuleGroup{},
@@ -145,7 +145,7 @@ func CompareNamespaces(original, new RuleNamespace) NamespaceChange {
origMap[g.Name] = g
}
- for _, newGroup := range new.Groups {
+ for _, newGroup := range newNamespace.Groups {
origGroup, found := origMap[newGroup.Name]
if !found {
result.State = Updated
diff --git a/pkg/util/loser/tree.go b/pkg/util/loser/tree.go
index aa99e02991194..892b4f9dc8b77 100644
--- a/pkg/util/loser/tree.go
+++ b/pkg/util/loser/tree.go
@@ -6,13 +6,13 @@ type Sequence interface {
Next() bool // Advances and returns true if there is a value at this new position.
}
-func New[E any, S Sequence](sequences []S, maxVal E, at func(S) E, less func(E, E) bool, close func(S)) *Tree[E, S] {
+func New[E any, S Sequence](sequences []S, maxVal E, at func(S) E, less func(E, E) bool, closeFunc func(S)) *Tree[E, S] {
nSequences := len(sequences)
t := Tree[E, S]{
maxVal: maxVal,
at: at,
less: less,
- close: close,
+ close: closeFunc,
nodes: make([]node[E, S], nSequences*2),
}
for i, s := range sequences {
| chore | Linting update for new golangci (#16572) |
| 36e819800474d715ec6b71fc3123546fd372f739 | 2021-09-17 13:04:05 | Karen Miller | docs: Organize and edit the LogQL section (#4342) | false |
diff --git a/docs/sources/logql/_index.md b/docs/sources/logql/_index.md
index 4f1659f69df2f..b4bf7fd3802d5 100644
--- a/docs/sources/logql/_index.md
+++ b/docs/sources/logql/_index.md
@@ -2,7 +2,7 @@
title: LogQL
weight: 700
---
-# LogQL: Log Query Language
+# LogQL: Log query language
LogQL is Loki's PromQL-inspired query language.
Queries act as if they are a distributed `grep` to aggregate log sources.
@@ -10,751 +10,13 @@ LogQL uses labels and operators for filtering.
There are two types of LogQL queries:
-- *Log queries* return the contents of log lines.
-- *Metric queries* extend log queries to calculate values
+- [Log queries](log_queries/) return the contents of log lines.
+- [Metric queries](metric_queries/) extend log queries to calculate values
based on query results.
-## Log Queries
+## Binary operators
-All LogQL queries contain a **log stream selector**.
-
-
-
-The log stream selector determines how many log streams (unique sources of log content, such as files) will be searched.
-A more granular log stream selector then reduces the number of searched streams to a manageable volume.
-This means that the labels passed to the log stream selector will affect the relative performance of the query's execution.
-
-Optionally, the log stream selector can be followed by a **log pipeline**. A log pipeline is a set of stage expressions that are chained together and applied to the selected log streams. Each expression can filter out, parse, or mutate log lines and their respective labels.
-
-The following example shows a full log query in action:
-
-```logql
-{container="query-frontend",namespace="loki-dev"} |= "metrics.go" | logfmt | duration > 10s and throughput_mb < 500
-```
-
-The query is composed of:
-
-- a log stream selector `{container="query-frontend",namespace="loki-dev"}` which targets the `query-frontend` container in the `loki-dev` namespace.
-- a log pipeline `|= "metrics.go" | logfmt | duration > 10s and throughput_mb < 500` which will filter out log that contains the word `metrics.go`, then parses each log line to extract more labels and filter with them.
-
-> To avoid escaping special characters you can use the `` ` ``(backtick) instead of `"` when quoting strings.
-For example `` `\w+` `` is the same as `"\\w+"`.
-This is specially useful when writing a regular expression which contains multiple backslashes that require escaping.
-
-### Log Stream Selector
-
-The stream selector determines which log streams to include in a query's results.
-The stream selector is specified by one or more comma-separated key-value pairs. Each key is a log label and each value is that label's value.
-Curly braces (`{` and `}`) delimit the stream selector.
-
-Consider this stream selector:
-
-```logql
-{app="mysql",name="mysql-backup"}
-```
-
-All log streams that have both a label of `app` whose value is `mysql`
-and a label of `name` whose value is `mysql-backup` will be included in
-the query results.
-A stream may contain other pairs of labels and values,
-but only the specified pairs within the stream selector are used to determine
-which streams will be included within the query results.
-
-The same rules that apply for [Prometheus Label Selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#instant-vector-selectors) apply for Loki log stream selectors.
-
-The `=` operator after the label name is a **label matching operator**.
-The following label matching operators are supported:
-
-- `=`: exactly equal
-- `!=`: not equal
-- `=~`: regex matches
-- `!~`: regex does not match
-
-Regex log stream examples:
-
-- `{name =~ "mysql.+"}`
-- `{name !~ "mysql.+"}`
-- `` {name !~ `mysql-\d+`} ``
-
-**Note:** The `=~` regex operator is fully anchored, meaning regex must match against the *entire* string, including newlines. The regex `.` character does not match newlines by default. If you want the regex dot character to match newlines you can use the single-line flag, like so: `(?s)search_term.+` matches `search_term\n`.
-
-### Log Pipeline
-
-A log pipeline can be appended to a log stream selector to further process and filter log streams. It usually is composed of one or multiple expressions, each expressions is executed in sequence for each log line. If an expression filters out a log line, the pipeline will stop at this point and start processing the next line.
-
-Some expressions can mutate the log content and respective labels.
-For example,
-
-```
-| line_format "{{.status_code}}"`)
-```
-
-will be available for further filtering and processing following expressions or metric queries.
-
-A log pipeline can be composed of:
-
-- [Line Filter Expression](#line-filter-expression)
-- [Parser Expression](#parser-expression)
-- [Label Filter Expression](#label-filter-expression)
-- [Line Format Expression](#line-format-expression)
-- [Labels Format Expression](#labels-format-expression)
-- [Unwrap Expression](#unwrapped-range-aggregations). An unwrapped expression is only used within metric queries.
-
-#### Line Filter Expression
-
-The line filter expression does a distributed `grep`
-over the aggregated logs from the matching log streams.
-It searches the contents of the log line,
-discarding those lines that do not match the case sensitive expression.
-
-Each line filter expression has a **filter operator**
-followed by text or a regular expression.
-These filter operators are supported:
-
-- `|=`: Log line contains string
-- `!=`: Log line does not contain string
-- `|~`: Log line contains a match to the regular expression
-- `!~`: Log line does not contain a match to the regular expression
-
-Line filter expression examples:
-
-- Keep log lines that have the substring "error":
-
- ```
- |= "error"
- ```
-
- A complete query using this example:
-
- ```
- {job="mysql"} |= "error"
- ```
-
-- Discard log lines that have the substring "kafka.server:type=ReplicaManager":
-
- ```
- != "kafka.server:type=ReplicaManager"
- ```
-
- A complete query using this example:
-
- ```
- {instance=~"kafka-[23]",name="kafka"} != "kafka.server:type=ReplicaManager"
- ```
-
-- Keep log lines that contain a substring that starts with `tsdb-ops` and ends with `io:2003`. A complete query with a regular expression:
-
- ```
- {name="kafka"} |~ "tsdb-ops.*io:2003"
- ```
-
-- Keep log lines that contain a substring that starts with `error=`,
-and is followed by 1 or more word characters. A complete query with a regular expression:
-
- ```
- {name="cassandra"} |~ `error=\w+`
- ```
-
-Filter operators can be chained.
-Filters are applied sequentially.
-Query results will have satisfied every filter.
-This complete query example will give results that include the string `error`,
-and do not include the string `timeout`.
-
-```logql
-{job="mysql"} |= "error" != "timeout"
-```
-
-When using `|~` and `!~`, Go (as in [Golang](https://golang.org/)) [RE2 syntax](https://github.com/google/re2/wiki/Syntax) regex may be used.
-The matching is case-sensitive by default.
-Switch to case-insensitive matching by prefixing the regular expression
-with `(?i)`.
-
-While line filter expressions could be placed anywhere within a log pipeline,
-it is almost always better to have them at the beginning.
-Placing them at the beginning improves the performance of the query,
-as it only does further processing when a line matches.
-For example,
- while the results will be the same,
-the query specified with
-
-```
-{job="mysql"} |= "error" | json | line_format "{{.err}}"
-```
-
-will always run faster than
-
-```
-{job="mysql"} | json | line_format "{{.message}}" |= "error"
-```
-
-Line filter expressions are the fastest way to filter logs once the
-log stream selectors have been applied.
-
-Line filter expressions have support matching IP addresses. See [Matching IP addresses](ip/) for details.
-
-#### Parser Expression
-
-Parser expression can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](#metric-queries).
-
-Extracted label keys are automatically sanitized by all parsers, to follow Prometheus metric name convention.(They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)
-
-For instance, the pipeline `| json` will produce the following mapping:
-```json
-{ "a.b": {c: "d"}, e: "f" }
-```
-->
-```
-{a_b_c="d", e="f"}
-```
-
-In case of errors, for instance if the line is not in the expected format, the log line won't be filtered but instead will get a new `__error__` label added.
-
-If an extracted label key name already exists in the original log stream, the extracted label key will be suffixed with the `_extracted` keyword to make the distinction between the two labels. You can forcefully override the original label using a [label formatter expression](#labels-format-expression). However if an extracted key appears twice, only the latest label value will be kept.
-
-Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regexp) and [unpack](#unpack) parsers.
-
-It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
-Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers](#multiple-parsers).
-
-##### JSON
-
-The **json** parser operates in two modes:
-
-1. **without** parameters:
-
- Adding `| json` to your pipeline will extract all json properties as labels if the log line is a valid json document.
- Nested properties are flattened into label keys using the `_` separator.
-
- Note: **Arrays are skipped**.
-
- For example the json parsers will extract from the following document:
-
- ```json
- {
- "protocol": "HTTP/2.0",
- "servers": ["129.0.1.1","10.2.1.3"],
- "request": {
- "time": "6.032",
- "method": "GET",
- "host": "foo.grafana.net",
- "size": "55",
- "headers": {
- "Accept": "*/*",
- "User-Agent": "curl/7.68.0"
- }
- },
- "response": {
- "status": 401,
- "size": "228",
- "latency_seconds": "6.031"
- }
- }
- ```
-
- The following list of labels:
-
- ```kv
- "protocol" => "HTTP/2.0"
- "request_time" => "6.032"
- "request_method" => "GET"
- "request_host" => "foo.grafana.net"
- "request_size" => "55"
- "response_status" => "401"
- "response_size" => "228"
- "response_size" => "228"
- ```
-
-2. **with** parameters:
-
- Using `| json label="expression", another="expression"` in your pipeline will extract only the
- specified json fields to labels. You can specify one or more expressions in this way, the same
- as [`label_format`](#labels-format-expression); all expressions must be quoted.
-
- Currently, we only support field access (`my.field`, `my["field"]`) and array access (`list[0]`), and any combination
- of these in any level of nesting (`my.list[0]["field"]`).
-
- For example, `| json first_server="servers[0]", ua="request.headers[\"User-Agent\"]` will extract from the following document:
-
- ```json
- {
- "protocol": "HTTP/2.0",
- "servers": ["129.0.1.1","10.2.1.3"],
- "request": {
- "time": "6.032",
- "method": "GET",
- "host": "foo.grafana.net",
- "size": "55",
- "headers": {
- "Accept": "*/*",
- "User-Agent": "curl/7.68.0"
- }
- },
- "response": {
- "status": 401,
- "size": "228",
- "latency_seconds": "6.031"
- }
- }
- ```
-
- The following list of labels:
-
- ```kv
- "first_server" => "129.0.1.1"
- "ua" => "curl/7.68.0"
- ```
-
- If an array or an object returned by an expression, it will be assigned to the label in json format.
-
- For example, `| json server_list="servers", headers="request.headers` will extract:
-
- ```kv
- "server_list" => `["129.0.1.1","10.2.1.3"]`
- "headers" => `{"Accept": "*/*", "User-Agent": "curl/7.68.0"}`
- ```
-
-##### logfmt
-
-The **logfmt** parser can be added using the `| logfmt` and will extract all keys and values from the [logfmt](https://brandur.org/logfmt) formatted log line.
-
-For example the following log line:
-
-```logfmt
-at=info method=GET path=/ host=grafana.net fwd="124.133.124.161" service=8ms status=200
-```
-
-will get those labels extracted:
-
-```kv
-"at" => "info"
-"method" => "GET"
-"path" => "/"
-"host" => "grafana.net"
-"fwd" => "124.133.124.161"
-"service" => "8ms"
-"status" => "200"
-```
-
-##### Pattern
-
-<span style="background-color:#f3f973;">The pattern parser is a beta feature.</span>
-
-The pattern parser allows the explicit extraction of fields from log lines by defining a pattern expression (`| pattern "<pattern-expression>"`). The expression matches the structure of a log line.
-
-Consider this NGINX log line.
-
-```log
-0.191.12.2 - - [10/Jun/2021:09:14:29 +0000] "GET /api/plugins/versioncheck HTTP/1.1" 200 2 "-" "Go-http-client/2.0" "13.76.247.102, 34.120.177.193" "TLSv1.2" "US" ""
-```
-
-This log line can be parsed with the expression
-
-`<ip> - - <_> "<method> <uri> <_>" <status> <size> <_> "<agent>" <_>`
-
-to extract these fields:
-
-```kv
-"ip" => "0.191.12.2"
-"method" => "GET"
-"uri" => "/api/plugins/versioncheck"
-"status" => "200"
-"size" => "2"
-"agent" => "Go-http-client/2.0"
-```
-
-A pattern expression is composed of captures and literals.
-
-A capture is a field name delimited by the `<` and `>` characters. `<example>` defines the field name `example`.
-An unnamed capture appears as `<_>`. The unnamed capture skips matched content.
-
-Captures are matched from the line beginning or the previous set of literals, to the line end or the next set of literals.
-If a capture is not matched, the pattern parser will stop.
-
-Literals can be any sequence of UTF-8 characters, including whitespace characters.
-
-By default, a pattern expression is anchored at the start of the log line. If the expression start with literals, then the log line must also start with the same set of literals. Use `<_>` at the beginning of the expression to anchor the expression at the start.
-
-Consider the log line
-
-```log
-level=debug ts=2021-06-10T09:24:13.472094048Z caller=logging.go:66 traceID=0568b66ad2d9294c msg="POST /loki/api/v1/push (204) 16.652862ms"
-```
-
-To match `msg="`, use the expression:
-
-```pattern
-<_> msg="<method> <path> (<status>) <latency>"
-```
-
-A pattern expression is invalid if
-
-- It does not contain any named capture.
-- It contains two consecutive captures not separated by whitespace characters.
-
-##### regexp
-
-Unlike the logfmt and json, which extract implicitly all values and takes no parameters, the **regexp** parser takes a single parameter `| regexp "<re>"` which is the regular expression using the [Golang](https://golang.org/) [RE2 syntax](https://github.com/google/re2/wiki/Syntax).
-
-The regular expression must contain a least one named sub-match (e.g `(?P<name>re)`), each sub-match will extract a different label.
-
-For example the parser `| regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"` will extract from the following line:
-
-```log
-POST /api/prom/api/v1/query_range (200) 1.5s
-```
-
-those labels:
-
-```kv
-"method" => "POST"
-"path" => "/api/prom/api/v1/query_range"
-"status" => "200"
-"duration" => "1.5s"
-```
-
-##### unpack
-
-The `unpack` parser parses a JSON log line, unpacking all embedded labels in the [`pack`](../clients/promtail/stages/pack/) stage.
-**A special property `_entry` will also be used to replace the original log line**.
-
-For example, using `| unpack` with the log line:
-
-```json
-{
- "container": "myapp",
- "pod": "pod-3223f",
- "_entry": "original log message"
-}
-```
-
-extracts the `container` and `pod` labels; it sets `original log message` as the new log line.
-
-You can combine the `unpack` and `json` parsers (or any other parsers) if the original embedded log line is of a specific format.
-
-#### Label Filter Expression
-
-Label filter expression allows filtering log line using their original and extracted labels. It can contain multiple predicates.
-
-A predicate contains a **label identifier**, an **operation** and a **value** to compare the label with.
-
-For example with `cluster="namespace"` the cluster is the label identifier, the operation is `=` and the value is "namespace". The label identifier is always on the right side of the operation.
-
-We support multiple **value** types which are automatically inferred from the query input.
-
-- **String** is double quoted or backticked such as `"200"` or \``us-central1`\`.
-- **[Duration](https://golang.org/pkg/time/#ParseDuration)** is a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-- **Number** are floating-point number (64bits), such as`250`, `89.923`.
-- **Bytes** is a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "42MB", "1.5Kib" or "20b". Valid bytes units are "b", "kib", "kb", "mib", "mb", "gib", "gb", "tib", "tb", "pib", "pb", "eib", "eb".
-
-String type work exactly like Prometheus label matchers use in [log stream selector](#log-stream-selector). This means you can use the same operations (`=`,`!=`,`=~`,`!~`).
-
-> The string type is the only one that can filter out a log line with a label `__error__`.
-
-Using Duration, Number and Bytes will convert the label value prior to comparision and support the following comparators:
-
-- `==` or `=` for equality.
-- `!=` for inequality.
-- `>` and `>=` for greater than and greater than or equal.
-- `<` and `<=` for lesser than and lesser than or equal.
-
-For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`
-
-If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filters those errors see the [pipeline errors](#pipeline-errors) section.
-
-You can chain multiple predicates using `and` and `or` which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space or another pipe. Label filters can be place anywhere in a log pipeline.
-
-This means that all the following expressions are equivalent:
-
-```logql
-| duration >= 20ms or size == 20kb and method!~"2.."
-| duration >= 20ms or size == 20kb | method!~"2.."
-| duration >= 20ms or size == 20kb , method!~"2.."
-| duration >= 20ms or size == 20kb method!~"2.."
-
-```
-
-By default the precedence of multiple predicates is right to left. You can wrap predicates with parenthesis to force a different precedence left to right.
-
-For example the following are equivalent.
-
-```logql
-| duration >= 20ms or method="GET" and size <= 20KB
-| ((duration >= 20ms or method="GET") and size <= 20KB)
-```
-
-It will evaluate first `duration >= 20ms or method="GET"`. To evaluate first `method="GET" and size <= 20KB`, make sure to use proper parenthesis as shown below.
-
-```logql
-| duration >= 20ms or (method="GET" and size <= 20KB)
-```
-
-> Label filter expressions are the only expression allowed after the [unwrap expression](#unwrapped-range-aggregations). This is mainly to allow filtering errors from the metric extraction (see [errors](#pipeline-errors)).
-
-Label filter expressions have support matching IP addresses. See [Matching IP addresses](ip/) for details.
-
-#### Line Format Expression
-
-The line format expression can rewrite the log line content by using the [text/template](https://golang.org/pkg/text/template/) format.
-It takes a single string parameter `| line_format "{{.label_name}}"`, which is the template format. All labels are injected variables into the template and are available to use with the `{{.label_name}}` notation.
-
-For example the following expression:
-
-```logql
-{container="frontend"} | logfmt | line_format "{{.query}} {{.duration}}"
-```
-
-Will extract and rewrite the log line to only contains the query and the duration of a request.
-
-You can use double quoted string for the template or backticks `` `{{.label_name}}` `` to avoid the need to escape special characters.
-
-`line_format` also supports `math` functions. Example:
-
-If we have the following labels `ip=1.1.1.1`, `status=200` and `duration=3000`(ms), we can divide the duration by `1000` to get the value in seconds.
-
-```logql
-{container="frontend"} | logfmt | line_format "{{.ip}} {{.status}} {{div .duration 1000}}"
-```
-
-The above query will give us the `line` as `1.1.1.1 200 3`
-
-See [template functions](template_functions/) to learn about available functions in the template format.
-
-#### Labels Format Expression
-
-The `| label_format` expression can rename, modify or add labels. It takes as parameter a comma separated list of equality operations, enabling multiple operations at once.
-
-When both side are label identifiers, for example `dst=src`, the operation will rename the `src` label into `dst`.
-
-The left side can alternatively be a template string (double quoted or backtick), for example `dst="{{.status}} {{.query}}"`, in which case the `dst` label value is replaced by the result of the [text/template](https://golang.org/pkg/text/template/) evaluation. This is the same template engine as the `| line_format` expression, which means labels are available as variables and you can use the same list of [functions](functions/).
-
-In both cases, if the destination label doesn't exist, then a new one is created.
-
-The renaming form `dst=src` will _drop_ the `src` label after remapping it to the `dst` label. However, the _template_ form will preserve the referenced labels, such that `dst="{{.src}}"` results in both `dst` and `src` having the same value.
-
-> A single label name can only appear once per expression. This means `| label_format foo=bar,foo="new"` is not allowed but you can use two expressions for the desired effect: `| label_format foo=bar | label_format foo="new"`
-
-### Log Queries Examples
-
-#### Multiple filtering
-
-Filtering should be done first using label matchers, then line filters (when possible) and finally using label filters. The following query demonstrate this.
-
-```logql
-{cluster="ops-tools1", namespace="loki-dev", job="loki-dev/query-frontend"} |= "metrics.go" !="out of order" | logfmt | duration > 30s or status_code!="200"
-```
-
-#### Multiple parsers
-
-To extract the method and the path of the following logfmt log line:
-
-```log
-level=debug ts=2020-10-02T10:10:42.092268913Z caller=logging.go:66 traceID=a9d4d8a928d8db1 msg="POST /api/prom/api/v1/query_range (200) 1.5s"
-```
-
-You can use multiple parsers (logfmt and regexp) like this.
-
-```logql
-{job="cortex-ops/query-frontend"} | logfmt | line_format "{{.msg}}" | regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"
-```
-
-This is possible because the `| line_format` reformats the log line to become `POST /api/prom/api/v1/query_range (200) 1.5s` which can then be parsed with the `| regexp ...` parser.
-
-#### Formatting
-
-The following query shows how you can reformat a log line to make it easier to read on screen.
-
-```logql
-{cluster="ops-tools1", name="querier", namespace="loki-dev"}
- |= "metrics.go" != "loki-canary"
- | logfmt
- | query != ""
- | label_format query="{{ Replace .query \"\\n\" \"\" -1 }}"
- | line_format "{{ .ts}}\t{{.duration}}\ttraceID = {{.traceID}}\t{{ printf \"%-100.100s\" .query }} "
-```
-
-Label formatting is used to sanitize the query while the line format reduce the amount of information and creates a tabular output.
-
-For those given log line:
-
-```log
-level=info ts=2020-10-23T20:32:18.094668233Z caller=metrics.go:81 org_id=29 traceID=1980d41501b57b68 latency=fast query="{cluster=\"ops-tools1\", job=\"cortex-ops/query-frontend\"} |= \"query_range\"" query_type=filter range_type=range length=15m0s step=7s duration=650.22401ms status=200 throughput_mb=1.529717 total_bytes_mb=0.994659
-level=info ts=2020-10-23T20:32:18.068866235Z caller=metrics.go:81 org_id=29 traceID=1980d41501b57b68 latency=fast query="{cluster=\"ops-tools1\", job=\"cortex-ops/query-frontend\"} |= \"query_range\"" query_type=filter range_type=range length=15m0s step=7s duration=624.008132ms status=200 throughput_mb=0.693449 total_bytes_mb=0.432718
-```
-
-The result would be:
-
-```log
-2020-10-23T20:32:18.094668233Z 650.22401ms traceID = 1980d41501b57b68 {cluster="ops-tools1", job="cortex-ops/query-frontend"} |= "query_range"
-2020-10-23T20:32:18.068866235Z 624.008132ms traceID = 1980d41501b57b68 {cluster="ops-tools1", job="cortex-ops/query-frontend"} |= "query_range"
-```
-
-## Metric Queries
-
-LogQL supports applying a function to log query results.
-This powerful feature creates metrics from logs.
-
-Metric queries can be used to calculate things such as the rate of error messages, or the top N log sources with the most amount of logs over the last 3 hours.
-
-Combined with log parsers, metrics queries can also be used to calculate metrics from a sample value within the log line, such as latency or request size.
-All labels, including extracted ones, will be available for aggregations and generation of new series.
-
-### Range Vector aggregation
-
-LogQL shares the [range vector](https://prometheus.io/docs/prometheus/latest/querying/basics/#range-vector-selectors) concept of Prometheus.
-In Loki, the selected range of samples is a range of selected log or label values.
-
-The aggregation is applied over a time duration.
-Loki defines [Time Durations](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations) with the same syntax as Prometheus.
-
-Loki supports two types of range vector aggregations: log range aggregations and unwrapped range aggregations.
-
-#### Log Range Aggregations
-
-A log range aggregation is a query followed by a duration.
-A function is applied to aggregate the query over the duration.
-The duration can be placed
-after the log stream selector or at end of the log pipeline.
-
-The functions:
-
-- `rate(log-range)`: calculates the number of entries per second
-- `count_over_time(log-range)`: counts the entries for each log stream within the given range.
-- `bytes_rate(log-range)`: calculates the number of bytes per second for each stream.
-- `bytes_over_time(log-range)`: counts the amount of bytes used by each log stream for a given range.
-- `absent_over_time(log-range)`: returns an empty vector if the range vector passed to it has any elements and a 1-element vector with the value 1 if the range vector passed to it has no elements. (`absent_over_time` is useful for alerting on when no time series and logs stream exist for label combination for a certain amount of time.)
-
-Examples:
-
-- Count all the log lines within the last five minutes for the MySQL job.
-
- ```logql
- count_over_time({job="mysql"}[5m])
- ```
-
-- This aggregation includes filters and parsers.
- It returns the per-second rate of all non-timeout errors within the last minutes per host for the MySQL job and only includes errors whose duration is above ten seconds.
-
- ```logql
- sum by (host) (rate({job="mysql"} |= "error" != "timeout" | json | duration > 10s [1m]))
- ```
-
-#### Unwrapped Range Aggregations
-
-Unwrapped ranges uses extracted labels as sample values instead of log lines. However to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](#pipeline-errors).
-
-The unwrap expression is noted `| unwrap label_identifier` where the label identifier is the label name to use for extracting sample values.
-
-Since label values are string, by default a conversion into a float (64bits) will be attempted, in case of failure the `__error__` label is added to the sample.
-Optionally the label identifier can be wrapped by a conversion function `| unwrap <function>(label_identifier)`, which will attempt to convert the label value from a specific format.
-
-We currently support the functions:
-- `duration_seconds(label_identifier)` (or its short equivalent `duration`) which will convert the label value in seconds from the [go duration format](https://golang.org/pkg/time/#ParseDuration) (e.g `5m`, `24s30ms`).
-- `bytes(label_identifier)` which will convert the label value to raw bytes applying the bytes unit (e.g. `5 MiB`, `3k`, `1G`).
-
-Supported function for operating over unwrapped ranges are:
-
-- `rate(unwrapped-range)`: calculates per second rate of all values in the specified interval.
-- `sum_over_time(unwrapped-range)`: the sum of all values in the specified interval.
-- `avg_over_time(unwrapped-range)`: the average value of all points in the specified interval.
-- `max_over_time(unwrapped-range)`: the maximum value of all points in the specified interval.
-- `min_over_time(unwrapped-range)`: the minimum value of all points in the specified interval
-- `first_over_time(unwrapped-range)`: the first value of all points in the specified interval
-- `last_over_time(unwrapped-range)`: the last value of all points in the specified interval
-- `stdvar_over_time(unwrapped-range)`: the population standard variance of the values in the specified interval.
-- `stddev_over_time(unwrapped-range)`: the population standard deviation of the values in the specified interval.
-- `quantile_over_time(scalar,unwrapped-range)`: the φ-quantile (0 ≤ φ ≤ 1) of the values in the specified interval.
-- `absent_over_time(unwrapped-range)`: returns an empty vector if the range vector passed to it has any elements and a 1-element vector with the value 1 if the range vector passed to it has no elements. (`absent_over_time` is useful for alerting on when no time series and logs stream exist for label combination for a certain amount of time.)
-
-Except for `sum_over_time`,`absent_over_time` and `rate`, unwrapped range aggregations support grouping.
-
-```logql
-<aggr-op>([parameter,] <unwrapped-range>) [without|by (<label list>)]
-```
-
-Which can be used to aggregate over distinct labels dimensions by including a `without` or `by` clause.
-
-`without` removes the listed labels from the result vector, while all other labels are preserved the output. `by` does the opposite and drops labels that are not listed in the `by` clause, even if their label values are identical between all elements of the vector.
-
-#### Unwrapped Examples
-
-```logql
-quantile_over_time(0.99,
- {cluster="ops-tools1",container="ingress-nginx"}
- | json
- | __error__ = ""
- | unwrap request_time [1m])) by (path)
-```
-
-This example calculates the p99 of the nginx-ingress latency by path.
-
-```logql
-sum by (org_id) (
- sum_over_time(
- {cluster="ops-tools1",container="loki-dev"}
- |= "metrics.go"
- | logfmt
- | unwrap bytes_processed [1m])
- )
-```
-
-This calculates the amount of bytes processed per organization id.
-
-### Aggregation operators
-
-Like [PromQL](https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators), LogQL supports a subset of built-in aggregation operators that can be used to aggregate the element of a single vector, resulting in a new vector of fewer elements but with aggregated values:
-
-- `sum`: Calculate sum over labels
-- `min`: Select minimum over labels
-- `max`: Select maximum over labels
-- `avg`: Calculate the average over labels
-- `stddev`: Calculate the population standard deviation over labels
-- `stdvar`: Calculate the population standard variance over labels
-- `count`: Count number of elements in the vector
-- `bottomk`: Select smallest k elements by sample value
-- `topk`: Select largest k elements by sample value
-
-The aggregation operators can either be used to aggregate over all label values or a set of distinct label values by including a `without` or a `by` clause:
-
-```logql
-<aggr-op>([parameter,] <vector expression>) [without|by (<label list>)]
-```
-
-`parameter` is only required when using `topk` and `bottomk`.
-`topk` and `bottomk` are different from other aggregators in that a subset of the input samples, including the original labels, are returned in the result vector.
-
-`by` and `without` are only used to group the input vector.
-The `without` clause removes the listed labels from the resulting vector, keeping all others.
-The `by` clause does the opposite, dropping labels that are not listed in the clause, even if their label values are identical between all elements of the vector.
-
-#### Vector Aggregations Examples
-
-Get the top 10 applications by the highest log throughput:
-
-```logql
-topk(10,sum(rate({region="us-east1"}[5m])) by (name))
-```
-
-Get the count of logs for the last five minutes, grouping
-by level:
-
-```logql
-sum(count_over_time({job="mysql"}[5m])) by (level)
-```
-
-Get the rate of HTTP GET of /home requests from NGINX logs by region:
-
-```logql
-avg(rate(({job="nginx"} |= "GET" | json | path="/home")[10s])) by (region)
-```
-
-### Functions
-
-Loki supports several functions to operate on data. These are described in detail in the expression language [functions](functions/) page.
-
-### Binary Operators
-
-#### Arithmetic Binary Operators
+### Arithmetic operators
The following binary arithmetic operators exist in Loki:
@@ -777,7 +39,7 @@ The result is propagated into the result vector with the grouping labels becomin
Pay special attention to [operator order](#operator-order) when chaining arithmetic operators.
-##### Arithmetic Examples
+#### Arithmetic examples
Implement a health check with a simple query:
@@ -797,7 +59,7 @@ Get proportion of warning logs to error logs for the `foo` app
sum(rate({app="foo", level="warn"}[1m])) / sum(rate({app="foo", level="error"}[1m]))
```
-#### Logical/set binary operators
+### Logical and set operators
These logical/set binary operators are only defined between two vectors:
@@ -813,7 +75,7 @@ Other elements are dropped.
`vector1 unless vector2` results in a vector consisting of the elements of vector1 for which there are no elements in vector2 with exactly matching label sets.
All matching elements in both vectors are dropped.
-##### Binary Operators Examples
+#### Binary operators examples
This contrived query will return the intersection of these queries, effectively `rate({app="bar"})`:
@@ -821,7 +83,7 @@ This contrived query will return the intersection of these queries, effectively
rate({app=~"foo|bar"}[1m]) and rate({app="bar"}[1m])
```
-#### Comparison operators
+### Comparison operators
- `==` (equality)
- `!=` (inequality)
@@ -870,7 +132,7 @@ Same as above, but vectors have their values set to `1` if they pass the compari
sum without(app) (count_over_time({app="foo"}[1m])) > bool sum without(app) (count_over_time({app="bar"}[1m]))
```
-#### Operator order
+### Order of operations
When chaining or combining operators, you have to consider operator precedence:
Generally, you can assume regular [mathematical convention](https://en.wikipedia.org/wiki/Order_of_operations) with operators on the same precedence level being left-associative.
@@ -881,7 +143,7 @@ More details can be found in the [Golang language documentation](https://golang.
`2 * 3 % 2` is evaluated as `(2 * 3) % 2`.
-### Comments
+## Comments
LogQL queries can be commented using the `#` character:
@@ -898,7 +160,7 @@ With multi-line LogQL queries, the query parser can exclude whole or partial lin
| bar="baz" # this checks if bar = "baz"
```
-### Pipeline Errors
+## Pipeline Errors
There are multiple reasons which cause pipeline processing errors, such as:
@@ -932,3 +194,33 @@ quantile_over_time(
```
>Metric queries cannot contains errors, in case errors are found during execution, Loki will return an error and appropriate status code.
+
+## Functions
+
+Loki supports functions to operate on data.
+
+### label_replace()
+
+For each timeseries in `v`,
+
+```
+label_replace(v instant-vector,
+ dst_label string,
+ replacement string,
+ src_label string,
+ regex string)
+```
+matches the regular expression `regex` against the label `src_label`.
+If it matches, then the timeseries is returned with the label `dst_label` replaced by the expansion of `replacement`.
+
+`$1` is replaced with the first matching subgroup,
+`$2` with the second etc.
+If the regular expression doesn't match,
+then the timeseries is returned unchanged.
+
+This example will return a vector with each time series having a `foo` label with the value `a` added to it:
+
+```logql
+label_replace(rate({job="api-server",service="a:c"} |= "err" [1m]), "foo", "$1",
+ "service", "(.*):.*")
+```
diff --git a/docs/sources/logql/functions.md b/docs/sources/logql/functions.md
deleted file mode 100644
index 0cbfe1dfe69e9..0000000000000
--- a/docs/sources/logql/functions.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Functions
-weight: 10
----
-
-# Functions
-
-## label_replace()
-
-For each timeseries in `v`, `label_replace(v instant-vector, dst_label string, replacement string, src_label string, regex string)` matches the regular expression `regex` against the label `src_label`. If it matches, then the timeseries is returned with the label `dst_label` replaced by the expansion of `replacement`. `$1` is replaced with the first matching subgroup, `$2` with the second etc. If the regular expression doesn't match then the timeseries is returned unchanged.
-
-This example will return a vector with each time series having a `foo` label with the value `a` added to it:
-
-```logql
-label_replace(rate({job="api-server",service="a:c"} |= "err" [1m]), "foo", "$1", "service", "(.*):.*")
-```
diff --git a/docs/sources/logql/ip.md b/docs/sources/logql/ip.md
index 3023bdeb87a1c..905e7f8d831be 100644
--- a/docs/sources/logql/ip.md
+++ b/docs/sources/logql/ip.md
@@ -1,6 +1,6 @@
---
title: Matching IP addresses
-weight: 30
+weight: 40
---
# Matching IP addresses
diff --git a/docs/sources/logql/log_queries.md b/docs/sources/logql/log_queries.md
new file mode 100644
index 0000000000000..02ca648f2bed4
--- /dev/null
+++ b/docs/sources/logql/log_queries.md
@@ -0,0 +1,581 @@
+---
+title: Log queries
+weight: 10
+---
+# Log queries
+
+All LogQL queries contain a **log stream selector**.
+
+Optionally, the log stream selector can be followed by a **log pipeline**. A log pipeline is a set of stage expressions that are chained together and applied to the selected log streams. Each expression can filter out, parse, or mutate log lines and their respective labels.
+
+The following example shows a full log query in action:
+
+```logql
+{container="query-frontend",namespace="loki-dev"} |= "metrics.go" | logfmt | duration > 10s and throughput_mb < 500
+```
+
+The query is composed of:
+
+- a log stream selector `{container="query-frontend",namespace="loki-dev"}` which targets the `query-frontend` container in the `loki-dev` namespace.
+- a log pipeline `|= "metrics.go" | logfmt | duration > 10s and throughput_mb < 500` which keeps only log lines that contain the word `metrics.go`, then parses each line to extract more labels and filters with them.
+
+> To avoid escaping special characters you can use the `` ` ``(backtick) instead of `"` when quoting strings.
+For example `` `\w+` `` is the same as `"\\w+"`.
+This is especially useful when writing a regular expression which contains multiple backslashes that require escaping.
+
+## Log stream selector
+
+The stream selector determines which log streams to include in a query's results.
+A log stream is a unique source of log content, such as a file.
+A more granular log stream selector then reduces the number of searched streams to a manageable volume.
+This means that the labels passed to the log stream selector will affect the relative performance of the query's execution.
+
+The log stream selector is specified by one or more comma-separated key-value pairs. Each key is a log label and each value is that label's value.
+Curly braces (`{` and `}`) delimit the stream selector.
+
+Consider this stream selector:
+
+```logql
+{app="mysql",name="mysql-backup"}
+```
+
+All log streams that have both a label of `app` whose value is `mysql`
+and a label of `name` whose value is `mysql-backup` will be included in
+the query results.
+A stream may contain other pairs of labels and values,
+but only the specified pairs within the stream selector are used to determine
+which streams will be included within the query results.
+
+The same rules that apply for [Prometheus Label Selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#instant-vector-selectors) apply for Loki log stream selectors.
+
+The `=` operator after the label name is a **label matching operator**.
+The following label matching operators are supported:
+
+- `=`: exactly equal
+- `!=`: not equal
+- `=~`: regex matches
+- `!~`: regex does not match
+
+Regex log stream examples:
+
+- `{name =~ "mysql.+"}`
+- `{name !~ "mysql.+"}`
+- `` {name !~ `mysql-\d+`} ``
+
+**Note:** The `=~` regex operator is fully anchored, meaning regex must match against the *entire* string, including newlines. The regex `.` character does not match newlines by default. If you want the regex dot character to match newlines you can use the single-line flag, like so: `(?s)search_term.+` matches `search_term\n`.
+
+## Log pipeline
+
+A log pipeline can be appended to a log stream selector to further process and filter log streams. It is composed of a set of expressions. Each expression is executed in left to right sequence for each log line. If an expression filters out a log line, the pipeline will stop processing the current log line and start processing the next log line.
+
+Some expressions can mutate the log content and respective labels,
+which will then be available for further filtering and processing in subsequent expressions.
+An example that mutates is the expression
+
+```
+| line_format "{{.status_code}}"
+```
+
+
+Log pipeline expressions fall into one of three categories:
+
+- Filtering expressions: [line filter expressions](#line-filter-expression)
+and
+[label filter expressions](#label-filter-expression)
+- [Parsing expressions](#parser-expression)
+- Formatting expressions: [line format expressions](#line-format-expression)
+and
+[label format expressions](#labels-format-expression)
+
+### Line filter expression
+
+The line filter expression does a distributed `grep`
+over the aggregated logs from the matching log streams.
+It searches the contents of the log line,
+discarding those lines that do not match the case-sensitive expression.
+
+Each line filter expression has a **filter operator**
+followed by text or a regular expression.
+These filter operators are supported:
+
+- `|=`: Log line contains string
+- `!=`: Log line does not contain string
+- `|~`: Log line contains a match to the regular expression
+- `!~`: Log line does not contain a match to the regular expression
+
+Line filter expression examples:
+
+- Keep log lines that have the substring "error":
+
+ ```
+ |= "error"
+ ```
+
+ A complete query using this example:
+
+ ```
+ {job="mysql"} |= "error"
+ ```
+
+- Discard log lines that have the substring "kafka.server:type=ReplicaManager":
+
+ ```
+ != "kafka.server:type=ReplicaManager"
+ ```
+
+ A complete query using this example:
+
+ ```
+ {instance=~"kafka-[23]",name="kafka"} != "kafka.server:type=ReplicaManager"
+ ```
+
+- Keep log lines that contain a substring that starts with `tsdb-ops` and ends with `io:2003`. A complete query with a regular expression:
+
+ ```
+ {name="kafka"} |~ "tsdb-ops.*io:2003"
+ ```
+
+- Keep log lines that contain a substring that starts with `error=`,
+and is followed by 1 or more word characters. A complete query with a regular expression:
+
+ ```
+ {name="cassandra"} |~ `error=\w+`
+ ```
+
+Filter operators can be chained.
+Filters are applied sequentially.
+Query results will have satisfied every filter.
+This complete query example will give results that include the string `error`,
+and do not include the string `timeout`.
+
+```logql
+{job="mysql"} |= "error" != "timeout"
+```
+
+When using `|~` and `!~`, Go (as in [Golang](https://golang.org/)) [RE2 syntax](https://github.com/google/re2/wiki/Syntax) regex may be used.
+The matching is case-sensitive by default.
+Switch to case-insensitive matching by prefixing the regular expression
+with `(?i)`.
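+
+For example, assuming a stream labeled `job="mysql"` exists, this query keeps lines containing `error` in any letter case, such as `error`, `Error`, or `ERROR`:
+
+```logql
+{job="mysql"} |~ "(?i)error"
+```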
+
+While line filter expressions could be placed anywhere within a log pipeline,
+it is almost always better to have them at the beginning.
+Placing them at the beginning improves the performance of the query,
+as it only does further processing when a line matches.
+For example,
+ while the results will be the same,
+the query specified with
+
+```
+{job="mysql"} |= "error" | json | line_format "{{.err}}"
+```
+
+will always run faster than
+
+```
+{job="mysql"} | json | line_format "{{.message}}" |= "error"
+```
+
+Line filter expressions are the fastest way to filter logs once the
+log stream selectors have been applied.
+
+Line filter expressions support matching IP addresses. See [Matching IP addresses](ip/) for details.
+
+### Label filter expression
+
+Label filter expression allows filtering log lines using their original and extracted labels. It can contain multiple predicates.
+
+A predicate contains a **label identifier**, an **operation** and a **value** to compare the label with.
+
+For example with `cluster="namespace"` the cluster is the label identifier, the operation is `=` and the value is "namespace". The label identifier is always on the left side of the operation.
+
+We support multiple **value** types which are automatically inferred from the query input.
+
+- **String** is double quoted or backticked such as `"200"` or \``us-central1`\`.
+- **[Duration](https://golang.org/pkg/time/#ParseDuration)** is a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+- **Number** is a floating-point number (64 bits), such as `250` or `89.923`.
+- **Bytes** is a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "42MB", "1.5Kib" or "20b". Valid bytes units are "b", "kib", "kb", "mib", "mb", "gib", "gb", "tib", "tb", "pib", "pb", "eib", "eb".
+
+The string type works exactly like Prometheus label matchers used in the [log stream selector](#log-stream-selector). This means you can use the same operations (`=`,`!=`,`=~`,`!~`).
+
+> The string type is the only one that can filter out a log line with a label `__error__`.
+
+Using Duration, Number and Bytes will convert the label value prior to comparison and supports the following comparators:
+
+- `==` or `=` for equality.
+- `!=` for inequality.
+- `>` and `>=` for greater than and greater than or equal.
+- `<` and `<=` for less than and less than or equal.
+
+For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`
+
+If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors](#pipeline-errors) section.
+
+You can chain multiple predicates using `and` and `or` which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space or another pipe. Label filters can be placed anywhere in a log pipeline.
+
+This means that all the following expressions are equivalent:
+
+```logql
+| duration >= 20ms or size == 20kb and method!~"2.."
+| duration >= 20ms or size == 20kb | method!~"2.."
+| duration >= 20ms or size == 20kb , method!~"2.."
+| duration >= 20ms or size == 20kb method!~"2.."
+
+```
+
+By default the precedence of multiple predicates is right to left. You can wrap predicates with parentheses to force a different precedence left to right.
+
+For example the following are equivalent.
+
+```logql
+| duration >= 20ms or method="GET" and size <= 20KB
+| ((duration >= 20ms or method="GET") and size <= 20KB)
+```
+
+It will evaluate first `duration >= 20ms or method="GET"`. To evaluate first `method="GET" and size <= 20KB`, make sure to use proper parentheses as shown below.
+
+```logql
+| duration >= 20ms or (method="GET" and size <= 20KB)
+```
+
+> Label filter expressions are the only expression allowed after the [unwrap expression](#unwrapped-range-aggregations). This is mainly to allow filtering errors from the metric extraction (see [errors](#pipeline-errors)).
+
+Label filter expressions support matching IP addresses. See [Matching IP addresses](ip/) for details.
+
+### Parser expression
+
+Parser expression can parse and extract labels from the log content. Those extracted labels can then be used for filtering using [label filter expressions](#label-filter-expression) or for [metric aggregations](#metric-queries).
+
+Extracted label keys are automatically sanitized by all parsers to follow the Prometheus metric name convention. (They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)
+
+For instance, the pipeline `| json` will produce the following mapping:
+```json
+{ "a.b": {c: "d"}, e: "f" }
+```
+->
+```
+{a_b_c="d", e="f"}
+```
+
+In case of errors, for instance if the line is not in the expected format, the log line won't be filtered but instead will get a new `__error__` label added.
+
+If an extracted label key name already exists in the original log stream, the extracted label key will be suffixed with the `_extracted` keyword to make the distinction between the two labels. You can forcefully override the original label using a [label formatter expression](#labels-format-expression). However if an extracted key appears twice, only the latest label value will be kept.
+
+Loki supports [JSON](#json), [logfmt](#logfmt), [pattern](#pattern), [regexp](#regexp) and [unpack](#unpack) parsers.
+
+It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
+Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in [Multiple parsers](#multiple-parsers).
+
+#### JSON
+
+The **json** parser operates in two modes:
+
+1. **without** parameters:
+
+ Adding `| json` to your pipeline will extract all json properties as labels if the log line is a valid json document.
+ Nested properties are flattened into label keys using the `_` separator.
+
+ Note: **Arrays are skipped**.
+
+   For example the json parser will extract from the following document:
+
+ ```json
+ {
+ "protocol": "HTTP/2.0",
+ "servers": ["129.0.1.1","10.2.1.3"],
+ "request": {
+ "time": "6.032",
+ "method": "GET",
+ "host": "foo.grafana.net",
+ "size": "55",
+ "headers": {
+ "Accept": "*/*",
+ "User-Agent": "curl/7.68.0"
+ }
+ },
+ "response": {
+ "status": 401,
+ "size": "228",
+ "latency_seconds": "6.031"
+ }
+ }
+ ```
+
+ The following list of labels:
+
+ ```kv
+ "protocol" => "HTTP/2.0"
+ "request_time" => "6.032"
+ "request_method" => "GET"
+ "request_host" => "foo.grafana.net"
+ "request_size" => "55"
+ "response_status" => "401"
+ "response_size" => "228"
+   "response_latency_seconds" => "6.031"
+ ```
+
+2. **with** parameters:
+
+ Using `| json label="expression", another="expression"` in your pipeline will extract only the
+ specified json fields to labels. You can specify one or more expressions in this way, the same
+ as [`label_format`](#labels-format-expression); all expressions must be quoted.
+
+ Currently, we only support field access (`my.field`, `my["field"]`) and array access (`list[0]`), and any combination
+ of these in any level of nesting (`my.list[0]["field"]`).
+
+   For example, `| json first_server="servers[0]", ua="request.headers[\"User-Agent\"]"` will extract from the following document:
+
+ ```json
+ {
+ "protocol": "HTTP/2.0",
+ "servers": ["129.0.1.1","10.2.1.3"],
+ "request": {
+ "time": "6.032",
+ "method": "GET",
+ "host": "foo.grafana.net",
+ "size": "55",
+ "headers": {
+ "Accept": "*/*",
+ "User-Agent": "curl/7.68.0"
+ }
+ },
+ "response": {
+ "status": 401,
+ "size": "228",
+ "latency_seconds": "6.031"
+ }
+ }
+ ```
+
+ The following list of labels:
+
+ ```kv
+ "first_server" => "129.0.1.1"
+ "ua" => "curl/7.68.0"
+ ```
+
+   If an expression returns an array or an object, it will be assigned to the label in JSON format.
+
+   For example, `| json server_list="servers", headers="request.headers"` will extract:
+
+ ```kv
+ "server_list" => `["129.0.1.1","10.2.1.3"]`
+ "headers" => `{"Accept": "*/*", "User-Agent": "curl/7.68.0"}`
+ ```
+
+#### logfmt
+
+The **logfmt** parser can be added using the `| logfmt` and will extract all keys and values from the [logfmt](https://brandur.org/logfmt) formatted log line.
+
+For example the following log line:
+
+```logfmt
+at=info method=GET path=/ host=grafana.net fwd="124.133.124.161" service=8ms status=200
+```
+
+will get those labels extracted:
+
+```kv
+"at" => "info"
+"method" => "GET"
+"path" => "/"
+"host" => "grafana.net"
+"fwd" => "124.133.124.161"
+"service" => "8ms"
+"status" => "200"
+```
+
+#### Pattern
+
+<span style="background-color:#f3f973;">The pattern parser is a beta feature.</span>
+
+The pattern parser allows the explicit extraction of fields from log lines by defining a pattern expression (`| pattern "<pattern-expression>"`). The expression matches the structure of a log line.
+
+Consider this NGINX log line.
+
+```log
+0.191.12.2 - - [10/Jun/2021:09:14:29 +0000] "GET /api/plugins/versioncheck HTTP/1.1" 200 2 "-" "Go-http-client/2.0" "13.76.247.102, 34.120.177.193" "TLSv1.2" "US" ""
+```
+
+This log line can be parsed with the expression
+
+`<ip> - - <_> "<method> <uri> <_>" <status> <size> <_> "<agent>" <_>`
+
+to extract these fields:
+
+```kv
+"ip" => "0.191.12.2"
+"method" => "GET"
+"uri" => "/api/plugins/versioncheck"
+"status" => "200"
+"size" => "2"
+"agent" => "Go-http-client/2.0"
+```
+
+A pattern expression is composed of captures and literals.
+
+A capture is a field name delimited by the `<` and `>` characters. `<example>` defines the field name `example`.
+An unnamed capture appears as `<_>`. The unnamed capture skips matched content.
+
+Captures are matched from the line beginning or the previous set of literals, to the line end or the next set of literals.
+If a capture is not matched, the pattern parser will stop.
+
+Literals can be any sequence of UTF-8 characters, including whitespace characters.
+
+By default, a pattern expression is anchored at the start of the log line. If the expression starts with literals, then the log line must also start with the same set of literals. Use `<_>` at the beginning of the expression if you don't want to anchor the expression at the start.
+
+Consider the log line
+
+```log
+level=debug ts=2021-06-10T09:24:13.472094048Z caller=logging.go:66 traceID=0568b66ad2d9294c msg="POST /loki/api/v1/push (204) 16.652862ms"
+```
+
+To match `msg="`, use the expression:
+
+```pattern
+<_> msg="<method> <path> (<status>) <latency>"
+```
+
+A pattern expression is invalid if
+
+- It does not contain any named capture.
+- It contains two consecutive captures not separated by whitespace characters.
+
+#### Regular expression
+
+Unlike the logfmt and json parsers, which implicitly extract all values and take no parameters, the regexp parser takes a single parameter `| regexp "<re>"` which is the regular expression using the [Golang](https://golang.org/) [RE2 syntax](https://github.com/google/re2/wiki/Syntax).
+
+The regular expression must contain at least one named sub-match (e.g `(?P<name>re)`), each sub-match will extract a different label.
+
+For example the parser `| regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"` will extract from the following line:
+
+```log
+POST /api/prom/api/v1/query_range (200) 1.5s
+```
+
+those labels:
+
+```kv
+"method" => "POST"
+"path" => "/api/prom/api/v1/query_range"
+"status" => "200"
+"duration" => "1.5s"
+```
+
+#### unpack
+
+The `unpack` parser parses a JSON log line, unpacking all embedded labels in the [`pack`](../clients/promtail/stages/pack/) stage.
+**A special property `_entry` will also be used to replace the original log line**.
+
+For example, using `| unpack` with the log line:
+
+```json
+{
+ "container": "myapp",
+ "pod": "pod-3223f",
+ "_entry": "original log message"
+}
+```
+
+extracts the `container` and `pod` labels; it sets `original log message` as the new log line.
+
+You can combine the `unpack` and `json` parsers (or any other parsers) if the original embedded log line is of a specific format.
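+
+For example, if the embedded `_entry` line is itself a JSON document, a query along these lines (the `container` label value is illustrative) first unpacks the entry and then parses the restored line:
+
+```logql
+{container="myapp"} | unpack | json
+```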
+
+### Line format expression
+
+The line format expression can rewrite the log line content by using the [text/template](https://golang.org/pkg/text/template/) format.
+It takes a single string parameter `| line_format "{{.label_name}}"`, which is the template format. All labels are injected as variables into the template and are available to use with the `{{.label_name}}` notation.
+
+For example the following expression:
+
+```logql
+{container="frontend"} | logfmt | line_format "{{.query}} {{.duration}}"
+```
+
+This will extract and rewrite the log line to contain only the query and the duration of a request.
+
+You can use a double-quoted string for the template or backticks `` `{{.label_name}}` `` to avoid the need to escape special characters.
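+
+For instance, backticks avoid escaping the double quotes inside the template (the stream selector and labels, assumed extracted by `logfmt`, are hypothetical):
+
+```logql
+{container="frontend"} | logfmt | line_format `{{.method}} "{{.path}}"`
+```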
+
+`line_format` also supports `math` functions. Example:
+
+If we have the following labels `ip=1.1.1.1`, `status=200` and `duration=3000` (ms), we can divide the duration by `1000` to get the value in seconds.
+
+```logql
+{container="frontend"} | logfmt | line_format "{{.ip}} {{.status}} {{div .duration 1000}}"
+```
+
+The above query will give us the `line` as `1.1.1.1 200 3`.
+
+See [template functions](template_functions/) to learn about available functions in the template format.
+
+### Labels format expression
+
+The `| label_format` expression can rename, modify or add labels. It takes as a parameter a comma-separated list of equality operations, enabling multiple operations at once.
+
+When both sides are label identifiers, for example `dst=src`, the operation will rename the `src` label to `dst`.
+
+The left side can alternatively be a template string (double quoted or backtick), for example `dst="{{.status}} {{.query}}"`, in which case the `dst` label value is replaced by the result of the [text/template](https://golang.org/pkg/text/template/) evaluation. This is the same template engine as the `| line_format` expression, which means labels are available as variables and you can use the same list of [functions](functions/).
+
+In both cases, if the destination label doesn't exist, then a new one is created.
+
+The renaming form `dst=src` will _drop_ the `src` label after remapping it to the `dst` label. However, the _template_ form will preserve the referenced labels, such that `dst="{{.src}}"` results in both `dst` and `src` having the same value.
+
+> A single label name can only appear once per expression. This means `| label_format foo=bar,foo="new"` is not allowed but you can use two expressions for the desired effect: `| label_format foo=bar | label_format foo="new"`
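+
+To illustrate both forms together, the following sketch (stream selector and labels are hypothetical) renames `status_code` to `status` and adds a templated `group` label in a single expression:
+
+```logql
+{container="frontend"} | logfmt | label_format status=status_code, group="{{.cluster}}/{{.namespace}}"
+```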
+
+## Log queries examples
+
+### Multiple filtering
+
+Filtering should be done first using label matchers, then line filters (when possible), and finally label filters. The following query demonstrates this.
+
+```logql
+{cluster="ops-tools1", namespace="loki-dev", job="loki-dev/query-frontend"} |= "metrics.go" !="out of order" | logfmt | duration > 30s or status_code!="200"
+```
+
+### Multiple parsers
+
+To extract the method and the path of the following logfmt log line:
+
+```log
+level=debug ts=2020-10-02T10:10:42.092268913Z caller=logging.go:66 traceID=a9d4d8a928d8db1 msg="POST /api/prom/api/v1/query_range (200) 1.5s"
+```
+
+You can use multiple parsers (logfmt and regexp) like this.
+
+```logql
+{job="cortex-ops/query-frontend"} | logfmt | line_format "{{.msg}}" | regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"
+```
+
+This is possible because the `| line_format` reformats the log line to become `POST /api/prom/api/v1/query_range (200) 1.5s` which can then be parsed with the `| regexp ...` parser.
+
+### Formatting
+
+The following query shows how you can reformat a log line to make it easier to read on screen.
+
+```logql
+{cluster="ops-tools1", name="querier", namespace="loki-dev"}
+ |= "metrics.go" != "loki-canary"
+ | logfmt
+ | query != ""
+ | label_format query="{{ Replace .query \"\\n\" \"\" -1 }}"
+ | line_format "{{ .ts}}\t{{.duration}}\ttraceID = {{.traceID}}\t{{ printf \"%-100.100s\" .query }} "
+```
+
+Label formatting is used to sanitize the query, while the line format reduces the amount of information and creates a tabular output.
+
+Given these log lines:
+
+```log
+level=info ts=2020-10-23T20:32:18.094668233Z caller=metrics.go:81 org_id=29 traceID=1980d41501b57b68 latency=fast query="{cluster=\"ops-tools1\", job=\"cortex-ops/query-frontend\"} |= \"query_range\"" query_type=filter range_type=range length=15m0s step=7s duration=650.22401ms status=200 throughput_mb=1.529717 total_bytes_mb=0.994659
+level=info ts=2020-10-23T20:32:18.068866235Z caller=metrics.go:81 org_id=29 traceID=1980d41501b57b68 latency=fast query="{cluster=\"ops-tools1\", job=\"cortex-ops/query-frontend\"} |= \"query_range\"" query_type=filter range_type=range length=15m0s step=7s duration=624.008132ms status=200 throughput_mb=0.693449 total_bytes_mb=0.432718
+```
+
+The result would be:
+
+```log
+2020-10-23T20:32:18.094668233Z 650.22401ms traceID = 1980d41501b57b68 {cluster="ops-tools1", job="cortex-ops/query-frontend"} |= "query_range"
+2020-10-23T20:32:18.068866235Z 624.008132ms traceID = 1980d41501b57b68 {cluster="ops-tools1", job="cortex-ops/query-frontend"} |= "query_range"
+```
+
diff --git a/docs/sources/logql/metric_queries.md b/docs/sources/logql/metric_queries.md
new file mode 100644
index 0000000000000..34d1906e413a1
--- /dev/null
+++ b/docs/sources/logql/metric_queries.md
@@ -0,0 +1,164 @@
+---
+title: Metric queries
+weight: 20
+---
+
+# Metric queries
+
+Metric queries extend log queries by applying a function to log query results.
+This powerful feature creates metrics from logs.
+
+Metric queries can be used to calculate the rate of error messages or the top N log sources with the greatest quantity of logs over the last 3 hours.
+
+Combined with parsers, metric queries can also be used to calculate metrics from a sample value within the log line, such as latency or request size.
+All labels, including extracted ones, will be available for aggregations and generation of new series.
+
+## Range Vector aggregation
+
+LogQL shares the [range vector](https://prometheus.io/docs/prometheus/latest/querying/basics/#range-vector-selectors) concept of Prometheus.
+In Loki, the selected range of samples is a range of selected log or label values.
+
+The aggregation is applied over a time duration.
+Loki defines [Time Durations](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations) with the same syntax as Prometheus.
+
+Loki supports two types of range vector aggregations: log range aggregations and unwrapped range aggregations.
+
+### Log range aggregations
+
+A log range aggregation is a query followed by a duration.
+A function is applied to aggregate the query over the duration.
+The duration can be placed after the log stream selector or at the end of the log pipeline.
+
+The functions:
+
+- `rate(log-range)`: calculates the number of entries per second
+- `count_over_time(log-range)`: counts the entries for each log stream within the given range.
+- `bytes_rate(log-range)`: calculates the number of bytes per second for each stream.
+- `bytes_over_time(log-range)`: counts the amount of bytes used by each log stream for a given range.
+- `absent_over_time(log-range)`: returns an empty vector if the range vector passed to it has any elements and a 1-element vector with the value 1 if the range vector passed to it has no elements. (`absent_over_time` is useful for alerting on when no time series and logs stream exist for label combination for a certain amount of time.)
+
+Examples:
+
+- Count all the log lines within the last five minutes for the MySQL job.
+
+ ```logql
+ count_over_time({job="mysql"}[5m])
+ ```
+
+- This aggregation includes filters and parsers.
+  It returns the per-second rate of all non-timeout errors within the last minute per host for the MySQL job, and only includes errors whose duration is above ten seconds.
+
+ ```logql
+ sum by (host) (rate({job="mysql"} |= "error" != "timeout" | json | duration > 10s [1m]))
+ ```
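+
+- As a further illustration, `absent_over_time` can back an alert that fires when a stream stops producing logs entirely (the job label is hypothetical). The query returns a 1-element vector only if no log lines were received within the last hour:
+
+  ```logql
+  absent_over_time({job="mysql"}[1h])
+  ```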
+
+### Unwrapped range aggregations
+
+Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](#pipeline-errors).
+
+The unwrap expression is written `| unwrap label_identifier`, where the label identifier is the name of the label to use for extracting sample values.
+
+Since label values are strings, by default a conversion into a float (64-bit) will be attempted; in case of failure, the `__error__` label is added to the sample.
+Optionally the label identifier can be wrapped by a conversion function `| unwrap <function>(label_identifier)`, which will attempt to convert the label value from a specific format.
+
+We currently support the functions:
+
+- `duration_seconds(label_identifier)` (or its short equivalent `duration`), which will convert the label value into seconds from the [Go duration format](https://golang.org/pkg/time/#ParseDuration) (e.g. `5m`, `24s30ms`).
+- `bytes(label_identifier)`, which will convert the label value to raw bytes, applying the bytes unit (e.g. `5 MiB`, `3k`, `1G`).
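+
+For example, a sketch using the `duration` conversion on a hypothetical `latency` label extracted by `logfmt` (the stream selector is also hypothetical):
+
+```logql
+max_over_time({container="frontend"} | logfmt | unwrap duration(latency) [5m])
+```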
+
+Supported functions for operating over unwrapped ranges are:
+
+- `rate(unwrapped-range)`: calculates per second rate of all values in the specified interval.
+- `sum_over_time(unwrapped-range)`: the sum of all values in the specified interval.
+- `avg_over_time(unwrapped-range)`: the average value of all points in the specified interval.
+- `max_over_time(unwrapped-range)`: the maximum value of all points in the specified interval.
+- `min_over_time(unwrapped-range)`: the minimum value of all points in the specified interval.
+- `first_over_time(unwrapped-range)`: the first value of all points in the specified interval.
+- `last_over_time(unwrapped-range)`: the last value of all points in the specified interval.
+- `stdvar_over_time(unwrapped-range)`: the population standard variance of the values in the specified interval.
+- `stddev_over_time(unwrapped-range)`: the population standard deviation of the values in the specified interval.
+- `quantile_over_time(scalar,unwrapped-range)`: the φ-quantile (0 ≤ φ ≤ 1) of the values in the specified interval.
+- `absent_over_time(unwrapped-range)`: returns an empty vector if the range vector passed to it has any elements and a 1-element vector with the value 1 if the range vector passed to it has no elements. (`absent_over_time` is useful for alerting on when no time series and logs stream exist for label combination for a certain amount of time.)
+
+Except for `sum_over_time`, `absent_over_time` and `rate`, unwrapped range aggregations support grouping.
+
+```logql
+<aggr-op>([parameter,] <unwrapped-range>) [without|by (<label list>)]
+```
+
+This syntax can be used to aggregate over distinct label dimensions by including a `without` or `by` clause.
+
+`without` removes the listed labels from the result vector, while all other labels are preserved in the output. `by` does the opposite and drops labels that are not listed in the `by` clause, even if their label values are identical between all elements of the vector.
+
+### Unwrapped examples
+
+```logql
+quantile_over_time(0.99,
+ {cluster="ops-tools1",container="ingress-nginx"}
+ | json
+ | __error__ = ""
+    | unwrap request_time [1m]) by (path)
+```
+
+This example calculates the p99 of the nginx-ingress latency by path.
+
+```logql
+sum by (org_id) (
+ sum_over_time(
+ {cluster="ops-tools1",container="loki-dev"}
+ |= "metrics.go"
+ | logfmt
+ | unwrap bytes_processed [1m])
+ )
+```
+
+This calculates the amount of bytes processed per organization ID.
+
+## Built-in aggregation operators
+
+Like [PromQL](https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators), LogQL supports a subset of built-in aggregation operators that can be used to aggregate the element of a single vector, resulting in a new vector of fewer elements but with aggregated values:
+
+- `sum`: Calculate sum over labels
+- `avg`: Calculate the average over labels
+- `min`: Select minimum over labels
+- `max`: Select maximum over labels
+- `stddev`: Calculate the population standard deviation over labels
+- `stdvar`: Calculate the population standard variance over labels
+- `count`: Count number of elements in the vector
+- `topk`: Select largest k elements by sample value
+- `bottomk`: Select smallest k elements by sample value
+
+The aggregation operators can either be used to aggregate over all label values or a set of distinct label values by including a `without` or a `by` clause:
+
+```logql
+<aggr-op>([parameter,] <vector expression>) [without|by (<label list>)]
+```
+
+`parameter` is required when using `topk` and `bottomk`.
+`topk` and `bottomk` are different from other aggregators in that a subset of the input samples, including the original labels, is returned in the result vector.
+
+`by` and `without` are only used to group the input vector.
+The `without` clause removes the listed labels from the resulting vector, keeping all others.
+The `by` clause does the opposite, dropping labels that are not listed in the clause, even if their label values are identical between all elements of the vector.
+
+### Vector aggregation examples
+
+Get the top 10 applications by the highest log throughput:
+
+```logql
+topk(10,sum(rate({region="us-east1"}[5m])) by (name))
+```
+
+Get the count of log lines for the last five minutes for a specified job, grouping
+by level:
+
+```logql
+sum(count_over_time({job="mysql"}[5m])) by (level)
+```
+
+Get the rate of HTTP GET requests to the `/home` endpoint for NGINX logs by region:
+
+```logql
+avg(rate(({job="nginx"} |= "GET" | json | path="/home")[10s])) by (region)
+```
+
diff --git a/docs/sources/logql/query_components.png b/docs/sources/logql/query_components.png
index 1defab9a95dbd..220aced5752ec 100644
Binary files a/docs/sources/logql/query_components.png and b/docs/sources/logql/query_components.png differ
diff --git a/docs/sources/logql/query_examples.md b/docs/sources/logql/query_examples.md
index 420e81b7428e3..af6e80fb7d35a 100644
--- a/docs/sources/logql/query_examples.md
+++ b/docs/sources/logql/query_examples.md
@@ -1,6 +1,6 @@
---
title: Query examples
-weight: 40
+weight: 50
---
# Query examples
diff --git a/docs/sources/logql/template_functions.md b/docs/sources/logql/template_functions.md
index 0996dac6a0391..241cf200aa1d7 100644
--- a/docs/sources/logql/template_functions.md
+++ b/docs/sources/logql/template_functions.md
@@ -1,6 +1,6 @@
---
title: Template functions
-weight: 20
+weight: 30
---
# Template functions
|
docs
|
Organize and edit the LogQL section (#4342)
|
ff0da882d6fad4b2a51b4cb2245e846e75f03fdb
|
2025-02-07 16:48:47
|
Jackson Coelho
|
ci: refactor helm diff ci (#16143)
| false
|
diff --git a/.github/workflows/helm-diff-ci.yml b/.github/workflows/helm-diff-ci.yml
index 051efe24e3387..90373b2266171 100644
--- a/.github/workflows/helm-diff-ci.yml
+++ b/.github/workflows/helm-diff-ci.yml
@@ -11,10 +11,31 @@ permissions:
pull-requests: write
jobs:
- single-binary-diff:
- name: Single Binary Scenario
+ helm-diff:
+ name: ${{ matrix.scenario.name }}
runs-on: ubuntu-latest
timeout-minutes: 10
+ strategy:
+ matrix:
+ scenario:
+ - name: Single Binary Scenario
+ values_file: default-single-binary-values.yaml
+ use_k3d: true
+ - name: Default Values Scenario
+ values_file: default-values.yaml
+ use_k3d: true
+ - name: Ingress Values Scenario
+ values_file: ingress-values.yaml
+ use_k3d: true
+ - name: Legacy Monitoring Values Scenario
+ values_file: legacy-monitoring-values.yaml
+ use_k3d: true
+ - name: Simple Scalable AWS Kube IRSA Values Scenario
+ values_file: simple-scalable-aws-kube-irsa-values.yaml
+ use_k3d: false
+ - name: Simple Thanos Values Scenario
+ values_file: simple-thanos-values.yaml
+ use_k3d: false
steps:
- name: Checkout code
@@ -31,9 +52,11 @@ jobs:
helm repo update
- name: Setup K3D
+ if: ${{ matrix.scenario.use_k3d }}
uses: ./.github/actions/setup-k3d
- name: Setup Helm plugins
+ if: ${{ matrix.scenario.use_k3d }}
run: |
helm plugin install https://github.com/databus23/helm-diff
@@ -41,221 +64,20 @@ jobs:
run: |
helm dependency build production/helm/loki
- - name: Install latest helm release
- run: |
- helm install --create-namespace loki-release grafana/loki -f production/helm/loki/scenarios/default-single-binary-values.yaml
-
- - name: Run helm diff
- id: helm-diff
- env:
- HELM_DIFF_USE_UPGRADE_DRY_RUN: true
- run: |
- helm diff upgrade loki-release -f production/helm/loki/scenarios/default-single-binary-values.yaml production/helm/loki | tee helm_diff_output.txt
-
- - name: Convert Helm Diff Output to Markdown
- id: convert_diff
- run: |
- cat helm_diff_output.txt >> formatted_diff_output.md
-
- - name: Upload diff output as artifact
- id: upload_diff
- uses: actions/upload-artifact@v4
- with:
- name: single-binary-diff-output
- path: formatted_diff_output.md
- retention-days: 2
-
- default-values-diff:
- name: Default Values Scenario
- runs-on: ubuntu-latest
-
- steps:
- - name: Checkout code
- uses: actions/checkout@v4
-
- - name: Setup Helm
- uses: azure/setup-helm@v4
-
- - name: Add required Helm repositories
- run: |
- helm repo add minio https://charts.min.io/
- helm repo add grafana https://grafana.github.io/helm-charts
- helm repo add grafana-operator https://grafana.github.io/helm-charts
- helm repo update
-
- - name: Setup K3D
- uses: ./.github/actions/setup-k3d
-
- - name: Setup Helm plugins
- run: |
- helm plugin install https://github.com/databus23/helm-diff
-
- - name: Build helm dependencies
- run: |
- helm dependency build production/helm/loki
-
- - name: Install latest helm release
- run: |
- helm install --create-namespace loki-release grafana/loki -f production/helm/loki/scenarios/default-values.yaml
-
- - name: Run helm diff
- id: helm-diff
- env:
- HELM_DIFF_USE_UPGRADE_DRY_RUN: true
- run: |
- helm diff upgrade loki-release -f production/helm/loki/scenarios/default-values.yaml production/helm/loki | tee helm_diff_output.txt
-
- - name: Convert Helm Diff Output to Markdown
- id: convert_diff
- run: |
- cat helm_diff_output.txt >> formatted_diff_output.md
-
- - name: Upload diff output as artifact
- uses: actions/upload-artifact@v4
- id: upload_diff
- with:
- name: default-values-diff-output
- path: formatted_diff_output.md
- retention-days: 2
-
- ingress-values-diff:
- name: Ingress Values Scenario
- runs-on: ubuntu-latest
-
- steps:
- - name: Checkout code
- uses: actions/checkout@v4
-
- - name: Setup Helm
- uses: azure/setup-helm@v4
-
- - name: Add required Helm repositories
- run: |
- helm repo add minio https://charts.min.io/
- helm repo add grafana https://grafana.github.io/helm-charts
- helm repo add grafana-operator https://grafana.github.io/helm-charts
- helm repo update
-
- - name: Setup K3D
- uses: ./.github/actions/setup-k3d
-
- - name: Setup Helm plugins
- run: |
- helm plugin install https://github.com/databus23/helm-diff
-
- - name: Build helm dependencies
- run: |
- helm dependency build production/helm/loki
-
- - name: Install latest helm release
- run: |
- helm install --create-namespace loki-release grafana/loki -f production/helm/loki/scenarios/ingress-values.yaml
-
- - name: Run helm diff
- id: helm-diff
- env:
- HELM_DIFF_USE_UPGRADE_DRY_RUN: true
- run: |
- helm diff upgrade loki-release -f production/helm/loki/scenarios/ingress-values.yaml production/helm/loki | tee helm_diff_output.txt
-
- - name: Convert Helm Diff Output to Markdown
- id: convert_diff
- run: |
- cat helm_diff_output.txt >> formatted_diff_output.md
-
- - name: Upload diff output as artifact
- uses: actions/upload-artifact@v4
- id: upload_diff
- with:
- name: ingress-diff-output
- path: formatted_diff_output.md
- retention-days: 2
-
- legacy-monitoring-values-diff:
- name: Legacy Monitoring Values Scenario
- runs-on: ubuntu-latest
-
- steps:
- - name: Checkout code
- uses: actions/checkout@v4
-
- - name: Setup Helm
- uses: azure/setup-helm@v4
-
- - name: Add required Helm repositories
- run: |
- helm repo add minio https://charts.min.io/
- helm repo add grafana https://grafana.github.io/helm-charts
- helm repo add grafana-operator https://grafana.github.io/helm-charts
- helm repo update
-
- - name: Setup K3D
- uses: ./.github/actions/setup-k3d
-
- - name: Setup Helm plugins
- run: |
- helm plugin install https://github.com/databus23/helm-diff
-
- - name: Build helm dependencies
- run: |
- helm dependency build production/helm/loki
-
- - name: Install latest helm release
- run: |
- helm install --create-namespace loki-release grafana/loki -f production/helm/loki/scenarios/legacy-monitoring-values.yaml
-
- - name: Run helm diff
- id: helm-diff
+ # Conditional steps based on whether K3D is used
+ - name: Run diff with K3D
+ if: ${{ matrix.scenario.use_k3d }}
env:
HELM_DIFF_USE_UPGRADE_DRY_RUN: true
run: |
- helm diff upgrade loki-release -f production/helm/loki/scenarios/legacy-monitoring-values.yaml production/helm/loki | tee helm_diff_output.txt
+ helm install --create-namespace loki-release grafana/loki -f production/helm/loki/scenarios/${{ matrix.scenario.values_file }}
+ helm diff upgrade loki-release -f production/helm/loki/scenarios/${{ matrix.scenario.values_file }} production/helm/loki | tee helm_diff_output.txt
- - name: Convert Helm Diff Output to Markdown
- id: convert_diff
- run: |
- cat helm_diff_output.txt >> formatted_diff_output.md
-
- - name: Upload diff output as artifact
- uses: actions/upload-artifact@v4
- id: upload_diff
- with:
- name: legacy-monitoring-diff-output
- path: formatted_diff_output.md
- retention-days: 2
-
- simple-scalable-aws-kube-irsa-values-diff:
- name: Simple Scalable AWS Kube IRSA Values Scenario
- runs-on: ubuntu-latest
-
- steps:
- - name: Checkout code
- uses: actions/checkout@v4
-
- - name: Setup Helm
- uses: azure/setup-helm@v4
-
- - name: Add required Helm repositories
- run: |
- helm repo add minio https://charts.min.io/
- helm repo add grafana https://grafana.github.io/helm-charts
- helm repo add grafana-operator https://grafana.github.io/helm-charts
- helm repo update
-
- - name: Build helm dependencies
- run: |
- helm dependency build production/helm/loki
-
- - name: Generate latest manifests
- run: |
- helm template loki-release grafana/loki -f production/helm/loki/scenarios/simple-scalable-aws-kube-irsa-values.yaml > release-manifest.yaml
-
- - name: Generate current manifest
- run: |
- helm template loki-release production/helm/loki -f production/helm/loki/scenarios/simple-scalable-aws-kube-irsa-values.yaml > current-manifest.yaml
-
- - name: Compare manifests
+ - name: Run diff without K3D
+ if: ${{ !matrix.scenario.use_k3d }}
run: |
+ helm template loki-release grafana/loki -f production/helm/loki/scenarios/${{ matrix.scenario.values_file }} > release-manifest.yaml
+ helm template loki-release production/helm/loki -f production/helm/loki/scenarios/${{ matrix.scenario.values_file }} > current-manifest.yaml
diff current-manifest.yaml release-manifest.yaml > helm_diff_output.txt || true
- name: Convert Helm Diff Output to Markdown
@@ -267,54 +89,7 @@ jobs:
uses: actions/upload-artifact@v4
id: upload_diff
with:
- name: simple-scalable-aws-kube-irsa-diff-output
- path: formatted_diff_output.md
- retention-days: 2
-
- simple-thanos-values-diff:
- name: Simple Thanos Values Scenario
- runs-on: ubuntu-latest
-
- steps:
- - name: Checkout code
- uses: actions/checkout@v4
-
- - name: Setup Helm
- uses: azure/setup-helm@v4
-
- - name: Add required Helm repositories
- run: |
- helm repo add minio https://charts.min.io/
- helm repo add grafana https://grafana.github.io/helm-charts
- helm repo add grafana-operator https://grafana.github.io/helm-charts
- helm repo update
-
- - name: Build helm dependencies
- run: |
- helm dependency build production/helm/loki
-
- - name: Generate latest manifests
- run: |
- helm template loki-release grafana/loki -f production/helm/loki/scenarios/simple-thanos-values.yaml > release-manifest.yaml
-
- - name: Generate current manifest
- run: |
- helm template loki-release production/helm/loki -f production/helm/loki/scenarios/simple-thanos-values.yaml > current-manifest.yaml
-
- - name: Compare manifests
- run: |
- diff current-manifest.yaml release-manifest.yaml > helm_diff_output.txt || true
-
- - name: Convert Helm Diff Output to Markdown
- id: convert_diff
- run: |
- cat helm_diff_output.txt >> formatted_diff_output.md
-
- - name: Upload diff output as artifact
- uses: actions/upload-artifact@v4
- id: upload_diff
- with:
- name: simple-thanos-diff-output
+ name: ${{ matrix.scenario.name }}-diff-output
path: formatted_diff_output.md
retention-days: 2
@@ -322,14 +97,7 @@ jobs:
name: Summary Diffs
runs-on: ubuntu-latest
if: github.event.pull_request.head.repo.fork == false
- needs:
- - single-binary-diff
- - default-values-diff
- - ingress-values-diff
- - legacy-monitoring-values-diff
- - simple-scalable-aws-kube-irsa-values-diff
- - simple-thanos-values-diff
-
+ needs: [helm-diff]
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -337,99 +105,24 @@ jobs:
persist-credentials: false
- uses: actions/download-artifact@v4
- with:
- name: single-binary-diff-output
- path: single-binary-diff
-
- - uses: actions/download-artifact@v4
- with:
- name: default-values-diff-output
- path: default-values-diff
-
- - uses: actions/download-artifact@v4
- with:
- name: ingress-diff-output
- path: ingress-values-diff
- - uses: actions/download-artifact@v4
- with:
- name: legacy-monitoring-diff-output
- path: legacy-monitoring-values-diff
-
- - uses: actions/download-artifact@v4
- with:
- name: simple-scalable-aws-kube-irsa-diff-output
- path: simple-scalable-aws-kube-irsa-values-diff
-
- - uses: actions/download-artifact@v4
- with:
- name: simple-thanos-diff-output
- path: simple-thanos-values-diff
-
- # TODO: Make step more generic and dynamic add the scenarios as needed
- name: Combine diff outputs
run: |
echo "## Helm Diff Output - Summary" > formatted_diff_output.md
- echo "<details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo "<summary>Single Binary Scenario</summary>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo '```diff' >> formatted_diff_output.md
- cat single-binary-diff/formatted_diff_output.md >> formatted_diff_output.md
- echo '```' >> formatted_diff_output.md
- echo "</details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
-
- echo "<details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo "<summary>Default Values Scenario</summary>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo '```diff' >> formatted_diff_output.md
- cat default-values-diff/formatted_diff_output.md >> formatted_diff_output.md
- echo '```' >> formatted_diff_output.md
- echo "</details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
-
- echo "<details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo "<summary>Ingress Values Scenario</summary>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo '```diff' >> formatted_diff_output.md
- cat ingress-values-diff/formatted_diff_output.md >> formatted_diff_output.md
- echo '```' >> formatted_diff_output.md
- echo "</details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
-
- echo "<details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo "<summary>Legacy Monitoring Scenario</summary>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo '```diff' >> formatted_diff_output.md
- cat legacy-monitoring-values-diff/formatted_diff_output.md >> formatted_diff_output.md
- echo '```' >> formatted_diff_output.md
- echo "</details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
-
- echo "<details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo "<summary>Simple Scalable AWS Kube IRSA Scenario</summary>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo '```diff' >> formatted_diff_output.md
- cat simple-scalable-aws-kube-irsa-values-diff/formatted_diff_output.md >> formatted_diff_output.md
- echo '```' >> formatted_diff_output.md
- echo "</details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
-
- echo "<details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo "<summary>Simple Thanos Scenario</summary>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
- echo '```diff' >> formatted_diff_output.md
- cat simple-thanos-values-diff/formatted_diff_output.md >> formatted_diff_output.md
- echo '```' >> formatted_diff_output.md
- echo "</details>" >> formatted_diff_output.md
- echo "" >> formatted_diff_output.md
+ for scenario in */formatted_diff_output.md; do
+ scenario_name=$(dirname "$scenario")
+
+ echo "<details>" >> formatted_diff_output.md
+ echo "" >> formatted_diff_output.md
+ echo "<summary>${scenario_name}</summary>" >> formatted_diff_output.md
+ echo "" >> formatted_diff_output.md
+ echo '```diff' >> formatted_diff_output.md
+ cat "$scenario" >> formatted_diff_output.md
+ echo '```' >> formatted_diff_output.md
+ echo "</details>" >> formatted_diff_output.md
+ echo "" >> formatted_diff_output.md
+ done
- name: Post diff as PR comment
uses: marocchino/sticky-pull-request-comment@v2
diff --git a/production/helm/loki/scenarios/README.md b/production/helm/loki/scenarios/README.md
index 496286bb2009d..a69d8b299295d 100644
--- a/production/helm/loki/scenarios/README.md
+++ b/production/helm/loki/scenarios/README.md
@@ -8,6 +8,19 @@ We deploy the scenario with the latest release and then we execute a helm diff w
>*NOTE*: the helm diff output file will be available for each scenario inside github action to download for 2 days, after this you may need to re-run the job if you would like to download the output files.
+## Add new scenario to the CI
+
+To add a new scenario to the CI, add a new entry to the matrix configuration:
+
+```yaml
+strategy:
+ matrix:
+ scenario:
+ - name: New Scenario
+ values_file: new-scenario-values.yaml
+ use_k3d: true # or false depending on requirements
+```
+
## Run scenarios locally
All this process that we run in the CI can be done locally, the following steps would explain how.
|
ci
|
refactor helm diff ci (#16143)
|
74885a20c735e8cd02bb590336bb7ae81c54bb33
|
2025-01-09 05:38:04
|
renovate[bot]
|
fix(deps): update module google.golang.org/protobuf to v1.36.2 (#15635)
| false
|
diff --git a/go.mod b/go.mod
index dd11345babaea..ef312d46309fa 100644
--- a/go.mod
+++ b/go.mod
@@ -147,7 +147,7 @@ require (
go4.org/netipx v0.0.0-20230125063823-8449b0a6169f
golang.org/x/oauth2 v0.25.0
golang.org/x/text v0.21.0
- google.golang.org/protobuf v1.36.1
+ google.golang.org/protobuf v1.36.2
gotest.tools v2.2.0+incompatible
k8s.io/apimachinery v0.32.0
k8s.io/utils v0.0.0-20241104163129-6fe5fd82f078
diff --git a/go.sum b/go.sum
index 85319be119f5d..0452a1696821f 100644
--- a/go.sum
+++ b/go.sum
@@ -1653,8 +1653,8 @@ google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlba
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
-google.golang.org/protobuf v1.36.1 h1:yBPeRvTftaleIgM3PZ/WBIZ7XM/eEYAaEyCwvyjq/gk=
-google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
+google.golang.org/protobuf v1.36.2 h1:R8FeyR1/eLmkutZOM5CWghmo5itiG9z0ktFlTVLuTmU=
+google.golang.org/protobuf v1.36.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
diff --git a/vendor/google.golang.org/protobuf/internal/impl/message_opaque.go b/vendor/google.golang.org/protobuf/internal/impl/message_opaque.go
index d407dd791e89f..d7ec53f074ac4 100644
--- a/vendor/google.golang.org/protobuf/internal/impl/message_opaque.go
+++ b/vendor/google.golang.org/protobuf/internal/impl/message_opaque.go
@@ -88,9 +88,7 @@ func opaqueInitHook(mi *MessageInfo) bool {
mi.oneofs = map[protoreflect.Name]*oneofInfo{}
for i := 0; i < mi.Desc.Oneofs().Len(); i++ {
od := mi.Desc.Oneofs().Get(i)
- if !od.IsSynthetic() {
- mi.oneofs[od.Name()] = makeOneofInfo(od, si.structInfo, mi.Exporter)
- }
+ mi.oneofs[od.Name()] = makeOneofInfoOpaque(mi, od, si.structInfo, mi.Exporter)
}
mi.denseFields = make([]*fieldInfo, fds.Len()*2)
@@ -119,6 +117,26 @@ func opaqueInitHook(mi *MessageInfo) bool {
return true
}
+func makeOneofInfoOpaque(mi *MessageInfo, od protoreflect.OneofDescriptor, si structInfo, x exporter) *oneofInfo {
+ oi := &oneofInfo{oneofDesc: od}
+ if od.IsSynthetic() {
+ fd := od.Fields().Get(0)
+ index, _ := presenceIndex(mi.Desc, fd)
+ oi.which = func(p pointer) protoreflect.FieldNumber {
+ if p.IsNil() {
+ return 0
+ }
+ if !mi.present(p, index) {
+ return 0
+ }
+ return od.Fields().Get(0).Number()
+ }
+ return oi
+ }
+ // Dispatch to non-opaque oneof implementation for non-synthetic oneofs.
+ return makeOneofInfo(od, si, x)
+}
+
func (mi *MessageInfo) fieldInfoForMapOpaque(si opaqueStructInfo, fd protoreflect.FieldDescriptor, fs reflect.StructField) fieldInfo {
ft := fs.Type
if ft.Kind() != reflect.Map {
diff --git a/vendor/google.golang.org/protobuf/internal/version/version.go b/vendor/google.golang.org/protobuf/internal/version/version.go
index 3018450df7998..386c823aa641c 100644
--- a/vendor/google.golang.org/protobuf/internal/version/version.go
+++ b/vendor/google.golang.org/protobuf/internal/version/version.go
@@ -52,7 +52,7 @@ import (
const (
Major = 1
Minor = 36
- Patch = 1
+ Patch = 2
PreRelease = ""
)
diff --git a/vendor/modules.txt b/vendor/modules.txt
index a22ba57b6d10c..bf14fd4052559 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -2184,7 +2184,7 @@ google.golang.org/grpc/xds/internal/xdsclient/xdsresource/version
## explicit; go 1.21
google.golang.org/grpc/stats/opentelemetry
google.golang.org/grpc/stats/opentelemetry/internal
-# google.golang.org/protobuf v1.36.1
+# google.golang.org/protobuf v1.36.2
## explicit; go 1.21
google.golang.org/protobuf/encoding/protodelim
google.golang.org/protobuf/encoding/protojson
|
fix
|
update module google.golang.org/protobuf to v1.36.2 (#15635)
|
f4ab1e3e89ac66e1848764dc17826abde929fdc5
|
2023-09-27 02:41:19
|
iTrooz
|
doc: Fix link typos in documentation (#10722)
| false
|
diff --git a/docs/sources/get-started/overview.md b/docs/sources/get-started/overview.md
index f418b1cf8ddf7..5ad24e4f92048 100644
--- a/docs/sources/get-started/overview.md
+++ b/docs/sources/get-started/overview.md
@@ -24,7 +24,7 @@ A typical Loki-based logging stack consists of 3 components:
- **Agent** - An agent or client, for example Promtail, which is distributed with Loki, or the Grafana Agent. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
-- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations, for more information see [deployment modes]{{< relref "../get-started/deployment-modes/" >}}.
+- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations, for more information see [deployment modes]({{< relref "../get-started/deployment-modes" >}}).
- **[Grafana](https://github.com/grafana/grafana)** for querying and displaying log data. You can also query logs from the command line, using [LogCLI]({{< relref "../query/logcli" >}}) or using the Loki API directly.
@@ -35,7 +35,7 @@ In its most common deployment, “simple scalable mode”, Loki decouples reques
If needed, each of Loki's components can also be run as microservices designed to run natively within Kubernetes.
- **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant is completely isolated from the others.
-Multi-tenancy is [configured] (../operations/multi-tenancy) by assigning a tenant ID in the agent.
+Multi-tenancy is [configured]({{< relref "../operations/multi-tenancy" >}}) by assigning a tenant ID in the agent.
- **Third-party integrations** - Several third-party agents (clients) have support for Loki, via plugins. This lets you keep your existing observability setup while also shipping logs to Loki.
|
doc
|
Fix link typos in documentation (#10722)
|
a8d1839b224580446f2ebb8532ba27f20714bac8
|
2022-10-04 16:02:51
|
Periklis Tsirakidis
|
operator: Move Loki operand from v2.6.1 to main-ec0bf70 (#7298)
| false
|
diff --git a/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml
index 92193f430ad6c..1b7fa2495683b 100644
--- a/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml
@@ -1193,7 +1193,7 @@ spec:
- /manager
env:
- name: RELATED_IMAGE_LOKI
- value: quay.io/openshift-logging/loki:v2.6.1
+ value: docker.io/grafana/loki:main-ec0bf70
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
@@ -1321,7 +1321,7 @@ spec:
provider:
name: Grafana.com
relatedImages:
- - image: quay.io/openshift-logging/loki:v2.6.1
+ - image: docker.io/grafana/loki:main-ec0bf70
name: loki
- image: quay.io/observatorium/api:latest
name: gateway
diff --git a/operator/config/overlays/development/manager_related_image_patch.yaml b/operator/config/overlays/development/manager_related_image_patch.yaml
index 92a1c7a32f5f3..f26143a21cc18 100644
--- a/operator/config/overlays/development/manager_related_image_patch.yaml
+++ b/operator/config/overlays/development/manager_related_image_patch.yaml
@@ -9,6 +9,6 @@ spec:
- name: manager
env:
- name: RELATED_IMAGE_LOKI
- value: docker.io/grafana/loki:2.6.1
+ value: docker.io/grafana/loki:main-ec0bf70
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
diff --git a/operator/config/overlays/openshift/manager_related_image_patch.yaml b/operator/config/overlays/openshift/manager_related_image_patch.yaml
index ca74c8fb60210..589a8610ee434 100644
--- a/operator/config/overlays/openshift/manager_related_image_patch.yaml
+++ b/operator/config/overlays/openshift/manager_related_image_patch.yaml
@@ -9,7 +9,7 @@ spec:
- name: manager
env:
- name: RELATED_IMAGE_LOKI
- value: quay.io/openshift-logging/loki:v2.6.1
+ value: docker.io/grafana/loki:main-ec0bf70
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
diff --git a/operator/config/overlays/production/manager_related_image_patch.yaml b/operator/config/overlays/production/manager_related_image_patch.yaml
index 92a1c7a32f5f3..f26143a21cc18 100644
--- a/operator/config/overlays/production/manager_related_image_patch.yaml
+++ b/operator/config/overlays/production/manager_related_image_patch.yaml
@@ -9,6 +9,6 @@ spec:
- name: manager
env:
- name: RELATED_IMAGE_LOKI
- value: docker.io/grafana/loki:2.6.1
+ value: docker.io/grafana/loki:main-ec0bf70
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
diff --git a/operator/internal/manifests/config.go b/operator/internal/manifests/config.go
index e63085a1e5723..b86a1ee80d837 100644
--- a/operator/internal/manifests/config.go
+++ b/operator/internal/manifests/config.go
@@ -79,6 +79,11 @@ func ConfigOptions(opt Options) config.Options {
Stack: opt.Stack,
Namespace: opt.Namespace,
Name: opt.Name,
+ Compactor: config.Address{
+ FQDN: fqdn(NewCompactorHTTPService(opt).GetName(), opt.Namespace),
+ Port: httpPort,
+ Protocol: protocol,
+ },
FrontendWorker: config.Address{
FQDN: fqdn(NewQueryFrontendGRPCService(opt).GetName(), opt.Namespace),
Port: grpcPort,
diff --git a/operator/internal/manifests/internal/config/build_test.go b/operator/internal/manifests/internal/config/build_test.go
index 0c44172ca17cc..d6a33492d7d02 100644
--- a/operator/internal/manifests/internal/config/build_test.go
+++ b/operator/internal/manifests/internal/config/build_test.go
@@ -26,6 +26,7 @@ common:
access_key_id: test
secret_access_key: test123
s3forcepathstyle: true
+ compactor_address: http://loki-compactor-http-lokistack-dev.default.svc.cluster.local:3100
compactor:
compaction_interval: 2h
working_directory: /tmp/loki/compactor
@@ -189,6 +190,11 @@ overrides:
},
Namespace: "test-ns",
Name: "test",
+ Compactor: Address{
+ FQDN: "loki-compactor-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ Protocol: "http",
+ },
FrontendWorker: Address{
FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
Port: 9095,
@@ -256,6 +262,7 @@ common:
access_key_id: test
secret_access_key: test123
s3forcepathstyle: true
+ compactor_address: http://loki-compactor-http-lokistack-dev.default.svc.cluster.local:3100
compactor:
compaction_interval: 2h
working_directory: /tmp/loki/compactor
@@ -436,6 +443,11 @@ overrides:
},
Namespace: "test-ns",
Name: "test",
+ Compactor: Address{
+ FQDN: "loki-compactor-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ Protocol: "http",
+ },
FrontendWorker: Address{
FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
Port: 9095,
@@ -506,6 +518,11 @@ func TestBuild_ConfigAndRuntimeConfig_CreateLokiConfigFailed(t *testing.T) {
},
Namespace: "test-ns",
Name: "test",
+ Compactor: Address{
+ FQDN: "loki-compactor-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ Protocol: "http",
+ },
FrontendWorker: Address{
FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
Port: 9095,
@@ -572,6 +589,7 @@ common:
access_key_id: test
secret_access_key: test123
s3forcepathstyle: true
+ compactor_address: http://loki-compactor-http-lokistack-dev.default.svc.cluster.local:3100
compactor:
compaction_interval: 2h
working_directory: /tmp/loki/compactor
@@ -789,6 +807,11 @@ overrides:
},
Namespace: "test-ns",
Name: "test",
+ Compactor: Address{
+ FQDN: "loki-compactor-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ Protocol: "http",
+ },
FrontendWorker: Address{
FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
Port: 9095,
@@ -903,6 +926,7 @@ common:
access_key_id: test
secret_access_key: test123
s3forcepathstyle: true
+ compactor_address: http://loki-compactor-http-lokistack-dev.default.svc.cluster.local:3100
compactor:
compaction_interval: 2h
working_directory: /tmp/loki/compactor
@@ -1120,6 +1144,11 @@ overrides:
},
Namespace: "test-ns",
Name: "test",
+ Compactor: Address{
+ FQDN: "loki-compactor-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ Protocol: "http",
+ },
FrontendWorker: Address{
FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
Port: 9095,
@@ -1235,6 +1264,7 @@ common:
access_key_id: test
secret_access_key: test123
s3forcepathstyle: true
+ compactor_address: http://loki-compactor-http-lokistack-dev.default.svc.cluster.local:3100
compactor:
compaction_interval: 2h
working_directory: /tmp/loki/compactor
@@ -1465,6 +1495,11 @@ overrides:
},
Namespace: "test-ns",
Name: "test",
+ Compactor: Address{
+ FQDN: "loki-compactor-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ Protocol: "http",
+ },
FrontendWorker: Address{
FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
Port: 9095,
@@ -1597,6 +1632,7 @@ common:
access_key_id: test
secret_access_key: test123
s3forcepathstyle: true
+ compactor_address: http://loki-compactor-http-lokistack-dev.default.svc.cluster.local:3100
compactor:
compaction_interval: 2h
working_directory: /tmp/loki/compactor
@@ -1827,6 +1863,11 @@ overrides:
FQDN: "loki-index-gateway-grpc-lokistack-dev.default.svc.cluster.local",
Port: 9095,
},
+ Compactor: Address{
+ FQDN: "loki-compactor-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ Protocol: "http",
+ },
StorageDirectory: "/tmp/loki",
MaxConcurrent: MaxConcurrent{
AvailableQuerierCPUCores: 2,
diff --git a/operator/internal/manifests/internal/config/loki-config.yaml b/operator/internal/manifests/internal/config/loki-config.yaml
index 0c19091e79c3d..a288138d83280 100644
--- a/operator/internal/manifests/internal/config/loki-config.yaml
+++ b/operator/internal/manifests/internal/config/loki-config.yaml
@@ -44,6 +44,7 @@ common:
region_name: {{ .Region }}
container_name: {{ .Container }}
{{- end }}
+ compactor_address: {{ .Compactor.Protocol }}://{{ .Compactor.FQDN }}:{{ .Compactor.Port }}
compactor:
compaction_interval: 2h
working_directory: {{ .StorageDirectory }}/compactor
diff --git a/operator/internal/manifests/internal/config/options.go b/operator/internal/manifests/internal/config/options.go
index 708af4f9d8e3d..70f7eb9b0abcc 100644
--- a/operator/internal/manifests/internal/config/options.go
+++ b/operator/internal/manifests/internal/config/options.go
@@ -15,6 +15,7 @@ type Options struct {
Namespace string
Name string
+ Compactor Address
FrontendWorker Address
GossipRing Address
Querier Address
|
operator
|
Move Loki operand from v2.6.1 to main-ec0bf70 (#7298)
|
e7b9455327446a0960967db134d76c4cb11156d7
|
2024-01-11 18:28:40
|
Robert Jacob
|
operator: React to changes in ConfigMap used for storage CA (#11624)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 9ea61a0dba4e5..f6cfa9a5cda01 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [11624](https://github.com/grafana/loki/pull/11624) **xperimental**: React to changes in ConfigMap used for storage CA
- [11481](https://github.com/grafana/loki/pull/11481) **JoaoBraveCoding**: Adds AWS STS support
- [11533](https://github.com/grafana/loki/pull/11533) **periklis**: Add serviceaccount per LokiStack resource
- [11158](https://github.com/grafana/loki/pull/11158) **btaani**: operator: Add warning for old schema configuration
diff --git a/operator/controllers/loki/lokistack_controller.go b/operator/controllers/loki/lokistack_controller.go
index 487390d7287bd..629ee85d5edd7 100644
--- a/operator/controllers/loki/lokistack_controller.go
+++ b/operator/controllers/loki/lokistack_controller.go
@@ -94,12 +94,7 @@ var (
})
createUpdateOrDeletePred = builder.WithPredicates(predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
- if e.ObjectOld.GetGeneration() == 0 && len(e.ObjectOld.GetAnnotations()) == 0 {
- return e.ObjectOld.GetResourceVersion() != e.ObjectNew.GetResourceVersion()
- }
-
- return e.ObjectOld.GetGeneration() != e.ObjectNew.GetGeneration() ||
- cmp.Diff(e.ObjectOld.GetAnnotations(), e.ObjectNew.GetAnnotations()) != ""
+ return e.ObjectOld.GetResourceVersion() != e.ObjectNew.GetResourceVersion()
},
CreateFunc: func(e event.CreateEvent) bool { return true },
DeleteFunc: func(e event.DeleteEvent) bool { return true },
@@ -207,7 +202,8 @@ func (r *LokiStackReconciler) buildController(bld k8s.Builder) error {
Owns(&rbacv1.Role{}, updateOrDeleteOnlyPred).
Owns(&rbacv1.RoleBinding{}, updateOrDeleteOnlyPred).
Watches(&corev1.Service{}, r.enqueueForAlertManagerServices(), createUpdateOrDeletePred).
- Watches(&corev1.Secret{}, r.enqueueForStorageSecret(), createUpdateOrDeletePred)
+ Watches(&corev1.Secret{}, r.enqueueForStorageSecret(), createUpdateOrDeletePred).
+ Watches(&corev1.ConfigMap{}, r.enqueueForStorageCA(), createUpdateOrDeletePred)
if r.FeatureGates.LokiStackAlerts {
bld = bld.Owns(&monitoringv1.PrometheusRule{}, updateOrDeleteOnlyPred)
@@ -324,3 +320,35 @@ func (r *LokiStackReconciler) enqueueForStorageSecret() handler.EventHandler {
return requests
})
}
+
+func (r *LokiStackReconciler) enqueueForStorageCA() handler.EventHandler {
+ return handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {
+ lokiStacks := &lokiv1.LokiStackList{}
+ if err := r.Client.List(ctx, lokiStacks, client.InNamespace(obj.GetNamespace())); err != nil {
+ r.Log.Error(err, "Error listing LokiStack resources for storage CA update")
+ return nil
+ }
+
+ var requests []reconcile.Request
+ for _, stack := range lokiStacks.Items {
+ if stack.Spec.Storage.TLS == nil {
+ continue
+ }
+
+ storageTLS := stack.Spec.Storage.TLS
+ if obj.GetName() != storageTLS.CA {
+ continue
+ }
+
+ requests = append(requests, reconcile.Request{
+ NamespacedName: types.NamespacedName{
+ Namespace: stack.Namespace,
+ Name: stack.Name,
+ },
+ })
+ r.Log.Info("Enqueued request for LokiStack because of Storage CA resource change", "LokiStack", stack.Name, "ConfigMap", obj.GetName())
+ }
+
+ return requests
+ })
+}
diff --git a/operator/controllers/loki/lokistack_controller_test.go b/operator/controllers/loki/lokistack_controller_test.go
index d8eae5a1ec66f..7421b63331b5d 100644
--- a/operator/controllers/loki/lokistack_controller_test.go
+++ b/operator/controllers/loki/lokistack_controller_test.go
@@ -203,8 +203,8 @@ func TestLokiStackController_RegisterWatchedResources(t *testing.T) {
table := []test{
{
src: &openshiftconfigv1.APIServer{},
- index: 2,
- watchesCallsCount: 3,
+ index: 3,
+ watchesCallsCount: 4,
featureGates: configv1.FeatureGates{
OpenShift: configv1.OpenShiftFeatureGates{
ClusterTLSPolicy: true,
@@ -214,8 +214,8 @@ func TestLokiStackController_RegisterWatchedResources(t *testing.T) {
},
{
src: &openshiftconfigv1.Proxy{},
- index: 2,
- watchesCallsCount: 3,
+ index: 3,
+ watchesCallsCount: 4,
featureGates: configv1.FeatureGates{
OpenShift: configv1.OpenShiftFeatureGates{
ClusterProxy: true,
@@ -226,14 +226,21 @@ func TestLokiStackController_RegisterWatchedResources(t *testing.T) {
{
src: &corev1.Service{},
index: 0,
- watchesCallsCount: 2,
+ watchesCallsCount: 3,
featureGates: configv1.FeatureGates{},
pred: createUpdateOrDeletePred,
},
{
src: &corev1.Secret{},
index: 1,
- watchesCallsCount: 2,
+ watchesCallsCount: 3,
+ featureGates: configv1.FeatureGates{},
+ pred: createUpdateOrDeletePred,
+ },
+ {
+ src: &corev1.ConfigMap{},
+ index: 2,
+ watchesCallsCount: 3,
featureGates: configv1.FeatureGates{},
pred: createUpdateOrDeletePred,
},
diff --git a/operator/internal/handlers/internal/storage/ca_configmap.go b/operator/internal/handlers/internal/storage/ca_configmap.go
index ccb4f93d06a34..ce70591e55cfa 100644
--- a/operator/internal/handlers/internal/storage/ca_configmap.go
+++ b/operator/internal/handlers/internal/storage/ca_configmap.go
@@ -1,9 +1,38 @@
package storage
-import corev1 "k8s.io/api/core/v1"
+import (
+ "crypto/sha1"
+ "fmt"
-// IsValidCAConfigMap checks if the given CA configMap has an
-// non-empty entry for the key
-func IsValidCAConfigMap(cm *corev1.ConfigMap, key string) bool {
- return cm.Data[key] != ""
+ corev1 "k8s.io/api/core/v1"
+)
+
+type caKeyError string
+
+func (e caKeyError) Error() string {
+ return fmt.Sprintf("key not present or data empty: %s", string(e))
+}
+
+// CheckCAConfigMap checks if the given CA configMap has an non-empty entry for the key used as CA certificate.
+// If the key is present it will return a hash of the current key name and contents.
+func CheckCAConfigMap(cm *corev1.ConfigMap, key string) (string, error) {
+ data := cm.Data[key]
+ if data == "" {
+ return "", caKeyError(key)
+ }
+
+ h := sha1.New()
+ if _, err := h.Write([]byte(key)); err != nil {
+ return "", err
+ }
+
+ if _, err := h.Write(hashSeparator); err != nil {
+ return "", err
+ }
+
+ if _, err := h.Write([]byte(data)); err != nil {
+ return "", err
+ }
+
+ return fmt.Sprintf("%x", h.Sum(nil)), nil
}
diff --git a/operator/internal/handlers/internal/storage/ca_configmap_test.go b/operator/internal/handlers/internal/storage/ca_configmap_test.go
index 1e164f5a25413..bd3d4d56a690a 100644
--- a/operator/internal/handlers/internal/storage/ca_configmap_test.go
+++ b/operator/internal/handlers/internal/storage/ca_configmap_test.go
@@ -11,9 +11,10 @@ import (
func TestIsValidConfigMap(t *testing.T) {
type test struct {
- name string
- cm *corev1.ConfigMap
- valid bool
+ name string
+ cm *corev1.ConfigMap
+ wantHash string
+ wantErrorMsg string
}
table := []test{
{
@@ -23,11 +24,13 @@ func TestIsValidConfigMap(t *testing.T) {
"service-ca.crt": "has-some-data",
},
},
- valid: true,
+ wantHash: "de6ae206d4920549d21c24ad9721e87a9b1ec7dc",
+ wantErrorMsg: "",
},
{
- name: "missing `service-ca.crt` key",
- cm: &corev1.ConfigMap{},
+ name: "missing `service-ca.crt` key",
+ cm: &corev1.ConfigMap{},
+ wantErrorMsg: "key not present or data empty: service-ca.crt",
},
{
name: "missing CA content",
@@ -36,6 +39,7 @@ func TestIsValidConfigMap(t *testing.T) {
"service-ca.crt": "",
},
},
+ wantErrorMsg: "key not present or data empty: service-ca.crt",
},
}
for _, tst := range table {
@@ -43,8 +47,14 @@ func TestIsValidConfigMap(t *testing.T) {
t.Run(tst.name, func(t *testing.T) {
t.Parallel()
- ok := storage.IsValidCAConfigMap(tst.cm, "service-ca.crt")
- require.Equal(t, tst.valid, ok)
+ hash, err := storage.CheckCAConfigMap(tst.cm, "service-ca.crt")
+
+ require.Equal(t, tst.wantHash, hash)
+ if tst.wantErrorMsg == "" {
+ require.NoError(t, err)
+ } else {
+ require.EqualError(t, err, tst.wantErrorMsg)
+ }
})
}
}
diff --git a/operator/internal/handlers/lokistack_create_or_update.go b/operator/internal/handlers/lokistack_create_or_update.go
index 49c84af4dcf4b..a6963f7574321 100644
--- a/operator/internal/handlers/lokistack_create_or_update.go
+++ b/operator/internal/handlers/lokistack_create_or_update.go
@@ -134,14 +134,17 @@ func CreateOrUpdateLokiStack(
caKey = tlsConfig.CAKey
}
- if !storage.IsValidCAConfigMap(&cm, caKey) {
+ var caHash string
+ caHash, err = storage.CheckCAConfigMap(&cm, caKey)
+ if err != nil {
return &status.DegradedError{
- Message: "Invalid object storage CA configmap contents: missing key or no contents",
+ Message: fmt.Sprintf("Invalid object storage CA configmap contents: %s", err),
Reason: lokiv1.ReasonInvalidObjectStorageCAConfigMap,
Requeue: false,
}
}
+ objStore.SecretSHA1 = fmt.Sprintf("%s;%s", objStore.SecretSHA1, caHash)
objStore.TLS = &storageoptions.TLSConfig{CA: cm.Name, Key: caKey}
}
diff --git a/operator/internal/handlers/lokistack_create_or_update_test.go b/operator/internal/handlers/lokistack_create_or_update_test.go
index 79928b4a82e50..b2158fe4d2ba2 100644
--- a/operator/internal/handlers/lokistack_create_or_update_test.go
+++ b/operator/internal/handlers/lokistack_create_or_update_test.go
@@ -997,7 +997,7 @@ func TestCreateOrUpdateLokiStack_WhenInvalidCAConfigMap_SetDegraded(t *testing.T
}
degradedErr := &status.DegradedError{
- Message: "Invalid object storage CA configmap contents: missing key or no contents",
+ Message: "Invalid object storage CA configmap contents: key not present or data empty: service-ca.crt",
Reason: lokiv1.ReasonInvalidObjectStorageCAConfigMap,
Requeue: false,
}
|
operator
|
React to changes in ConfigMap used for storage CA (#11624)
|
cecadf2a18639ff22a305e8658bca2084d229e0b
|
2022-01-05 21:44:28
|
Kaviraj
|
chore: Remove `cortex/util/test` dependency (#5050)
| false
|
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index 77b7a93975664..f39da9d93541f 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -11,7 +11,6 @@ import (
"testing"
"time"
- "github.com/cortexproject/cortex/pkg/util/test"
"github.com/go-kit/log"
"github.com/grafana/dskit/flagext"
"github.com/grafana/dskit/kv"
@@ -33,6 +32,7 @@ import (
"github.com/grafana/loki/pkg/runtime"
fe "github.com/grafana/loki/pkg/util/flagext"
loki_net "github.com/grafana/loki/pkg/util/net"
+ "github.com/grafana/loki/pkg/util/test"
"github.com/grafana/loki/pkg/validation"
)
diff --git a/pkg/ruler/registry_test.go b/pkg/ruler/registry_test.go
index 5493d1afbc998..1244638c9f925 100644
--- a/pkg/ruler/registry_test.go
+++ b/pkg/ruler/registry_test.go
@@ -10,7 +10,6 @@ import (
"testing"
"time"
- "github.com/cortexproject/cortex/pkg/util/test"
"github.com/go-kit/log"
promConfig "github.com/prometheus/common/config"
"github.com/prometheus/common/model"
@@ -22,6 +21,7 @@ import (
"github.com/grafana/loki/pkg/ruler/storage/instance"
"github.com/grafana/loki/pkg/ruler/util"
+ "github.com/grafana/loki/pkg/util/test"
"github.com/grafana/loki/pkg/validation"
)
diff --git a/pkg/ruler/storage/instance/instance_test.go b/pkg/ruler/storage/instance/instance_test.go
index 413e2449985ef..c59b7b908834a 100644
--- a/pkg/ruler/storage/instance/instance_test.go
+++ b/pkg/ruler/storage/instance/instance_test.go
@@ -13,7 +13,6 @@ import (
"testing"
"time"
- "github.com/cortexproject/cortex/pkg/util/test"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
@@ -23,6 +22,8 @@ import (
"github.com/prometheus/prometheus/storage"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/pkg/util/test"
)
func TestConfig_Unmarshal_Defaults(t *testing.T) {
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/test/poll.go b/pkg/util/test/poll.go
similarity index 100%
rename from vendor/github.com/cortexproject/cortex/pkg/util/test/poll.go
rename to pkg/util/test/poll.go
diff --git a/vendor/modules.txt b/vendor/modules.txt
index bc3c1835a6d6d..7af39b6dbe50c 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -302,7 +302,6 @@ github.com/cortexproject/cortex/pkg/util/limiter
github.com/cortexproject/cortex/pkg/util/log
github.com/cortexproject/cortex/pkg/util/math
github.com/cortexproject/cortex/pkg/util/spanlogger
-github.com/cortexproject/cortex/pkg/util/test
github.com/cortexproject/cortex/pkg/util/validation
github.com/cortexproject/cortex/tools/querytee
# github.com/cristalhq/hedgedhttp v0.7.0
|
chore
|
Remove `cortex/util/test` dependency (#5050)
|
69d6ee293ee0b8c9de85847a3b0394149615d7f0
|
2023-03-31 17:15:35
|
Sandeep Sukhani
|
tsdb: use unique uploader name while building tsdb index filename, same as boltdb-shipper (#8975)
| false
|
diff --git a/pkg/storage/store_test.go b/pkg/storage/store_test.go
index 71a53fb983c0c..7bbae8d250e2e 100644
--- a/pkg/storage/store_test.go
+++ b/pkg/storage/store_test.go
@@ -1496,7 +1496,7 @@ func TestStore_BoltdbTsdbSameIndexPrefix(t *testing.T) {
tsdbFiles, err := os.ReadDir(filepath.Join(cfg.FSConfig.Directory, "index", indexTables[1].Name()))
require.NoError(t, err)
require.Len(t, tsdbFiles, 1)
- require.Regexp(t, regexp.MustCompile(fmt.Sprintf(`\d{10}-%s\.tsdb\.gz`, ingesterName)), tsdbFiles[0].Name())
+ require.Regexp(t, regexp.MustCompile(fmt.Sprintf(`\d{10}-%s-\d{19}\.tsdb\.gz`, ingesterName)), tsdbFiles[0].Name())
store, err = NewStore(cfg, config.ChunkStoreConfig{}, schemaConfig, limits, cm, nil, util_log.Logger)
require.NoError(t, err)
diff --git a/pkg/storage/stores/indexshipper/shipper.go b/pkg/storage/stores/indexshipper/shipper.go
index f45ccc45f1dc7..262f3ddccecd0 100644
--- a/pkg/storage/stores/indexshipper/shipper.go
+++ b/pkg/storage/stores/indexshipper/shipper.go
@@ -4,6 +4,8 @@ import (
"context"
"flag"
"fmt"
+ "os"
+ "path"
"sync"
"time"
@@ -12,6 +14,7 @@ import (
"golang.org/x/sync/errgroup"
"github.com/grafana/loki/pkg/storage/chunk/client"
+ "github.com/grafana/loki/pkg/storage/chunk/client/util"
"github.com/grafana/loki/pkg/storage/config"
"github.com/grafana/loki/pkg/storage/stores/indexshipper/downloads"
"github.com/grafana/loki/pkg/storage/stores/indexshipper/gatewayclient"
@@ -96,6 +99,35 @@ func (cfg *Config) Validate() error {
return storage.ValidateSharedStoreKeyPrefix(cfg.SharedStoreKeyPrefix)
}
+// GetUniqueUploaderName builds a unique uploader name using IngesterName + `-` + <nanosecond-timestamp>.
+// The name is persisted in the configured ActiveIndexDirectory and reused when already exists.
+func (cfg *Config) GetUniqueUploaderName() (string, error) {
+ uploader := fmt.Sprintf("%s-%d", cfg.IngesterName, time.Now().UnixNano())
+
+ uploaderFilePath := path.Join(cfg.ActiveIndexDirectory, "uploader", "name")
+ if err := util.EnsureDirectory(path.Dir(uploaderFilePath)); err != nil {
+ return "", err
+ }
+
+ _, err := os.Stat(uploaderFilePath)
+ if err != nil {
+ if !os.IsNotExist(err) {
+ return "", err
+ }
+ if err := os.WriteFile(uploaderFilePath, []byte(uploader), 0o666); err != nil {
+ return "", err
+ }
+ } else {
+ ub, err := os.ReadFile(uploaderFilePath)
+ if err != nil {
+ return "", err
+ }
+ uploader = string(ub)
+ }
+
+ return uploader, nil
+}
+
type indexShipper struct {
cfg Config
openIndexFileFunc index.OpenIndexFileFunc
diff --git a/pkg/storage/stores/shipper/shipper_index_client.go b/pkg/storage/stores/shipper/shipper_index_client.go
index 42a1020393082..e8f6f3b2f4d78 100644
--- a/pkg/storage/stores/shipper/shipper_index_client.go
+++ b/pkg/storage/stores/shipper/shipper_index_client.go
@@ -4,10 +4,7 @@ import (
"context"
"flag"
"fmt"
- "os"
- "path"
"sync"
- "time"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
@@ -16,7 +13,6 @@ import (
"github.com/grafana/loki/pkg/storage/chunk/client"
"github.com/grafana/loki/pkg/storage/chunk/client/local"
- "github.com/grafana/loki/pkg/storage/chunk/client/util"
"github.com/grafana/loki/pkg/storage/config"
"github.com/grafana/loki/pkg/storage/stores/indexshipper"
"github.com/grafana/loki/pkg/storage/stores/indexshipper/downloads"
@@ -90,7 +86,7 @@ func (i *indexClient) init(storageClient client.ObjectClient, limits downloads.L
}
if i.cfg.Mode != indexshipper.ModeReadOnly {
- uploader, err := i.getUploaderName()
+ uploader, err := i.cfg.GetUniqueUploaderName()
if err != nil {
return err
}
@@ -112,33 +108,6 @@ func (i *indexClient) init(storageClient client.ObjectClient, limits downloads.L
return nil
}
-func (i *indexClient) getUploaderName() (string, error) {
- uploader := fmt.Sprintf("%s-%d", i.cfg.IngesterName, time.Now().UnixNano())
-
- uploaderFilePath := path.Join(i.cfg.ActiveIndexDirectory, "uploader", "name")
- if err := util.EnsureDirectory(path.Dir(uploaderFilePath)); err != nil {
- return "", err
- }
-
- _, err := os.Stat(uploaderFilePath)
- if err != nil {
- if !os.IsNotExist(err) {
- return "", err
- }
- if err := os.WriteFile(uploaderFilePath, []byte(uploader), 0o666); err != nil {
- return "", err
- }
- } else {
- ub, err := os.ReadFile(uploaderFilePath)
- if err != nil {
- return "", err
- }
- uploader = string(ub)
- }
-
- return uploader, nil
-}
-
func (i *indexClient) Stop() {
i.stopOnce.Do(i.stop)
}
diff --git a/pkg/storage/stores/tsdb/identifier.go b/pkg/storage/stores/tsdb/identifier.go
index ab148dacd2999..749c431986e33 100644
--- a/pkg/storage/stores/tsdb/identifier.go
+++ b/pkg/storage/stores/tsdb/identifier.go
@@ -67,6 +67,7 @@ type SingleTenantTSDBIdentifier struct {
Checksum uint32
}
+// str builds filename with format <file-creation-ts> + `-` + `compactor` + `-` + <oldest-chunk-start-ts> + `-` + <latest-chunk-end-ts> `-` + <index-checksum>
func (i SingleTenantTSDBIdentifier) str() string {
return fmt.Sprintf(
"%d-%s-%d-%d-%x.tsdb",
@@ -138,6 +139,7 @@ type MultitenantTSDBIdentifier struct {
ts time.Time
}
+// Name builds filename with format <file-creation-ts> + `-` + `<nodeName>
func (id MultitenantTSDBIdentifier) Name() string {
return fmt.Sprintf("%d-%s.tsdb", id.ts.Unix(), id.nodeName)
}
diff --git a/pkg/storage/stores/tsdb/store.go b/pkg/storage/stores/tsdb/store.go
index f9c0f8576dff0..67ca42f3b34c4 100644
--- a/pkg/storage/stores/tsdb/store.go
+++ b/pkg/storage/stores/tsdb/store.go
@@ -133,11 +133,11 @@ func (s *store) init(indexShipperCfg indexshipper.Config, objectClient client.Ob
}
if indexShipperCfg.Mode != indexshipper.ModeReadOnly {
-
- var (
- nodeName = indexShipperCfg.IngesterName
- dir = indexShipperCfg.ActiveIndexDirectory
- )
+ dir := indexShipperCfg.ActiveIndexDirectory
+ nodeName, err := indexShipperCfg.GetUniqueUploaderName()
+ if err != nil {
+ return err
+ }
tsdbMetrics := NewMetrics(reg)
tsdbManager := NewTSDBManager(
|
tsdb
|
use unique uploader name while building tsdb index filename, same as boltdb-shipper (#8975)
|
2a2b52808c508be96ff61bdde2d6740a741fb053
|
2025-01-22 03:09:56
|
Owen Diehl
|
perf(ingester): refactor lock acquisitions related to `not_owned` series limit functionality (#15839)
| false
|
diff --git a/pkg/ingester/instance.go b/pkg/ingester/instance.go
index c6afcacfbdfde..80905bff23505 100644
--- a/pkg/ingester/instance.go
+++ b/pkg/ingester/instance.go
@@ -1183,7 +1183,7 @@ func (i *instance) updateOwnedStreams(isOwnedStream func(*stream) (bool, error))
}()
var err error
- i.streams.WithLock(func() {
+ i.streams.WithRLock(func() {
i.ownedStreamsSvc.resetStreamCounts()
err = i.streams.ForEach(func(s *stream) (bool, error) {
ownedStream, err := isOwnedStream(s)
diff --git a/pkg/ingester/limiter_test.go b/pkg/ingester/limiter_test.go
index b611db4d109e1..78e579187a502 100644
--- a/pkg/ingester/limiter_test.go
+++ b/pkg/ingester/limiter_test.go
@@ -130,7 +130,7 @@ func TestStreamCountLimiter_AssertNewStreamAllowed(t *testing.T) {
ownedStreamSvc := &ownedStreamService{
fixedLimit: atomic.NewInt32(testData.fixedLimit),
- ownedStreamCount: testData.ownedStreamCount,
+ ownedStreamCount: atomic.NewInt64(int64(testData.ownedStreamCount)),
}
strategy := &fixedStrategy{localLimit: testData.calculatedLocalLimit}
limiter := NewLimiter(limits, NilMetrics, strategy, &TenantBasedStrategy{limits: limits})
diff --git a/pkg/ingester/metrics.go b/pkg/ingester/metrics.go
index ff4db43747676..5f144038bc094 100644
--- a/pkg/ingester/metrics.go
+++ b/pkg/ingester/metrics.go
@@ -318,8 +318,8 @@ func newIngesterMetrics(r prometheus.Registerer, metricsNamespace string) *inges
Namespace: constants.Loki,
Name: "ingester_streams_ownership_check_duration_ms",
Help: "Distribution of streams ownership check durations in milliseconds.",
- // 100ms to 5s.
- Buckets: []float64{100, 250, 350, 500, 750, 1000, 1500, 2000, 5000},
+ // 1ms -> 16s
+ Buckets: prometheus.ExponentialBuckets(1, 4, 8),
}),
duplicateLogBytesTotal: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
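The histogram change above swaps hand-picked buckets for exponentially spaced ones. A small sketch of the series `prometheus.ExponentialBuckets(1, 4, 8)` produces, reimplemented here without the client library:

```go
package main

import "fmt"

// expBuckets reproduces the shape of prometheus.ExponentialBuckets:
// `count` buckets starting at `start`, each `factor` times the previous.
func expBuckets(start, factor float64, count int) []float64 {
	buckets := make([]float64, count)
	for i := range buckets {
		buckets[i] = start
		start *= factor
	}
	return buckets
}

func main() {
	// 1ms, 4ms, 16ms, ..., 16384ms (~16s) — matching the "1ms -> 16s" comment.
	fmt.Println(expBuckets(1, 4, 8))
}
```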
diff --git a/pkg/ingester/owned_streams.go b/pkg/ingester/owned_streams.go
index 3bb729815e718..56c5a77fa768e 100644
--- a/pkg/ingester/owned_streams.go
+++ b/pkg/ingester/owned_streams.go
@@ -21,17 +21,18 @@ type ownedStreamService struct {
tenantID string
limiter *Limiter
fixedLimit *atomic.Int32
- ownedStreamCount int
+ ownedStreamCount *atomic.Int64
lock sync.RWMutex
notOwnedStreams map[model.Fingerprint]any
}
func newOwnedStreamService(tenantID string, limiter *Limiter) *ownedStreamService {
svc := &ownedStreamService{
- tenantID: tenantID,
- limiter: limiter,
- fixedLimit: atomic.NewInt32(0),
- notOwnedStreams: make(map[model.Fingerprint]any),
+ tenantID: tenantID,
+ limiter: limiter,
+ fixedLimit: atomic.NewInt32(0),
+ ownedStreamCount: atomic.NewInt64(0),
+ notOwnedStreams: make(map[model.Fingerprint]any),
}
svc.updateFixedLimit()
@@ -39,9 +40,7 @@ func newOwnedStreamService(tenantID string, limiter *Limiter) *ownedStreamServic
}
func (s *ownedStreamService) getOwnedStreamCount() int {
- s.lock.RLock()
- defer s.lock.RUnlock()
- return s.ownedStreamCount
+ return int(s.ownedStreamCount.Load())
}
func (s *ownedStreamService) updateFixedLimit() (old, new int32) {
@@ -55,12 +54,15 @@ func (s *ownedStreamService) getFixedLimit() int {
}
func (s *ownedStreamService) trackStreamOwnership(fp model.Fingerprint, owned bool) {
- s.lock.Lock()
- defer s.lock.Unlock()
+ // only need to inc the owned count; can use sync atomics.
if owned {
- s.ownedStreamCount++
+ s.ownedStreamCount.Inc()
return
}
+
+ // need to update map; lock required
+ s.lock.Lock()
+ defer s.lock.Unlock()
notOwnedStreamsMetric.Inc()
s.notOwnedStreams[fp] = nil
}
@@ -74,13 +76,13 @@ func (s *ownedStreamService) trackRemovedStream(fp model.Fingerprint) {
delete(s.notOwnedStreams, fp)
return
}
- s.ownedStreamCount--
+ s.ownedStreamCount.Dec()
}
func (s *ownedStreamService) resetStreamCounts() {
s.lock.Lock()
defer s.lock.Unlock()
- s.ownedStreamCount = 0
+ s.ownedStreamCount.Store(0)
notOwnedStreamsMetric.Sub(float64(len(s.notOwnedStreams)))
s.notOwnedStreams = make(map[model.Fingerprint]any)
}
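The refactor above keeps the owned-stream count in an atomic so the hot increment path skips the mutex entirely, while the not-owned map still requires the lock for mutation. A stripped-down sketch of that split (the type and field names here are illustrative, not the actual ingester types):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// streamCounts separates the lock-free owned counter from the
// mutex-guarded not-owned set, mirroring the pattern in the diff.
type streamCounts struct {
	owned    atomic.Int64
	mu       sync.Mutex
	notOwned map[uint64]struct{}
}

func (c *streamCounts) track(fp uint64, isOwned bool) {
	if isOwned {
		c.owned.Add(1) // hot path: no lock needed
		return
	}
	c.mu.Lock() // map mutation still requires the lock
	defer c.mu.Unlock()
	c.notOwned[fp] = struct{}{}
}

func main() {
	c := &streamCounts{notOwned: make(map[uint64]struct{})}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(fp uint64) {
			defer wg.Done()
			c.track(fp, fp%10 != 0) // every 10th stream is not owned
		}(uint64(i))
	}
	wg.Wait()
	fmt.Println(c.owned.Load(), len(c.notOwned)) // 90 10
}
```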
diff --git a/pkg/ingester/recalculate_owned_streams_test.go b/pkg/ingester/recalculate_owned_streams_test.go
index 3e531dcdef66f..f3bea57f69bae 100644
--- a/pkg/ingester/recalculate_owned_streams_test.go
+++ b/pkg/ingester/recalculate_owned_streams_test.go
@@ -37,7 +37,7 @@ func Test_recalculateOwnedStreams_newRecalculateOwnedStreamsIngester(t *testing.
func Test_recalculateOwnedStreams_recalculateWithIngesterStrategy(t *testing.T) {
tests := map[string]struct {
featureEnabled bool
- expectedOwnedStreamCount int
+ expectedOwnedStreamCount int64
expectedNotOwnedStreamCount int
}{
"expected streams ownership to be recalculated": {
@@ -101,7 +101,7 @@ func Test_recalculateOwnedStreams_recalculateWithIngesterStrategy(t *testing.T)
mockRing.addMapping(createStream(t, tenant, 100), true)
mockRing.addMapping(createStream(t, tenant, 250), true)
- require.Equal(t, 7, tenant.ownedStreamsSvc.ownedStreamCount)
+ require.Equal(t, int64(7), tenant.ownedStreamsSvc.ownedStreamCount.Load())
require.Len(t, tenant.ownedStreamsSvc.notOwnedStreams, 0)
mockTenantsSupplier := &mockTenantsSuplier{tenants: []*instance{tenant}}
@@ -116,7 +116,7 @@ func Test_recalculateOwnedStreams_recalculateWithIngesterStrategy(t *testing.T)
if testData.featureEnabled {
require.Equal(t, 50, tenant.ownedStreamsSvc.getFixedLimit(), "fixed limit must be updated after recalculation")
}
- require.Equal(t, testData.expectedOwnedStreamCount, tenant.ownedStreamsSvc.ownedStreamCount)
+ require.Equal(t, testData.expectedOwnedStreamCount, tenant.ownedStreamsSvc.ownedStreamCount.Load())
require.Len(t, tenant.ownedStreamsSvc.notOwnedStreams, testData.expectedNotOwnedStreamCount)
})
}
|
perf
|
refactor lock acquisitions related to `not_owned` series limit functionality (#15839)
|
fd627f2a49d94b29091b12e461acb58335363d9e
|
2022-05-24 17:29:55
|
Periklis Tsirakidis
|
operator: Add rules support (#5986)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index d2910c12a2e59..1a72b2e1e056c 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -2,6 +2,7 @@
- [6199](https://github.com/grafana/loki/pull/6199) **Red-GV**: Update GCP secret volume path
- [6125](https://github.com/grafana/loki/pull/6125) **sasagarw**: Add method to get authenticated from GCP
+- [5986](https://github.com/grafana/loki/pull/5986) **periklis**: Add support for Loki Rules reconciliation
- [5987](https://github.com/grafana/loki/pull/5987) **Red-GV**: Update logerr to v2.0.0
- [5907](https://github.com/grafana/loki/pull/5907) **xperimental**: Do not include non-static labels in pod selectors
- [5893](https://github.com/grafana/loki/pull/5893) **periklis**: Align PVC storage size requests for all lokistack t-shirt sizes
diff --git a/operator/Makefile b/operator/Makefile
index 1eb02b06d5f9f..60a78cbae7eae 100644
--- a/operator/Makefile
+++ b/operator/Makefile
@@ -103,7 +103,7 @@ help: ## Display this help.
.PHONY: deps
deps: go.mod go.sum
- go mod tidy
+ go mod tidy -compat=1.17
go mod download
go mod verify
diff --git a/operator/PROJECT b/operator/PROJECT
index 29cbcc3a9d702..208dd422dbc0c 100644
--- a/operator/PROJECT
+++ b/operator/PROJECT
@@ -5,10 +5,10 @@ plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
projectName: loki-operator
-repo: github.com/grafana/loki
+repo: github.com/grafana/loki/operator
resources:
- api:
- crdVersion: v1beta1
+ crdVersion: v1
namespaced: true
controller: true
domain: grafana.com
@@ -16,4 +16,28 @@ resources:
kind: LokiStack
path: github.com/grafana/loki/operator/api/v1beta1
version: v1beta1
+- api:
+ crdVersion: v1
+ namespaced: true
+ controller: true
+ domain: grafana.com
+ group: loki
+ kind: AlertingRule
+ path: github.com/grafana/loki/operator/api/v1beta1
+ version: v1beta1
+ webhooks:
+ validation: true
+ webhookVersion: v1
+- api:
+ crdVersion: v1
+ namespaced: true
+ controller: true
+ domain: grafana.com
+ group: loki
+ kind: RecordingRule
+ path: github.com/grafana/loki/operator/api/v1beta1
+ version: v1beta1
+ webhooks:
+ validation: true
+ webhookVersion: v1
version: "3"
diff --git a/operator/api/v1beta1/alertingrule_types.go b/operator/api/v1beta1/alertingrule_types.go
new file mode 100644
index 0000000000000..6881c1d06aef7
--- /dev/null
+++ b/operator/api/v1beta1/alertingrule_types.go
@@ -0,0 +1,133 @@
+package v1beta1
+
+import (
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// AlertingRuleSpec defines the desired state of AlertingRule
+type AlertingRuleSpec struct {
+ // TenantID of tenant where the alerting rules are evaluated in.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Tenant ID"
+ TenantID string `json:"tenantID"`
+
+ // List of groups for alerting rules.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Groups"
+ Groups []*AlertingRuleGroup `json:"groups"`
+}
+
+// AlertingRuleGroup defines a group of Loki alerting rules.
+type AlertingRuleGroup struct {
+ // Name of the alerting rule group. Must be unique within all alerting rules.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Name"
+ Name string `json:"name"`
+
+ // Interval defines the time interval between evaluation of the given
+ // alerting rule.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +kubebuilder:default:="1m"
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Evaluation Interval"
+ Interval PrometheusDuration `json:"interval"`
+
+ // Limit defines the number of alerts an alerting rule can produce. 0 is no limit.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,xDescriptors="urn:alm:descriptor:com.tectonic.ui:number",displayName="Limit of firing alerts"
+ Limit int32 `json:"limit,omitempty"`
+
+ // Rules defines a list of alerting rules
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Rules"
+ Rules []*AlertingRuleGroupSpec `json:"rules"`
+}
+
+// AlertingRuleGroupSpec defines the spec for a Loki alerting rule.
+type AlertingRuleGroupSpec struct {
+ // The name of the alert. Must be a valid label value.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Name"
+ Alert string `json:"alert,omitempty"`
+
+ // The LogQL expression to evaluate. Every evaluation cycle this is
+ // evaluated at the current time, and all resultant time series become
+ // pending/firing alerts.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="LogQL Expression"
+ Expr string `json:"expr"`
+
+ // Alerts are considered firing once they have been returned for this long.
+ // Alerts which have not yet fired for long enough are considered pending.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Firing Threshold"
+ For PrometheusDuration `json:"for,omitempty"`
+
+ // Annotations to add to each alert.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Annotations"
+ Annotations map[string]string `json:"annotations,omitempty"`
+
+ // Labels to add to each alert.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Labels"
+ Labels map[string]string `json:"labels,omitempty"`
+}
+
+// AlertingRuleStatus defines the observed state of AlertingRule
+type AlertingRuleStatus struct {
+ // Conditions of the AlertingRule generation health.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=status,xDescriptors="urn:alm:descriptor:io.kubernetes.conditions"
+ Conditions []metav1.Condition `json:"conditions,omitempty"`
+}
+
+//+kubebuilder:object:root=true
+//+kubebuilder:subresource:status
+
+// AlertingRule is the Schema for the alertingrules API
+//
+// +operator-sdk:csv:customresourcedefinitions:displayName="AlertingRule",resources={{LokiStack,v1beta1}}
+type AlertingRule struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ObjectMeta `json:"metadata,omitempty"`
+
+ Spec AlertingRuleSpec `json:"spec,omitempty"`
+ Status AlertingRuleStatus `json:"status,omitempty"`
+}
+
+//+kubebuilder:object:root=true
+
+// AlertingRuleList contains a list of AlertingRule
+type AlertingRuleList struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ListMeta `json:"metadata,omitempty"`
+ Items []AlertingRule `json:"items"`
+}
+
+func init() {
+ SchemeBuilder.Register(&AlertingRule{}, &AlertingRuleList{})
+}
diff --git a/operator/api/v1beta1/alertingrule_webhook.go b/operator/api/v1beta1/alertingrule_webhook.go
new file mode 100644
index 0000000000000..0bfdf87ecdbd2
--- /dev/null
+++ b/operator/api/v1beta1/alertingrule_webhook.go
@@ -0,0 +1,105 @@
+package v1beta1
+
+import (
+ "github.com/grafana/loki/pkg/logql/syntax"
+
+ "github.com/prometheus/common/model"
+
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/util/validation/field"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/webhook"
+)
+
+// SetupWebhookWithManager registers the AlertingRuleWebhook to the controller-runtime manager
+// or returns an error.
+func (r *AlertingRule) SetupWebhookWithManager(mgr ctrl.Manager) error {
+ return ctrl.NewWebhookManagedBy(mgr).
+ For(r).
+ Complete()
+}
+
+//+kubebuilder:webhook:path=/validate-loki-grafana-com-v1beta1-alertingrule,mutating=false,failurePolicy=fail,sideEffects=None,groups=loki.grafana.com,resources=alertingrules,verbs=create;update,versions=v1beta1,name=valertingrule.kb.io,admissionReviewVersions=v1
+
+var _ webhook.Validator = &AlertingRule{}
+
+// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
+func (r *AlertingRule) ValidateCreate() error {
+ return r.validate()
+}
+
+// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
+func (r *AlertingRule) ValidateUpdate(_ runtime.Object) error {
+ return r.validate()
+}
+
+// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
+func (r *AlertingRule) ValidateDelete() error {
+ // Do nothing
+ return nil
+}
+
+func (r *AlertingRule) validate() error {
+ var allErrs field.ErrorList
+
+ found := make(map[string]bool)
+
+ for i, g := range r.Spec.Groups {
+ // Check for group name uniqueness
+ if found[g.Name] {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Name"),
+ g.Name,
+ ErrGroupNamesNotUnique.Error(),
+ ))
+ }
+
+ found[g.Name] = true
+
+ // Check if rule evaluation period is a valid PromQL duration
+ _, err := model.ParseDuration(string(g.Interval))
+ if err != nil {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Interval"),
+ g.Interval,
+ ErrParseEvaluationInterval.Error(),
+ ))
+ }
+
+ for j, r := range g.Rules {
+ // Check if alert for period is a valid PromQL duration
+ if r.Alert != "" {
+ _, err := model.ParseDuration(string(r.For))
+ if err != nil {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Rules").Index(j).Child("For"),
+ r.For,
+ ErrParseAlertForPeriod.Error(),
+ ))
+ }
+ }
+
+ // Check if the LogQL parser can parse the rule expression
+ _, err := syntax.ParseExpr(r.Expr)
+ if err != nil {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Rules").Index(j).Child("Expr"),
+ r.Expr,
+ ErrParseLogQLExpression.Error(),
+ ))
+ }
+ }
+ }
+
+ if len(allErrs) == 0 {
+ return nil
+ }
+
+ return apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "AlertingRule"},
+ r.Name,
+ allErrs,
+ )
+}
diff --git a/operator/api/v1beta1/alertingrule_webhook_test.go b/operator/api/v1beta1/alertingrule_webhook_test.go
new file mode 100644
index 0000000000000..699d52ef9536a
--- /dev/null
+++ b/operator/api/v1beta1/alertingrule_webhook_test.go
@@ -0,0 +1,233 @@
+package v1beta1_test
+
+import (
+ "testing"
+
+ "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/stretchr/testify/require"
+
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/util/validation/field"
+)
+
+var att = []struct {
+ desc string
+ spec v1beta1.AlertingRuleSpec
+ err *apierrors.StatusError
+}{
+ {
+ desc: "valid spec",
+ spec: v1beta1.AlertingRuleSpec{
+ Groups: []*v1beta1.AlertingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Limit: 10,
+ Rules: []*v1beta1.AlertingRuleGroupSpec{
+ {
+ Alert: "first-alert",
+ For: v1beta1.PrometheusDuration("10m"),
+ Expr: `sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)`,
+ Annotations: map[string]string{
+ "annot": "something",
+ },
+ Labels: map[string]string{
+ "severity": "critical",
+ },
+ },
+ {
+ Alert: "second-alert",
+ For: v1beta1.PrometheusDuration("10m"),
+ Expr: `sum(rate({app="foo", env="stage"} |= "error" [5m])) by (job)`,
+ Annotations: map[string]string{
+ "env": "something",
+ },
+ Labels: map[string]string{
+ "severity": "warning",
+ },
+ },
+ },
+ },
+ {
+ Name: "second",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Limit: 10,
+ Rules: []*v1beta1.AlertingRuleGroupSpec{
+ {
+ Alert: "third-alert",
+ For: v1beta1.PrometheusDuration("10m"),
+ Expr: `sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)`,
+ Annotations: map[string]string{
+ "annot": "something",
+ },
+ Labels: map[string]string{
+ "severity": "critical",
+ },
+ },
+ {
+ Alert: "fourth-alert",
+ For: v1beta1.PrometheusDuration("10m"),
+ Expr: `sum(rate({app="foo", env="stage"} |= "error" [5m])) by (job)`,
+ Annotations: map[string]string{
+ "env": "something",
+ },
+ Labels: map[string]string{
+ "severity": "warning",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ {
+ desc: "not unique group names",
+ spec: v1beta1.AlertingRuleSpec{
+ Groups: []*v1beta1.AlertingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ },
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "AlertingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(1).Child("Name"),
+ "first",
+ v1beta1.ErrGroupNamesNotUnique.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "parse eval interval err",
+ spec: v1beta1.AlertingRuleSpec{
+ Groups: []*v1beta1.AlertingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1mo"),
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "AlertingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(0).Child("Interval"),
+ "1mo",
+ v1beta1.ErrParseEvaluationInterval.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "parse for interval err",
+ spec: v1beta1.AlertingRuleSpec{
+ Groups: []*v1beta1.AlertingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Rules: []*v1beta1.AlertingRuleGroupSpec{
+ {
+ Alert: "an-alert",
+ For: v1beta1.PrometheusDuration("10years"),
+ Expr: `sum(rate({label="value"}[1m]))`,
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "AlertingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(0).Child("Rules").Index(0).Child("For"),
+ "10years",
+ v1beta1.ErrParseAlertForPeriod.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "parse LogQL expression err",
+ spec: v1beta1.AlertingRuleSpec{
+ Groups: []*v1beta1.AlertingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Rules: []*v1beta1.AlertingRuleGroupSpec{
+ {
+ Expr: "this is not a valid expression",
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "AlertingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(0).Child("Rules").Index(0).Child("Expr"),
+ "this is not a valid expression",
+ v1beta1.ErrParseLogQLExpression.Error(),
+ ),
+ },
+ ),
+ },
+}
+
+func TestAlertingRuleValidationWebhook_ValidateCreate(t *testing.T) {
+ for _, tc := range att {
+ tc := tc
+ t.Run(tc.desc, func(t *testing.T) {
+ t.Parallel()
+ l := v1beta1.AlertingRule{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "testing-rule",
+ },
+ Spec: tc.spec,
+ }
+
+ err := l.ValidateCreate()
+ if err != nil {
+ require.Equal(t, tc.err, err)
+ } else {
+ require.NoError(t, err)
+ }
+ })
+ }
+}
+
+func TestAlertingRuleValidationWebhook_ValidateUpdate(t *testing.T) {
+ for _, tc := range att {
+ tc := tc
+ t.Run(tc.desc, func(t *testing.T) {
+ t.Parallel()
+ l := v1beta1.AlertingRule{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "testing-rule",
+ },
+ Spec: tc.spec,
+ }
+
+ err := l.ValidateUpdate(&v1beta1.AlertingRule{})
+ if err != nil {
+ require.Equal(t, tc.err, err)
+ } else {
+ require.NoError(t, err)
+ }
+ })
+ }
+}
diff --git a/operator/api/v1beta1/lokistack_types.go b/operator/api/v1beta1/lokistack_types.go
index c71597f74a814..2563fb2c6169e 100644
--- a/operator/api/v1beta1/lokistack_types.go
+++ b/operator/api/v1beta1/lokistack_types.go
@@ -305,6 +305,13 @@ type LokiTemplateSpec struct {
// +kubebuilder:validation:Optional
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Index Gateway pods"
IndexGateway *LokiComponentSpec `json:"indexGateway,omitempty"`
+
+ // Ruler defines the ruler component spec.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Ruler pods"
+ Ruler *LokiComponentSpec `json:"ruler,omitempty"`
}
// ObjectStorageSecretType defines the type of storage which can be used with the Loki cluster.
@@ -475,6 +482,32 @@ type LimitsSpec struct {
Tenants map[string]LimitsTemplateSpec `json:"tenants,omitempty"`
}
+// RulesSpec defines the spec for the ruler component.
+type RulesSpec struct {
+ // Enabled defines a flag to enable/disable the ruler component
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,xDescriptors="urn:alm:descriptor:com.tectonic.ui:booleanSwitch",displayName="Enable"
+ Enabled bool `json:"enabled"`
+
+ // A selector to select which AlertingRule and RecordingRule objects to mount
+ // for loading alerting/recording rules from.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Selector"
+ Selector *metav1.LabelSelector `json:"selector,omitempty"`
+
+ // Namespaces to be selected for AlertingRule and RecordingRule discovery. If
+ // unspecified, only the namespace of the LokiStack object is used.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Namespace Selector"
+ NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty"`
+}
+
// LokiStackSpec defines the desired state of LokiStack
type LokiStackSpec struct {
@@ -513,9 +546,17 @@ type LokiStackSpec struct {
// +optional
// +kubebuilder:validation:Optional
// +kubebuilder:validation:Minimum:=1
+ // +kubebuilder:default:=1
// +operator-sdk:csv:customresourcedefinitions:type=spec,xDescriptors="urn:alm:descriptor:com.tectonic.ui:number",displayName="Replication Factor"
ReplicationFactor int32 `json:"replicationFactor"`
+ // Rules defines the spec for the ruler component
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,xDescriptors="urn:alm:descriptor:com.tectonic.ui:advanced",displayName="Rules"
+ Rules *RulesSpec `json:"rules,omitempty"`
+
// Limits defines the limits to be applied to log stream processing.
//
// +optional
@@ -639,6 +680,13 @@ type LokiStackComponentStatus struct {
// +kubebuilder:validation:Optional
// +operator-sdk:csv:customresourcedefinitions:type=status,xDescriptors="urn:alm:descriptor:com.tectonic.ui:podStatuses",displayName="Gateway",order=5
Gateway PodStatusMap `json:"gateway,omitempty"`
+
+ // Ruler is a map to the per pod status of the lokistack ruler statefulset.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=status,xDescriptors="urn:alm:descriptor:com.tectonic.ui:podStatuses",displayName="Ruler",order=6
+ Ruler PodStatusMap `json:"ruler,omitempty"`
}
// LokiStackStatus defines the observed state of LokiStack
diff --git a/operator/api/v1beta1/recordingrule_types.go b/operator/api/v1beta1/recordingrule_types.go
new file mode 100644
index 0000000000000..4cd08e96a8ec5
--- /dev/null
+++ b/operator/api/v1beta1/recordingrule_types.go
@@ -0,0 +1,111 @@
+package v1beta1
+
+import (
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// RecordingRuleSpec defines the desired state of RecordingRule
+type RecordingRuleSpec struct {
+ // TenantID of tenant where the recording rules are evaluated in.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Tenant ID"
+ TenantID string `json:"tenantID"`
+
+ // List of groups for recording rules.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Groups"
+ Groups []*RecordingRuleGroup `json:"groups"`
+}
+
+// RecordingRuleGroup defines a group of Loki recording rules.
+type RecordingRuleGroup struct {
+ // Name of the recording rule group. Must be unique within all recording rules.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Name"
+ Name string `json:"name"`
+
+ // Interval defines the time interval between evaluation of the given
+ // recording rule.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +kubebuilder:default:="1m"
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Evaluation Interval"
+ Interval PrometheusDuration `json:"interval"`
+
+ // Limit defines the number of series a recording rule can produce. 0 is no limit.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,xDescriptors="urn:alm:descriptor:com.tectonic.ui:number",displayName="Limit of produced series"
+ Limit int32 `json:"limit,omitempty"`
+
+ // Rules defines a list of recording rules
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Rules"
+ Rules []*RecordingRuleGroupSpec `json:"rules"`
+}
+
+// RecordingRuleGroupSpec defines the spec for a Loki recording rule.
+type RecordingRuleGroupSpec struct {
+ // The name of the time series to output to. Must be a valid metric name.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Metric Name"
+ Record string `json:"record,omitempty"`
+
+ // The LogQL expression to evaluate. Every evaluation cycle this is
+ // evaluated at the current time, and the result recorded as a new set of
+ // time series with the metric name given by 'record'.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="LogQL Expression"
+ Expr string `json:"expr"`
+}
+
+// RecordingRuleStatus defines the observed state of RecordingRule
+type RecordingRuleStatus struct {
+ // Conditions of the RecordingRule generation health.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=status,xDescriptors="urn:alm:descriptor:io.kubernetes.conditions"
+ Conditions []metav1.Condition `json:"conditions,omitempty"`
+}
+
+//+kubebuilder:object:root=true
+//+kubebuilder:subresource:status
+
+// RecordingRule is the Schema for the recordingrules API
+//
+// +operator-sdk:csv:customresourcedefinitions:displayName="RecordingRule",resources={{LokiStack,v1beta1}}
+type RecordingRule struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ObjectMeta `json:"metadata,omitempty"`
+
+ Spec RecordingRuleSpec `json:"spec,omitempty"`
+ Status RecordingRuleStatus `json:"status,omitempty"`
+}
+
+//+kubebuilder:object:root=true
+
+// RecordingRuleList contains a list of RecordingRule
+type RecordingRuleList struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ListMeta `json:"metadata,omitempty"`
+ Items []RecordingRule `json:"items"`
+}
+
+func init() {
+ SchemeBuilder.Register(&RecordingRule{}, &RecordingRuleList{})
+}
diff --git a/operator/api/v1beta1/recordingrule_webhook.go b/operator/api/v1beta1/recordingrule_webhook.go
new file mode 100644
index 0000000000000..b07ff7d36b30a
--- /dev/null
+++ b/operator/api/v1beta1/recordingrule_webhook.go
@@ -0,0 +1,104 @@
+package v1beta1
+
+import (
+ "github.com/grafana/loki/pkg/logql/syntax"
+
+ "github.com/prometheus/common/model"
+
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/util/validation/field"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/webhook"
+)
+
+// SetupWebhookWithManager registers the RecordingRuleWebhook to the controller-runtime manager
+// or returns an error.
+func (r *RecordingRule) SetupWebhookWithManager(mgr ctrl.Manager) error {
+ return ctrl.NewWebhookManagedBy(mgr).
+ For(r).
+ Complete()
+}
+
+//+kubebuilder:webhook:path=/validate-loki-grafana-com-v1beta1-recordingrule,mutating=false,failurePolicy=fail,sideEffects=None,groups=loki.grafana.com,resources=recordingrules,verbs=create;update,versions=v1beta1,name=vrecordingrule.kb.io,admissionReviewVersions=v1
+
+var _ webhook.Validator = &RecordingRule{}
+
+// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
+func (r *RecordingRule) ValidateCreate() error {
+ return r.validate()
+}
+
+// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
+func (r *RecordingRule) ValidateUpdate(_ runtime.Object) error {
+ return r.validate()
+}
+
+// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
+func (r *RecordingRule) ValidateDelete() error {
+ // Do nothing
+ return nil
+}
+
+func (r *RecordingRule) validate() error {
+ var allErrs field.ErrorList
+
+ found := make(map[string]bool)
+
+ for i, g := range r.Spec.Groups {
+ // Check for group name uniqueness
+ if found[g.Name] {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Name"),
+ g.Name,
+ ErrGroupNamesNotUnique.Error(),
+ ))
+ }
+
+ found[g.Name] = true
+
+ // Check if rule evaluation period is a valid PromQL duration
+ _, err := model.ParseDuration(string(g.Interval))
+ if err != nil {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Interval"),
+ g.Interval,
+ ErrParseEvaluationInterval.Error(),
+ ))
+ }
+
+ for j, r := range g.Rules {
+ // Check if recording rule name is a valid PromQL Label Name
+ if r.Record != "" {
+ if !model.IsValidMetricName(model.LabelValue(r.Record)) {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Rules").Index(j).Child("Record"),
+ r.Record,
+ ErrInvalidRecordMetricName.Error(),
+ ))
+ }
+ }
+
+ // Check if the LogQL parser can parse the rule expression
+ _, err := syntax.ParseExpr(r.Expr)
+ if err != nil {
+ allErrs = append(allErrs, field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(i).Child("Rules").Index(j).Child("Expr"),
+ r.Expr,
+ ErrParseLogQLExpression.Error(),
+ ))
+ }
+ }
+ }
+
+ if len(allErrs) == 0 {
+ return nil
+ }
+
+ return apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "RecordingRule"},
+ r.Name,
+ allErrs,
+ )
+}
diff --git a/operator/api/v1beta1/recordingrule_webhook_test.go b/operator/api/v1beta1/recordingrule_webhook_test.go
new file mode 100644
index 0000000000000..c2c1c8d7e4d84
--- /dev/null
+++ b/operator/api/v1beta1/recordingrule_webhook_test.go
@@ -0,0 +1,202 @@
+package v1beta1_test
+
+import (
+ "testing"
+
+ "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/stretchr/testify/require"
+
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/util/validation/field"
+)
+
+var rtt = []struct {
+ desc string
+ spec v1beta1.RecordingRuleSpec
+ err *apierrors.StatusError
+}{
+ {
+ desc: "valid spec",
+ spec: v1beta1.RecordingRuleSpec{
+ Groups: []*v1beta1.RecordingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Rules: []*v1beta1.RecordingRuleGroupSpec{
+ {
+ Record: "valid:record:name",
+ Expr: `sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)`,
+ },
+ {
+ Record: "valid:second:name",
+ Expr: `sum(rate({app="foo", env="stage"} |= "error" [5m])) by (job)`,
+ },
+ },
+ },
+ {
+ Name: "second",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Rules: []*v1beta1.RecordingRuleGroupSpec{
+ {
+ Record: "nginx:requests:rate1m",
+ Expr: `sum(rate({container="nginx"}[1m]))`,
+ },
+ {
+ Record: "banana:requests:rate5m",
+ Expr: `sum(rate({container="banana"}[1m]))`,
+ },
+ },
+ },
+ },
+ },
+ },
+ {
+ desc: "not unique group names",
+ spec: v1beta1.RecordingRuleSpec{
+ Groups: []*v1beta1.RecordingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ },
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "RecordingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(1).Child("Name"),
+ "first",
+ v1beta1.ErrGroupNamesNotUnique.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "parse eval interval err",
+ spec: v1beta1.RecordingRuleSpec{
+ Groups: []*v1beta1.RecordingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1mo"),
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "RecordingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(0).Child("Interval"),
+ "1mo",
+ v1beta1.ErrParseEvaluationInterval.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "invalid record metric name",
+ spec: v1beta1.RecordingRuleSpec{
+ Groups: []*v1beta1.RecordingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Rules: []*v1beta1.RecordingRuleGroupSpec{
+ {
+ Record: "invalid&metric:name",
+ Expr: `sum(rate({label="value"}[1m]))`,
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "RecordingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(0).Child("Rules").Index(0).Child("Record"),
+ "invalid&metric:name",
+ v1beta1.ErrInvalidRecordMetricName.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "parse LogQL expression err",
+ spec: v1beta1.RecordingRuleSpec{
+ Groups: []*v1beta1.RecordingRuleGroup{
+ {
+ Name: "first",
+ Interval: v1beta1.PrometheusDuration("1m"),
+ Rules: []*v1beta1.RecordingRuleGroupSpec{
+ {
+ Expr: "this is not a valid expression",
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "RecordingRule"},
+ "testing-rule",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("Spec").Child("Groups").Index(0).Child("Rules").Index(0).Child("Expr"),
+ "this is not a valid expression",
+ v1beta1.ErrParseLogQLExpression.Error(),
+ ),
+ },
+ ),
+ },
+}
+
+func TestRecordingRuleValidationWebhook_ValidateCreate(t *testing.T) {
+ for _, tc := range rtt {
+ tc := tc
+ t.Run(tc.desc, func(t *testing.T) {
+ t.Parallel()
+ l := v1beta1.RecordingRule{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "testing-rule",
+ },
+ Spec: tc.spec,
+ }
+
+ err := l.ValidateCreate()
+ if err != nil {
+ require.Equal(t, tc.err, err)
+ } else {
+ require.NoError(t, err)
+ }
+ })
+ }
+}
+
+func TestRecordingRuleValidationWebhook_ValidateUpdate(t *testing.T) {
+ for _, tc := range rtt {
+ tc := tc
+ t.Run(tc.desc, func(t *testing.T) {
+ t.Parallel()
+ l := v1beta1.RecordingRule{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "testing-rule",
+ },
+ Spec: tc.spec,
+ }
+
+ err := l.ValidateUpdate(&v1beta1.RecordingRule{})
+ if err != nil {
+ require.Equal(t, tc.err, err)
+ } else {
+ require.NoError(t, err)
+ }
+ })
+ }
+}
diff --git a/operator/api/v1beta1/v1beta1.go b/operator/api/v1beta1/v1beta1.go
new file mode 100644
index 0000000000000..38aef62ea637d
--- /dev/null
+++ b/operator/api/v1beta1/v1beta1.go
@@ -0,0 +1,21 @@
+package v1beta1
+
+import "errors"
+
+// PrometheusDuration defines the type for Prometheus durations.
+//
+// +kubebuilder:validation:Pattern:="((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)"
+type PrometheusDuration string
+
+var (
+ // ErrGroupNamesNotUnique is the error type when loki rule groups have duplicate names.
+ ErrGroupNamesNotUnique = errors.New("Group names are not unique")
+ // ErrInvalidRecordMetricName when any loki recording rule has an invalid PromQL metric name.
+ ErrInvalidRecordMetricName = errors.New("Failed to parse record metric name")
+ // ErrParseAlertForPeriod when any loki alerting rule "for" period is not a valid PromQL duration.
+ ErrParseAlertForPeriod = errors.New("Failed to parse alert firing period")
+ // ErrParseEvaluationInterval when any loki rule group evaluation interval is not a valid PromQL duration.
+ ErrParseEvaluationInterval = errors.New("Failed to parse evaluation interval")
+ // ErrParseLogQLExpression when any loki rule expression is not a valid LogQL expression.
+ ErrParseLogQLExpression = errors.New("Failed to parse LogQL expression")
+)
diff --git a/operator/api/v1beta1/zz_generated.deepcopy.go b/operator/api/v1beta1/zz_generated.deepcopy.go
index 037f360de672a..316cf5800a14b 100644
--- a/operator/api/v1beta1/zz_generated.deepcopy.go
+++ b/operator/api/v1beta1/zz_generated.deepcopy.go
@@ -6,11 +6,173 @@
package v1beta1
import (
- "k8s.io/api/core/v1"
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- runtime "k8s.io/apimachinery/pkg/runtime"
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime"
)
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AlertingRule) DeepCopyInto(out *AlertingRule) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+ in.Spec.DeepCopyInto(&out.Spec)
+ in.Status.DeepCopyInto(&out.Status)
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AlertingRule.
+func (in *AlertingRule) DeepCopy() *AlertingRule {
+ if in == nil {
+ return nil
+ }
+ out := new(AlertingRule)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *AlertingRule) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AlertingRuleGroup) DeepCopyInto(out *AlertingRuleGroup) {
+ *out = *in
+ if in.Rules != nil {
+ in, out := &in.Rules, &out.Rules
+ *out = make([]*AlertingRuleGroupSpec, len(*in))
+ for i := range *in {
+ if (*in)[i] != nil {
+ in, out := &(*in)[i], &(*out)[i]
+ *out = new(AlertingRuleGroupSpec)
+ (*in).DeepCopyInto(*out)
+ }
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AlertingRuleGroup.
+func (in *AlertingRuleGroup) DeepCopy() *AlertingRuleGroup {
+ if in == nil {
+ return nil
+ }
+ out := new(AlertingRuleGroup)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AlertingRuleGroupSpec) DeepCopyInto(out *AlertingRuleGroupSpec) {
+ *out = *in
+ if in.Annotations != nil {
+ in, out := &in.Annotations, &out.Annotations
+ *out = make(map[string]string, len(*in))
+ for key, val := range *in {
+ (*out)[key] = val
+ }
+ }
+ if in.Labels != nil {
+ in, out := &in.Labels, &out.Labels
+ *out = make(map[string]string, len(*in))
+ for key, val := range *in {
+ (*out)[key] = val
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AlertingRuleGroupSpec.
+func (in *AlertingRuleGroupSpec) DeepCopy() *AlertingRuleGroupSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(AlertingRuleGroupSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AlertingRuleList) DeepCopyInto(out *AlertingRuleList) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ListMeta.DeepCopyInto(&out.ListMeta)
+ if in.Items != nil {
+ in, out := &in.Items, &out.Items
+ *out = make([]AlertingRule, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AlertingRuleList.
+func (in *AlertingRuleList) DeepCopy() *AlertingRuleList {
+ if in == nil {
+ return nil
+ }
+ out := new(AlertingRuleList)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *AlertingRuleList) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AlertingRuleSpec) DeepCopyInto(out *AlertingRuleSpec) {
+ *out = *in
+ if in.Groups != nil {
+ in, out := &in.Groups, &out.Groups
+ *out = make([]*AlertingRuleGroup, len(*in))
+ for i := range *in {
+ if (*in)[i] != nil {
+ in, out := &(*in)[i], &(*out)[i]
+ *out = new(AlertingRuleGroup)
+ (*in).DeepCopyInto(*out)
+ }
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AlertingRuleSpec.
+func (in *AlertingRuleSpec) DeepCopy() *AlertingRuleSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(AlertingRuleSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AlertingRuleStatus) DeepCopyInto(out *AlertingRuleStatus) {
+ *out = *in
+ if in.Conditions != nil {
+ in, out := &in.Conditions, &out.Conditions
+ *out = make([]v1.Condition, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AlertingRuleStatus.
+func (in *AlertingRuleStatus) DeepCopy() *AlertingRuleStatus {
+ if in == nil {
+ return nil
+ }
+ out := new(AlertingRuleStatus)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AuthenticationSpec) DeepCopyInto(out *AuthenticationSpec) {
*out = *in
@@ -144,7 +306,7 @@ func (in *LokiComponentSpec) DeepCopyInto(out *LokiComponentSpec) {
}
if in.Tolerations != nil {
in, out := &in.Tolerations, &out.Tolerations
- *out = make([]v1.Toleration, len(*in))
+ *out = make([]corev1.Toleration, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
@@ -296,6 +458,21 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
(*out)[key] = outVal
}
}
+ if in.Ruler != nil {
+ in, out := &in.Ruler, &out.Ruler
+ *out = make(PodStatusMap, len(*in))
+ for key, val := range *in {
+ var outVal []string
+ if val == nil {
+ (*out)[key] = nil
+ } else {
+ in, out := &val, &outVal
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+ (*out)[key] = outVal
+ }
+ }
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LokiStackComponentStatus.
@@ -344,6 +521,11 @@ func (in *LokiStackList) DeepCopyObject() runtime.Object {
func (in *LokiStackSpec) DeepCopyInto(out *LokiStackSpec) {
*out = *in
out.Storage = in.Storage
+ if in.Rules != nil {
+ in, out := &in.Rules, &out.Rules
+ *out = new(RulesSpec)
+ (*in).DeepCopyInto(*out)
+ }
if in.Limits != nil {
in, out := &in.Limits, &out.Limits
*out = new(LimitsSpec)
@@ -377,7 +559,7 @@ func (in *LokiStackStatus) DeepCopyInto(out *LokiStackStatus) {
in.Components.DeepCopyInto(&out.Components)
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
- *out = make([]metav1.Condition, len(*in))
+ *out = make([]v1.Condition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
@@ -432,6 +614,11 @@ func (in *LokiTemplateSpec) DeepCopyInto(out *LokiTemplateSpec) {
*out = new(LokiComponentSpec)
(*in).DeepCopyInto(*out)
}
+ if in.Ruler != nil {
+ in, out := &in.Ruler, &out.Ruler
+ *out = new(LokiComponentSpec)
+ (*in).DeepCopyInto(*out)
+ }
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LokiTemplateSpec.
@@ -554,6 +741,154 @@ func (in *QueryLimitSpec) DeepCopy() *QueryLimitSpec {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *RecordingRule) DeepCopyInto(out *RecordingRule) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+ in.Spec.DeepCopyInto(&out.Spec)
+ in.Status.DeepCopyInto(&out.Status)
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RecordingRule.
+func (in *RecordingRule) DeepCopy() *RecordingRule {
+ if in == nil {
+ return nil
+ }
+ out := new(RecordingRule)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *RecordingRule) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *RecordingRuleGroup) DeepCopyInto(out *RecordingRuleGroup) {
+ *out = *in
+ if in.Rules != nil {
+ in, out := &in.Rules, &out.Rules
+ *out = make([]*RecordingRuleGroupSpec, len(*in))
+ for i := range *in {
+ if (*in)[i] != nil {
+ in, out := &(*in)[i], &(*out)[i]
+ *out = new(RecordingRuleGroupSpec)
+ **out = **in
+ }
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RecordingRuleGroup.
+func (in *RecordingRuleGroup) DeepCopy() *RecordingRuleGroup {
+ if in == nil {
+ return nil
+ }
+ out := new(RecordingRuleGroup)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *RecordingRuleGroupSpec) DeepCopyInto(out *RecordingRuleGroupSpec) {
+ *out = *in
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RecordingRuleGroupSpec.
+func (in *RecordingRuleGroupSpec) DeepCopy() *RecordingRuleGroupSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(RecordingRuleGroupSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *RecordingRuleList) DeepCopyInto(out *RecordingRuleList) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ListMeta.DeepCopyInto(&out.ListMeta)
+ if in.Items != nil {
+ in, out := &in.Items, &out.Items
+ *out = make([]RecordingRule, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RecordingRuleList.
+func (in *RecordingRuleList) DeepCopy() *RecordingRuleList {
+ if in == nil {
+ return nil
+ }
+ out := new(RecordingRuleList)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *RecordingRuleList) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *RecordingRuleSpec) DeepCopyInto(out *RecordingRuleSpec) {
+ *out = *in
+ if in.Groups != nil {
+ in, out := &in.Groups, &out.Groups
+ *out = make([]*RecordingRuleGroup, len(*in))
+ for i := range *in {
+ if (*in)[i] != nil {
+ in, out := &(*in)[i], &(*out)[i]
+ *out = new(RecordingRuleGroup)
+ (*in).DeepCopyInto(*out)
+ }
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RecordingRuleSpec.
+func (in *RecordingRuleSpec) DeepCopy() *RecordingRuleSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(RecordingRuleSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *RecordingRuleStatus) DeepCopyInto(out *RecordingRuleStatus) {
+ *out = *in
+ if in.Conditions != nil {
+ in, out := &in.Conditions, &out.Conditions
+ *out = make([]v1.Condition, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RecordingRuleStatus.
+func (in *RecordingRuleStatus) DeepCopy() *RecordingRuleStatus {
+ if in == nil {
+ return nil
+ }
+ out := new(RecordingRuleStatus)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RoleBindingsSpec) DeepCopyInto(out *RoleBindingsSpec) {
*out = *in
@@ -609,6 +944,31 @@ func (in *RoleSpec) DeepCopy() *RoleSpec {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *RulesSpec) DeepCopyInto(out *RulesSpec) {
+ *out = *in
+ if in.Selector != nil {
+ in, out := &in.Selector, &out.Selector
+ *out = new(v1.LabelSelector)
+ (*in).DeepCopyInto(*out)
+ }
+ if in.NamespaceSelector != nil {
+ in, out := &in.NamespaceSelector, &out.NamespaceSelector
+ *out = new(v1.LabelSelector)
+ (*in).DeepCopyInto(*out)
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RulesSpec.
+func (in *RulesSpec) DeepCopy() *RulesSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(RulesSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Subject) DeepCopyInto(out *Subject) {
*out = *in
diff --git a/operator/bundle/manifests/loki-operator-webhook-service_v1_service.yaml b/operator/bundle/manifests/loki-operator-webhook-service_v1_service.yaml
new file mode 100644
index 0000000000000..ce1a245ce7c82
--- /dev/null
+++ b/operator/bundle/manifests/loki-operator-webhook-service_v1_service.yaml
@@ -0,0 +1,24 @@
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ service.beta.openshift.io/serving-cert-secret-name: loki-operator-webhook-service
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/instance: loki-operator-v0.0.1
+ app.kubernetes.io/managed-by: operator-lifecycle-manager
+ app.kubernetes.io/name: loki-operator
+ app.kubernetes.io/part-of: cluster-logging
+ app.kubernetes.io/version: 0.0.1
+ name: loki-operator-webhook-service
+spec:
+ ports:
+ - port: 443
+ protocol: TCP
+ targetPort: 9443
+ selector:
+ app.kubernetes.io/managed-by: operator-lifecycle-manager
+ app.kubernetes.io/name: loki-operator
+ app.kubernetes.io/part-of: cluster-logging
+status:
+ loadBalancer: {}
diff --git a/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml
index 33bb7ab8eb1d2..7dc3032df8ebb 100644
--- a/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/manifests/loki-operator.clusterserviceversion.yaml
@@ -4,6 +4,46 @@ metadata:
annotations:
alm-examples: |-
[
+ {
+ "apiVersion": "loki.grafana.com/v1beta1",
+ "kind": "AlertingRule",
+ "metadata": {
+ "name": "alertingrule-sample"
+ },
+ "spec": {
+ "groups": [
+ {
+ "interval": "10m",
+ "name": "alerting-rules-group",
+ "rules": [
+ {
+ "alert": "HighPercentageError",
+ "annotations": {
+ "summary": "High request latency"
+ },
+ "expr": "sum(rate({app=\"foo\", env=\"production\"} |= \"error\" [5m])) by (job)\n /\nsum(rate({app=\"foo\", env=\"production\"}[5m])) by (job)\n \u003e 0.05\n",
+ "for": "10m",
+ "labels": {
+ "severity": "page"
+ }
+ },
+ {
+ "alert": "HttpCredentialsLeaked",
+ "annotations": {
+ "message": "{{ $labels.job }} is leaking http basic auth credentials."
+ },
+ "expr": "sum by (cluster, job, pod) (count_over_time({namespace=\"prod\"} |~ \"http(s?)://(\\\\w+):(\\\\w+)@\" [5m]) \u003e 0)",
+ "for": "10m",
+ "labels": {
+ "severity": "critical"
+ }
+ }
+ ]
+ }
+ ],
+ "tenantID": "test-tenant"
+ }
+ },
{
"apiVersion": "loki.grafana.com/v1beta1",
"kind": "LokiStack",
@@ -19,6 +59,32 @@ metadata:
},
"storageClassName": "standard"
}
+ },
+ {
+ "apiVersion": "loki.grafana.com/v1beta1",
+ "kind": "RecordingRule",
+ "metadata": {
+ "name": "recordingrule-sample"
+ },
+ "spec": {
+ "groups": [
+ {
+ "interval": "10m",
+ "name": "recording-rules-group",
+ "rules": [
+ {
+ "expr": "sum(rate({container=\"myservice\"}[10m]))\n",
+ "record": "myservice:requests:rate10m"
+ },
+ {
+ "expr": "sum(rate({container=\"otherservice\"}[1m]))\n",
+ "record": "otherservice:requests:rate1m"
+ }
+ ]
+ }
+ ],
+ "tenantID": "test-tenant"
+ }
}
]
capabilities: Full Lifecycle
@@ -51,6 +117,64 @@ spec:
apiservicedefinitions: {}
customresourcedefinitions:
owned:
+ - description: AlertingRule is the Schema for the alertingrules API
+ displayName: AlertingRule
+ kind: AlertingRule
+ name: alertingrules.loki.grafana.com
+ resources:
+ - kind: LokiStack
+ name: ""
+ version: v1beta1
+ specDescriptors:
+ - description: List of groups for alerting rules.
+ displayName: Groups
+ path: groups
+ - description: Interval defines the time interval between evaluation of the
+ given alerting rule.
+ displayName: Evaluation Interval
+ path: groups[0].interval
+ - description: Limit defines the number of alerts an alerting rule can produce.
+ 0 is no limit.
+ displayName: Limit of firing alerts
+ path: groups[0].limit
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:number
+ - description: Name of the alerting rule group. Must be unique within all alerting
+ rules.
+ displayName: Name
+ path: groups[0].name
+ - description: Rules defines a list of alerting rules
+ displayName: Rules
+ path: groups[0].rules
+ - description: The name of the alert. Must be a valid label value.
+ displayName: Name
+ path: groups[0].rules[0].alert
+ - description: Annotations to add to each alert.
+ displayName: Annotations
+ path: groups[0].rules[0].annotations
+ - description: The LogQL expression to evaluate. Every evaluation cycle this
+ is evaluated at the current time, and all resultant time series become pending/firing
+ alerts.
+ displayName: LogQL Expression
+ path: groups[0].rules[0].expr
+ - description: Alerts are considered firing once they have been returned for
+ this long. Alerts which have not yet fired for long enough are considered
+ pending.
+ displayName: Firing Threshold
+ path: groups[0].rules[0].for
+ - description: Labels to add to each alert.
+ displayName: Labels
+ path: groups[0].rules[0].labels
+ - description: TenantID of the tenant where the alerting rules are evaluated.
+ displayName: Tenant ID
+ path: tenantID
+ statusDescriptors:
+ - description: Conditions of the AlertingRule generation health.
+ displayName: Conditions
+ path: conditions
+ x-descriptors:
+ - urn:alm:descriptor:io.kubernetes.conditions
+ version: v1beta1
- description: LokiStack is the Schema for the lokistacks API
displayName: LokiStack
kind: LokiStack
@@ -227,6 +351,24 @@ spec:
path: replicationFactor
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: Rules defines the spec for the ruler component
+ displayName: Rules
+ path: rules
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:advanced
+ - description: Enabled defines a flag to enable/disable the ruler component
+ displayName: Enable
+ path: rules.enabled
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: Namespaces to be selected for PrometheusRules discovery. If unspecified,
+ only the namespace of the LokiStack object is used.
+ displayName: Namespace Selector
+ path: rules.namespaceSelector
+ - description: A selector to select which LokiRules to mount for loading alerting/recording
+ rules from.
+ displayName: Selector
+ path: rules.selector
- description: Size defines one of the support Loki deployment scale out sizes.
displayName: LokiStack Size
path: size
@@ -320,6 +462,14 @@ spec:
path: template.queryFrontend.replicas
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:hidden
+ - description: Ruler defines the ruler component spec.
+ displayName: Ruler pods
+ path: template.ruler
+ - description: Replicas defines the number of replica pods of the component.
+ displayName: Replicas
+ path: template.ruler.replicas
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:hidden
- description: Tenants defines the per-tenant authentication and authorization
spec for the lokistack-gateway component.
displayName: Tenants Configuration
@@ -418,12 +568,65 @@ spec:
path: components.indexGateway
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:podStatuses
+ - description: Ruler is a map to the per pod status of the lokistack ruler statefulset.
+ displayName: Ruler
+ path: components.ruler
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:podStatuses
- description: Conditions of the Loki deployment health.
displayName: Conditions
path: conditions
x-descriptors:
- urn:alm:descriptor:io.kubernetes.conditions
version: v1beta1
+ - description: RecordingRule is the Schema for the recordingrules API
+ displayName: RecordingRule
+ kind: RecordingRule
+ name: recordingrules.loki.grafana.com
+ resources:
+ - kind: LokiStack
+ name: ""
+ version: v1beta1
+ specDescriptors:
+ - description: List of groups for recording rules.
+ displayName: Groups
+ path: groups
+ - description: Interval defines the time interval between evaluation of the
+ given recording rule.
+ displayName: Evaluation Interval
+ path: groups[0].interval
+ - description: Limit defines the number of series a recording rule can produce.
+ 0 is no limit.
+ displayName: Limit of produced series
+ path: groups[0].limit
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:number
+ - description: Name of the recording rule group. Must be unique within all recording
+ rules.
+ displayName: Name
+ path: groups[0].name
+ - description: Rules defines a list of recording rules
+ displayName: Rules
+ path: groups[0].rules
+ - description: The LogQL expression to evaluate. Every evaluation cycle this
+ is evaluated at the current time, and the result is recorded as a new set
+ of time series with the metric name given by 'record'.
+ displayName: LogQL Expression
+ path: groups[0].rules[0].expr
+ - description: The name of the time series to output to. Must be a valid metric
+ name.
+ displayName: Metric Name
+ path: groups[0].rules[0].record
+ - description: TenantID of the tenant where the recording rules are evaluated.
+ displayName: Tenant ID
+ path: tenantID
+ statusDescriptors:
+ - description: Conditions of the RecordingRule generation health.
+ displayName: Conditions
+ path: conditions
+ x-descriptors:
+ - urn:alm:descriptor:io.kubernetes.conditions
+ version: v1beta1
description: |
The Loki Operator for OCP provides a means for configuring and managing a Loki stack for cluster logging.
## Prerequisites and Requirements
@@ -495,6 +698,32 @@ spec:
- create
- get
- update
+ - apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules/finalizers
+ verbs:
+ - update
+ - apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules/status
+ verbs:
+ - get
+ - patch
+ - update
- apiGroups:
- loki.grafana.com
resources:
@@ -521,6 +750,32 @@ spec:
- get
- patch
- update
+ - apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules/finalizers
+ verbs:
+ - update
+ - apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules/status
+ verbs:
+ - get
+ - patch
+ - update
- apiGroups:
- monitoring.coreos.com
resources:
@@ -635,6 +890,9 @@ spec:
periodSeconds: 20
name: manager
ports:
+ - containerPort: 9443
+ name: webhook-server
+ protocol: TCP
- containerPort: 8080
name: metrics
readinessProbe:
@@ -646,6 +904,10 @@ spec:
resources: {}
securityContext:
allowPrivilegeEscalation: false
+ volumeMounts:
+ - mountPath: /tmp/k8s-webhook-server/serving-certs
+ name: webhook-cert
+ readOnly: true
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
@@ -666,6 +928,10 @@ spec:
kubernetes.io/os: linux
terminationGracePeriodSeconds: 10
volumes:
+ - name: webhook-cert
+ secret:
+ defaultMode: 420
+ secretName: loki-operator-webhook-service
- name: loki-operator-metrics-cert
secret:
defaultMode: 420
@@ -726,3 +992,44 @@ spec:
- image: quay.io/observatorium/opa-openshift:latest
name: opa
version: 0.0.1
+ webhookdefinitions:
+ - admissionReviewVersions:
+ - v1
+ containerPort: 443
+ deploymentName: loki-operator-controller-manager
+ failurePolicy: Fail
+ generateName: valertingrule.kb.io
+ rules:
+ - apiGroups:
+ - loki.grafana.com
+ apiVersions:
+ - v1beta1
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - alertingrules
+ sideEffects: None
+ targetPort: 9443
+ type: ValidatingAdmissionWebhook
+ webhookPath: /validate-loki-grafana-com-v1beta1-alertingrule
+ - admissionReviewVersions:
+ - v1
+ containerPort: 443
+ deploymentName: loki-operator-controller-manager
+ failurePolicy: Fail
+ generateName: vrecordingrule.kb.io
+ rules:
+ - apiGroups:
+ - loki.grafana.com
+ apiVersions:
+ - v1beta1
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - recordingrules
+ sideEffects: None
+ targetPort: 9443
+ type: ValidatingAdmissionWebhook
+ webhookPath: /validate-loki-grafana-com-v1beta1-recordingrule
diff --git a/operator/bundle/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/manifests/loki.grafana.com_alertingrules.yaml
new file mode 100644
index 0000000000000..b49d788391f04
--- /dev/null
+++ b/operator/bundle/manifests/loki.grafana.com_alertingrules.yaml
@@ -0,0 +1,194 @@
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.8.0
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/instance: loki-operator-v0.0.1
+ app.kubernetes.io/managed-by: operator-lifecycle-manager
+ app.kubernetes.io/name: loki-operator
+ app.kubernetes.io/part-of: cluster-logging
+ app.kubernetes.io/version: 0.0.1
+ name: alertingrules.loki.grafana.com
+spec:
+ group: loki.grafana.com
+ names:
+ kind: AlertingRule
+ listKind: AlertingRuleList
+ plural: alertingrules
+ singular: alertingrule
+ scope: Namespaced
+ versions:
+ - name: v1beta1
+ schema:
+ openAPIV3Schema:
+ description: AlertingRule is the Schema for the alertingrules API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: AlertingRuleSpec defines the desired state of AlertingRule
+ properties:
+ groups:
+ description: List of groups for alerting rules.
+ items:
+ description: AlertingRuleGroup defines a group of Loki alerting
+ rules.
+ properties:
+ interval:
+ default: 1m
+ description: Interval defines the time interval between evaluation
+ of the given alerting rule.
+ pattern: ((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)
+ type: string
+ limit:
+ description: Limit defines the number of alerts an alerting
+ rule can produce. 0 is no limit.
+ format: int32
+ type: integer
+ name:
+ description: Name of the alerting rule group. Must be unique
+ within all alerting rules.
+ type: string
+ rules:
+ description: Rules defines a list of alerting rules
+ items:
+ description: AlertingRuleGroupSpec defines the spec for a
+ Loki alerting rule.
+ properties:
+ alert:
+ description: The name of the alert. Must be a valid label
+ value.
+ type: string
+ annotations:
+ additionalProperties:
+ type: string
+ description: Annotations to add to each alert.
+ type: object
+ expr:
+ description: The LogQL expression to evaluate. Every evaluation
+ cycle this is evaluated at the current time, and all
+ resultant time series become pending/firing alerts.
+ type: string
+ for:
+ description: Alerts are considered firing once they have
+ been returned for this long. Alerts which have not yet
+ fired for long enough are considered pending.
+ pattern: ((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)
+ type: string
+ labels:
+ additionalProperties:
+ type: string
+ description: Labels to add to each alert.
+ type: object
+ required:
+ - expr
+ type: object
+ type: array
+ required:
+ - name
+ - rules
+ type: object
+ type: array
+ tenantID:
+            description: TenantID of the tenant where the alerting rules are
+              evaluated.
+ type: string
+ required:
+ - tenantID
+ type: object
+ status:
+ description: AlertingRuleStatus defines the observed state of AlertingRule
+ properties:
+ conditions:
+ description: Conditions of the AlertingRule generation health.
+ items:
+ description: "Condition contains details for one aspect of the current
+ state of this API Resource. --- This struct is intended for direct
+ use as an array at the field path .status.conditions. For example,
+ type FooStatus struct{ // Represents the observations of a foo's
+ current state. // Known .status.conditions.type are: \"Available\",
+ \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge
+ // +listType=map // +listMapKey=type Conditions []metav1.Condition
+ `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\"
+ protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }"
+ properties:
+ lastTransitionTime:
+ description: lastTransitionTime is the last time the condition
+ transitioned from one status to another. This should be when
+ the underlying condition changed. If that is not known, then
+ using the time when the API field changed is acceptable.
+ format: date-time
+ type: string
+ message:
+ description: message is a human readable message indicating
+ details about the transition. This may be an empty string.
+ maxLength: 32768
+ type: string
+ observedGeneration:
+ description: observedGeneration represents the .metadata.generation
+ that the condition was set based upon. For instance, if .metadata.generation
+ is currently 12, but the .status.conditions[x].observedGeneration
+ is 9, the condition is out of date with respect to the current
+ state of the instance.
+ format: int64
+ minimum: 0
+ type: integer
+ reason:
+ description: reason contains a programmatic identifier indicating
+ the reason for the condition's last transition. Producers
+ of specific condition types may define expected values and
+ meanings for this field, and whether the values are considered
+ a guaranteed API. The value should be a CamelCase string.
+ This field may not be empty.
+ maxLength: 1024
+ minLength: 1
+ pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
+ type: string
+ status:
+ description: status of the condition, one of True, False, Unknown.
+ enum:
+ - "True"
+ - "False"
+ - Unknown
+ type: string
+ type:
+ description: type of condition in CamelCase or in foo.example.com/CamelCase.
+ --- Many .condition.type values are consistent across resources
+ like Available, but because arbitrary conditions can be useful
+ (see .node.status.conditions), the ability to deconflict is
+ important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
+ maxLength: 316
+ pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
+status:
+ acceptedNames:
+ kind: ""
+ plural: ""
+ conditions: []
+ storedVersions: []
diff --git a/operator/bundle/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/manifests/loki.grafana.com_lokistacks.yaml
index 44f87761cdb76..839617badfa29 100644
--- a/operator/bundle/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/manifests/loki.grafana.com_lokistacks.yaml
@@ -196,10 +196,112 @@ spec:
- Unmanaged
type: string
replicationFactor:
+ default: 1
description: ReplicationFactor defines the policy for log stream replication.
format: int32
minimum: 1
type: integer
+ rules:
+ description: Rules defines the spec for the ruler component
+ properties:
+ enabled:
+ description: Enabled defines a flag to enable/disable the ruler
+ component
+ type: boolean
+ namespaceSelector:
+ description: Namespaces to be selected for PrometheusRules discovery.
+ If unspecified, only the same namespace as the LokiStack object
+ is in is used.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector
+ requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector
+ that contains values, a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key that the selector
+ applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are In, NotIn,
+ Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If
+ the operator is In or NotIn, the values array must
+ be non-empty. If the operator is Exists or DoesNotExist,
+ the values array must be empty. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A
+ single {key,value} in the matchLabels map is equivalent
+ to an element of matchExpressions, whose key field is "key",
+ the operator is "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ selector:
+ description: A selector to select which LokiRules to mount for
+ loading alerting/recording rules from.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector
+ requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector
+ that contains values, a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key that the selector
+ applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are In, NotIn,
+ Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If
+ the operator is In or NotIn, the values array must
+ be non-empty. If the operator is Exists or DoesNotExist,
+ the values array must be empty. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A
+ single {key,value} in the matchLabels map is equivalent
+ to an element of matchExpressions, whose key field is "key",
+ the operator is "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ required:
+ - enabled
+ type: object
size:
            description: Size defines one of the supported Loki deployment scale
              out sizes.
@@ -651,6 +753,64 @@ spec:
type: object
type: array
type: object
+ ruler:
+ description: Ruler defines the ruler component spec.
+ properties:
+ nodeSelector:
+ additionalProperties:
+ type: string
+ description: NodeSelector defines the labels required by a
+ node to schedule the component onto it.
+ type: object
+ replicas:
+ description: Replicas defines the number of replica pods of
+ the component.
+ format: int32
+ type: integer
+ tolerations:
+ description: Tolerations defines the tolerations required
+ by a node to schedule the component onto it.
+ items:
+ description: The pod this Toleration is attached to tolerates
+ any taint that matches the triple <key,value,effect> using
+ the matching operator <operator>.
+ properties:
+ effect:
+ description: Effect indicates the taint effect to match.
+ Empty means match all taint effects. When specified,
+ allowed values are NoSchedule, PreferNoSchedule and
+ NoExecute.
+ type: string
+ key:
+ description: Key is the taint key that the toleration
+ applies to. Empty means match all taint keys. If the
+ key is empty, operator must be Exists; this combination
+ means to match all values and all keys.
+ type: string
+ operator:
+ description: Operator represents a key's relationship
+ to the value. Valid operators are Exists and Equal.
+ Defaults to Equal. Exists is equivalent to wildcard
+ for value, so that a pod can tolerate all taints of
+ a particular category.
+ type: string
+ tolerationSeconds:
+ description: TolerationSeconds represents the period
+ of time the toleration (which must be of effect NoExecute,
+ otherwise this field is ignored) tolerates the taint.
+ By default, it is not set, which means tolerate the
+ taint forever (do not evict). Zero and negative values
+ will be treated as 0 (evict immediately) by the system.
+ format: int64
+ type: integer
+ value:
+ description: Value is the taint value the toleration
+ matches to. If the operator is Exists, the value should
+ be empty, otherwise just a regular string.
+ type: string
+ type: object
+ type: array
+ type: object
type: object
tenants:
description: Tenants defines the per-tenant authentication and authorization
@@ -874,6 +1034,14 @@ spec:
description: QueryFrontend is a map to the per pod status of the
query frontend deployment
type: object
+ ruler:
+ additionalProperties:
+ items:
+ type: string
+ type: array
+ description: Ruler is a map to the per pod status of the lokistack
+ ruler statefulset.
+ type: object
type: object
conditions:
description: Conditions of the Loki deployment health.
diff --git a/operator/bundle/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/manifests/loki.grafana.com_recordingrules.yaml
new file mode 100644
index 0000000000000..6a5a5ceddb34d
--- /dev/null
+++ b/operator/bundle/manifests/loki.grafana.com_recordingrules.yaml
@@ -0,0 +1,178 @@
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.8.0
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/instance: loki-operator-v0.0.1
+ app.kubernetes.io/managed-by: operator-lifecycle-manager
+ app.kubernetes.io/name: loki-operator
+ app.kubernetes.io/part-of: cluster-logging
+ app.kubernetes.io/version: 0.0.1
+ name: recordingrules.loki.grafana.com
+spec:
+ group: loki.grafana.com
+ names:
+ kind: RecordingRule
+ listKind: RecordingRuleList
+ plural: recordingrules
+ singular: recordingrule
+ scope: Namespaced
+ versions:
+ - name: v1beta1
+ schema:
+ openAPIV3Schema:
+ description: RecordingRule is the Schema for the recordingrules API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: RecordingRuleSpec defines the desired state of RecordingRule
+ properties:
+ groups:
+ description: List of groups for recording rules.
+ items:
+ description: RecordingRuleGroup defines a group of Loki recording
+ rules.
+ properties:
+ interval:
+ default: 1m
+ description: Interval defines the time interval between evaluation
+                      of the given recording rule.
+ pattern: ((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)
+ type: string
+ limit:
+ description: Limit defines the number of series a recording
+ rule can produce. 0 is no limit.
+ format: int32
+ type: integer
+ name:
+ description: Name of the recording rule group. Must be unique
+ within all recording rules.
+ type: string
+ rules:
+ description: Rules defines a list of recording rules
+ items:
+ description: RecordingRuleGroupSpec defines the spec for a
+ Loki recording rule.
+ properties:
+ expr:
+                          description: The LogQL expression to evaluate. Every evaluation
+                            cycle this is evaluated at the current time, and the
+                            result is recorded as a new set of time series.
+ type: string
+ record:
+ description: The name of the time series to output to.
+ Must be a valid metric name.
+ type: string
+ required:
+ - expr
+ type: object
+ type: array
+ required:
+ - name
+ - rules
+ type: object
+ type: array
+ tenantID:
+            description: TenantID of the tenant where the recording rules are
+              evaluated.
+ type: string
+ required:
+ - tenantID
+ type: object
+ status:
+ description: RecordingRuleStatus defines the observed state of RecordingRule
+ properties:
+ conditions:
+ description: Conditions of the RecordingRule generation health.
+ items:
+ description: "Condition contains details for one aspect of the current
+ state of this API Resource. --- This struct is intended for direct
+ use as an array at the field path .status.conditions. For example,
+ type FooStatus struct{ // Represents the observations of a foo's
+ current state. // Known .status.conditions.type are: \"Available\",
+ \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge
+ // +listType=map // +listMapKey=type Conditions []metav1.Condition
+ `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\"
+ protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }"
+ properties:
+ lastTransitionTime:
+ description: lastTransitionTime is the last time the condition
+ transitioned from one status to another. This should be when
+ the underlying condition changed. If that is not known, then
+ using the time when the API field changed is acceptable.
+ format: date-time
+ type: string
+ message:
+ description: message is a human readable message indicating
+ details about the transition. This may be an empty string.
+ maxLength: 32768
+ type: string
+ observedGeneration:
+ description: observedGeneration represents the .metadata.generation
+ that the condition was set based upon. For instance, if .metadata.generation
+ is currently 12, but the .status.conditions[x].observedGeneration
+ is 9, the condition is out of date with respect to the current
+ state of the instance.
+ format: int64
+ minimum: 0
+ type: integer
+ reason:
+ description: reason contains a programmatic identifier indicating
+ the reason for the condition's last transition. Producers
+ of specific condition types may define expected values and
+ meanings for this field, and whether the values are considered
+ a guaranteed API. The value should be a CamelCase string.
+ This field may not be empty.
+ maxLength: 1024
+ minLength: 1
+ pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
+ type: string
+ status:
+ description: status of the condition, one of True, False, Unknown.
+ enum:
+ - "True"
+ - "False"
+ - Unknown
+ type: string
+ type:
+ description: type of condition in CamelCase or in foo.example.com/CamelCase.
+ --- Many .condition.type values are consistent across resources
+ like Available, but because arbitrary conditions can be useful
+ (see .node.status.conditions), the ability to deconflict is
+ important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
+ maxLength: 316
+ pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
+status:
+ acceptedNames:
+ kind: ""
+ plural: ""
+ conditions: []
+ storedVersions: []
diff --git a/operator/config/certmanager/certificate.yaml b/operator/config/certmanager/certificate.yaml
index 52d866183c76d..d7c5227840ecf 100644
--- a/operator/config/certmanager/certificate.yaml
+++ b/operator/config/certmanager/certificate.yaml
@@ -22,4 +22,4 @@ spec:
issuerRef:
kind: Issuer
name: selfsigned-issuer
- secretName: webhook-server-cert # this secret will not be prefixed, since it's not managed by kustomize
+ secretName: loki-operator-webhook-server-cert
diff --git a/operator/config/crd/bases/loki.grafana.com_alertingrules.yaml b/operator/config/crd/bases/loki.grafana.com_alertingrules.yaml
new file mode 100644
index 0000000000000..eaae585a082db
--- /dev/null
+++ b/operator/config/crd/bases/loki.grafana.com_alertingrules.yaml
@@ -0,0 +1,189 @@
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.8.0
+ creationTimestamp: null
+ name: alertingrules.loki.grafana.com
+spec:
+ group: loki.grafana.com
+ names:
+ kind: AlertingRule
+ listKind: AlertingRuleList
+ plural: alertingrules
+ singular: alertingrule
+ scope: Namespaced
+ versions:
+ - name: v1beta1
+ schema:
+ openAPIV3Schema:
+ description: AlertingRule is the Schema for the alertingrules API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: AlertingRuleSpec defines the desired state of AlertingRule
+ properties:
+ groups:
+ description: List of groups for alerting rules.
+ items:
+ description: AlertingRuleGroup defines a group of Loki alerting
+ rules.
+ properties:
+ interval:
+ default: 1m
+ description: Interval defines the time interval between evaluation
+ of the given alerting rule.
+ pattern: ((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)
+ type: string
+ limit:
+ description: Limit defines the number of alerts an alerting
+ rule can produce. 0 is no limit.
+ format: int32
+ type: integer
+ name:
+ description: Name of the alerting rule group. Must be unique
+ within all alerting rules.
+ type: string
+ rules:
+ description: Rules defines a list of alerting rules
+ items:
+ description: AlertingRuleGroupSpec defines the spec for a
+ Loki alerting rule.
+ properties:
+ alert:
+ description: The name of the alert. Must be a valid label
+ value.
+ type: string
+ annotations:
+ additionalProperties:
+ type: string
+ description: Annotations to add to each alert.
+ type: object
+ expr:
+ description: The LogQL expression to evaluate. Every evaluation
+ cycle this is evaluated at the current time, and all
+ resultant time series become pending/firing alerts.
+ type: string
+ for:
+ description: Alerts are considered firing once they have
+ been returned for this long. Alerts which have not yet
+ fired for long enough are considered pending.
+ pattern: ((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)
+ type: string
+ labels:
+ additionalProperties:
+ type: string
+ description: Labels to add to each alert.
+ type: object
+ required:
+ - expr
+ type: object
+ type: array
+ required:
+ - name
+ - rules
+ type: object
+ type: array
+ tenantID:
+            description: TenantID of the tenant where the alerting rules are
+              evaluated.
+ type: string
+ required:
+ - tenantID
+ type: object
+ status:
+ description: AlertingRuleStatus defines the observed state of AlertingRule
+ properties:
+ conditions:
+ description: Conditions of the AlertingRule generation health.
+ items:
+ description: "Condition contains details for one aspect of the current
+ state of this API Resource. --- This struct is intended for direct
+ use as an array at the field path .status.conditions. For example,
+ type FooStatus struct{ // Represents the observations of a foo's
+ current state. // Known .status.conditions.type are: \"Available\",
+ \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge
+ // +listType=map // +listMapKey=type Conditions []metav1.Condition
+ `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\"
+ protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }"
+ properties:
+ lastTransitionTime:
+ description: lastTransitionTime is the last time the condition
+ transitioned from one status to another. This should be when
+ the underlying condition changed. If that is not known, then
+ using the time when the API field changed is acceptable.
+ format: date-time
+ type: string
+ message:
+ description: message is a human readable message indicating
+ details about the transition. This may be an empty string.
+ maxLength: 32768
+ type: string
+ observedGeneration:
+ description: observedGeneration represents the .metadata.generation
+ that the condition was set based upon. For instance, if .metadata.generation
+ is currently 12, but the .status.conditions[x].observedGeneration
+ is 9, the condition is out of date with respect to the current
+ state of the instance.
+ format: int64
+ minimum: 0
+ type: integer
+ reason:
+ description: reason contains a programmatic identifier indicating
+ the reason for the condition's last transition. Producers
+ of specific condition types may define expected values and
+ meanings for this field, and whether the values are considered
+ a guaranteed API. The value should be a CamelCase string.
+ This field may not be empty.
+ maxLength: 1024
+ minLength: 1
+ pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
+ type: string
+ status:
+ description: status of the condition, one of True, False, Unknown.
+ enum:
+ - "True"
+ - "False"
+ - Unknown
+ type: string
+ type:
+ description: type of condition in CamelCase or in foo.example.com/CamelCase.
+ --- Many .condition.type values are consistent across resources
+ like Available, but because arbitrary conditions can be useful
+ (see .node.status.conditions), the ability to deconflict is
+ important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
+ maxLength: 316
+ pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
+status:
+ acceptedNames:
+ kind: ""
+ plural: ""
+ conditions: []
+ storedVersions: []
diff --git a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
index 6e43ce06d871c..629c40770c1e6 100644
--- a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
+++ b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
@@ -191,10 +191,112 @@ spec:
- Unmanaged
type: string
replicationFactor:
+ default: 1
description: ReplicationFactor defines the policy for log stream replication.
format: int32
minimum: 1
type: integer
+ rules:
+ description: Rules defines the spec for the ruler component
+ properties:
+ enabled:
+ description: Enabled defines a flag to enable/disable the ruler
+ component
+ type: boolean
+ namespaceSelector:
+ description: Namespaces to be selected for PrometheusRules discovery.
+ If unspecified, only the same namespace as the LokiStack object
+ is in is used.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector
+ requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector
+ that contains values, a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key that the selector
+ applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are In, NotIn,
+ Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If
+ the operator is In or NotIn, the values array must
+ be non-empty. If the operator is Exists or DoesNotExist,
+ the values array must be empty. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A
+ single {key,value} in the matchLabels map is equivalent
+ to an element of matchExpressions, whose key field is "key",
+ the operator is "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ selector:
+ description: A selector to select which LokiRules to mount for
+ loading alerting/recording rules from.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector
+ requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector
+ that contains values, a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key that the selector
+ applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are In, NotIn,
+ Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If
+ the operator is In or NotIn, the values array must
+ be non-empty. If the operator is Exists or DoesNotExist,
+ the values array must be empty. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A
+ single {key,value} in the matchLabels map is equivalent
+ to an element of matchExpressions, whose key field is "key",
+ the operator is "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ required:
+ - enabled
+ type: object
size:
            description: Size defines one of the supported Loki deployment scale
              out sizes.
@@ -646,6 +748,64 @@ spec:
type: object
type: array
type: object
+ ruler:
+ description: Ruler defines the ruler component spec.
+ properties:
+ nodeSelector:
+ additionalProperties:
+ type: string
+ description: NodeSelector defines the labels required by a
+ node to schedule the component onto it.
+ type: object
+ replicas:
+ description: Replicas defines the number of replica pods of
+ the component.
+ format: int32
+ type: integer
+ tolerations:
+ description: Tolerations defines the tolerations required
+ by a node to schedule the component onto it.
+ items:
+ description: The pod this Toleration is attached to tolerates
+ any taint that matches the triple <key,value,effect> using
+ the matching operator <operator>.
+ properties:
+ effect:
+ description: Effect indicates the taint effect to match.
+ Empty means match all taint effects. When specified,
+ allowed values are NoSchedule, PreferNoSchedule and
+ NoExecute.
+ type: string
+ key:
+ description: Key is the taint key that the toleration
+ applies to. Empty means match all taint keys. If the
+ key is empty, operator must be Exists; this combination
+ means to match all values and all keys.
+ type: string
+ operator:
+ description: Operator represents a key's relationship
+ to the value. Valid operators are Exists and Equal.
+ Defaults to Equal. Exists is equivalent to wildcard
+ for value, so that a pod can tolerate all taints of
+ a particular category.
+ type: string
+ tolerationSeconds:
+ description: TolerationSeconds represents the period
+ of time the toleration (which must be of effect NoExecute,
+ otherwise this field is ignored) tolerates the taint.
+ By default, it is not set, which means tolerate the
+ taint forever (do not evict). Zero and negative values
+ will be treated as 0 (evict immediately) by the system.
+ format: int64
+ type: integer
+ value:
+ description: Value is the taint value the toleration
+ matches to. If the operator is Exists, the value should
+ be empty, otherwise just a regular string.
+ type: string
+ type: object
+ type: array
+ type: object
type: object
tenants:
description: Tenants defines the per-tenant authentication and authorization
@@ -869,6 +1029,14 @@ spec:
description: QueryFrontend is a map to the per pod status of the
query frontend deployment
type: object
+ ruler:
+ additionalProperties:
+ items:
+ type: string
+ type: array
+ description: Ruler is a map to the per pod status of the lokistack
+ ruler statefulset.
+ type: object
type: object
conditions:
description: Conditions of the Loki deployment health.
diff --git a/operator/config/crd/bases/loki.grafana.com_recordingrules.yaml b/operator/config/crd/bases/loki.grafana.com_recordingrules.yaml
new file mode 100644
index 0000000000000..3f7adde453a94
--- /dev/null
+++ b/operator/config/crd/bases/loki.grafana.com_recordingrules.yaml
@@ -0,0 +1,173 @@
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.8.0
+ creationTimestamp: null
+ name: recordingrules.loki.grafana.com
+spec:
+ group: loki.grafana.com
+ names:
+ kind: RecordingRule
+ listKind: RecordingRuleList
+ plural: recordingrules
+ singular: recordingrule
+ scope: Namespaced
+ versions:
+ - name: v1beta1
+ schema:
+ openAPIV3Schema:
+ description: RecordingRule is the Schema for the recordingrules API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: RecordingRuleSpec defines the desired state of RecordingRule
+ properties:
+ groups:
+ description: List of groups for recording rules.
+ items:
+ description: RecordingRuleGroup defines a group of Loki recording
+ rules.
+ properties:
+ interval:
+ default: 1m
+ description: Interval defines the time interval between evaluation
+ of the given recoding rule.
+ pattern: ((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)
+ type: string
+ limit:
+ description: Limit defines the number of series a recording
+ rule can produce. 0 is no limit.
+ format: int32
+ type: integer
+ name:
+ description: Name of the recording rule group. Must be unique
+ within all recording rules.
+ type: string
+ rules:
+ description: Rules defines a list of recording rules
+ items:
+ description: RecordingRuleGroupSpec defines the spec for a
+ Loki recording rule.
+ properties:
+ expr:
+                            description: The LogQL expression to evaluate. Every evaluation
+                              cycle this is evaluated at the current time, and the result
+                              is recorded as a new set of time series with the given metric name.
+ type: string
+ record:
+ description: The name of the time series to output to.
+ Must be a valid metric name.
+ type: string
+ required:
+ - expr
+ type: object
+ type: array
+ required:
+ - name
+ - rules
+ type: object
+ type: array
+ tenantID:
+ description: TenantID of tenant where the recording rules are evaluated
+ in.
+ type: string
+ required:
+ - tenantID
+ type: object
+ status:
+ description: RecordingRuleStatus defines the observed state of RecordingRule
+ properties:
+ conditions:
+ description: Conditions of the RecordingRule generation health.
+ items:
+ description: "Condition contains details for one aspect of the current
+ state of this API Resource. --- This struct is intended for direct
+ use as an array at the field path .status.conditions. For example,
+ type FooStatus struct{ // Represents the observations of a foo's
+ current state. // Known .status.conditions.type are: \"Available\",
+ \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge
+ // +listType=map // +listMapKey=type Conditions []metav1.Condition
+ `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\"
+ protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }"
+ properties:
+ lastTransitionTime:
+ description: lastTransitionTime is the last time the condition
+ transitioned from one status to another. This should be when
+ the underlying condition changed. If that is not known, then
+ using the time when the API field changed is acceptable.
+ format: date-time
+ type: string
+ message:
+ description: message is a human readable message indicating
+ details about the transition. This may be an empty string.
+ maxLength: 32768
+ type: string
+ observedGeneration:
+ description: observedGeneration represents the .metadata.generation
+ that the condition was set based upon. For instance, if .metadata.generation
+ is currently 12, but the .status.conditions[x].observedGeneration
+ is 9, the condition is out of date with respect to the current
+ state of the instance.
+ format: int64
+ minimum: 0
+ type: integer
+ reason:
+ description: reason contains a programmatic identifier indicating
+ the reason for the condition's last transition. Producers
+ of specific condition types may define expected values and
+ meanings for this field, and whether the values are considered
+ a guaranteed API. The value should be a CamelCase string.
+ This field may not be empty.
+ maxLength: 1024
+ minLength: 1
+ pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
+ type: string
+ status:
+ description: status of the condition, one of True, False, Unknown.
+ enum:
+ - "True"
+ - "False"
+ - Unknown
+ type: string
+ type:
+ description: type of condition in CamelCase or in foo.example.com/CamelCase.
+ --- Many .condition.type values are consistent across resources
+ like Available, but because arbitrary conditions can be useful
+ (see .node.status.conditions), the ability to deconflict is
+ important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
+ maxLength: 316
+ pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
+status:
+ acceptedNames:
+ kind: ""
+ plural: ""
+ conditions: []
+ storedVersions: []
diff --git a/operator/config/crd/kustomization.yaml b/operator/config/crd/kustomization.yaml
index b92b3cc3c333d..2241baa2c8661 100644
--- a/operator/config/crd/kustomization.yaml
+++ b/operator/config/crd/kustomization.yaml
@@ -3,17 +3,23 @@
# It should be run by config/default
resources:
- bases/loki.grafana.com_lokistacks.yaml
+- bases/loki.grafana.com_alertingrules.yaml
+- bases/loki.grafana.com_recordingrules.yaml
# +kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_lokistacks.yaml
+#- patches/webhook_in_alertingrules.yaml
+#- patches/webhook_in_recordingrules.yaml
# +kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable webhook, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
#- patches/cainjection_in_lokistacks.yaml
+#- patches/cainjection_in_alertingrules.yaml
+#- patches/cainjection_in_recordingrules.yaml
# +kubebuilder:scaffold:crdkustomizecainjectionpatch
# the following config is for teaching kustomize how to do kustomization for CRDs.
diff --git a/operator/config/crd/patches/cainjection_in_lokistacks.yaml b/operator/config/crd/patches/cainjection_in_lokistacks.yaml
deleted file mode 100644
index bacdd76044954..0000000000000
--- a/operator/config/crd/patches/cainjection_in_lokistacks.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-# The following patch adds a directive for certmanager to inject CA into the CRD
-# CRD conversion requires k8s 1.13 or later.
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
- annotations:
- cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
- name: lokistacks.loki.grafana.com
diff --git a/operator/config/crd/patches/webhook_in_lokistacks.yaml b/operator/config/crd/patches/webhook_in_lokistacks.yaml
deleted file mode 100644
index c9750f0e9abdc..0000000000000
--- a/operator/config/crd/patches/webhook_in_lokistacks.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-# The following patch enables a conversion webhook for the CRD
-# CRD conversion requires k8s 1.13 or later.
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
- name: lokistacks.loki.grafana.com
-spec:
- conversion:
- strategy: Webhook
- webhookClientConfig:
- service:
- namespace: system
- name: webhook-service
- path: /convert
diff --git a/operator/config/manifests/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/bases/loki-operator.clusterserviceversion.yaml
index 632913c32f898..036625d996fe5 100644
--- a/operator/config/manifests/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/bases/loki-operator.clusterserviceversion.yaml
@@ -31,6 +31,64 @@ spec:
apiservicedefinitions: {}
customresourcedefinitions:
owned:
+ - description: AlertingRule is the Schema for the alertingrules API
+ displayName: AlertingRule
+ kind: AlertingRule
+ name: alertingrules.loki.grafana.com
+ resources:
+ - kind: LokiStack
+ name: ""
+ version: v1beta1
+ specDescriptors:
+ - description: List of groups for alerting rules.
+ displayName: Groups
+ path: groups
+ - description: Interval defines the time interval between evaluation of the
+ given alerting rule.
+ displayName: Evaluation Interval
+ path: groups[0].interval
+ - description: Limit defines the number of alerts an alerting rule can produce.
+ 0 is no limit.
+ displayName: Limit of firing alerts
+ path: groups[0].limit
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:number
+ - description: Name of the alerting rule group. Must be unique within all alerting
+ rules.
+ displayName: Name
+ path: groups[0].name
+ - description: Rules defines a list of alerting rules
+ displayName: Rules
+ path: groups[0].rules
+ - description: The name of the alert. Must be a valid label value.
+ displayName: Name
+ path: groups[0].rules[0].alert
+ - description: Annotations to add to each alert.
+ displayName: Annotations
+ path: groups[0].rules[0].annotations
+ - description: The LogQL expression to evaluate. Every evaluation cycle this
+ is evaluated at the current time, and all resultant time series become pending/firing
+ alerts.
+ displayName: LogQL Expression
+ path: groups[0].rules[0].expr
+ - description: Alerts are considered firing once they have been returned for
+ this long. Alerts which have not yet fired for long enough are considered
+ pending.
+ displayName: Firing Threshold
+ path: groups[0].rules[0].for
+ - description: Labels to add to each alert.
+ displayName: Labels
+ path: groups[0].rules[0].labels
+ - description: TenantID of tenant where the alerting rules are evaluated in.
+ displayName: Tenant ID
+ path: tenantID
+ statusDescriptors:
+ - description: Conditions of the AlertingRule generation health.
+ displayName: Conditions
+ path: conditions
+ x-descriptors:
+ - urn:alm:descriptor:io.kubernetes.conditions
+ version: v1beta1
- description: LokiStack is the Schema for the lokistacks API
displayName: LokiStack
kind: LokiStack
@@ -207,6 +265,24 @@ spec:
path: replicationFactor
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: Rules defines the spec for the ruler component
+ displayName: Rules
+ path: rules
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:advanced
+ - description: Enabled defines a flag to enable/disable the ruler component
+ displayName: Enable
+ path: rules.enabled
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+  - description: Namespaces to be selected for PrometheusRules discovery. If unspecified,
+      only the namespace of the LokiStack object is used.
+ displayName: Namespace Selector
+ path: rules.namespaceSelector
+ - description: A selector to select which LokiRules to mount for loading alerting/recording
+ rules from.
+ displayName: Selector
+ path: rules.selector
- description: Size defines one of the support Loki deployment scale out sizes.
displayName: LokiStack Size
path: size
@@ -300,6 +376,14 @@ spec:
path: template.queryFrontend.replicas
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:hidden
+ - description: Ruler defines the ruler component spec.
+ displayName: Ruler pods
+ path: template.ruler
+ - description: Replicas defines the number of replica pods of the component.
+ displayName: Replicas
+ path: template.ruler.replicas
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:hidden
- description: Tenants defines the per-tenant authentication and authorization
spec for the lokistack-gateway component.
displayName: Tenants Configuration
@@ -398,12 +482,65 @@ spec:
path: components.indexGateway
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:podStatuses
+ - description: Ruler is a map to the per pod status of the lokistack ruler statefulset.
+ displayName: Ruler
+ path: components.ruler
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:podStatuses
- description: Conditions of the Loki deployment health.
displayName: Conditions
path: conditions
x-descriptors:
- urn:alm:descriptor:io.kubernetes.conditions
version: v1beta1
+ - description: RecordingRule is the Schema for the recordingrules API
+ displayName: RecordingRule
+ kind: RecordingRule
+ name: recordingrules.loki.grafana.com
+ resources:
+ - kind: LokiStack
+ name: ""
+ version: v1beta1
+ specDescriptors:
+ - description: List of groups for recording rules.
+ displayName: Groups
+ path: groups
+  - description: Interval defines the time interval between evaluation of the
+      given recording rule.
+ displayName: Evaluation Interval
+ path: groups[0].interval
+ - description: Limit defines the number of series a recording rule can produce.
+ 0 is no limit.
+ displayName: Limit of produced series
+ path: groups[0].limit
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:number
+ - description: Name of the recording rule group. Must be unique within all recording
+ rules.
+ displayName: Name
+ path: groups[0].name
+ - description: Rules defines a list of recording rules
+ displayName: Rules
+ path: groups[0].rules
+  - description: The LogQL expression to evaluate. Every evaluation cycle this
+      is evaluated at the current time, and the result is recorded as a new set
+      of time series with the given metric name.
+ displayName: LogQL Expression
+ path: groups[0].rules[0].expr
+ - description: The name of the time series to output to. Must be a valid metric
+ name.
+ displayName: Metric Name
+ path: groups[0].rules[0].record
+ - description: TenantID of tenant where the recording rules are evaluated in.
+ displayName: Tenant ID
+ path: tenantID
+ statusDescriptors:
+ - description: Conditions of the RecordingRule generation health.
+ displayName: Conditions
+ path: conditions
+ x-descriptors:
+ - urn:alm:descriptor:io.kubernetes.conditions
+ version: v1beta1
description: |
The Loki Operator for OCP provides a means for configuring and managing a Loki stack for cluster logging.
## Prerequisites and Requirements
diff --git a/operator/config/overlays/openshift/auth_proxy_service_annotations_patch.yaml b/operator/config/overlays/openshift/auth_proxy_service_annotations_patch.yaml
new file mode 100644
index 0000000000000..a24f8b83bc8ca
--- /dev/null
+++ b/operator/config/overlays/openshift/auth_proxy_service_annotations_patch.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ service.beta.openshift.io/serving-cert-secret-name: loki-operator-metrics
+ labels:
+ name: controller-manager-metrics-service
diff --git a/operator/config/overlays/openshift/kustomization.yaml b/operator/config/overlays/openshift/kustomization.yaml
index 6990c95b18723..4961b9adb04d3 100644
--- a/operator/config/overlays/openshift/kustomization.yaml
+++ b/operator/config/overlays/openshift/kustomization.yaml
@@ -2,12 +2,7 @@ resources:
- ../../crd
- ../../rbac
- ../../manager
-# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
-# crd/kustomization.yaml
-#- ../webhook
-# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
-#- ../certmanager
-# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
+- ../../webhook
- ../../prometheus
# Adds namespace to all resources.
@@ -31,60 +26,15 @@ labels:
app.kubernetes.io/version: "0.0.1"
patchesStrategicMerge:
-# Protect the /metrics endpoint by putting it behind auth.
-# If you want your controller-manager to expose the /metrics
-# endpoint w/o any authn/z, please comment the following line.
+- auth_proxy_service_annotations_patch.yaml
- manager_auth_proxy_patch.yaml
- manager_related_image_patch.yaml
- manager_run_flags_patch.yaml
+- manager_webhook_patch.yaml
- prometheus_service_monitor_patch.yaml
+- webhook_service_annotations_patch.yaml
-# apiVersion: kustomize.config.k8s.io/v1beta1
-# kind: Kustomization
images:
- name: controller
newName: quay.io/openshift-logging/loki-operator
newTag: v0.0.1
-
-# Mount the controller config file for loading manager configurations
-# through a ComponentConfig type
-#- manager_config_patch.yaml
-
-# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
-# crd/kustomization.yaml
-#- manager_webhook_patch.yaml
-
-# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
-# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
-# 'CERTMANAGER' needs to be enabled to use ca injection
-#- webhookcainjection_patch.yaml
-
-# the following config is for teaching kustomize how to do var substitution
-vars:
-# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
-#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
-# objref:
-# kind: Certificate
-# group: cert-manager.io
-# version: v1
-# name: serving-cert # this name should match the one in certificate.yaml
-# fieldref:
-# fieldpath: metadata.namespace
-#- name: CERTIFICATE_NAME
-# objref:
-# kind: Certificate
-# group: cert-manager.io
-# version: v1
-# name: serving-cert # this name should match the one in certificate.yaml
-#- name: SERVICE_NAMESPACE # namespace of the service
-# objref:
-# kind: Service
-# version: v1
-# name: webhook-service
-# fieldref:
-# fieldpath: metadata.namespace
-#- name: SERVICE_NAME
-# objref:
-# kind: Service
-# version: v1
-# name: webhook-service
diff --git a/operator/config/overlays/openshift/manager_webhook_patch.yaml b/operator/config/overlays/openshift/manager_webhook_patch.yaml
new file mode 100644
index 0000000000000..dc95a45f77ced
--- /dev/null
+++ b/operator/config/overlays/openshift/manager_webhook_patch.yaml
@@ -0,0 +1,22 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: controller-manager
+spec:
+ template:
+ spec:
+ containers:
+ - name: manager
+ ports:
+ - containerPort: 9443
+ name: webhook-server
+ protocol: TCP
+ volumeMounts:
+ - mountPath: /tmp/k8s-webhook-server/serving-certs
+ name: webhook-cert
+ readOnly: true
+ volumes:
+ - name: webhook-cert
+ secret:
+ defaultMode: 420
+ secretName: loki-operator-webhook-service
diff --git a/operator/config/overlays/openshift/webhook_service_annotations_patch.yaml b/operator/config/overlays/openshift/webhook_service_annotations_patch.yaml
new file mode 100644
index 0000000000000..3d6d200ef59ea
--- /dev/null
+++ b/operator/config/overlays/openshift/webhook_service_annotations_patch.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ service.beta.openshift.io/serving-cert-secret-name: loki-operator-webhook-service
+ name: webhook-service
+ namespace: system
diff --git a/operator/config/overlays/production/kustomization.yaml b/operator/config/overlays/production/kustomization.yaml
index 9cdeba4c4e97c..eefc70e44ef8d 100644
--- a/operator/config/overlays/production/kustomization.yaml
+++ b/operator/config/overlays/production/kustomization.yaml
@@ -2,12 +2,8 @@ resources:
- ../../crd
- ../../rbac
- ../../manager
-# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
-# crd/kustomization.yaml
-#- ../webhook
-# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
-#- ../certmanager
-# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
+- ../../webhook
+- ../../certmanager
- ../../prometheus
# Adds namespace to all resources.
@@ -31,58 +27,46 @@ labels:
app.kubernetes.io/version: "0.0.1"
patchesStrategicMerge:
-# Protect the /metrics endpoint by putting it behind auth.
-# If you want your controller-manager to expose the /metrics
-# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml
- manager_related_image_patch.yaml
- manager_run_flags_patch.yaml
+- manager_webhook_patch.yaml
- prometheus_service_monitor_patch.yaml
+- webhookcainjection_patch.yaml
images:
- name: controller
- newName: quay.io/viaq/loki-operator
+ # Change this to docker.io/grafana/loki-operator once the following issue is resolved:
+ # https://github.com/grafana/loki/issues/5617
+ newName: quay.io/openshift-logging/loki-operator
newTag: v0.0.1
-# Mount the controller config file for loading manager configurations
-# through a ComponentConfig type
-#- manager_config_patch.yaml
-
-# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
-# crd/kustomization.yaml
-#- manager_webhook_patch.yaml
-
-# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
-# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
-# 'CERTMANAGER' needs to be enabled to use ca injection
-#- webhookcainjection_patch.yaml
-
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
-#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
-# objref:
-# kind: Certificate
-# group: cert-manager.io
-# version: v1
-# name: serving-cert # this name should match the one in certificate.yaml
-# fieldref:
-# fieldpath: metadata.namespace
-#- name: CERTIFICATE_NAME
-# objref:
-# kind: Certificate
-# group: cert-manager.io
-# version: v1
-# name: serving-cert # this name should match the one in certificate.yaml
-#- name: SERVICE_NAMESPACE # namespace of the service
-# objref:
-# kind: Service
-# version: v1
-# name: webhook-service
-# fieldref:
-# fieldpath: metadata.namespace
-#- name: SERVICE_NAME
-# objref:
-# kind: Service
-# version: v1
-# name: webhook-service
+- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
+ objref:
+ kind: Certificate
+ group: cert-manager.io
+ version: v1
+ name: serving-cert # this name should match the one in certificate.yaml
+ fieldref:
+ fieldpath: metadata.namespace
+- name: CERTIFICATE_NAME
+ objref:
+ kind: Certificate
+ group: cert-manager.io
+ version: v1
+ name: serving-cert # this name should match the one in certificate.yaml
+- name: SERVICE_NAMESPACE # namespace of the service
+ objref:
+ kind: Service
+ version: v1
+ name: webhook-service
+ fieldref:
+ fieldpath: metadata.namespace
+- name: SERVICE_NAME
+ objref:
+ kind: Service
+ version: v1
+ name: webhook-service
diff --git a/operator/config/overlays/production/manager_webhook_patch.yaml b/operator/config/overlays/production/manager_webhook_patch.yaml
new file mode 100644
index 0000000000000..e43b1f0e94b6c
--- /dev/null
+++ b/operator/config/overlays/production/manager_webhook_patch.yaml
@@ -0,0 +1,22 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: controller-manager
+spec:
+ template:
+ spec:
+ containers:
+ - name: manager
+ ports:
+ - containerPort: 9443
+ name: webhook-server
+ protocol: TCP
+ volumeMounts:
+ - mountPath: /tmp/k8s-webhook-server/serving-certs
+ name: webhook-cert
+ readOnly: true
+ volumes:
+ - name: webhook-cert
+ secret:
+ defaultMode: 420
+ secretName: loki-operator-webhook-server-cert
diff --git a/operator/config/overlays/production/webhookcainjection_patch.yaml b/operator/config/overlays/production/webhookcainjection_patch.yaml
new file mode 100644
index 0000000000000..cbcbf762a647b
--- /dev/null
+++ b/operator/config/overlays/production/webhookcainjection_patch.yaml
@@ -0,0 +1,18 @@
+# This patch adds an annotation to the admission webhook config, and
+# the variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
+#
+# [WEBHOOK] To enable mutating webhook hook, uncomment the following section
+#
+# apiVersion: admissionregistration.k8s.io/v1
+# kind: MutatingWebhookConfiguration
+# metadata:
+# name: mutating-webhook-configuration
+# annotations:
+# cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
+---
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+ name: validating-webhook-configuration
+ annotations:
+ cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
diff --git a/operator/config/rbac/alertingrule_editor_role.yaml b/operator/config/rbac/alertingrule_editor_role.yaml
new file mode 100644
index 0000000000000..0fd22318b586f
--- /dev/null
+++ b/operator/config/rbac/alertingrule_editor_role.yaml
@@ -0,0 +1,24 @@
+# permissions for end users to edit alertingrules.
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: alertingrule-editor-role
+rules:
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules/status
+ verbs:
+ - get
diff --git a/operator/config/rbac/alertingrule_viewer_role.yaml b/operator/config/rbac/alertingrule_viewer_role.yaml
new file mode 100644
index 0000000000000..f0cdc7d96492c
--- /dev/null
+++ b/operator/config/rbac/alertingrule_viewer_role.yaml
@@ -0,0 +1,20 @@
+# permissions for end users to view alertingrules.
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: alertingrule-viewer-role
+rules:
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules/status
+ verbs:
+ - get
diff --git a/operator/config/rbac/auth_proxy_service.yaml b/operator/config/rbac/auth_proxy_service.yaml
index 0e34e68d70f40..7a0b8d71dfa71 100644
--- a/operator/config/rbac/auth_proxy_service.yaml
+++ b/operator/config/rbac/auth_proxy_service.yaml
@@ -1,8 +1,6 @@
apiVersion: v1
kind: Service
metadata:
- annotations:
- service.beta.openshift.io/serving-cert-secret-name: loki-operator-metrics
labels:
name: controller-manager-metrics-service
spec:
diff --git a/operator/config/rbac/recordingrule_editor_role.yaml b/operator/config/rbac/recordingrule_editor_role.yaml
new file mode 100644
index 0000000000000..8278e82e6d9bb
--- /dev/null
+++ b/operator/config/rbac/recordingrule_editor_role.yaml
@@ -0,0 +1,24 @@
+# permissions for end users to edit recordingrules.
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: recordingrule-editor-role
+rules:
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules/status
+ verbs:
+ - get
diff --git a/operator/config/rbac/recordingrule_viewer_role.yaml b/operator/config/rbac/recordingrule_viewer_role.yaml
new file mode 100644
index 0000000000000..bcb176c97af58
--- /dev/null
+++ b/operator/config/rbac/recordingrule_viewer_role.yaml
@@ -0,0 +1,20 @@
+# permissions for end users to view recordingrules.
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: recordingrule-viewer-role
+rules:
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules/status
+ verbs:
+ - get
diff --git a/operator/config/rbac/role.yaml b/operator/config/rbac/role.yaml
index d9d10d73f1801..9ec496bfe1097 100644
--- a/operator/config/rbac/role.yaml
+++ b/operator/config/rbac/role.yaml
@@ -59,6 +59,32 @@ rules:
- create
- get
- update
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules/finalizers
+ verbs:
+ - update
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - alertingrules/status
+ verbs:
+ - get
+ - patch
+ - update
- apiGroups:
- loki.grafana.com
resources:
@@ -85,6 +111,32 @@ rules:
- get
- patch
- update
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules/finalizers
+ verbs:
+ - update
+- apiGroups:
+ - loki.grafana.com
+ resources:
+ - recordingrules/status
+ verbs:
+ - get
+ - patch
+ - update
- apiGroups:
- monitoring.coreos.com
resources:
diff --git a/operator/config/samples/kustomization.yaml b/operator/config/samples/kustomization.yaml
index 1ba3ca969be6a..34f348fcffb8b 100644
--- a/operator/config/samples/kustomization.yaml
+++ b/operator/config/samples/kustomization.yaml
@@ -1,4 +1,6 @@
## Append samples you want in your CSV to this file as resources ##
resources:
- loki_v1beta1_lokistack.yaml
+- loki_v1beta1_alertingrule.yaml
+- loki_v1beta1_recordingrule.yaml
# +kubebuilder:scaffold:manifestskustomizesamples
diff --git a/operator/config/samples/loki_v1beta1_alertingrule.yaml b/operator/config/samples/loki_v1beta1_alertingrule.yaml
new file mode 100644
index 0000000000000..271ca2be247b2
--- /dev/null
+++ b/operator/config/samples/loki_v1beta1_alertingrule.yaml
@@ -0,0 +1,28 @@
+apiVersion: loki.grafana.com/v1beta1
+kind: AlertingRule
+metadata:
+ name: alertingrule-sample
+spec:
+ tenantID: test-tenant
+ groups:
+ - name: alerting-rules-group
+ interval: 10m
+ rules:
+ - alert: HighPercentageError
+ expr: |
+ sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
+ /
+ sum(rate({app="foo", env="production"}[5m])) by (job)
+ > 0.05
+ for: 10m
+ labels:
+ severity: page
+ annotations:
+ summary: High request latency
+ - alert: HttpCredentialsLeaked
+ annotations:
+ message: "{{ $labels.job }} is leaking http basic auth credentials."
+ expr: 'sum by (cluster, job, pod) (count_over_time({namespace="prod"} |~ "http(s?)://(\\w+):(\\w+)@" [5m]) > 0)'
+ for: 10m
+ labels:
+ severity: critical
diff --git a/operator/config/samples/loki_v1beta1_recordingrule.yaml b/operator/config/samples/loki_v1beta1_recordingrule.yaml
new file mode 100644
index 0000000000000..ab106feec49ce
--- /dev/null
+++ b/operator/config/samples/loki_v1beta1_recordingrule.yaml
@@ -0,0 +1,16 @@
+apiVersion: loki.grafana.com/v1beta1
+kind: RecordingRule
+metadata:
+ name: recordingrule-sample
+spec:
+ tenantID: test-tenant
+ groups:
+ - name: recording-rules-group
+ interval: 10m
+ rules:
+ - record: "myservice:requests:rate10m"
+ expr: |
+ sum(rate({container="myservice"}[10m]))
+ - record: "otherservice:requests:rate1m"
+ expr: |
+ sum(rate({container="otherservice"}[1m]))
diff --git a/operator/config/webhook/kustomization.yaml b/operator/config/webhook/kustomization.yaml
new file mode 100644
index 0000000000000..9cf26134e4d53
--- /dev/null
+++ b/operator/config/webhook/kustomization.yaml
@@ -0,0 +1,6 @@
+resources:
+- manifests.yaml
+- service.yaml
+
+configurations:
+- kustomizeconfig.yaml
diff --git a/operator/config/webhook/kustomizeconfig.yaml b/operator/config/webhook/kustomizeconfig.yaml
new file mode 100644
index 0000000000000..25e21e3c963f0
--- /dev/null
+++ b/operator/config/webhook/kustomizeconfig.yaml
@@ -0,0 +1,25 @@
+# the following config is for teaching kustomize where to look when substituting vars.
+# It requires kustomize v2.1.0 or newer to work properly.
+nameReference:
+- kind: Service
+ version: v1
+ fieldSpecs:
+ - kind: MutatingWebhookConfiguration
+ group: admissionregistration.k8s.io
+ path: webhooks/clientConfig/service/name
+ - kind: ValidatingWebhookConfiguration
+ group: admissionregistration.k8s.io
+ path: webhooks/clientConfig/service/name
+
+namespace:
+- kind: MutatingWebhookConfiguration
+ group: admissionregistration.k8s.io
+ path: webhooks/clientConfig/service/namespace
+ create: true
+- kind: ValidatingWebhookConfiguration
+ group: admissionregistration.k8s.io
+ path: webhooks/clientConfig/service/namespace
+ create: true
+
+varReference:
+- path: metadata/annotations
diff --git a/operator/config/webhook/manifests.yaml b/operator/config/webhook/manifests.yaml
new file mode 100644
index 0000000000000..d5f9777037833
--- /dev/null
+++ b/operator/config/webhook/manifests.yaml
@@ -0,0 +1,47 @@
+---
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+ creationTimestamp: null
+ name: validating-webhook-configuration
+webhooks:
+- admissionReviewVersions:
+ - v1
+ clientConfig:
+ service:
+ name: webhook-service
+ namespace: system
+ path: /validate-loki-grafana-com-v1beta1-alertingrule
+ failurePolicy: Fail
+ name: valertingrule.kb.io
+ rules:
+ - apiGroups:
+ - loki.grafana.com
+ apiVersions:
+ - v1beta1
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - alertingrules
+ sideEffects: None
+- admissionReviewVersions:
+ - v1
+ clientConfig:
+ service:
+ name: webhook-service
+ namespace: system
+ path: /validate-loki-grafana-com-v1beta1-recordingrule
+ failurePolicy: Fail
+ name: vrecordingrule.kb.io
+ rules:
+ - apiGroups:
+ - loki.grafana.com
+ apiVersions:
+ - v1beta1
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - recordingrules
+ sideEffects: None
diff --git a/operator/config/webhook/service.yaml b/operator/config/webhook/service.yaml
new file mode 100644
index 0000000000000..acd6493f95170
--- /dev/null
+++ b/operator/config/webhook/service.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: webhook-service
+ namespace: system
+spec:
+ ports:
+ - port: 443
+ protocol: TCP
+ targetPort: 9443
diff --git a/operator/controllers/alertingrule_controller.go b/operator/controllers/alertingrule_controller.go
new file mode 100644
index 0000000000000..038ebedbdfef5
--- /dev/null
+++ b/operator/controllers/alertingrule_controller.go
@@ -0,0 +1,57 @@
+package controllers
+
+import (
+ "context"
+ "time"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/runtime"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
+ "sigs.k8s.io/controller-runtime/pkg/source"
+
+ "github.com/go-logr/logr"
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/controllers/internal/lokistack"
+)
+
+// AlertingRuleReconciler reconciles an AlertingRule object
+type AlertingRuleReconciler struct {
+ client.Client
+ Log logr.Logger
+ Scheme *runtime.Scheme
+}
+
+//+kubebuilder:rbac:groups=loki.grafana.com,resources=alertingrules,verbs=get;list;watch;create;update;patch;delete
+//+kubebuilder:rbac:groups=loki.grafana.com,resources=alertingrules/status,verbs=get;update;patch
+//+kubebuilder:rbac:groups=loki.grafana.com,resources=alertingrules/finalizers,verbs=update
+
+// Reconcile is part of the main kubernetes reconciliation loop which aims to
+// move the current state of the cluster closer to the desired state.
+// It annotates every LokiStack instance so that the LokiStack controller
+// picks up the newly discovered alerting rules.
+//
+// For more details, check Reconcile and its Result here:
+// - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile
+func (r *AlertingRuleReconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {
+ err := lokistack.AnnotateForDiscoveredRules(ctx, r.Client)
+ if err != nil {
+ return ctrl.Result{
+ Requeue: true,
+ RequeueAfter: time.Second,
+ }, err
+ }
+ return ctrl.Result{}, nil
+}
+
+// SetupWithManager sets up the controller with the Manager.
+func (r *AlertingRuleReconciler) SetupWithManager(mgr ctrl.Manager) error {
+ return ctrl.NewControllerManagedBy(mgr).
+ For(&lokiv1beta1.AlertingRule{}).
+ Watches(&source.Kind{Type: &corev1.Namespace{}}, &handler.EnqueueRequestForObject{}, builder.OnlyMetadata).
+ Complete(r)
+}
diff --git a/operator/controllers/internal/lokistack/rules_discovery.go b/operator/controllers/internal/lokistack/rules_discovery.go
new file mode 100644
index 0000000000000..3882c9e0f3ad0
--- /dev/null
+++ b/operator/controllers/internal/lokistack/rules_discovery.go
@@ -0,0 +1,37 @@
+package lokistack
+
+import (
+ "context"
+ "time"
+
+ "github.com/ViaQ/logerr/v2/kverrors"
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/internal/external/k8s"
+ "k8s.io/apimachinery/pkg/labels"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+// AnnotateForDiscoveredRules adds or updates the `loki.grafana.com/rulesDiscoveredAt` annotation
+// on all LokiStack instances in all namespaces to trigger the reconciliation loop.
+func AnnotateForDiscoveredRules(ctx context.Context, k k8s.Client) error {
+ var stacks lokiv1beta1.LokiStackList
+ err := k.List(ctx, &stacks, client.MatchingLabelsSelector{Selector: labels.Everything()})
+ if err != nil {
+		return kverrors.Wrap(err, "failed to list lokistack instances")
+ }
+
+ for _, s := range stacks.Items {
+ ss := s.DeepCopy()
+ if ss.Annotations == nil {
+ ss.Annotations = make(map[string]string)
+ }
+
+ ss.Annotations["loki.grafana.com/rulesDiscoveredAt"] = time.Now().UTC().Format(time.RFC3339)
+
+ if err := k.Update(ctx, ss); err != nil {
+ return kverrors.Wrap(err, "failed to update lokistack `rulesDiscoveredAt` annotation", "name", ss.Name, "namespace", ss.Namespace)
+ }
+ }
+
+ return nil
+}
diff --git a/operator/controllers/lokistack_controller.go b/operator/controllers/lokistack_controller.go
index 3e46047dd8ec4..a6bf59c07216e 100644
--- a/operator/controllers/lokistack_controller.go
+++ b/operator/controllers/lokistack_controller.go
@@ -5,6 +5,7 @@ import (
"errors"
"time"
+ "github.com/google/go-cmp/cmp"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
"github.com/go-logr/logr"
@@ -33,12 +34,13 @@ import (
var (
createOrUpdateOnlyPred = builder.WithPredicates(predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
- // Update only if generation changes, filter out anything else.
- // We only need to check generation here, because it is only
+			// Update only if generation or annotations change; filter out anything else.
+			// We need to check both generation and annotations here, because generation is only
// updated on spec changes. On the other hand RevisionVersion
// changes also on status changes. We want to omit reconciliation
// for status updates for now.
- return e.ObjectOld.GetGeneration() != e.ObjectNew.GetGeneration()
+			return e.ObjectOld.GetGeneration() != e.ObjectNew.GetGeneration() ||
+				!cmp.Equal(e.ObjectOld.GetAnnotations(), e.ObjectNew.GetAnnotations())
},
CreateFunc: func(e event.CreateEvent) bool { return true },
DeleteFunc: func(e event.DeleteEvent) bool { return false },
diff --git a/operator/controllers/recordingrule_controller.go b/operator/controllers/recordingrule_controller.go
new file mode 100644
index 0000000000000..8f482621717c3
--- /dev/null
+++ b/operator/controllers/recordingrule_controller.go
@@ -0,0 +1,57 @@
+package controllers
+
+import (
+ "context"
+ "time"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/runtime"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
+ "sigs.k8s.io/controller-runtime/pkg/source"
+
+ "github.com/go-logr/logr"
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/controllers/internal/lokistack"
+)
+
+// RecordingRuleReconciler reconciles a RecordingRule object
+type RecordingRuleReconciler struct {
+ client.Client
+ Log logr.Logger
+ Scheme *runtime.Scheme
+}
+
+//+kubebuilder:rbac:groups=loki.grafana.com,resources=recordingrules,verbs=get;list;watch;create;update;patch;delete
+//+kubebuilder:rbac:groups=loki.grafana.com,resources=recordingrules/status,verbs=get;update;patch
+//+kubebuilder:rbac:groups=loki.grafana.com,resources=recordingrules/finalizers,verbs=update
+
+// Reconcile is part of the main kubernetes reconciliation loop which aims to
+// move the current state of the cluster closer to the desired state.
+// It annotates every LokiStack instance so that the LokiStack controller
+// picks up the newly discovered recording rules.
+//
+// For more details, check Reconcile and its Result here:
+// - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile
+func (r *RecordingRuleReconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {
+ err := lokistack.AnnotateForDiscoveredRules(ctx, r.Client)
+ if err != nil {
+ return ctrl.Result{
+ Requeue: true,
+ RequeueAfter: time.Second,
+ }, err
+ }
+ return ctrl.Result{}, nil
+}
+
+// SetupWithManager sets up the controller with the Manager.
+func (r *RecordingRuleReconciler) SetupWithManager(mgr ctrl.Manager) error {
+ return ctrl.NewControllerManagedBy(mgr).
+ For(&lokiv1beta1.RecordingRule{}).
+ Watches(&source.Kind{Type: &corev1.Namespace{}}, &handler.EnqueueRequestForObject{}, builder.OnlyMetadata).
+ Complete(r)
+}
diff --git a/operator/go.mod b/operator/go.mod
index be2f59c8e1702..e042641f5010e 100644
--- a/operator/go.mod
+++ b/operator/go.mod
@@ -4,7 +4,7 @@ go 1.17
require (
github.com/go-logr/logr v1.2.3
- github.com/google/uuid v1.1.2
+ github.com/google/uuid v1.2.0
github.com/imdario/mergo v0.3.12
github.com/maxbrunsfeld/counterfeiter/v6 v6.3.0
github.com/openshift/api v0.0.0-20220124143425-d74727069f6f // release-4.10
@@ -14,7 +14,7 @@ require (
github.com/stretchr/testify v1.7.1
k8s.io/api v0.23.5
k8s.io/apimachinery v0.23.5
- k8s.io/client-go v0.23.5
+ k8s.io/client-go v12.0.0+incompatible
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9
sigs.k8s.io/controller-runtime v0.11.0
sigs.k8s.io/yaml v1.3.0
@@ -23,52 +23,138 @@ require (
require github.com/ViaQ/logerr/v2 v2.0.0
require (
- cloud.google.com/go v0.81.0 // indirect
+ github.com/google/go-cmp v0.5.7
+ github.com/grafana/loki v1.6.2-0.20220420044148-f62b4ae1905c
+ gopkg.in/yaml.v2 v2.4.0
+)
+
+require (
+ cloud.google.com/go/compute v1.3.0 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
- github.com/Azure/go-autorest/autorest v0.11.18 // indirect
- github.com/Azure/go-autorest/autorest/adal v0.9.13 // indirect
+ github.com/Azure/go-autorest/autorest v0.11.24 // indirect
+ github.com/Azure/go-autorest/autorest/adal v0.9.18 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
+ github.com/HdrHistogram/hdrhistogram-go v1.1.2 // indirect
+ github.com/Masterminds/goutils v1.1.1 // indirect
+ github.com/Masterminds/semver/v3 v3.1.1 // indirect
+ github.com/Masterminds/sprig/v3 v3.2.2 // indirect
+ github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
+ github.com/armon/go-metrics v0.3.9 // indirect
+ github.com/aws/aws-sdk-go v1.43.10 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
+ github.com/coreos/etcd v3.3.25+incompatible // indirect
+ github.com/coreos/go-semver v0.3.0 // indirect
+ github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect
+ github.com/coreos/go-systemd/v22 v22.3.2 // indirect
+ github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
+ github.com/dennwc/varint v1.0.0 // indirect
+ github.com/dustin/go-humanize v1.0.0 // indirect
+ github.com/edsrzf/mmap-go v1.1.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
- github.com/form3tech-oss/jwt-go v3.2.3+incompatible // indirect
+ github.com/fatih/color v1.13.0 // indirect
+ github.com/felixge/httpsnoop v1.0.2 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
+ github.com/go-kit/log v0.2.0 // indirect
+ github.com/go-logfmt/logfmt v0.5.1 // indirect
+ github.com/go-logr/stdr v1.2.2 // indirect
+ github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
+ github.com/gogo/status v1.1.0 // indirect
+ github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
- github.com/google/go-cmp v0.5.7 // indirect
+ github.com/golang/snappy v0.0.4 // indirect
+ github.com/google/btree v1.0.1 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
+ github.com/gorilla/mux v1.8.0 // indirect
+ github.com/grafana/dskit v0.0.0-20220331160727-49faf69f72ca // indirect
+ github.com/grafana/regexp v0.0.0-20220304100321-149c8afcd6cb // indirect
+ github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 // indirect
+ github.com/hashicorp/consul/api v1.12.0 // indirect
+ github.com/hashicorp/errwrap v1.0.0 // indirect
+ github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
+ github.com/hashicorp/go-hclog v0.16.2 // indirect
+ github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
+ github.com/hashicorp/go-msgpack v0.5.5 // indirect
+ github.com/hashicorp/go-multierror v1.1.0 // indirect
+ github.com/hashicorp/go-rootcerts v1.0.2 // indirect
+ github.com/hashicorp/go-sockaddr v1.0.2 // indirect
+ github.com/hashicorp/golang-lru v0.5.4 // indirect
+ github.com/hashicorp/memberlist v0.3.0 // indirect
+ github.com/hashicorp/serf v0.9.6 // indirect
+ github.com/huandu/xstrings v1.3.1 // indirect
+ github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
+ github.com/mattn/go-colorable v0.1.9 // indirect
+ github.com/mattn/go-isatty v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
+ github.com/miekg/dns v1.1.46 // indirect
+ github.com/mitchellh/copystructure v1.0.0 // indirect
+ github.com/mitchellh/go-homedir v1.1.0 // indirect
+ github.com/mitchellh/mapstructure v1.4.3 // indirect
+ github.com/mitchellh/reflectwalk v1.0.1 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
+ github.com/oklog/ulid v1.3.1 // indirect
+ github.com/opentracing-contrib/go-grpc v0.0.0-20210225150812-73cb765af46e // indirect
+ github.com/opentracing-contrib/go-stdlib v1.0.0 // indirect
+ github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
+ github.com/prometheus/common/sigv4 v0.1.0 // indirect
+ github.com/prometheus/node_exporter v1.0.0-rc.0.0.20200428091818-01054558c289 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
+ github.com/prometheus/prometheus v1.8.2-0.20220303173753-edfe657b5405 // indirect
+ github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 // indirect
+ github.com/sercand/kuberesolver v2.4.0+incompatible // indirect
+ github.com/shopspring/decimal v1.2.0 // indirect
+ github.com/sirupsen/logrus v1.8.1 // indirect
+ github.com/spf13/cast v1.3.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
- golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 // indirect
- golang.org/x/mod v0.4.2 // indirect
+ github.com/stretchr/objx v0.2.0 // indirect
+ github.com/uber/jaeger-client-go v2.30.0+incompatible // indirect
+ github.com/uber/jaeger-lib v2.4.1+incompatible // indirect
+ github.com/weaveworks/common v0.0.0-20211015155308-ebe5bdc2c89e // indirect
+ github.com/weaveworks/promrus v1.2.0 // indirect
+ go.etcd.io/etcd v3.3.25+incompatible // indirect
+ go.etcd.io/etcd/api/v3 v3.5.0 // indirect
+ go.etcd.io/etcd/client/pkg/v3 v3.5.0 // indirect
+ go.etcd.io/etcd/client/v3 v3.5.0 // indirect
+ go.opentelemetry.io/otel v1.4.1 // indirect
+ go.opentelemetry.io/otel/trace v1.4.1 // indirect
+ go.uber.org/atomic v1.9.0 // indirect
+ go.uber.org/goleak v1.1.12 // indirect
+ go.uber.org/multierr v1.7.0 // indirect
+ go.uber.org/zap v1.19.1 // indirect
+ go4.org/intern v0.0.0-20210108033219-3eb7198706b2 // indirect
+ go4.org/unsafe/assume-no-moving-gc v0.0.0-20201222180813-1025295fd063 // indirect
+ golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3 // indirect
+ golang.org/x/mod v0.5.1 // indirect
golang.org/x/net v0.0.0-20220225172249-27dd8689420f // indirect
- golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f // indirect
- golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 // indirect
+ golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
+ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
+ golang.org/x/sys v0.0.0-20220222172238-00053529121e // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
- golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
- golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff // indirect
+ golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
+ golang.org/x/tools v0.1.9 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
+ google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b // indirect
+ google.golang.org/grpc v1.44.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
- gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
+ inet.af/netaddr v0.0.0-20210707202901-70468d781e6c // indirect
k8s.io/apiextensions-apiserver v0.23.0 // indirect
k8s.io/component-base v0.23.0 // indirect
k8s.io/klog/v2 v2.60.1 // indirect
@@ -76,3 +162,5 @@ require (
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
)
+
+replace k8s.io/client-go => k8s.io/client-go v0.23.5
diff --git a/operator/go.sum b/operator/go.sum
index 236024558d4fa..86c2f268396e7 100644
--- a/operator/go.sum
+++ b/operator/go.sum
@@ -1,3 +1,4 @@
+bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
@@ -17,14 +18,25 @@ cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKP
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
-cloud.google.com/go v0.81.0 h1:at8Tk2zUz63cLPR0JPWm5vp77pEZmzxEQBEfRKn1VV8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
+cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
+cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
+cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
+cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
+cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
+cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
+cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
+cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
+cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
+cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
+cloud.google.com/go/compute v1.3.0 h1:mPL/MzDDYHsh5tHRS9mhmhWlcgClCrCa6ApQCU6wnHI=
+cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
@@ -38,32 +50,64 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go v62.0.0+incompatible h1:8N2k27SYtc12qj5nTsuFMFJPZn5CGmgMWqTy4y9I7Jw=
+github.com/Azure/azure-sdk-for-go v62.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210608223527-2377c96fe795/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
+github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
-github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
-github.com/Azure/go-autorest/autorest v0.11.18 h1:90Y4srNYrwOtAgVo3ndrQkTYn6kf1Eg/AjTFJ8Is2aM=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
-github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
-github.com/Azure/go-autorest/autorest/adal v0.9.13 h1:Mp5hbtOePIzM8pJVRa3YLrWWmZtoxRXqUEzCfJt3+/Q=
+github.com/Azure/go-autorest/autorest v0.11.24 h1:1fIGgHKqVm54KIPT+q8Zmd1QlVsmHqeUGso5qm2BqqE=
+github.com/Azure/go-autorest/autorest v0.11.24/go.mod h1:G6kyRlFnTuSbEYkQGawPfsCswgme4iYf6rfSKUDzbCc=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
-github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
+github.com/Azure/go-autorest/autorest/adal v0.9.18 h1:kLnPsRjzZZUF3K5REu/Kc+qMQrvuza2bwSnNdhmzLfQ=
+github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
-github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
-github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
-github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
+github.com/Azure/go-autorest/autorest/to v0.4.0 h1:oXVqrxakqqV1UZdSazDOPOLvOIz+XA683u8EctwboHk=
+github.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE=
+github.com/Azure/go-autorest/autorest/validation v0.3.1 h1:AgyqjAd94fwNAoTjl/WQXg4VvFeRFpO+UhNyRXqF1ac=
+github.com/Azure/go-autorest/autorest/validation v0.3.1/go.mod h1:yhLgjC0Wda5DYXl6JAsWyUe4KVNffhoDhG0zVzUMo3E=
github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg=
github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
-github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
+github.com/HdrHistogram/hdrhistogram-go v1.1.2 h1:5IcZpTvzydCQeHzK4Ef/D5rrSqwxob0t8PQPMybUNFM=
+github.com/HdrHistogram/hdrhistogram-go v1.1.2/go.mod h1:yDgFjdqOqDEKOvasDdhWNXYg9BVp4O+o5f6V/ehm6Oo=
+github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
+github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
+github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
+github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
+github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
+github.com/Masterminds/sprig/v3 v3.2.2 h1:17jRggJu518dr3QaafizSXOjKYp94wKfABxUmyxvxX8=
+github.com/Masterminds/sprig/v3 v3.2.2/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
+github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
+github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
+github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
+github.com/Microsoft/go-winio v0.4.16-0.20201130162521-d1ffc52c7331/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17 h1:iT12IBVClFevaf8PuVyi3UmZOVh4OqnaLxDTW2O6j3w=
+github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
+github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
+github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
+github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
+github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
+github.com/Microsoft/hcsshim v0.8.23/go.mod h1:4zegtUJth7lAvFyc6cH2gGQ5B3OFQim01nnU2M8jKDg=
+github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
+github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
@@ -72,72 +116,236 @@ github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbt
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
+github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
+github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/ViaQ/logerr/v2 v2.0.0 h1:5NOOexPjkhaga6E13JJfPa0vRtyW+nIvTYUDrheZETM=
github.com/ViaQ/logerr/v2 v2.0.0/go.mod h1:/qoWLm3YG40Sv5u75s4fvzjZ5p36xINzaxU2L+DJ9uw=
+github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
+github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM=
+github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
+github.com/alecthomas/units v0.0.0-20210208195552-ff826a37aa15/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
+github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
+github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
+github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
+github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
+github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
+github.com/armon/go-metrics v0.3.3/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
+github.com/armon/go-metrics v0.3.9 h1:O2sNqxBdvq8Eq5xmzljcYzAORli6RWCvEym4cJf9m18=
+github.com/armon/go-metrics v0.3.9/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
+github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
+github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
+github.com/aryann/difflib v0.0.0-20170710044230-e206f873d14a/go.mod h1:DAHtR1m6lCRdSC2Tm3DSWRPvIPr6xNKyeHdqDQSQT+A=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
+github.com/asaskevich/govalidator v0.0.0-20200108200545-475eaeb16496/go.mod h1:oGkLhpf+kjZl6xBf758TQhh5XrAeiJv/7FRz/2spLIg=
+github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535/go.mod h1:oGkLhpf+kjZl6xBf758TQhh5XrAeiJv/7FRz/2spLIg=
+github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
+github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
+github.com/aws/aws-sdk-go v1.15.11/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
+github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
+github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
+github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
+github.com/aws/aws-sdk-go v1.40.11/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
+github.com/aws/aws-sdk-go v1.43.10 h1:lFX6gzTBltYBnlJBjd2DWRCmqn2CbTcs6PW99/Dme7k=
+github.com/aws/aws-sdk-go v1.43.10/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
+github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
+github.com/beevik/ntp v0.2.0/go.mod h1:hIHWr+l3+/clUnF44zdK+CWW7fO8dR5cIylAQ76NRpg=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
+github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
+github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
+github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
+github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
+github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
+github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
+github.com/buger/jsonparser v0.0.0-20180808090653-f4dd9f5a6b44/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
+github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
+github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
+github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
+github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
+github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
+github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
+github.com/cenkalti/backoff/v4 v4.1.2/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
-github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
+github.com/checkpoint-restore/go-criu/v5 v5.0.0/go.mod h1:cfwC0EG7HMUenopBsUf9d89JlCLQIfgVcNsNN0t6T2M=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
+github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
+github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc=
+github.com/cilium/ebpf v0.2.0/go.mod h1:To2CFviqOWL/M0gIMsvSMlqe7em/l1ALkX1PyjrX2Qs=
+github.com/cilium/ebpf v0.4.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
+github.com/cilium/ebpf v0.6.2/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
+github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
+github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
+github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
+github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1 h1:zH8ljVhhq7yC0MIeUL/IviMtY8hx2mK8cN9wEYb8ggw=
+github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/cockroachdb/datadriven v0.0.0-20200714090401-bf6692d28da5/go.mod h1:h6jFvWxBdQXxjopDMZyH2UVceIRfR84bdzbkoKrsWNo=
github.com/cockroachdb/errors v1.2.4/go.mod h1:rQD95gz6FARkaKkQXUksEje/d9a6wBJoCr5oaCLELYA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
+github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
+github.com/containerd/aufs v0.0.0-20200908144142-dab0cbea06f4/go.mod h1:nukgQABAEopAHvB6j7cnP5zJ+/3aVcE7hCYqvIwAHyE=
+github.com/containerd/aufs v0.0.0-20201003224125-76a6863f2989/go.mod h1:AkGGQs9NM2vtYHaUen+NljV0/baGCAPELGm2q9ZXpWU=
+github.com/containerd/aufs v0.0.0-20210316121734-20793ff83c97/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/btrfs v0.0.0-20201111183144-404b9149801e/go.mod h1:jg2QkJcsabfHugurUvvPhS3E08Oxiuh5W/g1ybB4e0E=
+github.com/containerd/btrfs v0.0.0-20210316141732-918d888fb676/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/btrfs v1.0.0/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/cgroups v0.0.0-20190717030353-c4b9ac5c7601/go.mod h1:X9rLEHIqSf/wfK8NsPqxJmeZgW4pcfzdXITDrUSJ6uI=
+github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
+github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1pT8KYB3TCXK/ocprsh7MAkoW8bZVzPdih9snmM=
+github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
+github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
+github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
+github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
+github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ=
+github.com/containerd/containerd v1.2.10/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.1-0.20191213020239-082f7e3aed57/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.0-beta.2.0.20200729163537-40b22ef07410/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.9/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.5.0-beta.1/go.mod h1:5HfvG1V2FsKesEGQ17k5/T7V960Tmcumvqn8Mc+pCYQ=
+github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo/uBBoBORwEx6ardVcmKU=
+github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
+github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
+github.com/containerd/containerd v1.5.9 h1:rs6Xg1gtIxaeyG+Smsb/0xaSDu1VgFhOCKBXxMxbsF4=
+github.com/containerd/containerd v1.5.9/go.mod h1:fvQqCfadDGga5HZyn3j4+dx56qj2I9YwBrlSdalvJYQ=
+github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20200710164510-efbc4488d8fe/go.mod h1:cECdGN1O8G9bgKTlLhuPJimka6Xb/Gg7vYzCTNVxhvo=
+github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
+github.com/containerd/continuity v0.0.0-20210208174643-50096c924a4e/go.mod h1:EXlVlkqNba9rJe3j7w3Xa924itAMLgZH4UD/Q4PExuQ=
+github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
+github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20201026212402-0724c46b320c/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20210316144830-115abcc95a1d/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/go-cni v1.0.1/go.mod h1:+vUpYxKvAF72G9i1WoDOiPGRtQpqsNW/ZHtSlv++smU=
+github.com/containerd/go-cni v1.0.2/go.mod h1:nrNABBHzu0ZwCug9Ije8hL2xBCYh/pjfMb1aZGrrohk=
+github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
+github.com/containerd/go-runc v0.0.0-20201020171139-16b287bc67d0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/imgcrypt v1.0.1/go.mod h1:mdd8cEPW7TPgNG4FpuP3sGBiQ7Yi/zak9TYCG3juvb0=
+github.com/containerd/imgcrypt v1.0.4-0.20210301171431-0ae5c75f59ba/go.mod h1:6TNsg0ctmizkrOgXRNQjAPFWpMYRWuiB6dSF4Pfa5SA=
+github.com/containerd/imgcrypt v1.1.1-0.20210312161619-7ed62a527887/go.mod h1:5AZJNI6sLHJljKuI9IHnw1pWqo/F0nGDOuR9zgTs7ow=
+github.com/containerd/imgcrypt v1.1.1/go.mod h1:xpLnwiQmEUJPvQoAapeb2SNCxz7Xr6PJrXQb0Dpc4ms=
+github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFYfE5+So4M5syatU0N0f0LbWpuqyMi4/BE8c=
+github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
+github.com/containerd/ttrpc v1.0.1/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/ttrpc v1.1.0/go.mod h1:XX4ZTnoOId4HklF4edwc4DcqskFZuvXB1Evzy5KFQpQ=
+github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
+github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
+github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
+github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
+github.com/containerd/zfs v0.0.0-20200918131355-0a33824f23a2/go.mod h1:8IgZOBdv8fAgXddBT4dBXJPtxyRsejFIpXoklgxgEjw=
+github.com/containerd/zfs v0.0.0-20210301145711-11e8f1707f62/go.mod h1:A9zfAbMlQwE+/is6hi0Xw8ktpL+6glmqZYtevJgaB8Y=
+github.com/containerd/zfs v0.0.0-20210315114300-dde8f0fda960/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v0.0.0-20210324211415-d5c4544f0433/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v1.0.0/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/plugins v0.8.6/go.mod h1:qnw5mN19D8fIwkqW7oHHYDHVlzhJpcY6TQxn/fUyDDM=
+github.com/containernetworking/plugins v0.9.1/go.mod h1:xP/idU2ldlzN6m4p5LmGiwRDjeJr6FLK6vuiUwoH7P8=
+github.com/containers/ocicrypt v1.0.1/go.mod h1:MeJDzk1RJHv89LjsH0Sp5KTY3ZYkjXO/C+bKAeWFIrc=
+github.com/containers/ocicrypt v1.1.0/go.mod h1:b8AOe0YR67uU8OqfVNcznfFpAzu3rdgUV4GP9qXPfu4=
+github.com/containers/ocicrypt v1.1.1/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
+github.com/coreos/etcd v3.3.25+incompatible h1:0GQEw6h3YnuOVdtwygkIfJ+Omx0tZ8/QkVyXI4LkbeY=
+github.com/coreos/etcd v3.3.25+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
+github.com/coreos/go-iptables v0.4.5/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
+github.com/coreos/go-iptables v0.5.0/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
+github.com/coreos/go-systemd v0.0.0-20161114122254-48702e0da86b/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf h1:iW4rZ826su+pqaw19uhpSCzhj44qo35pNgKFGqzDKkU=
+github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
+github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
+github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
+github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
+github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
+github.com/d2g/dhcp4 v0.0.0-20170904100407-a1d1b6c41b1c/go.mod h1:Ct2BUK8SB0YC1SMSibvLzxjeJLnrYEVLULFNiHY9YfQ=
+github.com/d2g/dhcp4client v1.0.0/go.mod h1:j0hNfjhrt2SxUOw55nL0ATM/z4Yt3t2Kd1mW34z5W5s=
+github.com/d2g/dhcp4server v0.0.0-20181031114812-7d4a0a7f59a5/go.mod h1:Eo87+Kg/IX2hfWJfwxMzLyuSZyxSoAug2nGa1G2QAi8=
+github.com/d2g/hardwareaddr v0.0.0-20190221164911-e7d9fbe030e4/go.mod h1:bMl4RjIciD2oAxI7DmWRx6gbeqrkoLqv3MV0vzNad+I=
github.com/dave/dst v0.26.2/go.mod h1:UMDJuIRPfyUCC78eFuB+SV/WI8oDeyFDvM/JR6NI3IU=
github.com/dave/gopackages v0.0.0-20170318123100-46e7023ec56e/go.mod h1:i00+b/gKdIDIxuLDFob7ustLAVqhsZRk2qVZrArELGQ=
github.com/dave/jennifer v1.2.0/go.mod h1:fIb+770HOpJ2fmN9EPPKOqm1vMGhB+TwXKMZhrIygKg=
@@ -146,18 +354,51 @@ github.com/dave/rebecca v0.9.1/go.mod h1:N6XYdMD/OKw3lkF3ywh8Z6wPGuwNFDNtWYEMFWE
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dennwc/varint v1.0.0 h1:kGNFFSSw8ToIy3obO/kKr8U9GZYUAxQEVuix4zfDWzE=
+github.com/dennwc/varint v1.0.0/go.mod h1:hnItb35rvZvJrbTALZtY/iQfDs48JKRG1RPpgziApxA=
+github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
+github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
+github.com/dgryski/go-sip13 v0.0.0-20200911182023-62edffca9245/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
+github.com/digitalocean/godo v1.75.0 h1:UijUv60I095CqJqGKdjY2RTPnnIa4iFddmq+1wfyS4Y=
+github.com/digitalocean/godo v1.75.0/go.mod h1:GBmu8MkjZmNARE7IXRPmkbbnocNN8+uBm0xbEVw2LCs=
+github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
+github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
+github.com/docker/distribution v0.0.0-20190905152932-14b96e55d84c/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
+github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
+github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v0.7.3-0.20190327010347-be7ac8be2ae0/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v20.10.12+incompatible h1:CEeNmFM0QZIsJCZKMkZx0ZcahTiewkrgiwfYD+dfl1U=
+github.com/docker/docker v20.10.12+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
+github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
+github.com/docker/go-events v0.0.0-20170721190031-9461782956ad/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
+github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
+github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
+github.com/dvyukov/go-fuzz v0.0.0-20210103155950-6a8e9d1f2415/go.mod h1:11Gm+ccJnvAhCNLlf5+cS9KjtbaD5I5zaZpFMsTHWTw=
+github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
+github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
+github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
+github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
+github.com/edsrzf/mmap-go v1.1.0 h1:6EUwBLQ/Mcr1EYLE4Tn1VdW1A4ckqCQWZBw8Hr0kjpQ=
+github.com/edsrzf/mmap-go v1.1.0/go.mod h1:19H/e8pUPLicwkyNgOykDXkJ9F0MHE+Z52B8EIth78Q=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
+github.com/ema/qdisc v0.0.0-20190904071900-b82c76788043/go.mod h1:ix4kG2zvdUd8kEKSW0ZTr1XLks0epFpI4j745DXxlNE=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
@@ -165,50 +406,89 @@ github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5y
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
+github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
+github.com/envoyproxy/go-control-plane v0.10.1 h1:cgDRLG7bs59Zd+apAWuzLQL95obVYAymNJek76W3mgw=
+github.com/envoyproxy/go-control-plane v0.10.1/go.mod h1:AY7fTTXNdv/aJ2O5jwpxAPOWUZ7hQAEvzN5Pf27BkQQ=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/envoyproxy/protoc-gen-validate v0.6.6 h1:BApABShi05CepE340unZKC07YxY/I8KgnWPICc3U5yM=
+github.com/envoyproxy/protoc-gen-validate v0.6.6/go.mod h1:dyJXwwfPK2VSqiB9Klm1J6romD608Ba7Hij42vrOBCo=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
+github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
+github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
+github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/felixge/httpsnoop v1.0.2 h1:+nS9g82KMXccJ/wp0zyRW9ZBHFETmMGtkk+2CTTrW4o=
+github.com/felixge/httpsnoop v1.0.2/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
-github.com/form3tech-oss/jwt-go v3.2.3+incompatible h1:7ZaBxOI7TMoYBfyA3cQHErNNyAWIKUMIwqxEtgHOs5c=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
+github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
+github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
+github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWpgI=
github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
+github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
+github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
+github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
+github.com/gin-gonic/gin v1.5.0/go.mod h1:Nd6IXA8m5kNZdNEHMBd93KT+mdY3+bewLgRvmCsR2Do=
github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
+github.com/go-kit/log v0.2.0 h1:7i2K3eKTos3Vc0enKCfnVcgHh2olr/MyfboYq7cAcFw=
+github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
+github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
+github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
+github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v1.2.0 h1:n4JnPI1T3Qq1SFEi/F8rwLrZERp2bso19PJZDB9dayk=
github.com/go-logr/zapr v1.2.0/go.mod h1:Qa4Bsj2Vb+FAVeAKsLD8RLQ+YRJB8YDmOAKxaBQf7Ro=
github.com/go-openapi/analysis v0.0.0-20180825180245-b006789cd277/go.mod h1:k70tL6pCuVxPJOHXQ+wIac1FUrvNkHolPie/cLEU6hI=
github.com/go-openapi/analysis v0.17.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
github.com/go-openapi/analysis v0.18.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
github.com/go-openapi/analysis v0.19.2/go.mod h1:3P1osvZa9jKjb8ed2TPng3f0i/UY9snX6gxi44djMjk=
+github.com/go-openapi/analysis v0.19.4/go.mod h1:3P1osvZa9jKjb8ed2TPng3f0i/UY9snX6gxi44djMjk=
github.com/go-openapi/analysis v0.19.5/go.mod h1:hkEAkxagaIvIP7VTn8ygJNkd4kAYON2rCu0v0ObL0AU=
+github.com/go-openapi/analysis v0.19.10/go.mod h1:qmhS3VNFxBlquFJ0RGoDtylO9y4pgTAUNE9AEEMdlJQ=
+github.com/go-openapi/analysis v0.19.16/go.mod h1:GLInF007N83Ad3m8a/CbQ5TPzdnGT7workfHwuVjNVk=
+github.com/go-openapi/analysis v0.20.0/go.mod h1:BMchjvaHDykmRMsK40iPtvyOfFdMMxlOmQr9FBZk+Og=
github.com/go-openapi/errors v0.17.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
github.com/go-openapi/errors v0.18.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
github.com/go-openapi/errors v0.19.2/go.mod h1:qX0BLWsyaKfvhluLejVpVNwNRdXZhEbTA4kxxpKBC94=
+github.com/go-openapi/errors v0.19.3/go.mod h1:qX0BLWsyaKfvhluLejVpVNwNRdXZhEbTA4kxxpKBC94=
+github.com/go-openapi/errors v0.19.6/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
+github.com/go-openapi/errors v0.19.7/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
+github.com/go-openapi/errors v0.19.8/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
+github.com/go-openapi/errors v0.19.9/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
+github.com/go-openapi/errors v0.20.0/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
github.com/go-openapi/jsonpointer v0.17.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonpointer v0.18.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
@@ -225,36 +505,122 @@ github.com/go-openapi/loads v0.17.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf
github.com/go-openapi/loads v0.18.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU=
github.com/go-openapi/loads v0.19.0/go.mod h1:72tmFy5wsWx89uEVddd0RjRWPZm92WRLhf7AC+0+OOU=
github.com/go-openapi/loads v0.19.2/go.mod h1:QAskZPMX5V0C2gvfkGZzJlINuP7Hx/4+ix5jWFxsNPs=
+github.com/go-openapi/loads v0.19.3/go.mod h1:YVfqhUCdahYwR3f3iiwQLhicVRvLlU/WO5WPaZvcvSI=
github.com/go-openapi/loads v0.19.4/go.mod h1:zZVHonKd8DXyxyw4yfnVjPzBjIQcLt0CCsn0N0ZrQsk=
+github.com/go-openapi/loads v0.19.5/go.mod h1:dswLCAdonkRufe/gSUC3gN8nTSaB9uaS2es0x5/IbjY=
+github.com/go-openapi/loads v0.19.6/go.mod h1:brCsvE6j8mnbmGBh103PT/QLHfbyDxA4hsKvYBNEGVc=
+github.com/go-openapi/loads v0.19.7/go.mod h1:brCsvE6j8mnbmGBh103PT/QLHfbyDxA4hsKvYBNEGVc=
+github.com/go-openapi/loads v0.20.0/go.mod h1:2LhKquiE513rN5xC6Aan6lYOSddlL8Mp20AW9kpviM4=
+github.com/go-openapi/loads v0.20.2/go.mod h1:hTVUotJ+UonAMMZsvakEgmWKgtulweO9vYP2bQYKA/o=
github.com/go-openapi/runtime v0.0.0-20180920151709-4f900dc2ade9/go.mod h1:6v9a6LTXWQCdL8k1AO3cvqx5OtZY/Y9wKTgaoP6YRfA=
github.com/go-openapi/runtime v0.19.0/go.mod h1:OwNfisksmmaZse4+gpV3Ne9AyMOlP1lt4sK4FXt0O64=
github.com/go-openapi/runtime v0.19.4/go.mod h1:X277bwSUBxVlCYR3r7xgZZGKVvBd/29gLDlFGtJ8NL4=
+github.com/go-openapi/runtime v0.19.15/go.mod h1:dhGWCTKRXlAfGnQG0ONViOZpjfg0m2gUt9nTQPQZuoo=
+github.com/go-openapi/runtime v0.19.16/go.mod h1:5P9104EJgYcizotuXhEuUrzVc+j1RiSjahULvYmlv98=
+github.com/go-openapi/runtime v0.19.24/go.mod h1:Lm9YGCeecBnUUkFTxPC4s1+lwrkJ0pthx8YvyjCfkgk=
+github.com/go-openapi/runtime v0.19.29/go.mod h1:BvrQtn6iVb2QmiVXRsFAm6ZCAZBpbVKFfN6QWCp582M=
github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
github.com/go-openapi/spec v0.17.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/spec v0.18.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/spec v0.19.2/go.mod h1:sCxk3jxKgioEJikev4fgkNmwS+3kuYdJtcsZsD5zxMY=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
+github.com/go-openapi/spec v0.19.6/go.mod h1:Hm2Jr4jv8G1ciIAo+frC/Ft+rR2kQDh8JHKHb3gWUSk=
+github.com/go-openapi/spec v0.19.8/go.mod h1:Hm2Jr4jv8G1ciIAo+frC/Ft+rR2kQDh8JHKHb3gWUSk=
+github.com/go-openapi/spec v0.19.15/go.mod h1:+81FIL1JwC5P3/Iuuozq3pPE9dXdIEGxFutcFKaVbmU=
+github.com/go-openapi/spec v0.20.0/go.mod h1:+81FIL1JwC5P3/Iuuozq3pPE9dXdIEGxFutcFKaVbmU=
+github.com/go-openapi/spec v0.20.1/go.mod h1:93x7oh+d+FQsmsieroS4cmR3u0p/ywH649a3qwC9OsQ=
+github.com/go-openapi/spec v0.20.3/go.mod h1:gG4F8wdEDN+YPBMVnzE85Rbhf+Th2DTvA9nFPQ5AYEg=
github.com/go-openapi/strfmt v0.17.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
github.com/go-openapi/strfmt v0.18.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
github.com/go-openapi/strfmt v0.19.0/go.mod h1:+uW+93UVvGGq2qGaZxdDeJqSAqBqBdl+ZPMF/cC8nDY=
+github.com/go-openapi/strfmt v0.19.2/go.mod h1:0yX7dbo8mKIvc3XSKp7MNfxw4JytCfCD6+bY1AVL9LU=
github.com/go-openapi/strfmt v0.19.3/go.mod h1:0yX7dbo8mKIvc3XSKp7MNfxw4JytCfCD6+bY1AVL9LU=
+github.com/go-openapi/strfmt v0.19.4/go.mod h1:eftuHTlB/dI8Uq8JJOyRlieZf+WkkxUuk0dgdHXr2Qk=
+github.com/go-openapi/strfmt v0.19.5/go.mod h1:eftuHTlB/dI8Uq8JJOyRlieZf+WkkxUuk0dgdHXr2Qk=
+github.com/go-openapi/strfmt v0.19.11/go.mod h1:UukAYgTaQfqJuAFlNxxMWNvMYiwiXtLsF2VwmoFtbtc=
+github.com/go-openapi/strfmt v0.20.0/go.mod h1:UukAYgTaQfqJuAFlNxxMWNvMYiwiXtLsF2VwmoFtbtc=
+github.com/go-openapi/strfmt v0.20.1/go.mod h1:43urheQI9dNtE5lTZQfuFJvjYJKPrxicATpEfZwHUNk=
+github.com/go-openapi/strfmt v0.21.2/go.mod h1:I/XVKeLc5+MM5oPNN7P6urMOpuLXEcNrCX/rPGuWb0k=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/swag v0.18.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
+github.com/go-openapi/swag v0.19.7/go.mod h1:ao+8BpOPyKdpQz3AOJfbeEVpLmWAvlT1IfTe5McPyhY=
+github.com/go-openapi/swag v0.19.9/go.mod h1:ao+8BpOPyKdpQz3AOJfbeEVpLmWAvlT1IfTe5McPyhY=
+github.com/go-openapi/swag v0.19.12/go.mod h1:eFdyEBkTdoAf/9RXBvj4cr1nH7GD8Kzo5HTt47gr72M=
+github.com/go-openapi/swag v0.19.13/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
+github.com/go-openapi/swag v0.19.15/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4=
github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2KDnRCRMUi7GTA=
+github.com/go-openapi/validate v0.19.3/go.mod h1:90Vh6jjkTn+OT1Eefm0ZixWNFjhtOH7vS9k0lo6zwJo=
github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4=
+github.com/go-openapi/validate v0.19.10/go.mod h1:RKEZTUWDkxKQxN2jDT7ZnZi2bhZlbNMAuKvKB+IaGx8=
+github.com/go-openapi/validate v0.19.12/go.mod h1:Rzou8hA/CBw8donlS6WNEUQupNvUZ0waH08tGe6kAQ4=
+github.com/go-openapi/validate v0.19.15/go.mod h1:tbn/fdOwYHgrhPBzidZfJC2MIVvs9GA7monOmWBbeCI=
+github.com/go-openapi/validate v0.20.1/go.mod h1:b60iJT+xNNLfaQJUqLI7946tYiFEOuE9E4k54HpKcJ0=
+github.com/go-openapi/validate v0.20.2/go.mod h1:e7OJoKNgd0twXZwIn0A43tHbvIcr/rZIVCbJBpTUoY0=
+github.com/go-playground/locales v0.12.1/go.mod h1:IUMDtCfWo/w/mtMfIE/IG2K+Ey3ygWanZIBtBW0W2TM=
+github.com/go-playground/universal-translator v0.16.0/go.mod h1:1AnU7NaIRDWWzGEKwgtJRd2xk99HeFyHw3yid4rvQIY=
+github.com/go-resty/resty/v2 v2.1.1-0.20191201195748-d7b97669fe48 h1:JVrqSeQfdhYRFk24TvhTZWU0q8lfCojxZQFi3Ou7+uY=
+github.com/go-resty/resty/v2 v2.1.1-0.20191201195748-d7b97669fe48/go.mod h1:dZGr0i9PLlaaTD4H/hoZIDjQ+r6xq8mgbRzHZf7f2J8=
+github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
+github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
-github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
+github.com/go-zookeeper/zk v1.0.2 h1:4mx0EYENAdX/B/rbunjlt5+4RTA/a9SMHBRuSKdGxPM=
+github.com/go-zookeeper/zk v1.0.2/go.mod h1:nOB03cncLtlp4t+UAkGSV+9beXP/akpekBwL+UX1Qcw=
+github.com/gobuffalo/attrs v0.0.0-20190224210810-a9411de4debd/go.mod h1:4duuawTqi2wkkpB4ePgWMaai6/Kc6WEz83bhFwpHzj0=
+github.com/gobuffalo/depgen v0.0.0-20190329151759-d478694a28d3/go.mod h1:3STtPUQYuzV0gBVOY3vy6CfMm/ljR4pABfrTeHNLHUY=
+github.com/gobuffalo/depgen v0.1.0/go.mod h1:+ifsuy7fhi15RWncXQQKjWS9JPkdah5sZvtHc2RXGlg=
+github.com/gobuffalo/envy v1.6.15/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI=
+github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI=
+github.com/gobuffalo/flect v0.1.0/go.mod h1:d2ehjJqGOH/Kjqcoz+F7jHTBbmDb38yXA598Hb50EGs=
+github.com/gobuffalo/flect v0.1.1/go.mod h1:8JCgGVbRjJhVgD6399mQr4fx5rRfGKVzFjbj6RE/9UI=
+github.com/gobuffalo/flect v0.1.3/go.mod h1:8JCgGVbRjJhVgD6399mQr4fx5rRfGKVzFjbj6RE/9UI=
+github.com/gobuffalo/genny v0.0.0-20190329151137-27723ad26ef9/go.mod h1:rWs4Z12d1Zbf19rlsn0nurr75KqhYp52EAGGxTbBhNk=
+github.com/gobuffalo/genny v0.0.0-20190403191548-3ca520ef0d9e/go.mod h1:80lIj3kVJWwOrXWWMRzzdhW3DsrdjILVil/SFKBzF28=
+github.com/gobuffalo/genny v0.1.0/go.mod h1:XidbUqzak3lHdS//TPu2OgiFB+51Ur5f7CSnXZ/JDvo=
+github.com/gobuffalo/genny v0.1.1/go.mod h1:5TExbEyY48pfunL4QSXxlDOmdsD44RRq4mVZ0Ex28Xk=
+github.com/gobuffalo/gitgen v0.0.0-20190315122116-cc086187d211/go.mod h1:vEHJk/E9DmhejeLeNt7UVvlSGv3ziL+djtTr3yyzcOw=
+github.com/gobuffalo/gogen v0.0.0-20190315121717-8f38393713f5/go.mod h1:V9QVDIxsgKNZs6L2IYiGR8datgMhB577vzTDqypH360=
+github.com/gobuffalo/gogen v0.1.0/go.mod h1:8NTelM5qd8RZ15VjQTFkAW6qOMx5wBbW4dSCS3BY8gg=
+github.com/gobuffalo/gogen v0.1.1/go.mod h1:y8iBtmHmGc4qa3urIyo1shvOD8JftTtfcKi+71xfDNE=
+github.com/gobuffalo/logger v0.0.0-20190315122211-86e12af44bc2/go.mod h1:QdxcLw541hSGtBnhUc4gaNIXRjiDppFGaDqzbrBd3v8=
+github.com/gobuffalo/mapi v1.0.1/go.mod h1:4VAGh89y6rVOvm5A8fKFxYG+wIW6LO1FMTG9hnKStFc=
+github.com/gobuffalo/mapi v1.0.2/go.mod h1:4VAGh89y6rVOvm5A8fKFxYG+wIW6LO1FMTG9hnKStFc=
+github.com/gobuffalo/packd v0.0.0-20190315124812-a385830c7fc0/go.mod h1:M2Juc+hhDXf/PnmBANFCqx4DM3wRbgDvnVWeG2RIxq4=
+github.com/gobuffalo/packd v0.1.0/go.mod h1:M2Juc+hhDXf/PnmBANFCqx4DM3wRbgDvnVWeG2RIxq4=
+github.com/gobuffalo/packr/v2 v2.0.9/go.mod h1:emmyGweYTm6Kdper+iywB6YK5YzuKchGtJQZ0Odn4pQ=
+github.com/gobuffalo/packr/v2 v2.2.0/go.mod h1:CaAwI0GPIAv+5wKLtv8Afwl+Cm78K/I/VCm/3ptBN+0=
+github.com/gobuffalo/syncx v0.0.0-20190224160051-33c29581e754/go.mod h1:HhnNqWY95UYwwW3uSASeV7vtgYkT2t16hJgV3AEPUpw=
+github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20180201030542-885f9cc04c9c/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20190402143921-271e53dc4968/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
+github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
+github.com/gogo/googleapis v0.0.0-20180223154316-0cd9801be74a/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
+github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
+github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
+github.com/gogo/googleapis v1.4.0 h1:zgVt4UpGxcqVOw97aRGxT4svlcmdK35fynLNctY32zI=
+github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
+github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
+github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
+github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/gogo/status v1.0.3/go.mod h1:SavQ51ycCLnc7dGyJxp8YAmudx8xqiVrRf+6IXRsugc=
+github.com/gogo/status v1.1.0 h1:+eIkrewn5q6b30y+g/BJINVVdi2xH7je5MPJ3ZPK3JA=
+github.com/gogo/status v1.1.0/go.mod h1:BFv9nrluPLmrS0EmGVvLaPNmRosr9KapBYd5/hpY1WM=
+github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang-jwt/jwt/v4 v4.2.0 h1:besgBTC8w8HjP6NzQdxwKH9Z5oQMZ24ThTrHp3cZ8eU=
+github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -272,6 +638,7 @@ github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
+github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -291,8 +658,14 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
+github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/cel-go v0.9.0/go.mod h1:U7ayypeSkw23szu4GaQTPJGx66c20mx8JklMSxrmI1w=
github.com/google/cel-spec v0.6.0/go.mod h1:Nwjgxy5CbjlPrtCWjeDjUyKMl8w41YBYGjsyDdqk0xA=
@@ -307,8 +680,11 @@ github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.7 h1:81/ik6ipDQS2aGcBfIN5dHDB36BwrStyeAQquSYCV4o=
github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
+github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk=
+github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
@@ -316,6 +692,7 @@ github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
+github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20181127221834-b4f47329b966/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
@@ -328,68 +705,157 @@ github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20220218203455-0368bd9e19a7/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
+github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
+github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
+github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
github.com/googleapis/gnostic v0.5.5 h1:9fHAtK0uDfpveeqqo1hkEZJcFvYXAiCN3UutL8F9xHw=
github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
-github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
+github.com/gophercloud/gophercloud v0.24.0 h1:jDsIMGJ1KZpAjYfQgGI2coNQj5Q83oPzuiGJRFWgMzw=
+github.com/gophercloud/gophercloud v0.24.0/go.mod h1:Q8fZtyi5zZxPS/j9aj3sSxtvj41AdQMDwyo1myduD5c=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
+github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
+github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
+github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
+github.com/grafana/dskit v0.0.0-20220331160727-49faf69f72ca h1:0qHzm6VS0bCsSWKHuyfpt+pdpyScdZbzY/IFIyKSYOk=
+github.com/grafana/dskit v0.0.0-20220331160727-49faf69f72ca/go.mod h1:q51XdMLLHNZJSG6KOGujC20ed2OoLFdx0hBmOEVfRs0=
+github.com/grafana/loki v1.6.2-0.20220420044148-f62b4ae1905c h1:5jOpoI5zWOC6+t18pXqol/yH5xUg23Dw8qvmgTTvNwM=
+github.com/grafana/loki v1.6.2-0.20220420044148-f62b4ae1905c/go.mod h1:8djZ/4VgjskjZq+ZhA8fkPEnXh3NHZ9CwcQnVJdjGNI=
+github.com/grafana/regexp v0.0.0-20220202152315-e74e38789280/go.mod h1:M5qHK+eWfAv8VR/265dIuEpL3fNfeC21tXXp9itM24A=
+github.com/grafana/regexp v0.0.0-20220304100321-149c8afcd6cb h1:wwzNkyaQwcXCzQuKoWz3lwngetmcyg+EhW0fF5lz73M=
+github.com/grafana/regexp v0.0.0-20220304100321-149c8afcd6cb/go.mod h1:M5qHK+eWfAv8VR/265dIuEpL3fNfeC21tXXp9itM24A=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
+github.com/grpc-ecosystem/go-grpc-middleware v1.1.0/go.mod h1:f5nM7jw/oeRSadq3xCzHAvxcr8HZnzsqU6ILg/0NiiE=
+github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 h1:+9834+KizmvFV7pXQGSXQTsaWhq2GjuNUt0aUU0YBYw=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
+github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
+github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
+github.com/hashicorp/consul/api v1.12.0 h1:k3y1FYv6nuKyNTqj6w9gXOx5r5CfLj/k/euUeBXj1OY=
+github.com/hashicorp/consul/api v1.12.0/go.mod h1:6pVBMo0ebnYdt2S3H87XhekM/HHrUoTD2XXb/VrZVy0=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
+github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
+github.com/hashicorp/consul/sdk v0.8.0 h1:OJtKBtEjboEZvG6AOUdh4Z1Zbyu0WcxQ0qatRrZHTVU=
+github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
+github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
+github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
+github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
+github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-hclog v0.12.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-hclog v0.16.2 h1:K4ev2ib4LdQETX5cSZBG0DVLk1jwGqSPXBjdah3veNs=
+github.com/hashicorp/go-hclog v0.16.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
+github.com/hashicorp/go-immutable-radix v1.2.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
+github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc=
+github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
+github.com/hashicorp/go-msgpack v0.5.5 h1:i9R9JSrqIz0QVLz3sz+i3YJdT7TTSLcfLLzJi9aZTuI=
+github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
+github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
+github.com/hashicorp/go-multierror v1.1.0 h1:B9UzwGQJehnUY1yNrnwREHc3fGbC2xefo8g4TbElacI=
+github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
+github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
+github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
+github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
+github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc=
+github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
+github.com/hashicorp/go-uuid v1.0.2 h1:cfejS+Tpcp13yd5nYHWDI6qVCny6wyX2Mt5SGur2IGE=
+github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
+github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
+github.com/hashicorp/mdns v1.0.4/go.mod h1:mtBihi+LeNXGtG8L9dX59gAEa12BDtBQSp4v/YAJqrc=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
+github.com/hashicorp/memberlist v0.2.4/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
+github.com/hashicorp/memberlist v0.3.0 h1:8+567mCcFDnS5ADl7lrpxPMWiFCElyUEeW0gtj34fMA=
+github.com/hashicorp/memberlist v0.3.0/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
+github.com/hashicorp/serf v0.9.6 h1:uuEX1kLR6aoda1TBttmJQKDLZE1Ob7KN0NPdE7EtCDc=
+github.com/hashicorp/serf v0.9.6/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4=
+github.com/hetznercloud/hcloud-go v1.33.1 h1:W1HdO2bRLTKU4WsyqAasDSpt54fYO4WNckWYfH5AuCQ=
+github.com/hetznercloud/hcloud-go v1.33.1/go.mod h1:XX/TQub3ge0yWR2yHWmnDVIrB+MQbda1pHxkUmDlUME=
+github.com/hodgesds/perf-utils v0.0.8/go.mod h1:F6TfvsbtrF88i++hou29dTXlI2sfsJv+gRZDtmTJkAs=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/huandu/xstrings v1.3.1 h1:4jgBlKK6tLKFvO8u5pmYjG91cqytmDCDvGh7ECVFfFs=
+github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
+github.com/iancoleman/strcase v0.2.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
+github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
+github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
+github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
+github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4=
+github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
+github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
+github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
+github.com/jsimonetti/rtnetlink v0.0.0-20190606172950-9527aa82566a/go.mod h1:Oz+70psSo5OFh8DBl0Zv2ACw7Esh6pPUphlvZG9x7uw=
+github.com/jsimonetti/rtnetlink v0.0.0-20190830100107-3784a6c7c552/go.mod h1:Oz+70psSo5OFh8DBl0Zv2ACw7Esh6pPUphlvZG9x7uw=
+github.com/jsimonetti/rtnetlink v0.0.0-20200117123717-f846d4f6c1f4/go.mod h1:WGuG/smIU4J/54PblvSbh+xvCZmpJnFgr3ds6Z55XMQ=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
@@ -399,21 +865,41 @@ github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/X
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
+github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
+github.com/karrick/godirwalk v1.8.0/go.mod h1:H5KPZjojv4lE+QYImBI8xVtrBRgYrIVsaRPx4tDPEn4=
+github.com/karrick/godirwalk v1.10.3/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
+github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
+github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
+github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
+github.com/kolo/xmlrpc v0.0.0-20201022064351-38db28db192b h1:iNjcivnc6lhbvJA3LD622NPrUponluJrBWPIwGG/3Bg=
+github.com/kolo/xmlrpc v0.0.0-20201022064351-38db28db192b/go.mod h1:pcaDhQK0/NJZEvtCO0qQPPropqV0sJOJ6YW7X+9kRwM=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
+github.com/leodido/go-urn v1.1.0/go.mod h1:+cyI34gQWZcE1eQU7NVgKkkzdXDQHr1dBMtdAPozLkw=
+github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=
+github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
+github.com/linode/linodego v1.3.0 h1:77BPapuzhfIhXodiDUt/M76H46UiFYOytEupVN2auDI=
+github.com/linode/linodego v1.3.0/go.mod h1:PVsRxSlOiJyvG4/scTszpmZDTdgS+to3X6eS8pRrWI8=
+github.com/lufia/iostat v1.1.0/go.mod h1:rEPNA0xXgjHQjuI5Cy05sLlS2oRcSlWHRLrvh/AQ+Pg=
+github.com/lyft/protoc-gen-star v0.6.0/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
+github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
@@ -423,28 +909,80 @@ github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
+github.com/mailru/easyjson v0.7.1/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
+github.com/markbates/oncer v0.0.0-20181203154359-bf2de49a0be2/go.mod h1:Ld9puTsIW75CHf65OeIOkyKbteujpZVXDpWK6YGZbxE=
+github.com/markbates/safe v1.0.1/go.mod h1:nAqgmRi7cY2nqMc92/bSEeQA+R4OheNU2T1kNSCBdG0=
+github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
+github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
+github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
+github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
+github.com/mattn/go-colorable v0.1.9 h1:sqDoxXbdeALODt0DAeJCVp38ps9ZogZEAXjus69YV3U=
+github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
+github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
+github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
+github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
+github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
+github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
+github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
+github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
+github.com/mattn/go-shellwords v1.0.3/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
+github.com/mattn/go-xmlrpc v0.0.3/go.mod h1:mqc2dz7tP5x5BKlCahN/n+hs7OSZKJkS9JsHNBRlrxA=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/maxbrunsfeld/counterfeiter/v6 v6.3.0 h1:8E6DrFvII6QR4eJ3PkFvV+lc03P+2qwqTPLm1ax7694=
github.com/maxbrunsfeld/counterfeiter/v6 v6.3.0/go.mod h1:fcEyUyXZXoV4Abw8DX0t7wyL8mCDxXyU4iAFZfT3IHw=
+github.com/mdlayher/genetlink v1.0.0/go.mod h1:0rJ0h4itni50A86M2kHcgS85ttZazNt7a8H2a2cw0Gc=
+github.com/mdlayher/netlink v0.0.0-20190409211403-11939a169225/go.mod h1:eQB3mZE4aiYnlUsyGGCOpPETfdQq4Jhsgf1fk3cwQaA=
+github.com/mdlayher/netlink v0.0.0-20190828143259-340058475d09/go.mod h1:KxeJAFOFLG6AjpyDkQ/iIhxygIUKD+vcwqcnu43w/+M=
+github.com/mdlayher/netlink v1.0.0/go.mod h1:KxeJAFOFLG6AjpyDkQ/iIhxygIUKD+vcwqcnu43w/+M=
+github.com/mdlayher/netlink v1.1.0/go.mod h1:H4WCitaheIsdF9yOYu8CFmCgQthAPIWZmcKp9uZHgmY=
+github.com/mdlayher/wifi v0.0.0-20190303161829-b1436901ddee/go.mod h1:Evt/EIne46u9PtQbeTx2NTcqURpr5K4SvKtGmBuDPN8=
+github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
+github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
+github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
+github.com/miekg/dns v1.1.46 h1:uzwpxRtSVxtcIZmz/4Uz6/Rn7G11DvsaslXoy5LxQio=
+github.com/miekg/dns v1.1.46/go.mod h1:e3IlAVfNqAllflbibAZEWOXOQ+Ynzk/dDozDxY7XnME=
+github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
+github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
+github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
+github.com/mitchellh/copystructure v1.0.0 h1:Laisrj+bAB6b/yJwB5Bt3ITZhGJdqmxquMKeZ+mmkFQ=
+github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/mitchellh/go-testing-interface v1.0.0 h1:fzU/JVNcaqHQEcVFAKeR41fkiLdIPrefOvVG1VZ96U0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
+github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
+github.com/mitchellh/mapstructure v1.3.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/mapstructure v1.3.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/mapstructure v1.4.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/mapstructure v1.4.3 h1:OVowDSCllw/YjdLkam3/sm7wEtOy59d8ndGgCcyj8cs=
+github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
+github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
+github.com/mitchellh/reflectwalk v1.0.1 h1:FVzMWA5RllMAKIdUSC8mdWo3XtwoecrH79BY70sEEpE=
+github.com/mitchellh/reflectwalk v1.0.1/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
+github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
+github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
+github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
+github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -452,126 +990,272 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3PzxT8aQXRPkAt8xlV/e7d7w8GM5g0fa5F0D8=
+github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
+github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
+github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
+github.com/nats-io/jwt v0.3.0/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg=
+github.com/nats-io/jwt v0.3.2/go.mod h1:/euKqTS1ZD+zzjYrY7pseZrTtWQSjujC7xjPc8wL6eU=
+github.com/nats-io/nats-server/v2 v2.1.2/go.mod h1:Afk+wRZqkMQs/p45uXdrVLuab3gwv3Z8C4HTBu8GD/k=
+github.com/nats-io/nats.go v1.9.1/go.mod h1:ZjDU1L/7fJ09jvUSRVBR2e7+RnLiiIQyqyzEE/Zbp4w=
+github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
+github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
+github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
+github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
-github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
+github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs=
+github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
+github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=
+github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
+github.com/onsi/ginkgo v0.0.0-20151202141238-7f8ab55aaf3b/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
-github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
-github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
+github.com/onsi/gomega v0.0.0-20151007035656-2152b45fa28a/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
+github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
github.com/onsi/gomega v1.17.0 h1:9Luw4uT5HTjHTN8+aNcSThgH1vdXnmdJ8xIfZ4wyTRE=
-github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
+github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
+github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0-rc1.0.20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
+github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
+github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
+github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc8.0.20190926000215-3e425f80a8c9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc93/go.mod h1:3NOsor4w32B2tC0Zbl8Knk4Wg84SM2ImC1fxBuqJ/H0=
+github.com/opencontainers/runc v1.0.2/go.mod h1:aTaHFFwQXuA71CiyxOdFFIorAoemI04suvGRQFzWTD0=
+github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2-0.20190207185410-29686dbc5559/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
+github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE=
+github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3ogry1nUQF8Evvo=
+github.com/opencontainers/selinux v1.8.2/go.mod h1:MUIHuUEvKB1wtJjQdOyYRgOnLD2xAPP8dBsCoU0KuF8=
github.com/openshift/api v0.0.0-20220124143425-d74727069f6f h1:iOTv1WudhVm2UsoST+L+ZrA5A9w57h9vmQsdlBuqG6g=
github.com/openshift/api v0.0.0-20220124143425-d74727069f6f/go.mod h1:F/eU6jgr6Q2VhMu1mSpMmygxAELd7+BUxs3NHZ25jV4=
github.com/openshift/build-machinery-go v0.0.0-20211213093930-7e33a7eb4ce3/go.mod h1:b1BuldmJlbA/xYtdZvKi+7j5YGB44qJUJDZ9zwiNCfE=
+github.com/opentracing-contrib/go-grpc v0.0.0-20180928155321-4b5a12d3ff02/go.mod h1:JNdpVEzCpXBgIiv4ds+TzhN1hrtxq6ClLrTlT9OQRSc=
+github.com/opentracing-contrib/go-grpc v0.0.0-20210225150812-73cb765af46e h1:4cPxUYdgaGzZIT5/j0IfqOrrXmq6bG8AwvwisMXpdrg=
+github.com/opentracing-contrib/go-grpc v0.0.0-20210225150812-73cb765af46e/go.mod h1:DYR5Eij8rJl8h7gblRrOZ8g0kW1umSpKqYIBTgeDtLo=
+github.com/opentracing-contrib/go-observer v0.0.0-20170622124052-a52f23424492/go.mod h1:Ngi6UdF0k5OKD5t5wlmGhe/EDKPoUM3BXZSSfIuJbis=
+github.com/opentracing-contrib/go-stdlib v0.0.0-20190519235532-cf7a6c988dc9/go.mod h1:PLldrQSroqzH70Xl+1DQcGnefIbqsKR7UDaiux3zV+w=
+github.com/opentracing-contrib/go-stdlib v1.0.0 h1:TBS7YuVotp8myLon4Pv7BtCBzOTo1DeZCld0Z63mW2w=
+github.com/opentracing-contrib/go-stdlib v1.0.0/go.mod h1:qtI1ogk+2JhVPIXVc6q+NHziSmy2W5GbdQZFUHADCBU=
+github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
+github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
+github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
+github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
+github.com/openzipkin-contrib/zipkin-go-opentracing v0.4.5/go.mod h1:/wsWhb9smxSfWAKL3wpBW7V8scJMt8N8gnaMCS9E/cA=
+github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
+github.com/openzipkin/zipkin-go v0.2.1/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
+github.com/openzipkin/zipkin-go v0.2.2/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
+github.com/pact-foundation/pact-go v1.0.4/go.mod h1:uExwJY4kCzNPcHRj+hCR/HBbOOIwwtUjcrb0b5/5kLM=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
+github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
+github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
+github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo=
+github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
+github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
+github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
+github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
+github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
+github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.48.0 h1:klFBev4UPGvhr3GF2b73Q1omlzZVONAhLwDhcQX0+4E=
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.48.0/go.mod h1:3WYi4xqXxGGXWDdQIITnLNmuDzO5n6wYva9spVhR4fg=
+github.com/prometheus/alertmanager v0.23.0/go.mod h1:0MLTrjQI8EuVmvykEhcfr/7X0xmaDAZrqMgxIq3OXHk=
+github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
+github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
+github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
+github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
+github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
+github.com/prometheus/client_golang v1.4.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1 h1:ZiaPsmm9uiBeaSMRznKsCDNtPCS0T3JVDGF+06gjBzk=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
+github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
+github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
+github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
+github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
+github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
+github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.28.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
+github.com/prometheus/common v0.29.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
+github.com/prometheus/common v0.30.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.32.1 h1:hWIdL3N2HoUx3B8j3YN9mWor0qhY/NlEKZEaXxuIRh4=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
+github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4=
+github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57JrvHu9k5YwTjsNtI=
+github.com/prometheus/exporter-toolkit v0.6.1/go.mod h1:ZUBIj498ePooX9t/2xtDjeQYwvRpiPP2lh5u4iblj2g=
+github.com/prometheus/exporter-toolkit v0.7.1/go.mod h1:ZUBIj498ePooX9t/2xtDjeQYwvRpiPP2lh5u4iblj2g=
+github.com/prometheus/node_exporter v1.0.0-rc.0.0.20200428091818-01054558c289 h1:dTUS1vaLWq+Y6XKOTnrFpoVsQKLCbCp1OLj24TDi7oM=
+github.com/prometheus/node_exporter v1.0.0-rc.0.0.20200428091818-01054558c289/go.mod h1:FGbBv5OPKjch+jNUJmEQpMZytIdyW0NdBtWFcfSKusc=
+github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
+github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
+github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
+github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
+github.com/prometheus/prometheus v1.8.2-0.20220303173753-edfe657b5405 h1:nGkR5xtWNTkV1ykDwObBvNPBjZrNQF55Y4b6Jv1YRB8=
+github.com/prometheus/prometheus v1.8.2-0.20220303173753-edfe657b5405/go.mod h1:yHgqW1gjCflLQEPNTdFCYG4rF+xCnp54C+hubRV3lH4=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
+github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
+github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+github.com/rs/cors v1.8.0/go.mod h1:EBwu+T5AvHOcXwvZIkQFjUN6s8Czyqw12GL/Y0tUyRM=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
+github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
+github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
+github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
+github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
+github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9 h1:0roa6gXKgyta64uqh52AQG3wzZXH21unn+ltzQSXML0=
+github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9/go.mod h1:fCa7OJZ/9DRTnOKmxvT6pn+LPWUptQAmHF/SBJUGEcg=
github.com/sclevine/spec v1.4.0 h1:z/Q9idDcay5m5irkZ28M7PtQM4aOISzOpj4bUPkDee8=
github.com/sclevine/spec v1.4.0/go.mod h1:LvpgJaFyvQzRvc1kaDs0bulYwzC70PbiYjC4QnFHkOM=
+github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
+github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
+github.com/sercand/kuberesolver v2.1.0+incompatible/go.mod h1:lWF3GL0xptCB/vCiJPl/ZshwPsX/n4Y7u0CW9E7aQIQ=
+github.com/sercand/kuberesolver v2.4.0+incompatible h1:WE2OlRf6wjLxHwNkkFLQGaZcVLEXjMjBPjjEU5vksH8=
+github.com/sercand/kuberesolver v2.4.0+incompatible/go.mod h1:lWF3GL0xptCB/vCiJPl/ZshwPsX/n4Y7u0CW9E7aQIQ=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
+github.com/shopspring/decimal v1.2.0 h1:abSATXmQEYyShuxI4/vyW3tV1MrKAJzCZ/0zLUXYbsQ=
+github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
+github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
+github.com/shurcooL/vfsgen v0.0.0-20200824052919-0d455de96546/go.mod h1:TrYk7fJVaAttu97ZZKrO9UbRa8izdowaMIZcxYMbVaw=
+github.com/siebenmann/go-kstat v0.0.0-20160321171754-d34789b79745/go.mod h1:G81aIFAMS9ECrwBYR9YxhlPjWgrItd+Kje78O6+uqm8=
+github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
+github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
+github.com/sirupsen/logrus v1.4.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
+github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
+github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
+github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
+github.com/soundcloud/go-runit v0.0.0-20150630195641-06ad41a06c4a/go.mod h1:LeFCbQYJ3KJlPs/FvPz2dy1tkpxyeNESVyCNNzRXFR0=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
+github.com/spf13/afero v1.3.3/go.mod h1:5KUK8ByomD5Ti5Artl0RtHeI5pTF7MIDuXL3yY520V4=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
+github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
github.com/spf13/cobra v1.2.1/go.mod h1:ExllRjgxM/piMAM+3tAZvg8fsklGAf3tPfi+i8t68Nk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
+github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
+github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
+github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
+github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
+github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
+github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
+github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -581,28 +1265,78 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
+github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/tchap/go-patricia v2.2.6+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
+github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
+github.com/uber/jaeger-client-go v2.28.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
+github.com/uber/jaeger-client-go v2.30.0+incompatible h1:D6wyKGCecFaSRUpo8lCVbaOOb6ThwMmTEbhRwtKR97o=
+github.com/uber/jaeger-client-go v2.30.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
+github.com/uber/jaeger-lib v2.2.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
+github.com/uber/jaeger-lib v2.4.1+incompatible h1:td4jdvLcExb4cBISKIpHuGoVXh+dVKhn2Um6rjCsSsg=
+github.com/uber/jaeger-lib v2.4.1+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
+github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
+github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
+github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
+github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
+github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
+github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
+github.com/vishvananda/netlink v0.0.0-20181108222139-023a6dafdcdf/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
+github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
+github.com/vishvananda/netlink v1.1.1-0.20201029203352-d40f9887b852/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
+github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
+github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
+github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
+github.com/weaveworks/common v0.0.0-20211015155308-ebe5bdc2c89e h1:B0gVGyVpjfWJWSRe027EkhmEype0a0Dt2uHVxcPrhfs=
+github.com/weaveworks/common v0.0.0-20211015155308-ebe5bdc2c89e/go.mod h1:GWX2dQ7yjrgvqH0+d3kCJC5bsY8oOFwqjxFMHaRK4/k=
+github.com/weaveworks/promrus v1.2.0 h1:jOLf6pe6/vss4qGHjXmGz4oDJQA+AOCqEL3FvvZGz7M=
+github.com/weaveworks/promrus v1.2.0/go.mod h1:SaE82+OJ91yqjrE1rsvBWVzNZKcHYFtMUyS1+Ogs/KA=
+github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
+github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
+github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
+github.com/xdg-go/scram v1.0.2/go.mod h1:1WAq6h33pAW+iRreB34OORO2Nf7qel3VV3fjBj+hCSs=
+github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM=
+github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I=
+github.com/xdg/stringprep v0.0.0-20180714160509-73f8eece6fdc/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
+github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
+github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
+github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
+github.com/xlab/treeprint v1.1.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
+github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
+github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
+go.etcd.io/etcd v0.5.0-alpha.5.0.20200910180754-dd1b699fc489/go.mod h1:yVHk9ub3CSBatqGNg7GRmsnfLWtoW60w4eDYfh7vHDg=
+go.etcd.io/etcd v3.3.25+incompatible h1:V1RzkZJj9LqsJRy+TUBgpWSbZXITLB819lstuTFoZOY=
+go.etcd.io/etcd v3.3.25+incompatible/go.mod h1:yaeTdrJi5lOmYerz05bd8+V7KubZs8YSFZfzsF9A6aI=
+go.etcd.io/etcd/api/v3 v3.5.0 h1:GsV3S+OfZEOCNXdtNkBSR7kgLobAa/SO6tCxRa0GAYw=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
+go.etcd.io/etcd/client/pkg/v3 v3.5.0 h1:2aQv6F436YnN7I4VbI8PPYrBhu+SmrTaADcf8Mi/6PU=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
+go.etcd.io/etcd/client/v3 v3.5.0 h1:62Eh0XOro+rDwkrypAGDfgmNh5Joq+z+W9HZdlXMzek=
go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lLS/oTh0=
go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE=
go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc=
@@ -610,6 +1344,16 @@ go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVd
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
+go.mongodb.org/mongo-driver v1.3.0/go.mod h1:MSWZXKOynuguX+JSvwP8i+58jYCXxbia8HS3gZBapIE=
+go.mongodb.org/mongo-driver v1.3.4/go.mod h1:MSWZXKOynuguX+JSvwP8i+58jYCXxbia8HS3gZBapIE=
+go.mongodb.org/mongo-driver v1.4.3/go.mod h1:WcMNYLx/IlOxLe6JRJiv2uXuCz6zBLndR4SoGjYphSc=
+go.mongodb.org/mongo-driver v1.4.4/go.mod h1:WcMNYLx/IlOxLe6JRJiv2uXuCz6zBLndR4SoGjYphSc=
+go.mongodb.org/mongo-driver v1.4.6/go.mod h1:WcMNYLx/IlOxLe6JRJiv2uXuCz6zBLndR4SoGjYphSc=
+go.mongodb.org/mongo-driver v1.5.1/go.mod h1:gRXCHX4Jo7J0IJ1oDQyUxF7jfy19UfxniMS4xxMmUqw=
+go.mongodb.org/mongo-driver v1.7.5/go.mod h1:VXEWRZ6URJIkUq2SCAyapmhH0ZLRBP+FT4xhp5Zvxng=
+go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
+go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
+go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -620,50 +1364,94 @@ go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/contrib v0.20.0/go.mod h1:G/EtFaa6qaN7+LxqfIAT3GiZa7Wv5DTBUzl5H4LY0Kc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0/go.mod h1:oVGt1LRbBOBq1A5BQLlUg9UaU/54aiHw8cgjV3aWZ/E=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0/go.mod h1:2AboqHi0CiIZU0qwhtUfCYD1GeUzvvIXWNkhDt7ZMG4=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.29.0/go.mod h1:tLYsuf2v8fZreBVwp9gVMhefZlLFZaUiNVSq8QxXRII=
go.opentelemetry.io/otel v0.20.0/go.mod h1:Y3ugLH2oa81t5QO+Lty+zXf8zC9L26ax4Nzoxm/dooo=
+go.opentelemetry.io/otel v1.4.0/go.mod h1:jeAqMFKy2uLIxCtKxoFj0FAL5zAPKQagc3+GtBWakzk=
+go.opentelemetry.io/otel v1.4.1 h1:QbINgGDDcoQUoMJa2mMaWno49lja9sHwp6aoa2n3a4g=
+go.opentelemetry.io/otel v1.4.1/go.mod h1:StM6F/0fSwpd8dKWDCdRr7uRvEPYdW0hBSlbdTiUde4=
go.opentelemetry.io/otel/exporters/otlp v0.20.0/go.mod h1:YIieizyaN77rtLJra0buKiNBOm9XQfkPEKBeuhoMwAM=
+go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.4.1/go.mod h1:VpP4/RMn8bv8gNo9uK7/IMY4mtWLELsS+JIP0inH0h4=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.4.1/go.mod h1:o5RW5o2pKpJLD5dNTCmjF1DorYwMeFJmb/rKr5sLaa8=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.4.1/go.mod h1:c6E4V3/U+miqjs/8l950wggHGL1qzlp0Ypj9xoGrPqo=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.4.1/go.mod h1:VwYo0Hak6Efuy0TXsZs8o1hnV3dHDPNtDbycG0hI8+M=
+go.opentelemetry.io/otel/internal/metric v0.27.0/go.mod h1:n1CVxRqKqYZtqyTh9U/onvKapPGv7y/rpyOTI+LFNzw=
go.opentelemetry.io/otel/metric v0.20.0/go.mod h1:598I5tYlH1vzBjn+BTuhzTCSb/9debfNp6R3s7Pr1eU=
+go.opentelemetry.io/otel/metric v0.27.0/go.mod h1:raXDJ7uP2/Jc0nVZWQjJtzoyssOYWu/+pjZqRzfvZ7g=
go.opentelemetry.io/otel/oteltest v0.20.0/go.mod h1:L7bgKf9ZB7qCwT9Up7i9/pn0PWIa9FqQ2IQ8LoxiGnw=
go.opentelemetry.io/otel/sdk v0.20.0/go.mod h1:g/IcepuwNsoiX5Byy2nNV0ySUF1em498m7hBWC279Yc=
+go.opentelemetry.io/otel/sdk v1.4.1/go.mod h1:NBwHDgDIBYjwK2WNu1OPgsIc2IJzmBXNnvIJxJc8BpE=
go.opentelemetry.io/otel/sdk/export/metric v0.20.0/go.mod h1:h7RBNMsDJ5pmI1zExLi+bJK+Dr8NQCh0qGhm1KDnNlE=
go.opentelemetry.io/otel/sdk/metric v0.20.0/go.mod h1:knxiS8Xd4E/N+ZqKmUPf3gTTZ4/0TjTXukfxjzSTpHE=
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
+go.opentelemetry.io/otel/trace v1.4.0/go.mod h1:uc3eRsqDfWs9R7b92xbQbU42/eTNz4N+gLP8qJCi4aE=
+go.opentelemetry.io/otel/trace v1.4.1 h1:O+16qcdTrT7zxv2J6GejTPFinSwA++cYerC5iSiF8EQ=
+go.opentelemetry.io/otel/trace v1.4.1/go.mod h1:iYEVbroFCNut9QkwEczV9vMRPHNKSSwYZjulEtsmhFc=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
+go.opentelemetry.io/proto/otlp v0.12.0/go.mod h1:TsIjwGWIx5VFYv9KGVlOpxoBl5Dy+63SUguV7GGvlSQ=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
-go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
+go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
+go.uber.org/atomic v1.5.1/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
+go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
+go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
go.uber.org/goleak v1.1.12/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
-go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
+go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
+go.uber.org/multierr v1.4.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
+go.uber.org/multierr v1.7.0 h1:zaiO/rmgFjbmCXdSYJWQcdvOCsthmdaHfr3Gm2Kx4Ec=
+go.uber.org/multierr v1.7.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
+go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
+go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.19.1 h1:ue41HOKd1vGURxrmeKIgELGb3jPW9DMUDGtsinblHwI=
go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI=
+go4.org/intern v0.0.0-20210108033219-3eb7198706b2 h1:VFTf+jjIgsldaz/Mr00VaCSswHJrI2hIjQygE/W4IMg=
+go4.org/intern v0.0.0-20210108033219-3eb7198706b2/go.mod h1:vLqJ+12kCw61iCWsPto0EOHhBS+o4rO5VIucbc9g2Cc=
+go4.org/unsafe/assume-no-moving-gc v0.0.0-20201222175341-b30ae309168e/go.mod h1:FftLjUGFEDu5k8lt0ddY+HcrH/qU/0qk+H8j9/nTl3E=
+go4.org/unsafe/assume-no-moving-gc v0.0.0-20201222180813-1025295fd063 h1:1tk03FUNpulq2cuWpXZWj649rwJpk0d20rxWiopKRmc=
+go4.org/unsafe/assume-no-moving-gc v0.0.0-20201222180813-1025295fd063/go.mod h1:FftLjUGFEDu5k8lt0ddY+HcrH/qU/0qk+H8j9/nTl3E=
golang.org/x/arch v0.0.0-20180920145803-b19384d3c130/go.mod h1:cYlCBUl1MsqxdiKgmc4uh7TxZfWSFLOGSRR090WDxt8=
+golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20181009213950-7c1a557ab941/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
-golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190422162423-af44ce270edf/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 h1:HWj/xjIHfjYU5nVXpTM0s39J9CbLn7Cc5a7IC5rwsMQ=
+golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
+golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20211202192323-5770296d904e/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3 h1:0es+/5331RGQPcXlMfP+WrnIIS6dNnNRe0WB02W0F4M=
+golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
@@ -673,6 +1461,7 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
+golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -686,6 +1475,7 @@ golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRu
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
@@ -697,32 +1487,41 @@ golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
-golang.org/x/mod v0.4.2 h1:Gz96sIWK3OalVv/I/qNygP42zyoKp3xptRVCWRFEBvo=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.5.0/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
+golang.org/x/mod v0.5.1 h1:OJxoQ/rynoF0dcCdI7cLPktw/hR2cueqYfjm43oqK38=
+golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190921015927-1a5e07d1ff72/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191007182048-72f939374954/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -735,6 +1534,7 @@ golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
@@ -745,15 +1545,24 @@ golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
-golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
+golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8=
+golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210825183410-e898025ed96a/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f h1:oA4XRj0qtSt8Yo1Zms0CUlsT3KG69V2UGQWPBxujDmc=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -769,18 +1578,23 @@ golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f h1:Qmd2pbz05z7z6lm0DrgQVVPuBm92jqujBKMHMOlOQEw=
+golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 h1:RerP+noqYHUQ8CMRcPlC2nvTa4dcBIjegkuWdcUDuqg=
+golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190412183630-56d357773e84/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -791,33 +1605,57 @@ golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190411185658-b44545bcd369/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190419153524-e8e3143a4f4a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190522044717-8097e1b27ff5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190531175056-4c3a928424d2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190812073006-9eafafc0a87e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190902133755-9109b7679e13/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -828,36 +1666,60 @@ golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200817155316-9781c653f443/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200831180312-196b9ba8737a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201117170446-d9b008d0a637/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201202213521-69691e467435/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210816183151-1e6c022a8912/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20211029165221-6e7872819dc8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220222172238-00053529121e h1:AGLQ2aegkB2Y9RY8YdQk+7MDCW9da7YmizIwNIt8NtQ=
+golang.org/x/sys v0.0.0-20220222172238-00053529121e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
@@ -878,22 +1740,30 @@ golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
-golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 h1:vVKdlvoWBphwdxWKrFZEuM0kGgGLxUOYcY4U/2Vjg44=
+golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190125232054-d66bd3c5d5a6/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190206041539-40960b6deb8e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190329151228-23e29df326fe/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190416151739-9c9e1878f421/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190420181800-aa740d480789/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
@@ -901,9 +1771,12 @@ golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -913,12 +1786,14 @@ golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200216192241-b320d3a0f5a2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
@@ -939,14 +1814,18 @@ golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
-golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
+golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
-golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff h1:VX/uD7MK0AHXGiScH3fsieUQUcpmRERPDYtqZdJnA+Q=
+golang.org/x/tools v0.1.6-0.20210726203631-07bc1bf47fb2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff/go.mod h1:YD9qOF0M9xpSpdWTBbzEl5e/RnCefISl8E5Noe10jFM=
+golang.org/x/tools v0.1.9 h1:j9KsMiaP1c3B0OTQGth0/k+miLGTgLsAFUCrF2vLcF8=
+golang.org/x/tools v0.1.9/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -954,6 +1833,12 @@ golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1N
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.2.0 h1:4pT439QV83L+G9FkcCriY6EkpcK6r6bK+A5FBUMI7qY=
gomodules.xyz/jsonpatch/v2 v2.2.0/go.mod h1:WXp+iVDkoLQqPudfQ9GBlwB2eZ5DKOnjQZCYdOS8GPY=
+gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
+gonum.org/v1/gonum v0.8.2/go.mod h1:oe/vMfY3deqTw+1EZJhuvEW2iwGF1bW9wwu7XCu0+v0=
+gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw=
+gonum.org/v1/plot v0.0.0-20190515093506-e2840ee46a6b/go.mod h1:Wt8AAjI+ypCyYX3nZBvf6cAIx93T+c/OS2HFAYskSZc=
+google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
+google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -976,7 +1861,20 @@ google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjR
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.44.0/go.mod h1:EBOGZqzyhtvMDoxwS97ctnh0zUmYY6CxqXsc1AvkYD8=
+google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
+google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
+google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
+google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
+google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
+google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
+google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
+google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
+google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I=
+google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
+google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g=
+google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/SkfA=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
@@ -984,11 +1882,15 @@ google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCID
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
+google.golang.org/genproto v0.0.0-20180518175338-11a468237815/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190522204451-c2c4e71fbf69/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
+google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
@@ -997,6 +1899,7 @@ google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvx
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
@@ -1019,6 +1922,7 @@ google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201102152239-715cce707fb0/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
@@ -1027,13 +1931,43 @@ google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
+google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
+google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
+google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
+google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
+google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
+google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
+google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b h1:wHqTlwZVR0x5EG2S6vKlCq63+Tl/vBoQELitHxqxDOo=
+google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
+google.golang.org/grpc v1.12.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
+google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
@@ -1050,8 +1984,16 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
+google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
+google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
+google.golang.org/grpc v1.43.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
+google.golang.org/grpc v1.44.0 h1:weqSxi/TMs1SqFRMHCtBgXRs8k3X39QIDEZ0pRcttUg=
+google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
+google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -1066,8 +2008,10 @@ google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp0
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -1076,6 +2020,10 @@ gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
+gopkg.in/gcfg.v1 v1.2.3/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
+gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
+gopkg.in/go-playground/assert.v1 v1.2.1/go.mod h1:9RXL0bg/zibRAgZUYszZSwO/z8Y/a8bDuhia5mkpMnE=
+gopkg.in/go-playground/validator.v9 v9.29.1/go.mod h1:+c9/zcJMFNgbLvly1L1V+PpxWdVbfP1avr/N00E2vyQ=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
@@ -1083,9 +2031,12 @@ gopkg.in/ini.v1 v1.62.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/src-d/go-billy.v4 v4.3.0/go.mod h1:tm33zBoOwxjYHZIE+OV8bxTWFMJLrconzFMd38aARFk=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
+gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -1097,12 +2048,14 @@ gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
+honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -1110,7 +2063,13 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+inet.af/netaddr v0.0.0-20210707202901-70468d781e6c h1:ZNUX2CiFwNbN1VFaD4MQFmC8o5Rxc7BQW1P1K8kMpbE=
+inet.af/netaddr v0.0.0-20210707202901-70468d781e6c/go.mod h1:z0nx+Dh+7N7CC8V5ayHtHGpZpxLQZZxkIaaz6HN65Ls=
k8s.io/api v0.18.3/go.mod h1:UOaMwERbqJMfeeeHc8XJKawj4P9TgDRnViIqqBeH2QA=
+k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
+k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ=
+k8s.io/api v0.20.6/go.mod h1:X9e8Qag6JV/bL5G6bU8sdVRltWKmdHsFUGS3eVndqE8=
+k8s.io/api v0.22.7/go.mod h1:7hejA1BgBEiSsWljUyRkIjj+AISXO16IwsaDgFjJsQE=
k8s.io/api v0.23.0/go.mod h1:8wmDdLBHBNxtOIytwLstXt5E9PddnZb0GaMcqsvDBpg=
k8s.io/api v0.23.5 h1:zno3LUiMubxD/V1Zw3ijyKO3wxrhbUF1Ck+VjBvfaoA=
k8s.io/api v0.23.5/go.mod h1:Na4XuKng8PXJ2JsploYYrivXrINeTaycCGcYgF91Xm8=
@@ -1118,45 +2077,67 @@ k8s.io/apiextensions-apiserver v0.18.3/go.mod h1:TMsNGs7DYpMXd+8MOCX8KzPOCx8fnZM
k8s.io/apiextensions-apiserver v0.23.0 h1:uii8BYmHYiT2ZTAJxmvc3X8UhNYMxl2A0z0Xq3Pm+WY=
k8s.io/apiextensions-apiserver v0.23.0/go.mod h1:xIFAEEDlAZgpVBl/1VSjGDmLoXAWRG40+GsWhKhAxY4=
k8s.io/apimachinery v0.18.3/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko=
+k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
+k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
+k8s.io/apimachinery v0.20.6/go.mod h1:ejZXtW1Ra6V1O5H8xPBGz+T3+4gfkTCeExAHKU57MAc=
+k8s.io/apimachinery v0.22.7/go.mod h1:ZvVLP5iLhwVFg2Yx9Gh5W0um0DUauExbRhe+2Z8I1EU=
k8s.io/apimachinery v0.23.0/go.mod h1:fFCTTBKvKcwTPFzjlcxp91uPFZr+JA0FubU4fLzzFYc=
k8s.io/apimachinery v0.23.5 h1:Va7dwhp8wgkUPWsEXk6XglXWU4IKYLKNlv8VkX7SDM0=
k8s.io/apimachinery v0.23.5/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM=
k8s.io/apiserver v0.18.3/go.mod h1:tHQRmthRPLUtwqsOnJJMoI8SW3lnoReZeE861lH8vUw=
+k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
+k8s.io/apiserver v0.20.4/go.mod h1:Mc80thBKOyy7tbvFtB4kJv1kbdD0eIH8k8vianJcbFM=
+k8s.io/apiserver v0.20.6/go.mod h1:QIJXNt6i6JB+0YQRNcS0hdRHJlMhflFmsBDeSgT1r8Q=
k8s.io/apiserver v0.23.0/go.mod h1:Cec35u/9zAepDPPFyT+UMrgqOCjgJ5qtfVJDxjZYmt4=
-k8s.io/client-go v0.18.3/go.mod h1:4a/dpQEvzAhT1BbuWW09qvIaGw6Gbu1gZYiQZIi1DMw=
-k8s.io/client-go v0.23.0/go.mod h1:hrDnpnK1mSr65lHHcUuIZIXDgEbzc7/683c6hyG4jTA=
k8s.io/client-go v0.23.5 h1:zUXHmEuqx0RY4+CsnkOn5l0GU+skkRXKGJrhmE2SLd8=
k8s.io/client-go v0.23.5/go.mod h1:flkeinTO1CirYgzMPRWxUCnV0G4Fbu2vLhYCObnt/r4=
k8s.io/code-generator v0.18.3/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
k8s.io/code-generator v0.23.0/go.mod h1:vQvOhDXhuzqiVfM/YHp+dmg10WDZCchJVObc9MvowsE=
k8s.io/component-base v0.18.3/go.mod h1:bp5GzGR0aGkYEfTj+eTY0AN/vXTgkJdQXjNTTVUaa3k=
+k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk=
+k8s.io/component-base v0.20.4/go.mod h1:t4p9EdiagbVCJKrQ1RsA5/V4rFQNDfRlevJajlGwgjI=
+k8s.io/component-base v0.20.6/go.mod h1:6f1MPBAeI+mvuts3sIdtpjljHWBQ2cIy38oBIWMYnrM=
k8s.io/component-base v0.23.0 h1:UAnyzjvVZ2ZR1lF35YwtNY6VMN94WtOnArcXBu34es8=
k8s.io/component-base v0.23.0/go.mod h1:DHH5uiFvLC1edCpvcTDV++NKULdYYU6pR9Tt3HIKMKI=
+k8s.io/cri-api v0.17.3/go.mod h1:X1sbHmuXhwaHs9xxYffLqJogVsnI+f6cPRcgPel7ywM=
+k8s.io/cri-api v0.20.1/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
+k8s.io/cri-api v0.20.4/go.mod h1:2JRbKt+BFLTjtrILYVqQK5jqhI+XNdF6UiGMgczeBCI=
+k8s.io/cri-api v0.20.6/go.mod h1:ew44AjNXwyn1s0U4xCKGodU7J1HzBeZ1MpGrpa5r8Yc=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20200114144118-36b2048a9120/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
+k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.3.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
-k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.9.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
+k8s.io/klog/v2 v2.40.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.60.1 h1:VW25q3bZx9uE3vvdL6M8ezOX79vA2Aq1nEWLqNQclHc=
k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E=
+k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
+k8s.io/kube-openapi v0.0.0-20211109043538-20434351676c/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 h1:E3J9oCLlaobFUqsjG9DfKbP2BmgwBL2p7pn0A3dG9W4=
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk=
+k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 h1:HNSDgDCrr/6Ly3WEGKZftiE7IY19Vz2GdbOCyI4qqhc=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
+rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.15/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.25/go.mod h1:Mlj9PNLmG9bZ6BHFwFKDo5afkpWyUISkb9Me0GnK66I=
sigs.k8s.io/controller-runtime v0.11.0 h1:DqO+c8mywcZLFJWILq4iktoECTyn30Bkj0CwgqMpZWQ=
sigs.k8s.io/controller-runtime v0.11.0/go.mod h1:KKwLiTooNGu+JmLZGn9Sl3Gjmfj66eMbCQznLP5zcqA=
@@ -1166,11 +2147,12 @@ sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2/go.mod h1:B+TnT182UBxE84DiCz
sigs.k8s.io/structured-merge-diff/v3 v3.0.0-20200116222232-67a7b8c61874/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.3/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
-sigs.k8s.io/structured-merge-diff/v4 v4.2.0/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
+sourcegraph.com/sourcegraph/appdash v0.0.0-20190731080439-ebfcffb1b5c0/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=
diff --git a/operator/hack/lokirules_invalid.yaml b/operator/hack/lokirules_invalid.yaml
new file mode 100644
index 0000000000000..8c31cf7e35b4f
--- /dev/null
+++ b/operator/hack/lokirules_invalid.yaml
@@ -0,0 +1,38 @@
+apiVersion: loki.grafana.com/v1beta1
+kind: AlertingRule
+metadata:
+ name: lokistack-dev
+ namespace: openshift-logging
+ labels:
+ loki.grafana.com/cluster-monitoring: "true"
+spec:
+ tenantID: "tenant-a"
+ groups:
+ - name: same-group-name
+ rules:
+ - alert: HighPercentageError
+ expr: |
+ broken expr
+ for: "brokenFor"
+ - name: same-group-name
+ rules:
+ - alert: http-credentials-leaked
+ expr: 'broken expr'
+ for: "brokenFor"
+---
+apiVersion: loki.grafana.com/v1beta1
+kind: RecordingRule
+metadata:
+ name: lokistack-dev
+ namespace: openshift-logging
+ labels:
+ loki.grafana.com/cluster-monitoring: "true"
+spec:
+ tenantID: "tenant-b"
+ groups:
+ - name: should_record
+ interval: "broken"
+ rules:
+ - record: "broken%metric^name"
+ expr: |
+ broken expr
diff --git a/operator/hack/lokirules_ocp.yaml b/operator/hack/lokirules_ocp.yaml
new file mode 100644
index 0000000000000..7b7e071a63e46
--- /dev/null
+++ b/operator/hack/lokirules_ocp.yaml
@@ -0,0 +1,41 @@
+---
+apiVersion: loki.grafana.com/v1beta1
+kind: AlertingRule
+metadata:
+ name: loki-operator-dev
+ namespace: openshift-operators-redhat
+ labels:
+ openshift.io/cluster-monitoring: "true"
+spec:
+ tenantID: "infrastructure"
+ groups:
+ - name: LokiOperatorHighReconciliationError
+ rules:
+ - alert: HighPercentageError
+ expr: |
+ sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job)
+ /
+ sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job)
+ > 0.01
+ for: 10s
+ labels:
+ severity: page
+ annotations:
+ summary: High Loki Operator Reconciliation Errors
+---
+apiVersion: loki.grafana.com/v1beta1
+kind: RecordingRule
+metadata:
+ name: loki-operator-dev
+ namespace: openshift-operators-redhat
+ labels:
+ openshift.io/cluster-monitoring: "true"
+spec:
+ tenantID: "infrastructure"
+ groups:
+ - name: LokiOperatorReconciliationErrors10m
+ interval: 10m
+ rules:
+ - record: "loki:operator:errors:rate10m"
+ expr: |
+ sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [10m]))
diff --git a/operator/hack/lokistack_gateway_ocp.yaml b/operator/hack/lokistack_gateway_ocp.yaml
index 296bc010c8861..2d5e4e949c3a1 100644
--- a/operator/hack/lokistack_gateway_ocp.yaml
+++ b/operator/hack/lokistack_gateway_ocp.yaml
@@ -11,3 +11,11 @@ spec:
storageClassName: gp2
tenants:
mode: openshift-logging
+ rules:
+ enabled: true
+ selector:
+ matchLabels:
+ openshift.io/cluster-monitoring: "true"
+ namespaceSelector:
+ matchLabels:
+ openshift.io/cluster-monitoring: "true"
diff --git a/operator/internal/handlers/internal/gateway/tenant_configmap.go b/operator/internal/handlers/internal/gateway/tenant_configmap.go
index 3b83f47b0b1a4..cdbe167329c35 100644
--- a/operator/internal/handlers/internal/gateway/tenant_configmap.go
+++ b/operator/internal/handlers/internal/gateway/tenant_configmap.go
@@ -5,7 +5,6 @@ import (
"github.com/grafana/loki/operator/internal/external/k8s"
"github.com/grafana/loki/operator/internal/manifests"
- "github.com/grafana/loki/operator/internal/manifests/openshift"
"github.com/ViaQ/logerr/v2/kverrors"
corev1 "k8s.io/api/core/v1"
@@ -38,7 +37,7 @@ type openShiftSpec struct {
// GetTenantConfigMapData returns the tenantName, tenantId, cookieSecret
// clusters to auto-create redirect URLs for OpenShift Auth or an error.
-func GetTenantConfigMapData(ctx context.Context, k k8s.Client, req ctrl.Request) (map[string]openshift.TenantData, error) {
+func GetTenantConfigMapData(ctx context.Context, k k8s.Client, req ctrl.Request) (map[string]manifests.TenantConfig, error) {
var tenantConfigMap corev1.ConfigMap
key := client.ObjectKey{Name: manifests.GatewayName(req.Name), Namespace: req.Namespace}
if err := k.Get(ctx, key, &tenantConfigMap); err != nil {
@@ -50,11 +49,16 @@ func GetTenantConfigMapData(ctx context.Context, k k8s.Client, req ctrl.Request)
return nil, kverrors.Wrap(err, "error occurred in extracting tenants.yaml configMap.")
}
- tcmMap := make(map[string]openshift.TenantData)
+ tcmMap := make(map[string]manifests.TenantConfig)
for _, tenant := range tcm.Tenants {
- tcmMap[tenant.Name] = openshift.TenantData{
- CookieSecret: tenant.OpenShift.CookieSecret,
+ tc := manifests.TenantConfig{}
+ if tenant.OpenShift != nil {
+ tc.OpenShift = &manifests.TenantOpenShiftSpec{
+ CookieSecret: tenant.OpenShift.CookieSecret,
+ }
}
+
+ tcmMap[tenant.Name] = tc
}
return tcmMap, nil
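The loop above only copies the OpenShift block into a `TenantConfig` when the source tenant actually carries one, leaving a nil pointer otherwise. The pattern can be illustrated standalone; the struct names below mirror the diff, but this is a simplified sketch rather than the operator's actual types:

```go
package main

import "fmt"

// Simplified stand-ins for the operator's types.
type TenantOpenShiftSpec struct{ CookieSecret string }

type TenantConfig struct{ OpenShift *TenantOpenShiftSpec }

// cmTenant imitates a tenant entry read back from the tenants.yaml ConfigMap.
type cmTenant struct {
	Name      string
	OpenShift *TenantOpenShiftSpec
}

// toConfigs mirrors the loop in GetTenantConfigMapData: the OpenShift block
// is copied only when present, so non-OpenShift tenants keep a nil pointer
// instead of an empty struct.
func toConfigs(tenants []cmTenant) map[string]TenantConfig {
	out := make(map[string]TenantConfig)
	for _, t := range tenants {
		tc := TenantConfig{}
		if t.OpenShift != nil {
			tc.OpenShift = &TenantOpenShiftSpec{CookieSecret: t.OpenShift.CookieSecret}
		}
		out[t.Name] = tc
	}
	return out
}

func main() {
	cfgs := toConfigs([]cmTenant{
		{Name: "application", OpenShift: &TenantOpenShiftSpec{CookieSecret: "test123"}},
		{Name: "plain"},
	})
	fmt.Println(cfgs["application"].OpenShift.CookieSecret, cfgs["plain"].OpenShift == nil)
}
```

Keeping the pointer nil (rather than an empty struct) lets later consumers, such as `ApplyGatewayDefaultOptions`, distinguish "tenant has no OpenShift data" from "tenant has an empty cookie secret".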
diff --git a/operator/internal/handlers/internal/gateway/tenant_configmap_test.go b/operator/internal/handlers/internal/gateway/tenant_configmap_test.go
index 3aa57e5766326..0f48215d7b5b5 100644
--- a/operator/internal/handlers/internal/gateway/tenant_configmap_test.go
+++ b/operator/internal/handlers/internal/gateway/tenant_configmap_test.go
@@ -5,7 +5,7 @@ import (
"testing"
"github.com/grafana/loki/operator/internal/external/k8s/k8sfakes"
- "github.com/grafana/loki/operator/internal/manifests/openshift"
+ "github.com/grafana/loki/operator/internal/manifests"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
@@ -62,15 +62,21 @@ func TestGetTenantConfigMapData_ConfigMapExist(t *testing.T) {
require.NotNil(t, ts)
require.NoError(t, err)
- expected := map[string]openshift.TenantData{
+ expected := map[string]manifests.TenantConfig{
"application": {
- CookieSecret: "test123",
+ OpenShift: &manifests.TenantOpenShiftSpec{
+ CookieSecret: "test123",
+ },
},
"infrastructure": {
- CookieSecret: "test456",
+ OpenShift: &manifests.TenantOpenShiftSpec{
+ CookieSecret: "test456",
+ },
},
"audit": {
- CookieSecret: "test789",
+ OpenShift: &manifests.TenantOpenShiftSpec{
+ CookieSecret: "test789",
+ },
},
}
require.Equal(t, expected, ts)
diff --git a/operator/internal/handlers/internal/rules/rules.go b/operator/internal/handlers/internal/rules/rules.go
new file mode 100644
index 0000000000000..b997f6e766b81
--- /dev/null
+++ b/operator/internal/handlers/internal/rules/rules.go
@@ -0,0 +1,119 @@
+package rules
+
+import (
+ "context"
+
+ "github.com/ViaQ/logerr/v2/kverrors"
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/internal/external/k8s"
+ corev1 "k8s.io/api/core/v1"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+// List returns a slice of AlertingRules and a slice of RecordingRules for the given spec or an error. Three cases apply:
+// - Return only matching rules in the stack namespace if no namespace selector given.
+// - Return only matching rules in the stack namespace and in namespaces matching the namespace selector.
+// - Return no rules if rules selector does not apply at all.
+func List(ctx context.Context, k k8s.Client, stackNs string, rs *lokiv1beta1.RulesSpec) ([]lokiv1beta1.AlertingRule, []lokiv1beta1.RecordingRule, error) {
+ nsl, err := selectRulesNamespaces(ctx, k, stackNs, rs)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ ar, err := selectAlertingRules(ctx, k, rs)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ var alerts []lokiv1beta1.AlertingRule
+ for _, rule := range ar.Items {
+ for _, ns := range nsl.Items {
+ if rule.Namespace == ns.Name {
+ alerts = append(alerts, rule)
+ break
+ }
+ }
+ }
+
+ rr, err := selectRecordingRules(ctx, k, rs)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ var recs []lokiv1beta1.RecordingRule
+ for _, rule := range rr.Items {
+ for _, ns := range nsl.Items {
+ if rule.Namespace == ns.Name {
+ recs = append(recs, rule)
+ break
+ }
+ }
+ }
+
+ return alerts, recs, nil
+}
+
+func selectRulesNamespaces(ctx context.Context, k k8s.Client, stackNs string, rs *lokiv1beta1.RulesSpec) (corev1.NamespaceList, error) {
+ var stackNamespace corev1.Namespace
+ key := client.ObjectKey{Name: stackNs}
+
+ err := k.Get(ctx, key, &stackNamespace)
+ if err != nil {
+ return corev1.NamespaceList{}, kverrors.Wrap(err, "failed to get LokiStack namespace", "namespace", stackNs)
+ }
+
+ nsList := corev1.NamespaceList{Items: []corev1.Namespace{stackNamespace}}
+
+ nsSelector, err := metav1.LabelSelectorAsSelector(rs.NamespaceSelector)
+ if err != nil {
+ return nsList, kverrors.Wrap(err, "failed to create LokiRule namespace selector", "namespaceSelector", rs.NamespaceSelector)
+ }
+
+ var nsl v1.NamespaceList
+ err = k.List(ctx, &nsl, &client.MatchingLabelsSelector{Selector: nsSelector})
+ if err != nil {
+ return nsList, kverrors.Wrap(err, "failed to list namespaces for selector", "namespaceSelector", rs.NamespaceSelector)
+ }
+
+ for _, ns := range nsl.Items {
+ if ns.Name == stackNs {
+ continue
+ }
+
+ nsList.Items = append(nsList.Items, ns)
+ }
+
+ return nsList, nil
+}
+
+func selectAlertingRules(ctx context.Context, k k8s.Client, rs *lokiv1beta1.RulesSpec) (lokiv1beta1.AlertingRuleList, error) {
+ rulesSelector, err := metav1.LabelSelectorAsSelector(rs.Selector)
+ if err != nil {
+ return lokiv1beta1.AlertingRuleList{}, kverrors.Wrap(err, "failed to create AlertingRules selector", "selector", rs.Selector)
+ }
+
+ var rl lokiv1beta1.AlertingRuleList
+ err = k.List(ctx, &rl, &client.MatchingLabelsSelector{Selector: rulesSelector})
+ if err != nil {
+ return lokiv1beta1.AlertingRuleList{}, kverrors.Wrap(err, "failed to list AlertingRules for selector", "selector", rs.Selector)
+ }
+
+ return rl, nil
+}
+
+func selectRecordingRules(ctx context.Context, k k8s.Client, rs *lokiv1beta1.RulesSpec) (lokiv1beta1.RecordingRuleList, error) {
+ rulesSelector, err := metav1.LabelSelectorAsSelector(rs.Selector)
+ if err != nil {
+ return lokiv1beta1.RecordingRuleList{}, kverrors.Wrap(err, "failed to create RecordingRules selector", "selector", rs.Selector)
+ }
+
+ var rl lokiv1beta1.RecordingRuleList
+ err = k.List(ctx, &rl, &client.MatchingLabelsSelector{Selector: rulesSelector})
+ if err != nil {
+ return lokiv1beta1.RecordingRuleList{}, kverrors.Wrap(err, "failed to list RecordingRules for selector", "selector", rs.Selector)
+ }
+
+ return rl, nil
+}
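`List` above filters in two stages: namespaces are chosen by the `NamespaceSelector` (with the stack's own namespace always included), then rules are matched by the label `Selector`, and only rules living in a selected namespace survive. The pure-Go sketch below imitates the `MatchLabels` subset of Kubernetes selector semantics with plain maps to show that intersection logic; it is an illustration, not the apimachinery implementation:

```go
package main

import "fmt"

// matches reports whether every key/value pair in sel is present in lbls —
// the MatchLabels subset of Kubernetes label-selector semantics.
func matches(sel, lbls map[string]string) bool {
	for k, v := range sel {
		if lbls[k] != v {
			return false
		}
	}
	return true
}

type rule struct {
	Name, Namespace string
	Labels          map[string]string
}

// filterRules keeps rules whose labels match ruleSel and whose namespace is
// in allowedNs — mirroring the two-stage filtering performed by List.
func filterRules(rules []rule, ruleSel map[string]string, allowedNs map[string]bool) []rule {
	var out []rule
	for _, r := range rules {
		if matches(ruleSel, r.Labels) && allowedNs[r.Namespace] {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	allowed := map[string]bool{"some-ns": true, "matching-ns": true}
	sel := map[string]string{"labelname": "labelvalue"}
	got := filterRules([]rule{
		{Name: "rule-a", Namespace: "matching-ns", Labels: map[string]string{"labelname": "labelvalue"}},
		{Name: "rule-b", Namespace: "some-ns"},                                                        // fails the label selector
		{Name: "rule-c", Namespace: "not-matching-ns", Labels: map[string]string{"labelname": "labelvalue"}}, // fails the namespace filter
	}, sel, allowed)
	fmt.Println(len(got), got[0].Name)
}
```

This matches the expectations in the accompanying tests: a rule can be dropped either because its labels miss the selector or because its namespace was not selected.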
diff --git a/operator/internal/handlers/internal/rules/rules_test.go b/operator/internal/handlers/internal/rules/rules_test.go
new file mode 100644
index 0000000000000..6d2b9748c1cf3
--- /dev/null
+++ b/operator/internal/handlers/internal/rules/rules_test.go
@@ -0,0 +1,365 @@
+package rules_test
+
+import (
+ "context"
+ "testing"
+
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/internal/external/k8s/k8sfakes"
+ "github.com/grafana/loki/operator/internal/handlers/internal/rules"
+ "github.com/stretchr/testify/require"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/types"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+func TestList_AlertingRulesMatchSelector_WithDefaultStackNamespaceRules(t *testing.T) {
+ const stackNs = "some-ns"
+
+ k := &k8sfakes.FakeClient{}
+ rs := &lokiv1beta1.RulesSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ }
+
+ k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object) error {
+ if name.Name == stackNs {
+ k.SetClientObject(object, &corev1.Namespace{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: stackNs,
+ },
+ })
+ return nil
+ }
+ return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ }
+
+ k.ListStub = func(_ context.Context, ol client.ObjectList, opt ...client.ListOption) error {
+ switch ol.(type) {
+ case *corev1.NamespaceList:
+ k.SetClientObjectList(ol, &corev1.NamespaceList{})
+ return nil
+ case *lokiv1beta1.RecordingRuleList:
+ k.SetClientObjectList(ol, &lokiv1beta1.RecordingRuleList{})
+ return nil
+ }
+
+ l := opt[0].(*client.MatchingLabelsSelector)
+ m := labels.Set(rs.Selector.MatchLabels)
+
+ if l.Matches(m) {
+ k.SetClientObjectList(ol, &lokiv1beta1.AlertingRuleList{
+ Items: []lokiv1beta1.AlertingRule{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-a",
+ Namespace: stackNs,
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-b",
+ Namespace: "other-ns",
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ },
+ })
+ }
+
+ return nil
+ }
+
+ rules, _, err := rules.List(context.TODO(), k, stackNs, rs)
+
+ require.NoError(t, err)
+ require.NotEmpty(t, rules)
+ require.Len(t, rules, 1)
+}
+
+func TestList_AlertingRulesMatchSelector_FilteredByNamespaceSelector(t *testing.T) {
+ const stackNs = "some-ns"
+
+ k := &k8sfakes.FakeClient{}
+ rs := &lokiv1beta1.RulesSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ NamespaceSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "group.acme.org/logs": "true",
+ },
+ },
+ }
+
+ k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object) error {
+ if name.Name == "some-ns" {
+ k.SetClientObject(object, &corev1.Namespace{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: stackNs,
+ },
+ })
+ return nil
+ }
+ return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ }
+
+ k.ListStub = func(_ context.Context, ol client.ObjectList, opt ...client.ListOption) error {
+ switch ol.(type) {
+ case *lokiv1beta1.RecordingRuleList:
+ k.SetClientObjectList(ol, &lokiv1beta1.RecordingRuleList{})
+ return nil
+ }
+
+ l := opt[0].(*client.MatchingLabelsSelector)
+ m := labels.Set(rs.Selector.MatchLabels)
+
+ if l.Matches(m) {
+ k.SetClientObjectList(ol, &lokiv1beta1.AlertingRuleList{
+ Items: []lokiv1beta1.AlertingRule{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-a",
+ Namespace: "matching-ns",
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-b",
+ Namespace: stackNs,
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-c",
+ Namespace: "not-matching-ns",
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ },
+ })
+
+ return nil
+ }
+
+ n := labels.Set(rs.NamespaceSelector.MatchLabels)
+ if l.Matches(n) {
+ k.SetClientObjectList(ol, &corev1.NamespaceList{
+ Items: []corev1.Namespace{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "matching-ns",
+ Labels: map[string]string{
+ "group.acme.org/logs": "true",
+ },
+ },
+ },
+ },
+ })
+
+ return nil
+ }
+
+ k.SetClientObjectList(ol, &corev1.NamespaceList{})
+
+ return nil
+ }
+
+ rules, _, err := rules.List(context.TODO(), k, stackNs, rs)
+
+ require.NoError(t, err)
+ require.NotEmpty(t, rules)
+ require.Len(t, rules, 2)
+}
+
+func TestList_RecordingRulesMatchSelector_WithDefaultStackNamespaceRules(t *testing.T) {
+ const stackNs = "some-ns"
+
+ k := &k8sfakes.FakeClient{}
+ rs := &lokiv1beta1.RulesSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ }
+
+ k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object) error {
+ if name.Name == stackNs {
+ k.SetClientObject(object, &corev1.Namespace{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: stackNs,
+ },
+ })
+ return nil
+ }
+ return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ }
+
+ k.ListStub = func(_ context.Context, ol client.ObjectList, opt ...client.ListOption) error {
+ switch ol.(type) {
+ case *corev1.NamespaceList:
+ k.SetClientObjectList(ol, &corev1.NamespaceList{})
+ return nil
+ case *lokiv1beta1.AlertingRuleList:
+ k.SetClientObjectList(ol, &lokiv1beta1.AlertingRuleList{})
+ return nil
+ }
+
+ l := opt[0].(*client.MatchingLabelsSelector)
+ m := labels.Set(rs.Selector.MatchLabels)
+
+ if l.Matches(m) {
+ k.SetClientObjectList(ol, &lokiv1beta1.RecordingRuleList{
+ Items: []lokiv1beta1.RecordingRule{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-a",
+ Namespace: stackNs,
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-b",
+ Namespace: "other-ns",
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ },
+ })
+ }
+
+ return nil
+ }
+
+ _, rules, err := rules.List(context.TODO(), k, stackNs, rs)
+
+ require.NoError(t, err)
+ require.NotEmpty(t, rules)
+ require.Len(t, rules, 1)
+}
+
+func TestList_RecordingRulesMatchSelector_FilteredByNamespaceSelector(t *testing.T) {
+ const stackNs = "some-ns"
+
+ k := &k8sfakes.FakeClient{}
+ rs := &lokiv1beta1.RulesSpec{
+ Selector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ NamespaceSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "group.acme.org/logs": "true",
+ },
+ },
+ }
+
+ k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object) error {
+ if name.Name == "some-ns" {
+ k.SetClientObject(object, &corev1.Namespace{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: stackNs,
+ },
+ })
+ return nil
+ }
+ return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ }
+
+ k.ListStub = func(_ context.Context, ol client.ObjectList, opt ...client.ListOption) error {
+ switch ol.(type) {
+ case *lokiv1beta1.AlertingRuleList:
+ k.SetClientObjectList(ol, &lokiv1beta1.AlertingRuleList{})
+ return nil
+ }
+
+ l := opt[0].(*client.MatchingLabelsSelector)
+ m := labels.Set(rs.Selector.MatchLabels)
+
+ if l.Matches(m) {
+ k.SetClientObjectList(ol, &lokiv1beta1.RecordingRuleList{
+ Items: []lokiv1beta1.RecordingRule{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-a",
+ Namespace: "matching-ns",
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-b",
+ Namespace: stackNs,
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "rule-c",
+ Namespace: "not-matching-ns",
+ Labels: map[string]string{
+ "labelname": "labelvalue",
+ },
+ },
+ },
+ },
+ })
+
+ return nil
+ }
+
+ n := labels.Set(rs.NamespaceSelector.MatchLabels)
+ if l.Matches(n) {
+ k.SetClientObjectList(ol, &corev1.NamespaceList{
+ Items: []corev1.Namespace{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "matching-ns",
+ Labels: map[string]string{
+ "group.acme.org/logs": "true",
+ },
+ },
+ },
+ },
+ })
+ return nil
+ }
+
+ k.SetClientObjectList(ol, &corev1.NamespaceList{})
+
+ return nil
+ }
+
+ _, rules, err := rules.List(context.TODO(), k, stackNs, rs)
+
+ require.NoError(t, err)
+ require.NotEmpty(t, rules)
+ require.Len(t, rules, 2)
+}
diff --git a/operator/internal/handlers/lokistack_create_or_update.go b/operator/internal/handlers/lokistack_create_or_update.go
index ae8567579c2f3..d760620f19db5 100644
--- a/operator/internal/handlers/lokistack_create_or_update.go
+++ b/operator/internal/handlers/lokistack_create_or_update.go
@@ -8,9 +8,9 @@ import (
lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
"github.com/grafana/loki/operator/internal/external/k8s"
"github.com/grafana/loki/operator/internal/handlers/internal/gateway"
+ "github.com/grafana/loki/operator/internal/handlers/internal/rules"
"github.com/grafana/loki/operator/internal/handlers/internal/secrets"
"github.com/grafana/loki/operator/internal/manifests"
- "github.com/grafana/loki/operator/internal/manifests/openshift"
"github.com/grafana/loki/operator/internal/metrics"
"github.com/grafana/loki/operator/internal/status"
@@ -78,9 +78,9 @@ func CreateOrUpdateLokiStack(
}
var (
- baseDomain string
- tenantSecrets []*manifests.TenantSecrets
- tenantConfigMap map[string]openshift.TenantData
+ baseDomain string
+ tenantSecrets []*manifests.TenantSecrets
+ tenantConfigs map[string]manifests.TenantConfig
)
if flags.EnableGateway && stack.Spec.Tenants == nil {
return &status.DegradedError{
@@ -109,12 +109,23 @@ func CreateOrUpdateLokiStack(
if err != nil {
return err
}
+ }
- // extract the existing tenant's id, cookieSecret if exists, otherwise create new.
- tenantConfigMap, err = gateway.GetTenantConfigMapData(ctx, k, req)
- if err != nil {
- ll.Error(err, "error in getting tenant config map data")
- }
+ // extract the existing tenant's id, cookieSecret if exists, otherwise create new.
+ tenantConfigs, err = gateway.GetTenantConfigMapData(ctx, k, req)
+ if err != nil {
+ ll.Error(err, "error in getting tenant config map data")
+ }
+ }
+
+ var (
+ alertingRules []lokiv1beta1.AlertingRule
+ recordingRules []lokiv1beta1.RecordingRule
+ )
+ if stack.Spec.Rules != nil && stack.Spec.Rules.Enabled {
+ alertingRules, recordingRules, err = rules.List(ctx, k, req.Namespace, stack.Spec.Rules)
+ if err != nil {
+ log.Error(err, "failed to lookup rules", "spec", stack.Spec.Rules)
}
}
@@ -128,8 +139,12 @@ func CreateOrUpdateLokiStack(
Stack: stack.Spec,
Flags: flags,
ObjectStorage: *storage,
- TenantSecrets: tenantSecrets,
- TenantConfigMap: tenantConfigMap,
+ AlertingRules: alertingRules,
+ RecordingRules: recordingRules,
+ Tenants: manifests.Tenants{
+ Secrets: tenantSecrets,
+ Configs: tenantConfigs,
+ },
}
ll.Info("begin building manifests")
diff --git a/operator/internal/manifests/build.go b/operator/internal/manifests/build.go
index 8352a90e6387c..fe6832be358c7 100644
--- a/operator/internal/manifests/build.go
+++ b/operator/internal/manifests/build.go
@@ -58,6 +58,22 @@ func BuildAll(opts Options) ([]client.Object, error) {
res = append(res, indexGatewayObjs...)
res = append(res, BuildLokiGossipRingService(opts.Name))
+ if opts.Stack.Rules != nil && opts.Stack.Rules.Enabled {
+ rulesCm, err := RulesConfigMap(&opts)
+ if err != nil {
+ return nil, err
+ }
+
+ res = append(res, rulesCm)
+
+ rulerObjs, err := BuildRuler(opts)
+ if err != nil {
+ return nil, err
+ }
+
+ res = append(res, rulerObjs...)
+ }
+
if opts.Flags.EnableGateway {
gatewayObjects, err := BuildGateway(opts)
if err != nil {
diff --git a/operator/internal/manifests/build_test.go b/operator/internal/manifests/build_test.go
index 0e02e06814433..c72c152fb60d7 100644
--- a/operator/internal/manifests/build_test.go
+++ b/operator/internal/manifests/build_test.go
@@ -1,8 +1,10 @@
package manifests
import (
+ "fmt"
"testing"
+ appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -96,6 +98,9 @@ func TestBuildAll_WithFeatureFlags_EnableServiceMonitors(t *testing.T) {
Namespace: "test",
Stack: lokiv1beta1.LokiStackSpec{
Size: lokiv1beta1.SizeOneXSmall,
+ Rules: &lokiv1beta1.RulesSpec{
+ Enabled: true,
+ },
},
Flags: FeatureFlags{
EnableCertificateSigningService: false,
@@ -106,7 +111,7 @@ func TestBuildAll_WithFeatureFlags_EnableServiceMonitors(t *testing.T) {
},
{
desc: "service monitor per component created",
- MonitorCount: 7,
+ MonitorCount: 8,
BuildOptions: Options{
Name: "test",
Namespace: "test",
@@ -192,6 +197,7 @@ func TestBuildAll_WithFeatureFlags_EnableCertificateSigningService(t *testing.T)
NewQueryFrontendHTTPService(tst.BuildOptions),
NewCompactorHTTPService(tst.BuildOptions),
NewIndexGatewayHTTPService(tst.BuildOptions),
+ NewRulerHTTPService(tst.BuildOptions),
NewGatewayHTTPService(tst.BuildOptions),
}
@@ -206,6 +212,82 @@ func TestBuildAll_WithFeatureFlags_EnableCertificateSigningService(t *testing.T)
}
}
+func TestBuildAll_WithFeatureFlags_EnableTLSServiceMonitorConfig(t *testing.T) {
+ opts := Options{
+ Name: "test",
+ Namespace: "test",
+ Stack: lokiv1beta1.LokiStackSpec{
+ Size: lokiv1beta1.SizeOneXSmall,
+ Rules: &lokiv1beta1.RulesSpec{
+ Enabled: true,
+ },
+ },
+ Flags: FeatureFlags{
+ EnableServiceMonitors: true,
+ EnableTLSServiceMonitorConfig: true,
+ },
+ }
+
+ err := ApplyDefaultSettings(&opts)
+ require.NoError(t, err)
+ objects, buildErr := BuildAll(opts)
+ require.NoError(t, buildErr)
+ require.Equal(t, 8, serviceMonitorCount(objects))
+
+ for _, obj := range objects {
+ var (
+ name string
+ vs []corev1.Volume
+ vms []corev1.VolumeMount
+ args []string
+ rps corev1.URIScheme
+ lps corev1.URIScheme
+ )
+
+ switch o := obj.(type) {
+ case *appsv1.Deployment:
+ name = o.Name
+ vs = o.Spec.Template.Spec.Volumes
+ vms = o.Spec.Template.Spec.Containers[0].VolumeMounts
+ args = o.Spec.Template.Spec.Containers[0].Args
+ rps = o.Spec.Template.Spec.Containers[0].ReadinessProbe.ProbeHandler.HTTPGet.Scheme
+ lps = o.Spec.Template.Spec.Containers[0].LivenessProbe.ProbeHandler.HTTPGet.Scheme
+ case *appsv1.StatefulSet:
+ name = o.Name
+ vs = o.Spec.Template.Spec.Volumes
+ vms = o.Spec.Template.Spec.Containers[0].VolumeMounts
+ args = o.Spec.Template.Spec.Containers[0].Args
+ rps = o.Spec.Template.Spec.Containers[0].ReadinessProbe.ProbeHandler.HTTPGet.Scheme
+ lps = o.Spec.Template.Spec.Containers[0].LivenessProbe.ProbeHandler.HTTPGet.Scheme
+ default:
+ continue
+ }
+
+ secretName := fmt.Sprintf("%s-http-metrics", name)
+ expVolume := corev1.Volume{
+ Name: secretName,
+ VolumeSource: corev1.VolumeSource{
+ Secret: &corev1.SecretVolumeSource{
+ SecretName: secretName,
+ },
+ },
+ }
+ require.Contains(t, vs, expVolume)
+
+ expVolumeMount := corev1.VolumeMount{
+ Name: secretName,
+ ReadOnly: false,
+ MountPath: "/etc/proxy/secrets",
+ }
+ require.Contains(t, vms, expVolumeMount)
+
+ require.Contains(t, args, "-server.http-tls-cert-path=/etc/proxy/secrets/tls.crt")
+ require.Contains(t, args, "-server.http-tls-key-path=/etc/proxy/secrets/tls.key")
+ require.Equal(t, corev1.URISchemeHTTPS, rps)
+ require.Equal(t, corev1.URISchemeHTTPS, lps)
+ }
+}
+
func TestBuildAll_WithFeatureFlags_EnableGateway(t *testing.T) {
type test struct {
desc string
diff --git a/operator/internal/manifests/config.go b/operator/internal/manifests/config.go
index 1c82a439cfad1..6751ce16e971b 100644
--- a/operator/internal/manifests/config.go
+++ b/operator/internal/manifests/config.go
@@ -42,6 +42,8 @@ func LokiConfigMap(opt Options) (*corev1.ConfigMap, string, error) {
// ConfigOptions converts Options to config.Options
func ConfigOptions(opt Options) config.Options {
+ rulerEnabled := opt.Stack.Rules != nil && opt.Stack.Rules.Enabled
+
return config.Options{
Stack: opt.Stack,
Namespace: opt.Namespace,
@@ -72,6 +74,10 @@ func ConfigOptions(opt Options) config.Options {
},
ObjectStorage: opt.ObjectStorage,
EnableRemoteReporting: opt.Flags.EnableGrafanaLabsStats,
+ Ruler: config.Ruler{
+ Enabled: rulerEnabled,
+ RulesStorageDirectory: rulesStorageDirectory,
+ },
}
}
diff --git a/operator/internal/manifests/distributor.go b/operator/internal/manifests/distributor.go
index ff4091c674a21..6e29eaf844ae9 100644
--- a/operator/internal/manifests/distributor.go
+++ b/operator/internal/manifests/distributor.go
@@ -15,12 +15,14 @@ import (
)
const (
- walVolumeName = "wal"
- configVolumeName = "config"
- storageVolumeName = "storage"
- walDirectory = "/tmp/wal"
- dataDirectory = "/tmp/loki"
- secretDirectory = "/etc/proxy/secrets"
+ walVolumeName = "wal"
+ configVolumeName = "config"
+ rulesStorageVolumeName = "rules"
+ storageVolumeName = "storage"
+ walDirectory = "/tmp/wal"
+ dataDirectory = "/tmp/loki"
+ rulesStorageDirectory = "/tmp/rules"
+ secretDirectory = "/etc/proxy/secrets"
)
// BuildDistributor returns a list of k8s objects for Loki Distributor
diff --git a/operator/internal/manifests/gateway.go b/operator/internal/manifests/gateway.go
index 20361ff47a04a..f617eaa51f367 100644
--- a/operator/internal/manifests/gateway.go
+++ b/operator/internal/manifests/gateway.go
@@ -321,7 +321,7 @@ func gatewayConfigMap(opt Options) (*corev1.ConfigMap, string, error) {
// gatewayConfigOptions converts Options to gateway.Options
func gatewayConfigOptions(opt Options) gateway.Options {
var gatewaySecrets []*gateway.Secret
- for _, secret := range opt.TenantSecrets {
+ for _, secret := range opt.Tenants.Secrets {
gatewaySecret := &gateway.Secret{
TenantName: secret.TenantName,
ClientID: secret.ClientID,
@@ -331,20 +331,12 @@ func gatewayConfigOptions(opt Options) gateway.Options {
gatewaySecrets = append(gatewaySecrets, gatewaySecret)
}
- tenantConfigMap := make(map[string]gateway.TenantData)
- for tenant, tenantData := range opt.TenantConfigMap {
- tenantConfigMap[tenant] = gateway.TenantData{
- CookieSecret: tenantData.CookieSecret,
- }
- }
-
return gateway.Options{
Stack: opt.Stack,
Namespace: opt.Namespace,
Name: opt.Name,
OpenShiftOptions: opt.OpenShiftOptions,
TenantSecrets: gatewaySecrets,
- TenantConfigMap: tenantConfigMap,
}
}
diff --git a/operator/internal/manifests/gateway_tenants.go b/operator/internal/manifests/gateway_tenants.go
index f049adaaf9626..edf2b63e1ab49 100644
--- a/operator/internal/manifests/gateway_tenants.go
+++ b/operator/internal/manifests/gateway_tenants.go
@@ -27,6 +27,13 @@ func ApplyGatewayDefaultOptions(opts *Options) error {
return nil // continue using user input
case lokiv1beta1.OpenshiftLogging:
+ tenantData := make(map[string]openshift.TenantData)
+ for name, tenant := range opts.Tenants.Configs {
+ tenantData[name] = openshift.TenantData{
+ CookieSecret: tenant.OpenShift.CookieSecret,
+ }
+ }
+
defaults := openshift.NewOptions(
opts.Name,
opts.Namespace,
@@ -37,7 +44,7 @@ func ApplyGatewayDefaultOptions(opts *Options) error {
ComponentLabels(LabelGatewayComponent, opts.Name),
opts.Flags.EnableServiceMonitors,
opts.Flags.EnableCertificateSigningService,
- opts.TenantConfigMap,
+ tenantData,
)
if err := mergo.Merge(&opts.OpenShiftOptions, &defaults, mergo.WithOverride); err != nil {
diff --git a/operator/internal/manifests/gateway_tenants_test.go b/operator/internal/manifests/gateway_tenants_test.go
index a13b510edc8aa..3e106bc9046a0 100644
--- a/operator/internal/manifests/gateway_tenants_test.go
+++ b/operator/internal/manifests/gateway_tenants_test.go
@@ -69,6 +69,25 @@ func TestApplyGatewayDefaultsOptions(t *testing.T) {
Mode: lokiv1beta1.OpenshiftLogging,
},
},
+ Tenants: Tenants{
+ Configs: map[string]TenantConfig{
+ "application": {
+ OpenShift: &TenantOpenShiftSpec{
+ CookieSecret: "D31SJpSmPe6aUDTtU2zqAoW1gqEKoH5T",
+ },
+ },
+ "infrastructure": {
+ OpenShift: &TenantOpenShiftSpec{
+ CookieSecret: "i3N1paUy9JwNZIktni4kqXPuMvIHtHNe",
+ },
+ },
+ "audit": {
+ OpenShift: &TenantOpenShiftSpec{
+ CookieSecret: "6UssDXle7OHElqSW4M0DNRZ6JbaTjDM3",
+ },
+ },
+ },
+ },
},
want: &Options{
Name: "lokistack-ocp",
@@ -79,6 +98,25 @@ func TestApplyGatewayDefaultsOptions(t *testing.T) {
Mode: lokiv1beta1.OpenshiftLogging,
},
},
+ Tenants: Tenants{
+ Configs: map[string]TenantConfig{
+ "application": {
+ OpenShift: &TenantOpenShiftSpec{
+ CookieSecret: "D31SJpSmPe6aUDTtU2zqAoW1gqEKoH5T",
+ },
+ },
+ "infrastructure": {
+ OpenShift: &TenantOpenShiftSpec{
+ CookieSecret: "i3N1paUy9JwNZIktni4kqXPuMvIHtHNe",
+ },
+ },
+ "audit": {
+ OpenShift: &TenantOpenShiftSpec{
+ CookieSecret: "6UssDXle7OHElqSW4M0DNRZ6JbaTjDM3",
+ },
+ },
+ },
+ },
OpenShiftOptions: openshift.Options{
BuildOpts: openshift.BuildOptions{
LokiStackName: "lokistack-ocp",
diff --git a/operator/internal/manifests/gateway_test.go b/operator/internal/manifests/gateway_test.go
index 14ee69349e6e7..76a168f084936 100644
--- a/operator/internal/manifests/gateway_test.go
+++ b/operator/internal/manifests/gateway_test.go
@@ -95,12 +95,14 @@ func TestGatewayConfigMap_ReturnsSHA1OfBinaryContents(t *testing.T) {
},
},
},
- TenantSecrets: []*TenantSecrets{
- {
- TenantName: "test",
- ClientID: "test",
- ClientSecret: "test",
- IssuerCAPath: "/tmp/test",
+ Tenants: Tenants{
+ Secrets: []*TenantSecrets{
+ {
+ TenantName: "test",
+ ClientID: "test",
+ ClientSecret: "test",
+ IssuerCAPath: "/tmp/test",
+ },
},
},
}
diff --git a/operator/internal/manifests/internal/config/build_test.go b/operator/internal/manifests/internal/config/build_test.go
index a2b2502a4a2f2..320a39c46f824 100644
--- a/operator/internal/manifests/internal/config/build_test.go
+++ b/operator/internal/manifests/internal/config/build_test.go
@@ -536,3 +536,248 @@ func TestBuild_ConfigAndRuntimeConfig_CreateLokiConfigFailed(t *testing.T) {
require.Empty(t, cfg)
require.Empty(t, rCfg)
}
+
+func TestBuild_ConfigAndRuntimeConfig_RulerConfigGenerated(t *testing.T) {
+ expCfg := `
+---
+auth_enabled: true
+chunk_store_config:
+ chunk_cache_config:
+ enable_fifocache: true
+ fifocache:
+ max_size_bytes: 500MB
+common:
+ storage:
+ s3:
+ s3: http://test.default.svc.cluster.local.:9000
+ bucketnames: loki
+ region: us-east
+ access_key_id: test
+ secret_access_key: test123
+ s3forcepathstyle: true
+compactor:
+ compaction_interval: 2h
+ working_directory: /tmp/loki/compactor
+frontend:
+ tail_proxy_url: http://loki-querier-http-lokistack-dev.default.svc.cluster.local:3100
+ compress_responses: true
+ max_outstanding_per_tenant: 256
+ log_queries_longer_than: 5s
+frontend_worker:
+ frontend_address: loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local:9095
+ grpc_client_config:
+ max_send_msg_size: 104857600
+ match_max_concurrent: true
+ingester:
+ chunk_block_size: 262144
+ chunk_encoding: snappy
+ chunk_idle_period: 1h
+ chunk_retain_period: 5m
+ chunk_target_size: 2097152
+ flush_op_timeout: 10m
+ lifecycler:
+ final_sleep: 0s
+ heartbeat_period: 5s
+ interface_names:
+ - eth0
+ join_after: 30s
+ num_tokens: 512
+ ring:
+ replication_factor: 1
+ heartbeat_timeout: 1m
+ max_chunk_age: 2h
+ max_transfer_retries: 0
+ wal:
+ enabled: true
+ dir: /tmp/wal
+ replay_memory_ceiling: 2500
+ingester_client:
+ grpc_client_config:
+ max_recv_msg_size: 67108864
+ remote_timeout: 1s
+# NOTE: Keep the order of keys as in Loki docs
+# to enable easy diffs when vendoring newer
+# Loki releases.
+# (See https://grafana.com/docs/loki/latest/configuration/#limits_config)
+#
+# Values for not exposed fields are taken from the grafana/loki production
+# configuration manifests.
+# (See https://github.com/grafana/loki/blob/main/production/ksonnet/loki/config.libsonnet)
+limits_config:
+ ingestion_rate_strategy: global
+ ingestion_rate_mb: 4
+ ingestion_burst_size_mb: 6
+ max_label_name_length: 1024
+ max_label_value_length: 2048
+ max_label_names_per_series: 30
+ reject_old_samples: true
+ reject_old_samples_max_age: 168h
+ creation_grace_period: 10m
+ enforce_metric_name: false
+ # Keep max_streams_per_user always to 0 to default
+ # using max_global_streams_per_user always.
+ # (See https://github.com/grafana/loki/blob/main/pkg/ingester/limiter.go#L73)
+ max_streams_per_user: 0
+ max_line_size: 256000
+ max_entries_limit_per_query: 5000
+ max_global_streams_per_user: 0
+ max_chunks_per_query: 2000000
+ max_query_length: 721h
+ max_query_parallelism: 32
+ max_query_series: 500
+ cardinality_limit: 100000
+ max_streams_matchers_per_query: 1000
+ max_cache_freshness_per_query: 10m
+ per_stream_rate_limit: 3MB
+ per_stream_rate_limit_burst: 15MB
+ split_queries_by_interval: 30m
+memberlist:
+ abort_if_cluster_join_fails: true
+ bind_port: 7946
+ join_members:
+ - loki-gossip-ring-lokistack-dev.default.svc.cluster.local:7946
+ max_join_backoff: 1m
+ max_join_retries: 10
+ min_join_backoff: 1s
+querier:
+ engine:
+ max_look_back_period: 30s
+ timeout: 3m
+ extra_query_delay: 0s
+ max_concurrent: 2
+ query_ingesters_within: 3h
+ query_timeout: 1m
+ tail_max_duration: 1h
+query_range:
+ align_queries_with_step: true
+ cache_results: true
+ max_retries: 5
+ results_cache:
+ cache:
+ enable_fifocache: true
+ fifocache:
+ max_size_bytes: 500MB
+ parallelise_shardable_queries: true
+schema_config:
+ configs:
+ - from: "2020-10-01"
+ index:
+ period: 24h
+ prefix: index_
+ object_store: s3
+ schema: v11
+ store: boltdb-shipper
+ruler:
+ enable_api: true
+ enable_sharding: true
+ wal:
+ dir: /tmp/wal
+ truncate_frequency: 60m
+ min_age: 5m
+ max_age: 4h
+ rule_path: /tmp/loki
+ storage:
+ type: local
+ local:
+ directory: /tmp/rules
+ ring:
+ kvstore:
+ store: memberlist
+server:
+ graceful_shutdown_timeout: 5s
+ grpc_server_min_time_between_pings: '10s'
+ grpc_server_ping_without_stream_allowed: true
+ grpc_server_max_concurrent_streams: 1000
+ grpc_server_max_recv_msg_size: 104857600
+ grpc_server_max_send_msg_size: 104857600
+ http_listen_port: 3100
+ http_server_idle_timeout: 120s
+ http_server_write_timeout: 1m
+ log_level: info
+storage_config:
+ boltdb_shipper:
+ active_index_directory: /tmp/loki/index
+ cache_location: /tmp/loki/index_cache
+ cache_ttl: 24h
+ resync_interval: 5m
+ shared_store: s3
+ index_gateway_client:
+ server_address: dns:///loki-index-gateway-grpc-lokistack-dev.default.svc.cluster.local:9095
+tracing:
+ enabled: false
+analytics:
+ reporting_enabled: true
+`
+ expRCfg := `
+---
+overrides:
+`
+ opts := Options{
+ Stack: lokiv1beta1.LokiStackSpec{
+ ReplicationFactor: 1,
+ Limits: &lokiv1beta1.LimitsSpec{
+ Global: &lokiv1beta1.LimitsTemplateSpec{
+ IngestionLimits: &lokiv1beta1.IngestionLimitSpec{
+ IngestionRate: 4,
+ IngestionBurstSize: 6,
+ MaxLabelNameLength: 1024,
+ MaxLabelValueLength: 2048,
+ MaxLabelNamesPerSeries: 30,
+ MaxGlobalStreamsPerTenant: 0,
+ MaxLineSize: 256000,
+ },
+ QueryLimits: &lokiv1beta1.QueryLimitSpec{
+ MaxEntriesLimitPerQuery: 5000,
+ MaxChunksPerQuery: 2000000,
+ MaxQuerySeries: 500,
+ },
+ },
+ },
+ },
+ Namespace: "test-ns",
+ Name: "test",
+ FrontendWorker: Address{
+ FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
+ Port: 9095,
+ },
+ GossipRing: Address{
+ FQDN: "loki-gossip-ring-lokistack-dev.default.svc.cluster.local",
+ Port: 7946,
+ },
+ Querier: Address{
+ FQDN: "loki-querier-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ },
+ IndexGateway: Address{
+ FQDN: "loki-index-gateway-grpc-lokistack-dev.default.svc.cluster.local",
+ Port: 9095,
+ },
+ Ruler: Ruler{
+ Enabled: true,
+ RulesStorageDirectory: "/tmp/rules",
+ },
+ StorageDirectory: "/tmp/loki",
+ MaxConcurrent: MaxConcurrent{
+ AvailableQuerierCPUCores: 2,
+ },
+ WriteAheadLog: WriteAheadLog{
+ Directory: "/tmp/wal",
+ IngesterMemoryRequest: 5000,
+ },
+ ObjectStorage: storage.Options{
+ SharedStore: lokiv1beta1.ObjectStorageSecretS3,
+ S3: &storage.S3StorageConfig{
+ Endpoint: "http://test.default.svc.cluster.local.:9000",
+ Region: "us-east",
+ Buckets: "loki",
+ AccessKeyID: "test",
+ AccessKeySecret: "test123",
+ },
+ },
+ EnableRemoteReporting: true,
+ }
+ cfg, rCfg, err := Build(opts)
+ require.NoError(t, err)
+ require.YAMLEq(t, expCfg, string(cfg))
+ require.YAMLEq(t, expRCfg, string(rCfg))
+}
diff --git a/operator/internal/manifests/internal/config/loki-config.yaml b/operator/internal/manifests/internal/config/loki-config.yaml
index fa337c69a473a..8d186d4b5da56 100644
--- a/operator/internal/manifests/internal/config/loki-config.yaml
+++ b/operator/internal/manifests/internal/config/loki-config.yaml
@@ -156,6 +156,25 @@ schema_config:
object_store: {{ .ObjectStorage.SharedStore }}
schema: v11
store: boltdb-shipper
+
+{{ if .Ruler.Enabled }}
+ruler:
+ enable_api: true
+ enable_sharding: true
+ wal:
+ dir: {{ .WriteAheadLog.Directory }}
+ truncate_frequency: 60m
+ min_age: 5m
+ max_age: 4h
+ rule_path: {{ .StorageDirectory }}
+ storage:
+ type: local
+ local:
+ directory: {{ .Ruler.RulesStorageDirectory }}
+ ring:
+ kvstore:
+ store: memberlist
+{{ end }}
server:
graceful_shutdown_timeout: 5s
grpc_server_min_time_between_pings: '10s'
diff --git a/operator/internal/manifests/internal/config/options.go b/operator/internal/manifests/internal/config/options.go
index 0f0c2338f779f..63efe999722da 100644
--- a/operator/internal/manifests/internal/config/options.go
+++ b/operator/internal/manifests/internal/config/options.go
@@ -18,6 +18,7 @@ type Options struct {
GossipRing Address
Querier Address
IndexGateway Address
+ Ruler Ruler
StorageDirectory string
MaxConcurrent MaxConcurrent
WriteAheadLog WriteAheadLog
@@ -34,6 +35,12 @@ type Address struct {
Port int
}
+// Ruler configuration
+type Ruler struct {
+ Enabled bool
+ RulesStorageDirectory string
+}
+
// MaxConcurrent for concurrent query processing.
type MaxConcurrent struct {
AvailableQuerierCPUCores int32
diff --git a/operator/internal/manifests/internal/gateway/options.go b/operator/internal/manifests/internal/gateway/options.go
index 0df4f586645ed..f86827ab59d3a 100644
--- a/operator/internal/manifests/internal/gateway/options.go
+++ b/operator/internal/manifests/internal/gateway/options.go
@@ -15,7 +15,6 @@ type Options struct {
OpenShiftOptions openshift.Options
TenantSecrets []*Secret
- TenantConfigMap map[string]TenantData
}
// Secret for clientID, clientSecret and issuerCAPath for tenant's authentication.
@@ -25,9 +24,3 @@ type Secret struct {
ClientSecret string
IssuerCAPath string
}
-
-// TenantData defines the existing tenantID and cookieSecret for lokistack reconcile.
-type TenantData struct {
- TenantID string
- CookieSecret string
-}
diff --git a/operator/internal/manifests/internal/rules/marshal.go b/operator/internal/manifests/internal/rules/marshal.go
new file mode 100644
index 0000000000000..0fa4f9a62e24c
--- /dev/null
+++ b/operator/internal/manifests/internal/rules/marshal.go
@@ -0,0 +1,43 @@
+package rules
+
+import (
+ "github.com/ViaQ/logerr/v2/kverrors"
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "gopkg.in/yaml.v2"
+)
+
+type alertingRuleSpec struct {
+	Groups []*lokiv1beta1.AlertingRuleGroup `yaml:"groups"`
+}
+
+type recordingRuleSpec struct {
+	Groups []*lokiv1beta1.RecordingRuleGroup `yaml:"groups"`
+}
+
+// MarshalAlertingRule returns the alerting rule groups marshaled into YAML or an error.
+func MarshalAlertingRule(a lokiv1beta1.AlertingRule) (string, error) {
+ ar := alertingRuleSpec{
+ Groups: a.Spec.Groups,
+ }
+
+ content, err := yaml.Marshal(ar)
+ if err != nil {
+ return "", kverrors.Wrap(err, "failed to marshal alerting rule", "name", a.Name, "namespace", a.Namespace)
+ }
+
+ return string(content), nil
+}
+
+// MarshalRecordingRule returns the recording rule groups marshaled into YAML or an error.
+func MarshalRecordingRule(r lokiv1beta1.RecordingRule) (string, error) {
+	rr := recordingRuleSpec{
+		Groups: r.Spec.Groups,
+	}
+
+	content, err := yaml.Marshal(rr)
+	if err != nil {
+		return "", kverrors.Wrap(err, "failed to marshal recording rule", "name", r.Name, "namespace", r.Namespace)
+ }
+
+ return string(content), nil
+}
diff --git a/operator/internal/manifests/internal/rules/marshal_test.go b/operator/internal/manifests/internal/rules/marshal_test.go
new file mode 100644
index 0000000000000..e45ebcb29a0fb
--- /dev/null
+++ b/operator/internal/manifests/internal/rules/marshal_test.go
@@ -0,0 +1,147 @@
+package rules_test
+
+import (
+ "fmt"
+ "testing"
+
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/internal/manifests/internal/rules"
+ "github.com/stretchr/testify/require"
+)
+
+func TestMarshalAlertingRule(t *testing.T) {
+ expCfg := `
+groups:
+ - name: an-alert
+ interval: 1m
+ limit: 2
+ rules:
+ - expr: |-
+ sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
+ /
+ sum(rate({app="foo", env="production"}[5m])) by (job)
+ > 0.05
+ alert: HighPercentageErrors
+ for: 10m
+ annotations:
+ playbook: http://link/to/playbook
+ summary: High Percentage Latency
+ labels:
+ environment: production
+ severity: page
+ - expr: |-
+ sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
+ /
+ sum(rate({app="foo", env="production"}[5m])) by (job)
+ > 0.05
+ alert: LowPercentageErrors
+ for: 10m
+ annotations:
+ playbook: http://link/to/playbook
+ summary: Low Percentage Latency
+ labels:
+ environment: production
+ severity: low
+`
+
+ a := lokiv1beta1.AlertingRule{
+ Spec: lokiv1beta1.AlertingRuleSpec{
+ Groups: []*lokiv1beta1.AlertingRuleGroup{
+ {
+ Name: "an-alert",
+ Interval: lokiv1beta1.PrometheusDuration("1m"),
+ Limit: 2,
+ Rules: []*lokiv1beta1.AlertingRuleGroupSpec{
+ {
+ Alert: "HighPercentageErrors",
+ Expr: `sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
+ /
+sum(rate({app="foo", env="production"}[5m])) by (job)
+ > 0.05`,
+ For: lokiv1beta1.PrometheusDuration("10m"),
+ Labels: map[string]string{
+ "severity": "page",
+ "environment": "production",
+ },
+ Annotations: map[string]string{
+ "summary": "High Percentage Latency",
+ "playbook": "http://link/to/playbook",
+ },
+ },
+ {
+ Alert: "LowPercentageErrors",
+ Expr: `sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
+ /
+sum(rate({app="foo", env="production"}[5m])) by (job)
+ > 0.05`,
+ For: lokiv1beta1.PrometheusDuration("10m"),
+ Labels: map[string]string{
+ "severity": "low",
+ "environment": "production",
+ },
+ Annotations: map[string]string{
+ "summary": "Low Percentage Latency",
+ "playbook": "http://link/to/playbook",
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+
+ cfg, err := rules.MarshalAlertingRule(a)
+ require.NoError(t, err)
+ require.YAMLEq(t, expCfg, cfg)
+}
+
+func TestMarshalRecordingRule(t *testing.T) {
+ expCfg := `
+groups:
+ - name: a-recording
+ interval: 2d
+ limit: 1
+ rules:
+ - expr: |-
+ sum(
+ rate({container="nginx"}[1m])
+ )
+ record: nginx:requests:rate1m
+ - expr: |-
+ sum(
+ rate({container="banana"}[5m])
+ )
+ record: banana:requests:rate5m
+`
+
+ r := lokiv1beta1.RecordingRule{
+ Spec: lokiv1beta1.RecordingRuleSpec{
+ Groups: []*lokiv1beta1.RecordingRuleGroup{
+ {
+ Name: "a-recording",
+ Interval: lokiv1beta1.PrometheusDuration("2d"),
+ Limit: 1,
+ Rules: []*lokiv1beta1.RecordingRuleGroupSpec{
+ {
+ Expr: `sum(
+ rate({container="nginx"}[1m])
+)`,
+ Record: "nginx:requests:rate1m",
+ },
+ {
+ Expr: `sum(
+ rate({container="banana"}[5m])
+)`,
+ Record: "banana:requests:rate5m",
+ },
+ },
+ },
+ },
+ },
+ }
+
+ cfg, err := rules.MarshalRecordingRule(r)
+ fmt.Print(cfg)
+ require.NoError(t, err)
+ require.YAMLEq(t, expCfg, cfg)
+}
diff --git a/operator/internal/manifests/internal/sizes.go b/operator/internal/manifests/internal/sizes.go
index 82616db8be48c..8b9e257e04029 100644
--- a/operator/internal/manifests/internal/sizes.go
+++ b/operator/internal/manifests/internal/sizes.go
@@ -11,6 +11,7 @@ type ComponentResources struct {
IndexGateway ResourceRequirements
Ingester ResourceRequirements
Compactor ResourceRequirements
+ Ruler ResourceRequirements
WALStorage ResourceRequirements
// these two don't need a PVCSize
Querier corev1.ResourceRequirements
@@ -35,6 +36,13 @@ var ResourceRequirementsTable = map[lokiv1beta1.LokiStackSizeType]ComponentResou
corev1.ResourceMemory: resource.MustParse("3Gi"),
},
},
+ Ruler: ResourceRequirements{
+ Requests: map[corev1.ResourceName]resource.Quantity{
+ corev1.ResourceCPU: resource.MustParse("1"),
+ corev1.ResourceMemory: resource.MustParse("2Gi"),
+ },
+ PVCSize: resource.MustParse("10Gi"),
+ },
Ingester: ResourceRequirements{
PVCSize: resource.MustParse("10Gi"),
Requests: map[corev1.ResourceName]resource.Quantity{
@@ -85,6 +93,13 @@ var ResourceRequirementsTable = map[lokiv1beta1.LokiStackSizeType]ComponentResou
corev1.ResourceMemory: resource.MustParse("4Gi"),
},
},
+ Ruler: ResourceRequirements{
+ Requests: map[corev1.ResourceName]resource.Quantity{
+ corev1.ResourceCPU: resource.MustParse("4"),
+ corev1.ResourceMemory: resource.MustParse("8Gi"),
+ },
+ PVCSize: resource.MustParse("10Gi"),
+ },
Ingester: ResourceRequirements{
PVCSize: resource.MustParse("10Gi"),
Requests: map[corev1.ResourceName]resource.Quantity{
@@ -135,6 +150,13 @@ var ResourceRequirementsTable = map[lokiv1beta1.LokiStackSizeType]ComponentResou
corev1.ResourceMemory: resource.MustParse("10Gi"),
},
},
+ Ruler: ResourceRequirements{
+ Requests: map[corev1.ResourceName]resource.Quantity{
+ corev1.ResourceCPU: resource.MustParse("8"),
+ corev1.ResourceMemory: resource.MustParse("16Gi"),
+ },
+ PVCSize: resource.MustParse("10Gi"),
+ },
Ingester: ResourceRequirements{
PVCSize: resource.MustParse("10Gi"),
Requests: map[corev1.ResourceName]resource.Quantity{
@@ -227,6 +249,9 @@ var StackSizeTable = map[lokiv1beta1.LokiStackSizeType]lokiv1beta1.LokiStackSpec
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
},
},
@@ -276,6 +301,9 @@ var StackSizeTable = map[lokiv1beta1.LokiStackSizeType]lokiv1beta1.LokiStackSpec
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 2,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 2,
+ },
},
},
@@ -325,6 +353,9 @@ var StackSizeTable = map[lokiv1beta1.LokiStackSizeType]lokiv1beta1.LokiStackSpec
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 2,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 2,
+ },
},
},
}
diff --git a/operator/internal/manifests/mutate.go b/operator/internal/manifests/mutate.go
index e13a46c88c1da..37b6d148b3851 100644
--- a/operator/internal/manifests/mutate.go
+++ b/operator/internal/manifests/mutate.go
@@ -121,6 +121,7 @@ func mergeWithOverride(dst, src interface{}) error {
func mutateConfigMap(existing, desired *corev1.ConfigMap) {
existing.BinaryData = desired.BinaryData
+ existing.Data = desired.Data
}
func mutateServiceAccount(existing, desired *corev1.ServiceAccount) {
diff --git a/operator/internal/manifests/mutate_test.go b/operator/internal/manifests/mutate_test.go
index 4699780080572..486b46d12d91d 100644
--- a/operator/internal/manifests/mutate_test.go
+++ b/operator/internal/manifests/mutate_test.go
@@ -76,9 +76,7 @@ func TestGetMutateFunc_MutateConfigMap(t *testing.T) {
require.Equal(t, got.Labels, want.Labels)
require.Equal(t, got.Annotations, want.Annotations)
require.Equal(t, got.BinaryData, got.BinaryData)
-
- // Ensure not mutated
- require.NotEqual(t, got.Data, want.Data)
+ require.Equal(t, got.Data, want.Data)
}
func TestGetMutateFunc_MutateServiceSpec(t *testing.T) {
diff --git a/operator/internal/manifests/node_placement_test.go b/operator/internal/manifests/node_placement_test.go
index 857524b2c0c6c..0deabba6cf5b3 100644
--- a/operator/internal/manifests/node_placement_test.go
+++ b/operator/internal/manifests/node_placement_test.go
@@ -43,6 +43,10 @@ func TestTolerationsAreSetForEachComponent(t *testing.T) {
Tolerations: tolerations,
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Tolerations: tolerations,
+ Replicas: 1,
+ },
},
},
ObjectStorage: storage.Options{},
@@ -69,6 +73,9 @@ func TestTolerationsAreSetForEachComponent(t *testing.T) {
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
},
},
ObjectStorage: storage.Options{},
@@ -103,6 +110,11 @@ func TestTolerationsAreSetForEachComponent(t *testing.T) {
assert.Equal(t, tolerations, NewIndexGatewayStatefulSet(optsWithTolerations).Spec.Template.Spec.Tolerations)
assert.Empty(t, NewIndexGatewayStatefulSet(optsWithoutTolerations).Spec.Template.Spec.Tolerations)
})
+
+ t.Run("ruler", func(t *testing.T) {
+ assert.Equal(t, tolerations, NewRulerStatefulSet(optsWithTolerations).Spec.Template.Spec.Tolerations)
+ assert.Empty(t, NewRulerStatefulSet(optsWithoutTolerations).Spec.Template.Spec.Tolerations)
+ })
}
func TestNodeSelectorsAreSetForEachComponent(t *testing.T) {
@@ -134,6 +146,10 @@ func TestNodeSelectorsAreSetForEachComponent(t *testing.T) {
NodeSelector: nodeSelectors,
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ NodeSelector: nodeSelectors,
+ Replicas: 1,
+ },
},
},
ObjectStorage: storage.Options{},
@@ -160,6 +176,9 @@ func TestNodeSelectorsAreSetForEachComponent(t *testing.T) {
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
},
},
ObjectStorage: storage.Options{},
@@ -194,4 +213,9 @@ func TestNodeSelectorsAreSetForEachComponent(t *testing.T) {
assert.Equal(t, nodeSelectors, NewIndexGatewayStatefulSet(optsWithNodeSelectors).Spec.Template.Spec.NodeSelector)
assert.Empty(t, NewIndexGatewayStatefulSet(optsWithoutNodeSelectors).Spec.Template.Spec.NodeSelector)
})
+
+ t.Run("ruler", func(t *testing.T) {
+ assert.Equal(t, nodeSelectors, NewRulerStatefulSet(optsWithNodeSelectors).Spec.Template.Spec.NodeSelector)
+ assert.Empty(t, NewRulerStatefulSet(optsWithoutNodeSelectors).Spec.Template.Spec.NodeSelector)
+ })
}
diff --git a/operator/internal/manifests/openshift/options.go b/operator/internal/manifests/openshift/options.go
index 34b541a3b03c0..d30c345fca18d 100644
--- a/operator/internal/manifests/openshift/options.go
+++ b/operator/internal/manifests/openshift/options.go
@@ -62,23 +62,18 @@ func NewOptions(
var authn []AuthenticationSpec
for _, name := range defaultTenants {
- if tenantConfigMap != nil {
- authn = append(authn, AuthenticationSpec{
- TenantName: name,
- TenantID: name,
- ServiceAccount: gwName,
- RedirectURL: fmt.Sprintf("http://%s/openshift/%s/callback", host, name),
- CookieSecret: tenantConfigMap[name].CookieSecret,
- })
- } else {
- authn = append(authn, AuthenticationSpec{
- TenantName: name,
- TenantID: name,
- ServiceAccount: gwName,
- RedirectURL: fmt.Sprintf("http://%s/openshift/%s/callback", host, name),
- CookieSecret: newCookieSecret(),
- })
+ cookieSecret := tenantConfigMap[name].CookieSecret
+ if cookieSecret == "" {
+ cookieSecret = newCookieSecret()
}
+
+ authn = append(authn, AuthenticationSpec{
+ TenantName: name,
+ TenantID: name,
+ ServiceAccount: gwName,
+ RedirectURL: fmt.Sprintf("http://%s/openshift/%s/callback", host, name),
+ CookieSecret: cookieSecret,
+ })
}
return Options{
diff --git a/operator/internal/manifests/options.go b/operator/internal/manifests/options.go
index 2066c9488b95e..7c056b5153ecc 100644
--- a/operator/internal/manifests/options.go
+++ b/operator/internal/manifests/options.go
@@ -22,11 +22,14 @@ type Options struct {
Stack lokiv1beta1.LokiStackSpec
ResourceRequirements internal.ComponentResources
+ AlertingRules []lokiv1beta1.AlertingRule
+ RecordingRules []lokiv1beta1.RecordingRule
+
ObjectStorage storage.Options
OpenShiftOptions openshift.Options
- TenantSecrets []*TenantSecrets
- TenantConfigMap map[string]openshift.TenantData
+
+ Tenants Tenants
}
// FeatureFlags contains flags that activate various features
@@ -40,6 +43,14 @@ type FeatureFlags struct {
EnableGrafanaLabsStats bool
}
+// Tenants contains the configuration per tenant and secrets for authn/authz.
+// Secrets are required only for the static and dynamic modes to reconcile the OIDC provider.
+// Configs are required for all modes to reconcile rules and gateway configuration.
+type Tenants struct {
+ Secrets []*TenantSecrets
+ Configs map[string]TenantConfig
+}
+
// TenantSecrets for clientID, clientSecret and issuerCAPath for tenant's authentication.
type TenantSecrets struct {
TenantName string
@@ -47,3 +58,22 @@ type TenantSecrets struct {
ClientSecret string
IssuerCAPath string
}
+
+// TenantConfig holds the tenant authorization configuration.
+type TenantConfig struct {
+ OIDC *TenantOIDCSpec
+ OPA *TenantOPASpec
+ OpenShift *TenantOpenShiftSpec
+ RuleFiles []string
+}
+
+// TenantOIDCSpec stub config for OIDC configuration options (e.g. used in static or dynamic mode)
+type TenantOIDCSpec struct{}
+
+// TenantOPASpec stub config for OPA configuration options (e.g. used in dynamic mode)
+type TenantOPASpec struct{}
+
+// TenantOpenShiftSpec config for OpenShift authentication options (e.g. used in openshift-logging mode)
+type TenantOpenShiftSpec struct {
+ CookieSecret string
+}
diff --git a/operator/internal/manifests/ruler.go b/operator/internal/manifests/ruler.go
new file mode 100644
index 0000000000000..b4c3a088527cd
--- /dev/null
+++ b/operator/internal/manifests/ruler.go
@@ -0,0 +1,278 @@
+package manifests
+
+import (
+ "fmt"
+ "path"
+
+ "github.com/grafana/loki/operator/internal/manifests/internal/config"
+ appsv1 "k8s.io/api/apps/v1"
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/resource"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/apimachinery/pkg/util/intstr"
+ "k8s.io/utils/pointer"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+// BuildRuler returns a list of k8s objects for Loki Stack Ruler
+func BuildRuler(opts Options) ([]client.Object, error) {
+ statefulSet := NewRulerStatefulSet(opts)
+ if opts.Flags.EnableTLSServiceMonitorConfig {
+ if err := configureRulerServiceMonitorPKI(statefulSet, opts.Name); err != nil {
+ return nil, err
+ }
+ }
+
+ return []client.Object{
+ statefulSet,
+ NewRulerGRPCService(opts),
+ NewRulerHTTPService(opts),
+ }, nil
+}
+
+// NewRulerStatefulSet creates a StatefulSet object for the ruler
+func NewRulerStatefulSet(opts Options) *appsv1.StatefulSet {
+ podSpec := corev1.PodSpec{
+ Volumes: []corev1.Volume{
+ {
+ Name: configVolumeName,
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ DefaultMode: &defaultConfigMapMode,
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: lokiConfigMapName(opts.Name),
+ },
+ },
+ },
+ },
+ {
+ Name: rulesStorageVolumeName,
+ VolumeSource: corev1.VolumeSource{
+ ConfigMap: &corev1.ConfigMapVolumeSource{
+ DefaultMode: &defaultConfigMapMode,
+ LocalObjectReference: corev1.LocalObjectReference{
+ Name: RulesConfigMapName(opts.Name),
+ },
+ Items: ruleVolumeItems(opts.Tenants.Configs),
+ },
+ },
+ },
+ },
+ Containers: []corev1.Container{
+ {
+ Image: opts.Image,
+ Name: "loki-ruler",
+ Resources: corev1.ResourceRequirements{
+ Limits: opts.ResourceRequirements.Ruler.Limits,
+ Requests: opts.ResourceRequirements.Ruler.Requests,
+ },
+ Args: []string{
+ "-target=ruler",
+ fmt.Sprintf("-config.file=%s", path.Join(config.LokiConfigMountDir, config.LokiConfigFileName)),
+ fmt.Sprintf("-runtime-config.file=%s", path.Join(config.LokiConfigMountDir, config.LokiRuntimeConfigFileName)),
+ },
+ ReadinessProbe: lokiReadinessProbe(),
+ LivenessProbe: lokiLivenessProbe(),
+ Ports: []corev1.ContainerPort{
+ {
+ Name: lokiHTTPPortName,
+ ContainerPort: httpPort,
+ Protocol: protocolTCP,
+ },
+ {
+ Name: lokiGRPCPortName,
+ ContainerPort: grpcPort,
+ Protocol: protocolTCP,
+ },
+ {
+ Name: lokiGossipPortName,
+ ContainerPort: gossipPort,
+ Protocol: protocolTCP,
+ },
+ },
+ VolumeMounts: []corev1.VolumeMount{
+ {
+ Name: configVolumeName,
+ ReadOnly: false,
+ MountPath: config.LokiConfigMountDir,
+ },
+ {
+ Name: rulesStorageVolumeName,
+ ReadOnly: false,
+ MountPath: rulesStorageDirectory,
+ },
+ {
+ Name: walVolumeName,
+ ReadOnly: false,
+ MountPath: walDirectory,
+ },
+ {
+ Name: storageVolumeName,
+ ReadOnly: false,
+ MountPath: dataDirectory,
+ },
+ },
+ TerminationMessagePath: "/dev/termination-log",
+ TerminationMessagePolicy: "File",
+ ImagePullPolicy: "IfNotPresent",
+ },
+ },
+ }
+
+ if opts.Stack.Template != nil && opts.Stack.Template.Ruler != nil {
+ podSpec.Tolerations = opts.Stack.Template.Ruler.Tolerations
+ podSpec.NodeSelector = opts.Stack.Template.Ruler.NodeSelector
+ }
+
+ l := ComponentLabels(LabelRulerComponent, opts.Name)
+ a := commonAnnotations(opts.ConfigSHA1)
+
+ return &appsv1.StatefulSet{
+ TypeMeta: metav1.TypeMeta{
+ Kind: "StatefulSet",
+ APIVersion: appsv1.SchemeGroupVersion.String(),
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: RulerName(opts.Name),
+ Labels: l,
+ },
+ Spec: appsv1.StatefulSetSpec{
+ PodManagementPolicy: appsv1.ParallelPodManagement,
+ UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
+ Type: appsv1.RollingUpdateStatefulSetStrategyType,
+ },
+ RevisionHistoryLimit: pointer.Int32Ptr(10),
+ Replicas: pointer.Int32Ptr(opts.Stack.Template.Ruler.Replicas),
+ Selector: &metav1.LabelSelector{
+ MatchLabels: labels.Merge(l, GossipLabels()),
+ },
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: fmt.Sprintf("loki-ruler-%s", opts.Name),
+ Labels: labels.Merge(l, GossipLabels()),
+ Annotations: a,
+ },
+ Spec: podSpec,
+ },
+ VolumeClaimTemplates: []corev1.PersistentVolumeClaim{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: l,
+ Name: storageVolumeName,
+ },
+ Spec: corev1.PersistentVolumeClaimSpec{
+ AccessModes: []corev1.PersistentVolumeAccessMode{
+ // TODO: should we verify that this is possible with the given storage class first?
+ corev1.ReadWriteOnce,
+ },
+ Resources: corev1.ResourceRequirements{
+ Requests: map[corev1.ResourceName]resource.Quantity{
+ corev1.ResourceStorage: opts.ResourceRequirements.Ruler.PVCSize,
+ },
+ },
+ StorageClassName: pointer.StringPtr(opts.Stack.StorageClassName),
+ VolumeMode: &volumeFileSystemMode,
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: l,
+ Name: walVolumeName,
+ },
+ Spec: corev1.PersistentVolumeClaimSpec{
+ AccessModes: []corev1.PersistentVolumeAccessMode{
+ // TODO: should we verify that this is possible with the given storage class first?
+ corev1.ReadWriteOnce,
+ },
+ Resources: corev1.ResourceRequirements{
+ Requests: map[corev1.ResourceName]resource.Quantity{
+ corev1.ResourceStorage: opts.ResourceRequirements.WALStorage.PVCSize,
+ },
+ },
+ StorageClassName: pointer.StringPtr(opts.Stack.StorageClassName),
+ VolumeMode: &volumeFileSystemMode,
+ },
+ },
+ },
+ },
+ }
+}
+
+// NewRulerGRPCService creates a k8s service for the ruler gRPC endpoint
+func NewRulerGRPCService(opts Options) *corev1.Service {
+ l := ComponentLabels(LabelRulerComponent, opts.Name)
+
+ return &corev1.Service{
+ TypeMeta: metav1.TypeMeta{
+ Kind: "Service",
+ APIVersion: corev1.SchemeGroupVersion.String(),
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: serviceNameRulerGRPC(opts.Name),
+ Labels: l,
+ },
+ Spec: corev1.ServiceSpec{
+ ClusterIP: "None",
+ Ports: []corev1.ServicePort{
+ {
+ Name: lokiGRPCPortName,
+ Port: grpcPort,
+ Protocol: protocolTCP,
+ TargetPort: intstr.IntOrString{IntVal: grpcPort},
+ },
+ },
+ Selector: l,
+ },
+ }
+}
+
+// NewRulerHTTPService creates a k8s service for the ruler HTTP endpoint
+func NewRulerHTTPService(opts Options) *corev1.Service {
+ serviceName := serviceNameRulerHTTP(opts.Name)
+ l := ComponentLabels(LabelRulerComponent, opts.Name)
+ a := serviceAnnotations(serviceName, opts.Flags.EnableCertificateSigningService)
+
+ return &corev1.Service{
+ TypeMeta: metav1.TypeMeta{
+ Kind: "Service",
+ APIVersion: corev1.SchemeGroupVersion.String(),
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: serviceName,
+ Labels: l,
+ Annotations: a,
+ },
+ Spec: corev1.ServiceSpec{
+ Ports: []corev1.ServicePort{
+ {
+ Name: lokiHTTPPortName,
+ Port: httpPort,
+ Protocol: protocolTCP,
+ TargetPort: intstr.IntOrString{IntVal: httpPort},
+ },
+ },
+ Selector: l,
+ },
+ }
+}
+
+func configureRulerServiceMonitorPKI(statefulSet *appsv1.StatefulSet, stackName string) error {
+ serviceName := serviceNameRulerHTTP(stackName)
+ return configureServiceMonitorPKI(&statefulSet.Spec.Template.Spec, serviceName)
+}
+
+func ruleVolumeItems(tenants map[string]TenantConfig) []corev1.KeyToPath {
+ var items []corev1.KeyToPath
+
+ for id, tenant := range tenants {
+ for _, rule := range tenant.RuleFiles {
+ items = append(items, corev1.KeyToPath{
+ Key: rule,
+ Path: fmt.Sprintf("%s/%s", id, rule),
+ })
+ }
+ }
+
+ return items
+}
diff --git a/operator/internal/manifests/ruler_test.go b/operator/internal/manifests/ruler_test.go
new file mode 100644
index 0000000000000..c906d4edfcbf2
--- /dev/null
+++ b/operator/internal/manifests/ruler_test.go
@@ -0,0 +1,93 @@
+package manifests_test
+
+import (
+ "testing"
+
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/internal/manifests"
+ "github.com/stretchr/testify/require"
+ corev1 "k8s.io/api/core/v1"
+)
+
+func TestNewRulerStatefulSet_HasTemplateConfigHashAnnotation(t *testing.T) {
+ ss := manifests.NewRulerStatefulSet(manifests.Options{
+ Name: "abcd",
+ Namespace: "efgh",
+ ConfigSHA1: "deadbeef",
+ Stack: lokiv1beta1.LokiStackSpec{
+ StorageClassName: "standard",
+ Template: &lokiv1beta1.LokiTemplateSpec{
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ },
+ },
+ })
+
+ expected := "loki.grafana.com/config-hash"
+ annotations := ss.Spec.Template.Annotations
+ require.Contains(t, annotations, expected)
+ require.Equal(t, annotations[expected], "deadbeef")
+}
+
+func TestNewRulerStatefulSet_SelectorMatchesLabels(t *testing.T) {
+ // You must set the .spec.selector field of a StatefulSet to match the labels of
+ // its .spec.template.metadata.labels. Prior to Kubernetes 1.8, the
+ // .spec.selector field was defaulted when omitted. In 1.8 and later versions,
+ // failing to specify a matching Pod Selector will result in a validation error
+ // during StatefulSet creation.
+ // See https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-selector
+ sts := manifests.NewRulerStatefulSet(manifests.Options{
+ Name: "abcd",
+ Namespace: "efgh",
+ Stack: lokiv1beta1.LokiStackSpec{
+ StorageClassName: "standard",
+ Template: &lokiv1beta1.LokiTemplateSpec{
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ },
+ },
+ })
+
+ l := sts.Spec.Template.GetObjectMeta().GetLabels()
+ for key, value := range sts.Spec.Selector.MatchLabels {
+ require.Contains(t, l, key)
+ require.Equal(t, l[key], value)
+ }
+}
+
+func TestNewRulerStatefulSet_MountsRulesInPerTenantIDSubDirectories(t *testing.T) {
+ sts := manifests.NewRulerStatefulSet(manifests.Options{
+ Name: "abcd",
+ Namespace: "efgh",
+ Stack: lokiv1beta1.LokiStackSpec{
+ StorageClassName: "standard",
+ Template: &lokiv1beta1.LokiTemplateSpec{
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ },
+ },
+ Tenants: manifests.Tenants{
+ Configs: map[string]manifests.TenantConfig{
+ "tenant-a": {RuleFiles: []string{"rule-a-alerts.yaml", "rule-b-recs.yaml"}},
+ "tenant-b": {RuleFiles: []string{"rule-a-alerts.yaml", "rule-b-recs.yaml"}},
+ },
+ },
+ })
+
+ vs := sts.Spec.Template.Spec.Volumes
+
+ var (
+ volumeNames []string
+ volumeItems []corev1.KeyToPath
+ )
+ for _, v := range vs {
+ volumeNames = append(volumeNames, v.Name)
+ volumeItems = append(volumeItems, v.ConfigMap.Items...)
+ }
+
+ require.NotEmpty(t, volumeNames)
+ require.NotEmpty(t, volumeItems)
+}
diff --git a/operator/internal/manifests/rules_config.go b/operator/internal/manifests/rules_config.go
new file mode 100644
index 0000000000000..90c9332871e62
--- /dev/null
+++ b/operator/internal/manifests/rules_config.go
@@ -0,0 +1,58 @@
+package manifests
+
+import (
+ "fmt"
+
+ "github.com/grafana/loki/operator/internal/manifests/internal/rules"
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// RulesConfigMap returns a ConfigMap resource that contains
+// all Loki alerting and recording rules as YAML data.
+func RulesConfigMap(opts *Options) (*corev1.ConfigMap, error) {
+ data := make(map[string]string)
+
+ for _, r := range opts.AlertingRules {
+ c, err := rules.MarshalAlertingRule(r)
+ if err != nil {
+ return nil, err
+ }
+
+ key := fmt.Sprintf("%s-%s-%s.yaml", r.Namespace, r.Name, r.UID)
+ if tenant, ok := opts.Tenants.Configs[r.Spec.TenantID]; ok {
+ tenant.RuleFiles = append(tenant.RuleFiles, key)
+ data[key] = c
+ opts.Tenants.Configs[r.Spec.TenantID] = tenant
+ }
+ }
+
+ for _, r := range opts.RecordingRules {
+ c, err := rules.MarshalRecordingRule(r)
+ if err != nil {
+ return nil, err
+ }
+
+ key := fmt.Sprintf("%s-%s-%s.yaml", r.Namespace, r.Name, r.UID)
+ if tenant, ok := opts.Tenants.Configs[r.Spec.TenantID]; ok {
+ tenant.RuleFiles = append(tenant.RuleFiles, key)
+ data[key] = c
+ opts.Tenants.Configs[r.Spec.TenantID] = tenant
+ }
+ }
+
+ l := commonLabels(opts.Name)
+
+ return &corev1.ConfigMap{
+ TypeMeta: metav1.TypeMeta{
+ Kind: "ConfigMap",
+ APIVersion: corev1.SchemeGroupVersion.String(),
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: RulesConfigMapName(opts.Name),
+ Namespace: opts.Namespace,
+ Labels: l,
+ },
+ Data: data,
+ }, nil
+}
diff --git a/operator/internal/manifests/rules_config_test.go b/operator/internal/manifests/rules_config_test.go
new file mode 100644
index 0000000000000..d932aeddf9cb3
--- /dev/null
+++ b/operator/internal/manifests/rules_config_test.go
@@ -0,0 +1,123 @@
+package manifests_test
+
+import (
+ "fmt"
+ "testing"
+
+ lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
+ "github.com/grafana/loki/operator/internal/manifests"
+ "github.com/stretchr/testify/require"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+)
+
+func TestRulesConfigMap_ReturnsDataEntriesPerRule(t *testing.T) {
+ cm, err := manifests.RulesConfigMap(testOptions())
+ require.NoError(t, err)
+ require.NotNil(t, cm)
+ require.Len(t, cm.Data, 4)
+ require.Contains(t, cm.Data, "dev-alerting-rules-alerts1.yaml")
+ require.Contains(t, cm.Data, "dev-recording-rules-recs1.yaml")
+ require.Contains(t, cm.Data, "prod-alerting-rules-alerts2.yaml")
+ require.Contains(t, cm.Data, "prod-recording-rules-recs2.yaml")
+}
+
+func TestRulesConfigMap_ReturnsTenantMapPerRule(t *testing.T) {
+ opts := testOptions()
+ cm, err := manifests.RulesConfigMap(opts)
+ require.NoError(t, err)
+ require.NotNil(t, cm)
+ require.Len(t, cm.Data, 4)
+ fmt.Print(opts.Tenants.Configs)
+ require.Contains(t, opts.Tenants.Configs["tenant-a"].RuleFiles, "dev-alerting-rules-alerts1.yaml")
+ require.Contains(t, opts.Tenants.Configs["tenant-a"].RuleFiles, "prod-alerting-rules-alerts2.yaml")
+ require.Contains(t, opts.Tenants.Configs["tenant-b"].RuleFiles, "dev-recording-rules-recs1.yaml")
+ require.Contains(t, opts.Tenants.Configs["tenant-b"].RuleFiles, "prod-recording-rules-recs2.yaml")
+}
+
+func testOptions() *manifests.Options {
+ return &manifests.Options{
+ Tenants: manifests.Tenants{
+ Configs: map[string]manifests.TenantConfig{
+ "tenant-a": {},
+ "tenant-b": {},
+ },
+ },
+ AlertingRules: []lokiv1beta1.AlertingRule{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "alerting-rules",
+ Namespace: "dev",
+ UID: types.UID("alerts1"),
+ },
+ Spec: lokiv1beta1.AlertingRuleSpec{
+ TenantID: "tenant-a",
+ Groups: []*lokiv1beta1.AlertingRuleGroup{
+ {
+ Name: "rule-a",
+ },
+ {
+ Name: "rule-b",
+ },
+ },
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "alerting-rules",
+ Namespace: "prod",
+ UID: types.UID("alerts2"),
+ },
+ Spec: lokiv1beta1.AlertingRuleSpec{
+ TenantID: "tenant-a",
+ Groups: []*lokiv1beta1.AlertingRuleGroup{
+ {
+ Name: "rule-c",
+ },
+ {
+ Name: "rule-d",
+ },
+ },
+ },
+ },
+ },
+ RecordingRules: []lokiv1beta1.RecordingRule{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "recording-rules",
+ Namespace: "dev",
+ UID: types.UID("recs1"),
+ },
+ Spec: lokiv1beta1.RecordingRuleSpec{
+ TenantID: "tenant-b",
+ Groups: []*lokiv1beta1.RecordingRuleGroup{
+ {
+ Name: "rule-a",
+ },
+ {
+ Name: "rule-b",
+ },
+ },
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "recording-rules",
+ Namespace: "prod",
+ UID: types.UID("recs2"),
+ },
+ Spec: lokiv1beta1.RecordingRuleSpec{
+ TenantID: "tenant-b",
+ Groups: []*lokiv1beta1.RecordingRuleGroup{
+ {
+ Name: "rule-c",
+ },
+ {
+ Name: "rule-d",
+ },
+ },
+ },
+ },
+ },
+ }
+}
diff --git a/operator/internal/manifests/service_monitor.go b/operator/internal/manifests/service_monitor.go
index 197b1278f5e19..3b5864092e543 100644
--- a/operator/internal/manifests/service_monitor.go
+++ b/operator/internal/manifests/service_monitor.go
@@ -20,6 +20,7 @@ func BuildServiceMonitors(opts Options) []client.Object {
NewCompactorServiceMonitor(opts),
NewQueryFrontendServiceMonitor(opts),
NewIndexGatewayServiceMonitor(opts),
+ NewRulerServiceMonitor(opts),
NewGatewayServiceMonitor(opts),
}
}
@@ -90,6 +91,17 @@ func NewIndexGatewayServiceMonitor(opts Options) *monitoringv1.ServiceMonitor {
return newServiceMonitor(opts.Namespace, serviceMonitorName, l, lokiEndpoint)
}
+// NewRulerServiceMonitor creates a k8s service monitor for the ruler component
+func NewRulerServiceMonitor(opts Options) *monitoringv1.ServiceMonitor {
+ l := ComponentLabels(LabelRulerComponent, opts.Name)
+
+ serviceMonitorName := serviceMonitorName(RulerName(opts.Name))
+ serviceName := serviceNameRulerHTTP(opts.Name)
+ lokiEndpoint := serviceMonitorEndpoint(lokiHTTPPortName, serviceName, opts.Namespace, opts.Flags.EnableTLSServiceMonitorConfig)
+
+ return newServiceMonitor(opts.Namespace, serviceMonitorName, l, lokiEndpoint)
+}
+
// NewGatewayServiceMonitor creates a k8s service monitor for the lokistack-gateway component
func NewGatewayServiceMonitor(opts Options) *monitoringv1.ServiceMonitor {
l := ComponentLabels(LabelGatewayComponent, opts.Name)
diff --git a/operator/internal/manifests/service_monitor_test.go b/operator/internal/manifests/service_monitor_test.go
index 2adfd248b785f..3858d2a4c6f2c 100644
--- a/operator/internal/manifests/service_monitor_test.go
+++ b/operator/internal/manifests/service_monitor_test.go
@@ -55,6 +55,9 @@ func TestServiceMonitorMatchLabels(t *testing.T) {
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
},
},
}
@@ -88,6 +91,10 @@ func TestServiceMonitorMatchLabels(t *testing.T) {
Service: NewIndexGatewayHTTPService(opt),
ServiceMonitor: NewIndexGatewayServiceMonitor(opt),
},
+ {
+ Service: NewRulerHTTPService(opt),
+ ServiceMonitor: NewRulerServiceMonitor(opt),
+ },
}
for _, tst := range table {
diff --git a/operator/internal/manifests/service_test.go b/operator/internal/manifests/service_test.go
index a88a10e8c10aa..ec80d7a1ee4ee 100644
--- a/operator/internal/manifests/service_test.go
+++ b/operator/internal/manifests/service_test.go
@@ -44,6 +44,9 @@ func TestServicesMatchPorts(t *testing.T) {
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
},
},
}
@@ -98,6 +101,13 @@ func TestServicesMatchPorts(t *testing.T) {
NewIndexGatewayHTTPService(opt),
},
},
+ {
+ Containers: NewRulerStatefulSet(opt).Spec.Template.Spec.Containers,
+ Services: []*corev1.Service{
+ NewRulerGRPCService(opt),
+ NewRulerHTTPService(opt),
+ },
+ },
}
containerHasPort := func(containers []corev1.Container, port int32) bool {
@@ -163,6 +173,9 @@ func TestServicesMatchLabels(t *testing.T) {
IndexGateway: &lokiv1beta1.LokiComponentSpec{
Replicas: 1,
},
+ Ruler: &lokiv1beta1.LokiComponentSpec{
+ Replicas: 1,
+ },
},
},
}
@@ -217,6 +230,13 @@ func TestServicesMatchLabels(t *testing.T) {
NewIndexGatewayHTTPService(opt),
},
},
+ {
+ Object: NewRulerStatefulSet(opt),
+ Services: []*corev1.Service{
+ NewRulerGRPCService(opt),
+ NewRulerHTTPService(opt),
+ },
+ },
}
for _, tst := range table {
diff --git a/operator/internal/manifests/var.go b/operator/internal/manifests/var.go
index 9f87370702ce0..f1f1ba72a9b87 100644
--- a/operator/internal/manifests/var.go
+++ b/operator/internal/manifests/var.go
@@ -60,6 +60,8 @@ const (
LabelQueryFrontendComponent string = "query-frontend"
// LabelIndexGatewayComponent is the label value for the lokiStack-index-gateway component
LabelIndexGatewayComponent string = "index-gateway"
+ // LabelRulerComponent is the label value for the lokiStack-ruler component
+ LabelRulerComponent string = "ruler"
// LabelGatewayComponent is the label value for the lokiStack-gateway component
LabelGatewayComponent string = "lokistack-gateway"
)
@@ -136,6 +138,16 @@ func IndexGatewayName(stackName string) string {
return fmt.Sprintf("%s-index-gateway", stackName)
}
+// RulerName is the name of the ruler statefulset
+func RulerName(stackName string) string {
+ return fmt.Sprintf("%s-ruler", stackName)
+}
+
+// RulesConfigMapName is the name of the alerting rules configmap
+func RulesConfigMapName(stackName string) string {
+ return fmt.Sprintf("%s-rules", stackName)
+}
+
// GatewayName is the name of the lokiStack-gateway statefulset
func GatewayName(stackName string) string {
return fmt.Sprintf("%s-gateway", stackName)
@@ -194,6 +206,14 @@ func serviceNameIndexGatewayGRPC(stackName string) string {
return fmt.Sprintf("%s-index-gateway-grpc", stackName)
}
+func serviceNameRulerHTTP(stackName string) string {
+ return fmt.Sprintf("%s-ruler-http", stackName)
+}
+
+func serviceNameRulerGRPC(stackName string) string {
+ return fmt.Sprintf("%s-ruler-grpc", stackName)
+}
+
func serviceNameGatewayHTTP(stackName string) string {
return fmt.Sprintf("%s-gateway-http", stackName)
}
diff --git a/operator/internal/status/components.go b/operator/internal/status/components.go
index 66d0426e7b9c5..f4e2576ba281e 100644
--- a/operator/internal/status/components.go
+++ b/operator/internal/status/components.go
@@ -60,6 +60,12 @@ func SetComponentsStatus(ctx context.Context, k k8s.Client, req ctrl.Request) er
if err != nil {
return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelGatewayComponent)
}
+
+ s.Status.Components.Ruler, err = appendPodStatus(ctx, k, manifests.LabelRulerComponent, s.Name, s.Namespace)
+ if err != nil {
+ return kverrors.Wrap(err, "failed lookup LokiStack component pods status", "name", manifests.LabelRulerComponent)
+ }
+
return k.Status().Update(ctx, &s, &client.UpdateOptions{})
}
diff --git a/operator/internal/status/components_test.go b/operator/internal/status/components_test.go
index fe51837157ce3..ad5ed0ec53c70 100644
--- a/operator/internal/status/components_test.go
+++ b/operator/internal/status/components_test.go
@@ -160,3 +160,157 @@ func TestSetComponentsStatus_WhenPodListExisting_SetPodStatusMap(t *testing.T) {
require.NotZero(t, k.StatusCallCount())
require.NotZero(t, sw.UpdateCallCount())
}
+
+func TestSetComponentsStatus_WhenRulerEnabled_SetPodStatusMap(t *testing.T) {
+ sw := &k8sfakes.FakeStatusWriter{}
+ k := &k8sfakes.FakeClient{}
+
+ k.StatusStub = func() client.StatusWriter { return sw }
+
+ s := lokiv1beta1.LokiStack{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "my-stack",
+ Namespace: "some-ns",
+ },
+ Spec: lokiv1beta1.LokiStackSpec{
+ Rules: &lokiv1beta1.RulesSpec{
+ Enabled: true,
+ },
+ },
+ }
+
+ r := ctrl.Request{
+ NamespacedName: types.NamespacedName{
+ Name: "my-stack",
+ Namespace: "some-ns",
+ },
+ }
+
+ k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object) error {
+ if r.Name == name.Name && r.Namespace == name.Namespace {
+ k.SetClientObject(object, &s)
+ return nil
+ }
+ return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ }
+
+ k.ListStub = func(_ context.Context, l client.ObjectList, _ ...client.ListOption) error {
+ pods := v1.PodList{
+ Items: []v1.Pod{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "pod-a",
+ },
+ Status: v1.PodStatus{
+ Phase: v1.PodPending,
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "pod-b",
+ },
+ Status: v1.PodStatus{
+ Phase: v1.PodRunning,
+ },
+ },
+ },
+ }
+ k.SetClientObjectList(l, &pods)
+ return nil
+ }
+
+ expected := lokiv1beta1.PodStatusMap{
+ "Pending": []string{"pod-a"},
+ "Running": []string{"pod-b"},
+ }
+
+ sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
+ stack := obj.(*lokiv1beta1.LokiStack)
+ require.Equal(t, expected, stack.Status.Components.Ruler)
+ return nil
+ }
+
+ err := status.SetComponentsStatus(context.TODO(), k, r)
+ require.NoError(t, err)
+ require.NotZero(t, k.ListCallCount())
+ require.NotZero(t, k.StatusCallCount())
+ require.NotZero(t, sw.UpdateCallCount())
+}
+
+func TestSetComponentsStatus_WhenRulerNotEnabled_DoNothing(t *testing.T) {
+ sw := &k8sfakes.FakeStatusWriter{}
+ k := &k8sfakes.FakeClient{}
+
+ k.StatusStub = func() client.StatusWriter { return sw }
+
+ s := lokiv1beta1.LokiStack{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "my-stack",
+ Namespace: "some-ns",
+ },
+ Spec: lokiv1beta1.LokiStackSpec{
+ Rules: &lokiv1beta1.RulesSpec{
+ Enabled: false,
+ },
+ },
+ }
+
+ r := ctrl.Request{
+ NamespacedName: types.NamespacedName{
+ Name: "my-stack",
+ Namespace: "some-ns",
+ },
+ }
+
+ k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object) error {
+ if r.Name == name.Name && r.Namespace == name.Namespace {
+ k.SetClientObject(object, &s)
+ return nil
+ }
+ return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ }
+
+ k.ListStub = func(_ context.Context, l client.ObjectList, o ...client.ListOption) error {
+ s := o[0].(client.MatchingLabels)
+
+ c, ok := s["app.kubernetes.io/component"]
+ if !ok || c == "ruler" {
+ return nil
+ }
+
+ pods := v1.PodList{
+ Items: []v1.Pod{
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "pod-a",
+ },
+ Status: v1.PodStatus{
+ Phase: v1.PodPending,
+ },
+ },
+ {
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "pod-b",
+ },
+ Status: v1.PodStatus{
+ Phase: v1.PodRunning,
+ },
+ },
+ },
+ }
+ k.SetClientObjectList(l, &pods)
+ return nil
+ }
+
+ sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
+ stack := obj.(*lokiv1beta1.LokiStack)
+ require.Equal(t, stack.Status.Components.Ruler, lokiv1beta1.PodStatusMap{})
+ return nil
+ }
+
+ err := status.SetComponentsStatus(context.TODO(), k, r)
+ require.NoError(t, err)
+ require.NotZero(t, k.ListCallCount())
+ require.NotZero(t, k.StatusCallCount())
+ require.NotZero(t, sw.UpdateCallCount())
+}
diff --git a/operator/internal/status/status.go b/operator/internal/status/status.go
index 9138db3c1ebdc..7c43434c63910 100644
--- a/operator/internal/status/status.go
+++ b/operator/internal/status/status.go
@@ -37,7 +37,8 @@ func Refresh(ctx context.Context, k k8s.Client, req ctrl.Request) error {
len(cs.Querier[corev1.PodFailed]) +
len(cs.QueryFrontend[corev1.PodFailed]) +
len(cs.Gateway[corev1.PodFailed]) +
- len(cs.IndexGateway[corev1.PodFailed])
+ len(cs.IndexGateway[corev1.PodFailed]) +
+ len(cs.Ruler[corev1.PodFailed])
unknown := len(cs.Compactor[corev1.PodUnknown]) +
len(cs.Distributor[corev1.PodUnknown]) +
@@ -45,7 +46,8 @@ func Refresh(ctx context.Context, k k8s.Client, req ctrl.Request) error {
len(cs.Querier[corev1.PodUnknown]) +
len(cs.QueryFrontend[corev1.PodUnknown]) +
len(cs.Gateway[corev1.PodUnknown]) +
- len(cs.IndexGateway[corev1.PodUnknown])
+ len(cs.IndexGateway[corev1.PodUnknown]) +
+ len(cs.Ruler[corev1.PodUnknown])
if failed != 0 || unknown != 0 {
return SetFailedCondition(ctx, k, req)
@@ -58,7 +60,8 @@ func Refresh(ctx context.Context, k k8s.Client, req ctrl.Request) error {
len(cs.Querier[corev1.PodPending]) +
len(cs.QueryFrontend[corev1.PodPending]) +
len(cs.Gateway[corev1.PodPending]) +
- len(cs.IndexGateway[corev1.PodPending])
+ len(cs.IndexGateway[corev1.PodPending]) +
+ len(cs.Ruler[corev1.PodPending])
if pending != 0 {
return SetPendingCondition(ctx, k, req)
diff --git a/operator/main.go b/operator/main.go
index 02f0a3b4bf4b5..0d2bc1c95c3b9 100644
--- a/operator/main.go
+++ b/operator/main.go
@@ -13,13 +13,14 @@ import (
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"
+ configv1 "github.com/openshift/api/config/v1"
+ routev1 "github.com/openshift/api/route/v1"
+ monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
+
lokiv1beta1 "github.com/grafana/loki/operator/api/v1beta1"
"github.com/grafana/loki/operator/controllers"
"github.com/grafana/loki/operator/internal/manifests"
"github.com/grafana/loki/operator/internal/metrics"
- configv1 "github.com/openshift/api/config/v1"
- routev1 "github.com/openshift/api/route/v1"
- monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -123,6 +124,30 @@ func main() {
logger.Error(err, "unable to create controller", "controller", "LokiStack")
os.Exit(1)
}
+ if err = (&controllers.AlertingRuleReconciler{
+ Client: mgr.GetClient(),
+ Log: logger.WithName("controllers").WithName("AlertingRule"),
+ Scheme: mgr.GetScheme(),
+ }).SetupWithManager(mgr); err != nil {
+ logger.Error(err, "unable to create controller", "controller", "AlertingRule")
+ os.Exit(1)
+ }
+ if err = (&lokiv1beta1.AlertingRule{}).SetupWebhookWithManager(mgr); err != nil {
+ logger.Error(err, "unable to create webhook", "webhook", "AlertingRule")
+ os.Exit(1)
+ }
+ if err = (&controllers.RecordingRuleReconciler{
+ Client: mgr.GetClient(),
+ Log: logger.WithName("controllers").WithName("RecordingRule"),
+ Scheme: mgr.GetScheme(),
+ }).SetupWithManager(mgr); err != nil {
+ logger.Error(err, "unable to create controller", "controller", "RecordingRule")
+ os.Exit(1)
+ }
+ if err = (&lokiv1beta1.RecordingRule{}).SetupWebhookWithManager(mgr); err != nil {
+ logger.Error(err, "unable to create webhook", "webhook", "RecordingRule")
+ os.Exit(1)
+ }
// +kubebuilder:scaffold:builder
if err = mgr.AddHealthzCheck("health", healthz.Ping); err != nil {
|
operator
|
Add rules support (#5986)
|
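The `RulesConfigMap` function in the diff above derives each ConfigMap data key from the rule's namespace, name, and UID via `fmt.Sprintf("%s-%s-%s.yaml", …)`. A minimal sketch of that naming scheme, extracted into a standalone helper (`ruleFileKey` is our own name; the operator builds the key inline):

```go
package main

import "fmt"

// ruleFileKey mirrors the key format used by RulesConfigMap:
// "<namespace>-<name>-<uid>.yaml". Including the UID keeps keys unique
// even when rules in different namespaces share a name.
func ruleFileKey(namespace, name, uid string) string {
	return fmt.Sprintf("%s-%s-%s.yaml", namespace, name, uid)
}

func main() {
	// Matches the entry asserted in TestRulesConfigMap_ReturnsDataEntriesPerRule.
	fmt.Println(ruleFileKey("dev", "alerting-rules", "alerts1"))
}
```

This is why the test fixture expects keys such as `dev-alerting-rules-alerts1.yaml`.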
523b5dea5b50d332a6a6f9782f2105eb965436fd
|
2024-03-29 01:02:11
|
Benoit Arnaud
|
docs: Update _index.md (#10773)
| false
|
diff --git a/docs/sources/setup/install/helm/install-monolithic/_index.md b/docs/sources/setup/install/helm/install-monolithic/_index.md
index 01a3d6d357e10..e85d6a52159b5 100644
--- a/docs/sources/setup/install/helm/install-monolithic/_index.md
+++ b/docs/sources/setup/install/helm/install-monolithic/_index.md
@@ -60,10 +60,17 @@ If you set the `singleBinary.replicas` value to 2 or more, this chart configures
ruler: loki-ruler
admin: loki-admin
type: 's3'
+ bucketNames:
+ chunks: loki-chunks
+ ruler: loki-ruler
+ admin: loki-admin
s3:
endpoint: foo.aws.com
+ region: <AWS region>
secretAccessKey: supersecret
accessKeyId: secret
+ s3ForcePathStyle: false
+ insecure: false
singleBinary:
replicas: 3
```
|
docs
|
Update _index.md (#10773)
|
6abb12dd7078d7e0d84ab67d93301c9599768840
|
2025-02-19 18:45:48
|
Jackson Coelho
|
chore(helm): add missing check for adminAPI on enterprise (#16369)
| false
|
diff --git a/production/helm/loki/templates/admin-api/deployment-admin-api.yaml b/production/helm/loki/templates/admin-api/deployment-admin-api.yaml
index 75623bdbf8995..b6dbcf75c80e5 100644
--- a/production/helm/loki/templates/admin-api/deployment-admin-api.yaml
+++ b/production/helm/loki/templates/admin-api/deployment-admin-api.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.enterprise.enabled }}
+{{- if and .Values.enterprise.enabled .Values.enterprise.adminApi.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
diff --git a/production/helm/loki/templates/admin-api/service-admin-api.yaml b/production/helm/loki/templates/admin-api/service-admin-api.yaml
index c7daa2790a120..8f8172366e734 100644
--- a/production/helm/loki/templates/admin-api/service-admin-api.yaml
+++ b/production/helm/loki/templates/admin-api/service-admin-api.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.enterprise.enabled }}
+{{- if and .Values.enterprise.enabled .Values.enterprise.adminApi.enabled }}
apiVersion: v1
kind: Service
metadata:
@@ -25,4 +25,4 @@ spec:
targetPort: grpc
selector:
{{- include "enterprise-logs.adminApiSelectorLabels" . | nindent 4 }}
-{{- end }}
\ No newline at end of file
+{{- end }}
|
chore
|
add missing check for adminAPI on enterprise (#16369)
|
784c4ce2325250f31ab75874bbfc601dc685bbbd
|
2020-02-05 23:56:53
|
Robert Fratto
|
ci: print error when getting tags fails (#1640)
| false
|
diff --git a/tools/delete_tags.go b/tools/delete_tags.go
index 5bb468550572f..fcb6286a94288 100644
--- a/tools/delete_tags.go
+++ b/tools/delete_tags.go
@@ -19,6 +19,11 @@ type auth struct {
Password string `json:"password"`
}
+func logAndQuit(fmt string, args ...interface{}) {
+ log.Printf(fmt, args...)
+ os.Exit(0)
+}
+
func main() {
var (
auth auth
@@ -51,12 +56,12 @@ func main() {
// Get an auth token
jwt, err := getJWT(auth)
if err != nil {
- log.Fatalln(err)
+ logAndQuit(err.Error())
}
tags, err := getTags(jwt, repo)
if err != nil {
- log.Fatalln(err)
+ logAndQuit(err.Error())
}
log.Printf("Discovered %d tags pre-filtering\n", len(tags))
@@ -108,7 +113,7 @@ func getJWT(a auth) (string, error) {
return "", err
}
resp.Body.Close()
- log.Fatalf("failed to log in: %v", string(body))
+ return "", fmt.Errorf("failed to log in: %v", string(body))
}
defer resp.Body.Close()
@@ -165,11 +170,16 @@ func getTagsFromURL(jwt string, url string) (getTagResponse, error) {
if err != nil {
return res, err
}
- if resp.StatusCode != 200 {
- return res, errors.New("failed to get tags")
- }
defer resp.Body.Close()
+ if resp.StatusCode/100 != 2 {
+ bb, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return res, err
+ }
+ return res, errors.New(string(bb))
+ }
+
err = json.NewDecoder(resp.Body).Decode(&res)
return res, err
}
@@ -188,10 +198,13 @@ func deleteTag(jwt string, repo string, tag string) error {
}
defer resp.Body.Close()
- bb, err := ioutil.ReadAll(resp.Body)
if resp.StatusCode/100 != 2 {
- return fmt.Errorf("resp code %d: %s", resp.StatusCode, string(bb))
+ bb, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return err
+ }
+ return errors.New(string(bb))
}
- return err
+ return nil
}
|
ci
|
print error when getting tags fails (#1640)
|
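The `delete_tags.go` patch above replaces an exact `resp.StatusCode != 200` check with `resp.StatusCode/100 != 2`, so any 2xx response (200, 201, 204, …) is accepted before the error body is read. A small sketch of that status-class idiom (the helper name `is2xx` is our own):

```go
package main

import "fmt"

// is2xx collapses an HTTP status code to its class via integer
// division, accepting the whole 2xx range rather than only 200.
func is2xx(statusCode int) bool {
	return statusCode / 100 == 2
}

func main() {
	fmt.Println(is2xx(200), is2xx(204), is2xx(404))
}
```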
a550b767d3c8c132362165d6544a09907e613854
|
2023-10-26 19:29:16
|
Kaviraj Kanagaraj
|
config: Remove already deprecated `store.max-look-back-period`. (#11038)
| false
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index c62cf0894d703..3e40e765342d6 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,7 @@
##### Enhancements
+* [11038](https://github.com/grafana/loki/pull/11038) **kavirajk**: Remove already deprecated `store.max-look-back-period`.
* [10906](https://github.com/grafana/loki/pull/10906) **kavirajk**: Support Loki ruler to notify WAL writes to remote storage.
* [10613](https://github.com/grafana/loki/pull/10613) **ngc4579**: Helm: allow GrafanaAgent tolerations
* [10295](https://github.com/grafana/loki/pull/10295) **changhyuni**: Storage: remove signatureversionv2 from s3.
diff --git a/docs/sources/configure/_index.md b/docs/sources/configure/_index.md
index b68bba15c8d7e..f32fc97d494cf 100644
--- a/docs/sources/configure/_index.md
+++ b/docs/sources/configure/_index.md
@@ -2272,10 +2272,6 @@ The `chunk_store_config` block configures how chunks will be cached and how long
# Cache index entries older than this period. 0 to disable.
# CLI flag: -store.cache-lookups-older-than
[cache_lookups_older_than: <duration> | default = 0s]
-
-# This flag is deprecated. Use -querier.max-query-lookback instead.
-# CLI flag: -store.max-look-back-period
-[max_look_back_period: <duration> | default = 0s]
```
### schema_config
diff --git a/docs/sources/setup/upgrade/_index.md b/docs/sources/setup/upgrade/_index.md
index 969465441cbd2..3d9dc1ac941de 100644
--- a/docs/sources/setup/upgrade/_index.md
+++ b/docs/sources/setup/upgrade/_index.md
@@ -47,9 +47,10 @@ The previous default value `false` is applied.
#### Deprecated configuration options are removed
-1. Removes already deprecated `-querier.engine.timeout` CLI flag and the corresponding YAML setting.
+1. Removed already deprecated `store.max-look-back-period` CLI flag and the corresponding YAML settings. Use `querier.max-query-lookback` config instead.
+1. Removes already deprecated `-querier.engine.timeout` CLI flag and the corresponding YAML setting.
1. Also removes the `query_timeout` from the querier YAML section. Instead of configuring `query_timeout` under `querier`, you now configure it in [Limits Config](/docs/loki/latest/configuration/#limits_config).
-1. `s3.sse-encryption` is removed. AWS now defaults encryption of all buckets to SSE-S3. Use `sse.type` to set SSE type.
+1. `s3.sse-encryption` is removed. AWS now defaults encryption of all buckets to SSE-S3. Use `sse.type` to set SSE type.
1. `ruler.wal-cleaer.period` is removed. Use `ruler.wal-cleaner.period` instead.
1. `experimental.ruler.enable-api` is removed. Use `ruler.enable-api` instead.
1. `split_queries_by_interval` is removed from `query_range` YAML section. You can instead configure it in [Limits Config](/docs/loki/latest/configuration/#limits_config).
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index 0e4bc081551d7..7b28d4b5ef41c 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -248,11 +248,6 @@ func (c *Config) Validate() error {
if err := c.ChunkStoreConfig.Validate(util_log.Logger); err != nil {
return errors.Wrap(err, "invalid chunk store config")
}
- // TODO(cyriltovena): remove when MaxLookBackPeriod in the storage will be fully deprecated.
- if c.ChunkStoreConfig.MaxLookBackPeriod > 0 {
- c.LimitsConfig.MaxQueryLookback = c.ChunkStoreConfig.MaxLookBackPeriod
- }
-
if err := c.QueryRange.Validate(); err != nil {
return errors.Wrap(err, "invalid query_range config")
}
diff --git a/pkg/storage/config/store.go b/pkg/storage/config/store.go
index 1c93bacf0d08a..9fe276b47614c 100644
--- a/pkg/storage/config/store.go
+++ b/pkg/storage/config/store.go
@@ -5,8 +5,6 @@ import (
"time"
"github.com/go-kit/log"
- "github.com/go-kit/log/level"
- "github.com/grafana/dskit/flagext"
"github.com/prometheus/common/model"
"github.com/grafana/loki/pkg/storage/chunk/cache"
@@ -28,10 +26,6 @@ type ChunkStoreConfig struct {
// When DisableIndexDeduplication is true and chunk is already there in cache, only index would be written to the store and not chunk.
DisableIndexDeduplication bool `yaml:"-"`
-
- // Limits query start time to be greater than now() - MaxLookBackPeriod, if set.
- // Will be deprecated in the next major release.
- MaxLookBackPeriod model.Duration `yaml:"max_look_back_period"`
}
func (cfg *ChunkStoreConfig) ChunkCacheStubs() bool {
@@ -47,14 +41,8 @@ func (cfg *ChunkStoreConfig) RegisterFlags(f *flag.FlagSet) {
cfg.WriteDedupeCacheConfig.RegisterFlagsWithPrefix("store.index-cache-write.", "", f)
f.Var(&cfg.CacheLookupsOlderThan, "store.cache-lookups-older-than", "Cache index entries older than this period. 0 to disable.")
- f.Var(&cfg.MaxLookBackPeriod, "store.max-look-back-period", "This flag is deprecated. Use -querier.max-query-lookback instead.")
}
func (cfg *ChunkStoreConfig) Validate(logger log.Logger) error {
- if cfg.MaxLookBackPeriod > 0 {
- flagext.DeprecatedFlagsUsed.Inc()
- level.Warn(logger).Log("msg", "running with DEPRECATED flag -store.max-look-back-period, use -querier.max-query-lookback instead.")
- }
-
return nil
}
diff --git a/tools/deprecated-config-checker/checker/checker_test.go b/tools/deprecated-config-checker/checker/checker_test.go
index 929166ed4aa7d..9d93bc84b62a4 100644
--- a/tools/deprecated-config-checker/checker/checker_test.go
+++ b/tools/deprecated-config-checker/checker/checker_test.go
@@ -26,6 +26,7 @@ var (
"storage_config.boltdb_shipper.use_boltdb_shipper_as_backup",
"storage_config.aws.sse_encryption",
"storage_config.s3.sse_encryption",
+ "chunk_store_config.max_look_back_period",
}
expectedConfigDeprecates = []string{
@@ -38,7 +39,6 @@ var (
"storage_config.grpc_store",
"storage_config.aws.dynamodb",
"chunk_store_config.write_dedupe_cache_config",
- "chunk_store_config.max_look_back_period",
"limits_config.unordered_writes",
"limits_config.ruler_evaluation_delay_duration",
"limits_config.ruler_remote_write_url",
diff --git a/tools/deprecated-config-checker/deleted-config.yaml b/tools/deprecated-config-checker/deleted-config.yaml
index 9fa53d61cfbd4..b21bc995185d1 100644
--- a/tools/deprecated-config-checker/deleted-config.yaml
+++ b/tools/deprecated-config-checker/deleted-config.yaml
@@ -31,3 +31,6 @@ storage_config:
use_boltdb_shipper_as_backup: "Since TSDB is now stable and the recommended index type, the setting has become irrelevant and therefore was removed. The previous default value false is applied."
aws: *s3_deletes
s3: *s3_deletes
+
+chunk_store_config:
+ max_look_back_period: "Use global or per-tenant max_query_lookback configuration from limits_config."
diff --git a/tools/deprecated-config-checker/deprecated-config.yaml b/tools/deprecated-config-checker/deprecated-config.yaml
index 873ef9ec76c85..0cd8e8fd8c818 100644
--- a/tools/deprecated-config-checker/deprecated-config.yaml
+++ b/tools/deprecated-config-checker/deprecated-config.yaml
@@ -44,7 +44,6 @@ storage_config:
chunk_store_config:
write_dedupe_cache_config: "Write dedupe cache is deprecated along with deprecated index types. Consider using TSDB index which does not require a write dedupe cache."
- max_look_back_period: "Use global or per-tenant max_query_lookback configuration from limits_config."
## NOTE: This will also be used to validate per-tenant overrides.
limits_config:
diff --git a/tools/deprecated-config-checker/test-fixtures/config.yaml b/tools/deprecated-config-checker/test-fixtures/config.yaml
index eaa713ff23e25..2600c63034ea4 100644
--- a/tools/deprecated-config-checker/test-fixtures/config.yaml
+++ b/tools/deprecated-config-checker/test-fixtures/config.yaml
@@ -47,7 +47,7 @@ chunk_store_config:
cache_lookups_older_than: 1h
write_dedupe_cache_config: # DEPRECATED
default_validity: 30m
- max_look_back_period: 1m # DEPRECATED
+ max_look_back_period: 1m # DELETED
ruler:
flush_period: 1s
|
config
|
Remove already deprecated `store.max-look-back-period`. (#11038)
|
1d99b065144c5d6a4e2ac5def4a329d3cbce4718
|
2024-03-22 22:10:36
|
Travis Patterson
|
fix: explicitly serialize and deserialize noop label filters (#12124)
| false
|
diff --git a/pkg/logql/syntax/serialize.go b/pkg/logql/syntax/serialize.go
index 53c4bef37d290..84af7e803d0d3 100644
--- a/pkg/logql/syntax/serialize.go
+++ b/pkg/logql/syntax/serialize.go
@@ -69,6 +69,7 @@ const (
RHS = "rhs"
Src = "src"
StringField = "string"
+ NoopField = "noop"
Type = "type"
Unwrap = "unwrap"
Value = "value"
@@ -415,8 +416,26 @@ func encodeLabelFilter(s *jsoniter.Stream, filter log.LabelFilterer) {
s.WriteObjectEnd()
s.WriteObjectEnd()
- case log.NoopLabelFilter:
- return
+ case *log.NoopLabelFilter:
+ s.WriteObjectStart()
+ s.WriteObjectField(NoopField)
+
+ s.WriteObjectStart()
+ if concrete.Matcher != nil {
+ s.WriteObjectField(Name)
+ s.WriteString(concrete.Name)
+
+ s.WriteMore()
+ s.WriteObjectField(Value)
+ s.WriteString(concrete.Value)
+
+ s.WriteMore()
+ s.WriteObjectField(Type)
+ s.WriteInt(int(concrete.Type))
+ }
+ s.WriteObjectEnd()
+
+ s.WriteObjectEnd()
case *log.BytesLabelFilter:
s.WriteObjectStart()
s.WriteObjectField(Bytes)
@@ -606,8 +625,7 @@ func decodeLabelFilter(iter *jsoniter.Iterator) log.LabelFilterer {
}
filter = log.NewNumericLabelFilter(t, name, value)
- case StringField:
-
+ case StringField, NoopField:
var name string
var value string
var t labels.MatchType
diff --git a/pkg/logql/syntax/serialize_test.go b/pkg/logql/syntax/serialize_test.go
index 2c6bb6f0ef663..a50cf5c78a989 100644
--- a/pkg/logql/syntax/serialize_test.go
+++ b/pkg/logql/syntax/serialize_test.go
@@ -53,6 +53,9 @@ func TestJSONSerializationRoundTrip(t *testing.T) {
"multiple post filters": {
query: `rate({app="foo"} | json | unwrap foo | latency >= 250ms or bytes > 42B or ( status_code < 500 and status_code > 200) or source = ip("") and user = "me" [1m])`,
},
+ "multiple post filters where one is a noop": {
+ query: `rate({app="foo"} | json | unwrap foo | latency >= 250ms or bytes=~".*" [1m])`,
+ },
"empty label filter string": {
query: `rate({app="foo"} |= "bar" | json | unwrap latency | path!="" [5m])`,
},
|
fix
|
explicitly serialize and deserialize noop label filters (#12124)
|
8892dc89231ebe7b05fc1c4e0b7647f328f9c1ce
|
2024-04-29 22:56:19
|
Tanat Lokejaroenlarb
|
feat: parameterise the MaximumEventAgeInSeconds, LogGroupName, and IAMRoleName for lambda-promtail CloudFormation template (#12728)
| false
|
diff --git a/tools/lambda-promtail/template.yaml b/tools/lambda-promtail/template.yaml
index 6fc1d90e030e5..57dcc80660c8a 100644
--- a/tools/lambda-promtail/template.yaml
+++ b/tools/lambda-promtail/template.yaml
@@ -13,6 +13,10 @@ Parameters:
Description: The maximum of concurrent executions you want to reserve for the function.
Type: Number
Default: 2
+ MaximumEventAgeInSeconds:
+ Description: The maximum age of a request that Lambda sends to a function for processing.
+ Type: Number
+ Default: 21600
Username:
Description: The basic auth username, necessary if writing directly to Grafana Cloud Loki.
Type: String
@@ -51,6 +55,14 @@ Parameters:
Description: Determines whether to verify the TLS certificate
Type: String
Default: "false"
+ LogGroupName:
+ Description: Name of the CloudWatch Log Group to subscribe from.
+ Type: String
+ Default: "/aws/lambda/some-lamda-log-group"
+ IAMRoleName:
+ Description: Name of the LambdaPromtailRole IAM Role.
+ Type: String
+ Default: "iam_for_lambda"
Resources:
LambdaPromtailRole:
@@ -78,7 +90,7 @@ Resources:
- logs:PutLogEvents
- logs:PutSubscriptionFilter
Resource: arn:aws:logs:*:*:*
- RoleName: iam_for_lambda
+ RoleName: !Ref IAMRoleName
LambdaPromtailFunction:
Type: AWS::Lambda::Function
Properties:
@@ -119,6 +131,7 @@ Resources:
Properties:
FunctionName: !Ref LambdaPromtailFunction
MaximumRetryAttempts: 2
+ MaximumEventAgeInSeconds: !Ref MaximumEventAgeInSeconds
Qualifier: !GetAtt LambdaPromtailVersion.Version
# Copy this block and modify as required to create Subscription Filters for
# additional CloudWatch Log Groups.
@@ -128,7 +141,7 @@ Resources:
Properties:
DestinationArn: !GetAtt LambdaPromtailFunction.Arn
FilterPattern: ""
- LogGroupName: "/aws/lambda/some-lamda-log-group"
+ LogGroupName: !Ref LogGroupName
Outputs:
LambdaPromtailFunction:
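The template change above replaces hard-coded values with `!Ref` lookups, so deploy-time overrides win and the parameter `Default` applies otherwise. A toy Python resolver illustrating that override-or-default behaviour (not CloudFormation's actual resolution logic):

```python
def resolve_refs(template_params, overrides):
    """Return the effective value for each template parameter:
    the stack override if one was supplied, otherwise the Default
    declared in the Parameters section."""
    resolved = {}
    for name, spec in template_params.items():
        resolved[name] = overrides.get(name, spec.get("Default"))
    return resolved
```
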
|
feat
|
parameterise the MaximumEventAgeInSeconds, LogGroupName, and IAMRoleName for lambda-promtail CloudFormation template (#12728)
|
8dc9a9cd910cdfd096b40995cf56c36fd73bdc53
|
2020-06-04 18:39:11
|
Ed Welch
|
docs: fix config error for new metrics in docs (#2163)
| false
|
diff --git a/docs/clients/promtail/stages/metrics.md b/docs/clients/promtail/stages/metrics.md
index 380be83390d96..f48b1f9eebd42 100644
--- a/docs/clients/promtail/stages/metrics.md
+++ b/docs/clients/promtail/stages/metrics.md
@@ -141,16 +141,16 @@ config:
type: Counter
description: "total number of log lines"
prefix: my_promtail_custom_
- match_all: true
config:
+ match_all: true
action: inc
log_bytes_total:
type: Counter
description: "total bytes of log lines"
prefix: my_promtail_custom_
- match_all: true
- count_entry_bytes: true
config:
+ match_all: true
+ count_entry_bytes: true
action: add
```
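The doc fix above moves `match_all` and `count_entry_bytes` under `config`, where the metrics stage expects them. A rough Python sketch of what the two counters do once configured — `inc` counts matched lines, `add` with `count_entry_bytes` sums line sizes (simplified, not Promtail's actual code):

```python
class Counter:
    """Minimal stand-in for a Prometheus counter."""
    def __init__(self):
        self.value = 0

    def inc(self):
        self.value += 1

    def add(self, n):
        self.value += n

def process(lines, line_total, bytes_total):
    """With match_all: true, every log line updates both counters."""
    for line in lines:
        line_total.inc()                     # action: inc
        bytes_total.add(len(line.encode()))  # action: add, count_entry_bytes: true
```
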
|
docs
|
fix config error for new metrics in docs (#2163)
|
d804e24c2653b17afc82f60e49104f7a20b3f95a
|
2022-07-07 16:19:19
|
Robert Jacob
|
operator: Disable client certificate authentication on gateway (#6594)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index a3c2c06c7f5f1..c180d029b10c2 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [6594](https://github.com/grafana/loki/pull/6594) **xperimental**: Disable client certificate authentication on gateway
- [6551](https://github.com/grafana/loki/pull/6561) **periklis**: Add operator docs for object storage
- [6549](https://github.com/grafana/loki/pull/6549) **periklis**: Refactor feature gates to use custom resource definition
- [6514](https://github.com/grafana/loki/pull/6514) **Red-GV** Update all pods and containers to be compliant with restricted Pod Security Standard
diff --git a/operator/internal/manifests/gateway_tenants_test.go b/operator/internal/manifests/gateway_tenants_test.go
index 87f0ceabbc690..2e47600b4e16e 100644
--- a/operator/internal/manifests/gateway_tenants_test.go
+++ b/operator/internal/manifests/gateway_tenants_test.go
@@ -258,6 +258,7 @@ func TestConfigureDeploymentForMode(t *testing.T) {
"--logs.tail.endpoint=http://example.com",
"--logs.write.endpoint=http://example.com",
fmt.Sprintf("--web.healthchecks.url=https://localhost:%d", gatewayHTTPPort),
+ "--tls.client-auth-type=NoClientCert",
"--tls.server.cert-file=/var/run/tls/http/tls.crt",
"--tls.server.key-file=/var/run/tls/http/tls.key",
"--tls.healthchecks.server-ca-file=/var/run/ca/service-ca.crt",
@@ -429,6 +430,7 @@ func TestConfigureDeploymentForMode(t *testing.T) {
"--logs.tail.endpoint=http://example.com",
"--logs.write.endpoint=http://example.com",
fmt.Sprintf("--web.healthchecks.url=https://localhost:%d", gatewayHTTPPort),
+ "--tls.client-auth-type=NoClientCert",
"--tls.server.cert-file=/var/run/tls/http/tls.crt",
"--tls.server.key-file=/var/run/tls/http/tls.key",
"--tls.healthchecks.server-ca-file=/var/run/ca/service-ca.crt",
@@ -613,6 +615,7 @@ func TestConfigureDeploymentForMode(t *testing.T) {
"--logs.write.endpoint=https://example.com",
fmt.Sprintf("--web.healthchecks.url=https://localhost:%d", gatewayHTTPPort),
"--logs.tls.ca-file=/var/run/ca/service-ca.crt",
+ "--tls.client-auth-type=NoClientCert",
"--tls.server.cert-file=/var/run/tls/http/tls.crt",
"--tls.server.key-file=/var/run/tls/http/tls.key",
"--tls.healthchecks.server-ca-file=/var/run/ca/service-ca.crt",
diff --git a/operator/internal/manifests/openshift/configure.go b/operator/internal/manifests/openshift/configure.go
index 9cd489c735c55..c02bade5d1ebc 100644
--- a/operator/internal/manifests/openshift/configure.go
+++ b/operator/internal/manifests/openshift/configure.go
@@ -98,6 +98,7 @@ func ConfigureGatewayDeployment(
keyFilePath := path.Join(tlsDir, keyFile)
caFilePath := path.Join(caDir, caFile)
gwArgs = append(gwArgs,
+ "--tls.client-auth-type=NoClientCert",
fmt.Sprintf("--tls.server.cert-file=%s", certFilePath),
fmt.Sprintf("--tls.server.key-file=%s", keyFilePath),
fmt.Sprintf("--tls.healthchecks.server-ca-file=%s", caFilePath),
|
operator
|
Disable client certificate authentication on gateway (#6594)
|
a38bba9d90bd4357544cc147eaca62bfb3357395
|
2025-03-06 00:16:43
|
renovate[bot]
|
fix(deps): update module golang.org/x/time to v0.11.0 (main) (#16571)
| false
|
diff --git a/go.mod b/go.mod
index 206150168d014..02b33da62b6e9 100644
--- a/go.mod
+++ b/go.mod
@@ -103,7 +103,7 @@ require (
golang.org/x/net v0.36.0
golang.org/x/sync v0.12.0
golang.org/x/sys v0.31.0
- golang.org/x/time v0.10.0
+ golang.org/x/time v0.11.0
google.golang.org/api v0.223.0
google.golang.org/grpc v1.70.0
gopkg.in/yaml.v2 v2.4.0
diff --git a/go.sum b/go.sum
index 50bfe611eb872..7a7bf4d285d1d 100644
--- a/go.sum
+++ b/go.sum
@@ -1547,8 +1547,8 @@ golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
-golang.org/x/time v0.10.0 h1:3usCWA8tQn0L8+hFJQNgzpWbd89begxN66o1Ojdn5L4=
-golang.org/x/time v0.10.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
+golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
+golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
diff --git a/vendor/golang.org/x/time/rate/rate.go b/vendor/golang.org/x/time/rate/rate.go
index ec5f0cdd0c0e6..794b2e32bfaa2 100644
--- a/vendor/golang.org/x/time/rate/rate.go
+++ b/vendor/golang.org/x/time/rate/rate.go
@@ -85,7 +85,7 @@ func (lim *Limiter) Burst() int {
// TokensAt returns the number of tokens available at time t.
func (lim *Limiter) TokensAt(t time.Time) float64 {
lim.mu.Lock()
- _, tokens := lim.advance(t) // does not mutate lim
+ tokens := lim.advance(t) // does not mutate lim
lim.mu.Unlock()
return tokens
}
@@ -186,7 +186,7 @@ func (r *Reservation) CancelAt(t time.Time) {
return
}
// advance time to now
- t, tokens := r.lim.advance(t)
+ tokens := r.lim.advance(t)
// calculate new number of tokens
tokens += restoreTokens
if burst := float64(r.lim.burst); tokens > burst {
@@ -307,7 +307,7 @@ func (lim *Limiter) SetLimitAt(t time.Time, newLimit Limit) {
lim.mu.Lock()
defer lim.mu.Unlock()
- t, tokens := lim.advance(t)
+ tokens := lim.advance(t)
lim.last = t
lim.tokens = tokens
@@ -324,7 +324,7 @@ func (lim *Limiter) SetBurstAt(t time.Time, newBurst int) {
lim.mu.Lock()
defer lim.mu.Unlock()
- t, tokens := lim.advance(t)
+ tokens := lim.advance(t)
lim.last = t
lim.tokens = tokens
@@ -347,7 +347,7 @@ func (lim *Limiter) reserveN(t time.Time, n int, maxFutureReserve time.Duration)
}
}
- t, tokens := lim.advance(t)
+ tokens := lim.advance(t)
// Calculate the remaining number of tokens resulting from the request.
tokens -= float64(n)
@@ -380,10 +380,11 @@ func (lim *Limiter) reserveN(t time.Time, n int, maxFutureReserve time.Duration)
return r
}
-// advance calculates and returns an updated state for lim resulting from the passage of time.
+// advance calculates and returns an updated number of tokens for lim
+// resulting from the passage of time.
// lim is not changed.
// advance requires that lim.mu is held.
-func (lim *Limiter) advance(t time.Time) (newT time.Time, newTokens float64) {
+func (lim *Limiter) advance(t time.Time) (newTokens float64) {
last := lim.last
if t.Before(last) {
last = t
@@ -396,7 +397,7 @@ func (lim *Limiter) advance(t time.Time) (newT time.Time, newTokens float64) {
if burst := float64(lim.burst); tokens > burst {
tokens = burst
}
- return t, tokens
+ return tokens
}
// durationFromTokens is a unit conversion function from the number of tokens to the duration
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 47e9d462b68bc..378712539f19d 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -2065,8 +2065,8 @@ golang.org/x/text/secure/bidirule
golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
-# golang.org/x/time v0.10.0
-## explicit; go 1.18
+# golang.org/x/time v0.11.0
+## explicit; go 1.23.0
golang.org/x/time/rate
# golang.org/x/tools v0.29.0
## explicit; go 1.22.0
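The x/time v0.11.0 change above drops the unused time return value from `advance`; the token accounting itself is unchanged. A small Python transliteration of that logic (illustrative only):

```python
def advance(last: float, tokens: float, limit: float, burst: float, t: float) -> float:
    """Return the token count at time t, given state captured at `last`.

    Mirrors rate.Limiter.advance: elapsed seconds convert to tokens at
    `limit` tokens/second, capped at `burst`. Does not mutate state.
    """
    if t < last:
        last = t  # clock went backwards; avoid negative elapsed time
    elapsed = t - last
    tokens += elapsed * limit
    if tokens > burst:
        tokens = burst
    return tokens
```
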
|
fix
|
update module golang.org/x/time to v0.11.0 (main) (#16571)
|
687662ea0821a0d5721e53af257df231d28ff754
|
2020-06-23 02:41:51
|
Diana Payton
|
docs: Local install edits (#2220)
| false
|
diff --git a/docs/clients/README.md b/docs/clients/README.md
index bb54fd657b4e4..900586c137f2c 100644
--- a/docs/clients/README.md
+++ b/docs/clients/README.md
@@ -1,13 +1,13 @@
-# Loki Clients
+# Loki clients
Loki supports the following official clients for sending logs:
-1. [Promtail](./promtail/README.md)
-2. [Docker Driver](./docker-driver/README.md)
-3. [Fluentd](./fluentd/README.md)
-4. [Fluent Bit](../../cmd/fluent-bit/README.md)
+- [Promtail](./promtail/README.md)
+- [Docker Driver](./docker-driver/README.md)
+- [Fluentd](./fluentd/README.md)
+- [Fluent Bit](../../cmd/fluent-bit/README.md)
-## Picking a Client
+## Picking a client
While all clients can be used simultaneously to cover multiple use cases, which
client is initially picked to send logs depends on your use case.
@@ -41,9 +41,9 @@ and you already have configured `Parser` and `Filter` plugins.
Fluentd also works well for extracting metrics from logs when using its
Prometheus plugin.
-# Unofficial Clients
+# Unofficial clients
-Please note that the Loki API is not stable yet and breaking changes may occur
+Please note that the Loki API is not stable yet, so breaking changes might occur
when using or writing a third-party client.
- [promtail-client](https://github.com/afiskon/promtail-client) (Go)
diff --git a/docs/clients/promtail/README.md b/docs/clients/promtail/README.md
index 51e5fed0716e9..f503928d41f98 100644
--- a/docs/clients/promtail/README.md
+++ b/docs/clients/promtail/README.md
@@ -6,14 +6,14 @@ deployed to every machine that has applications needed to be monitored.
It primarily:
-1. Discovers targets
-2. Attaches labels to log streams
-3. Pushes them to the Loki instance.
+- Discovers targets
+- Attaches labels to log streams
+- Pushes them to the Loki instance.
Currently, Promtail can tail logs from two sources: local log files and the
systemd journal (on AMD64 machines only).
-## Log File Discovery
+## Log file discovery
Before Promtail can ship any data from log files to Loki, it needs to find out
information about its environment. Specifically, this means discovering
@@ -32,12 +32,12 @@ Just like Prometheus, `promtail` is configured using a `scrape_configs` stanza.
drop, and the final metadata to attach to the log line. Refer to the docs for
[configuring Promtail](configuration.md) for more details.
-## Receiving Logs From Syslog
+## Receiving logs From Syslog
When the [Syslog Target](./scraping.md#syslog-target) is being used, logs
can be written with the syslog protocol to the configured port.
-## Labeling and Parsing
+## Labeling and parsing
During service discovery, metadata is determined (pod name, filename, etc.) that
may be attached to the log line as a label for easier identification when
diff --git a/docs/clients/promtail/installation.md b/docs/clients/promtail/installation.md
index c32d16e359729..fdf2dc3fc6f88 100644
--- a/docs/clients/promtail/installation.md
+++ b/docs/clients/promtail/installation.md
@@ -1,4 +1,4 @@
-# Installing Promtail
+# Install Promtail
Promtail is distributed as a [binary](#binary), [Docker container](#docker), and
[Helm chart](#helm).
@@ -12,7 +12,7 @@ Every release includes binaries for Promtail which can be found on the
```bash
# modify tag to most recent version
-$ docker pull grafana/promtail:1.5.0
+docker pull grafana/promtail:1.5.0
```
## Helm
@@ -23,13 +23,13 @@ Make sure that Helm is
Then you can add Loki's chart repository to Helm:
```bash
-$ helm repo add loki https://grafana.github.io/loki/charts
+helm repo add loki https://grafana.github.io/loki/charts
```
And the chart repository can be updated by running:
```bash
-$ helm repo update
+helm repo update
```
Finally, Promtail can be deployed with:
@@ -40,7 +40,7 @@ $ helm upgrade --install promtail loki/promtail --set "loki.serviceName=loki"
## Kubernetes
-### DaemonSet (Recommended)
+### DaemonSet (recommended)
A `DaemonSet` will deploy `promtail` on every node within a Kubernetes cluster.
diff --git a/docs/getting-started/get-logs-into-loki.md b/docs/getting-started/get-logs-into-loki.md
new file mode 100644
index 0000000000000..572fadd69a7f8
--- /dev/null
+++ b/docs/getting-started/get-logs-into-loki.md
@@ -0,0 +1,74 @@
+# Get logs into Loki
+
+After you [install and run Loki](./installation/local.md), you probably want to get logs from other applications into it.
+
+To get application logs into Loki, you need to edit the [Promtail](./clients/promtail/README.md) config file.
+
+Detailed information about configuring Promtail is available in [Promtail configuration](./clients/promtail/configuration.md).
+
+The following instructions should help you get started.
+
+1. If you haven't already, download a Promtail configuration file. Keep track of where it is, because you will need to cite it when you run the binary.
+
+```
+wget https://github.com/grafana/loki/blob/master/cmd/promtail/promtail-local-config.yaml
+```
+
+2. Open the config file in the text editor of your choice. It should look similar to this:
+
+```
+server:
+ http_listen_port: 9080
+ grpc_listen_port: 0
+
+positions:
+ filename: /tmp/positions.yaml
+
+clients:
+ - url: http://loki:3100/loki/api/v1/push
+
+scrape_configs:
+- job_name: system
+ static_configs:
+ - targets:
+ - localhost
+ labels:
+ job: varlogs
+ __path__: /var/log/*log
+```
+
+ The seven lines under `scrape_configs` are what send the logs that Loki generates to Loki, which then outputs them in the command line and http://localhost:3100/metrics.
+
+3. Copy the seven lines under `scrape_configs`, and then paste them under the original job (you can also just edit the original seven lines).
+
+ Below is an example that sends logs from a default Grafana installation to Loki. We updated the following fields:
+ - job_name - This differentiates the logs collected from other log groups.
+ - targets - Optional for static_configs, however is often defined because in older versions of Promtail it was not optional. This was an artifact from directly using the Prometheus service discovery code which required this entry.
+ - labels - Static label to apply to every log line scraped by this definition. Good examples would be environment name, job name, or app name.
+ - __path__ - The path to where the logs are stored that I want Loki to consume.
+
+```
+- job_name: grafana
+ static_configs:
+ - targets:
+ - grafana
+ labels:
+ job: grafana
+ __path__: "C:/Program Files/GrafanaLabs/grafana/data/log/grafana.log"
+```
+
+4. Enter the following command to run Promtail. Examples below assume you have put the config file in the same directory as the binary.
+
+**Windows**
+
+```
+`.\promtail-windows-amd64.exe --config.file=promtail-local-config.yaml`
+```
+
+**Linux**
+
+```
+./promtail-linux-amd64 -config.file=promtail-local-config.yaml
+```
+
+You should now see your application logs. If you are using Grafana, you might need to refresh your instance in order to see the logs.
diff --git a/docs/installation/README.md b/docs/installation/README.md
index c6655b7cede11..9ba4e0b52fcda 100644
--- a/docs/installation/README.md
+++ b/docs/installation/README.md
@@ -1,6 +1,21 @@
-# Installing Loki
+# Install Loki
-1. [Installing using Tanka (recommended)](./tanka.md)
-2. [Installing through Helm](./helm.md)
-3. [Installing through Docker or Docker Compose](./docker.md)
-4. [Installing locally](./local.md)
+## Installation methods
+
+Instructions for different methods of installing Loki and Promtail.
+
+- [Install using Tanka (recommended)](./tanka.md)
+- [Install through Helm](./helm.md)
+- [Install through Docker or Docker Compose](./docker.md)
+- [Install and run locally](./local.md)
+- [Install from source](./install-from-source.md)
+
+## General process
+
+In order to run Loki, you must:
+
+1. Download and install both Loki and Promtail.
+1. Download config files for both programs.
+1. Start Loki.
+1. Update the Promtail config file to get your logs into Loki.
+1. Start Promtail.
diff --git a/docs/installation/docker.md b/docs/installation/docker.md
index 16fde9d577b15..3c8e077149e3e 100644
--- a/docs/installation/docker.md
+++ b/docs/installation/docker.md
@@ -1,7 +1,7 @@
-# Installing Loki with Docker or Docker Compose
+# Install Loki with Docker or Docker Compose
-You can install Loki with Docker or Docker Compose for evaluating, testing, or developing Loki.
-For production, we recommend Tanka or Helm.
+You can install Loki and Promtail with Docker or Docker Compose if you are evaluating, testing, or developing Loki.
+For production, we recommend installing with Tanka or Helm.
## Prerequisites
@@ -25,8 +25,7 @@ When finished, `loki-config.yaml` and `promtail-config.yaml` are downloaded in t
Navigate to http://localhost:3100/metrics to view the metrics and http://localhost:3100/ready for readiness.
-As of v1.5.0, image is configured to run by default as user loki with UID `10001` and GID `10001`. You can use a different user, specially if you are using bind mounts, by specifying uid with docker run command
-by specifying `--user=UID` with numeric UID suited to your needs.
+As of v1.5.0, image is configured to run by default as user loki with UID `10001` and GID `10001`. You can use a different user, specially if you are using bind mounts, by specifying the UID with a `docker run` command and using `--user=UID` with numeric UID suited to your needs.
**Windows**
@@ -46,7 +45,9 @@ Navigate to http://localhost:3100/metrics to view the output.
## Install with Docker Compose
+Run the following commands in your command line. They work for Windows or Linux systems.
+
```bash
-$ wget https://raw.githubusercontent.com/grafana/loki/v1.5.0/production/docker-compose.yaml -O docker-compose.yaml
-$ docker-compose -f docker-compose.yaml up
+wget https://raw.githubusercontent.com/grafana/loki/v1.5.0/production/docker-compose.yaml -O docker-compose.yaml
+docker-compose -f docker-compose.yaml up
```
diff --git a/docs/installation/helm.md b/docs/installation/helm.md
index b082228301579..31d9f70bf4cbd 100644
--- a/docs/installation/helm.md
+++ b/docs/installation/helm.md
@@ -1,19 +1,20 @@
-# Installing Loki with Helm
+# Install Loki with Helm
## Prerequisites
Make sure you have Helm [installed](https://helm.sh/docs/using_helm/#installing-helm) and
-[deployed](https://helm.sh/docs/using_helm/#installing-tiller) to your cluster. Then add
-[Loki's chart repository](https://github.com/grafana/loki/tree/master/production/helm/loki) to Helm:
+[deployed](https://helm.sh/docs/using_helm/#installing-tiller) to your cluster.
+
+Add [Loki's chart repository](https://github.com/grafana/loki/tree/master/production/helm/loki) to Helm:
```bash
-$ helm repo add loki https://grafana.github.io/loki/charts
+helm repo add loki https://grafana.github.io/loki/charts
```
-You can update the chart repository by running:
+To update the chart repository, run:
```bash
-$ helm repo update
+helm repo update
```
## Deploy Loki to your cluster
@@ -21,32 +22,32 @@ $ helm repo update
### Deploy with default config
```bash
-$ helm upgrade --install loki loki/loki-stack
+helm upgrade --install loki loki/loki-stack
```
### Deploy in a custom namespace
```bash
-$ helm upgrade --install loki --namespace=loki loki/loki
+helm upgrade --install loki --namespace=loki loki/loki
```
### Deploy with custom config
```bash
-$ helm upgrade --install loki loki/loki --set "key1=val1,key2=val2,..."
+helm upgrade --install loki loki/loki --set "key1=val1,key2=val2,..."
```
### Deploy Loki Stack (Loki, Promtail, Grafana, Prometheus)
```bash
-$ helm upgrade --install loki loki/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false
+helm upgrade --install loki loki/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false
```
### Deploy Loki Stack (Loki, Fluent Bit, Grafana, Prometheus)
```bash
-$ helm upgrade --install loki loki/loki-stack \
- --set fluent-bit.enabled=true,promtail.enabled=false,grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false
+helm upgrade --install loki loki/loki-stack \
+ --set fluent-bit.enabled=true,promtail.enabled=false,grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false
```
## Deploy Grafana to your cluster
@@ -54,19 +55,19 @@ $ helm upgrade --install loki loki/loki-stack \
To install Grafana on your cluster with Helm, use the following command:
```bash
-$ helm install stable/grafana -n loki-grafana
+helm install stable/grafana -n loki-grafana
```
To get the admin password for the Grafana pod, run the following command:
```bash
-$ kubectl get secret --namespace <YOUR-NAMESPACE> loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
+kubectl get secret --namespace <YOUR-NAMESPACE> loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
To access the Grafana UI, run the following command:
```bash
-$ kubectl port-forward --namespace <YOUR-NAMESPACE> service/loki-grafana 3000:80
+kubectl port-forward --namespace <YOUR-NAMESPACE> service/loki-grafana 3000:80
```
Navigate to `http://localhost:3000` and login with `admin` and the password
@@ -79,8 +80,7 @@ If Loki and Promtail are deployed on different clusters you can add an Ingress
in front of Loki. By adding a certificate you create an HTTPS endpoint. For
extra security you can also enable Basic Authentication on the Ingress.
-In Promtail, set the following values to communicate using HTTPS and basic
-authentication:
+In Promtail, set the following values to communicate using HTTPS and basic authentication:
```yaml
loki:
diff --git a/docs/installation/install-from-source.md b/docs/installation/install-from-source.md
new file mode 100644
index 0000000000000..53584369705e4
--- /dev/null
+++ b/docs/installation/install-from-source.md
@@ -0,0 +1,26 @@
+# Build from source
+
+In order to build Loki manually, you need to clone the GitHub repo and then `make Loki`.
+
+## Prerequisites
+
+- Go 1.13 or later
+- Make
+- Docker (for updating protobuf files and yacc files)
+
+## Build manually on your local system
+
+Clone Loki to `$GOPATH/src/github.com/grafana/loki`:
+
+```bash
+git clone https://github.com/grafana/loki $GOPATH/src/github.com/grafana/loki
+```
+
+Then change into that directory and run `make loki`:
+
+```bash
+cd $GOPATH/src/github.com/grafana/loki
+make loki
+```
+
+A file at ./cmd/loki/loki will be created and is the final built binary.
diff --git a/docs/installation/local.md b/docs/installation/local.md
index c75c6f8878d5e..2a65b5a0716e0 100644
--- a/docs/installation/local.md
+++ b/docs/installation/local.md
@@ -1,11 +1,43 @@
-# Installing Loki Locally
+# Install and run Loki locally
-## Release Binaries
+In order to log events with Loki, you must download and install both Promtail and Loki.
+- Loki is the logging engine.
+- Promtail sends logs to Loki.
+
+## Install and run
+
+1. Navigate to the [release page](https://github.com/grafana/loki/releases/).
+2. Scroll down to the Assets section under the version that you want to install.
+3. Download the Loki and Promtail .zip files that correspond to your system.
+ **Note:** Do not download LogCLI or Loki Canary at this time. [LogCLI](./getting-started/logcli.md) allows you to run Loki queries in a command line interface. [Loki Canary](./operations/loki-canary.md) is a tool to audit Loki performance.
+4. Unzip the package contents into the same directory. This is where the two programs will run.
+5. In the command line, change directory (`cd` on most systems) to the directory with Loki and Promtail. Copy and paste the commands below into your command line to download generic configuration files:
+```
+wget https://raw.githubusercontent.com/grafana/loki/master/cmd/loki/loki-local-config.yaml
+wget https://raw.githubusercontent.com/grafana/loki/master/cmd/promtail/promtail-local-config.yaml
+```
+6. Enter the following command to start Loki:
+
+**Windows**
+
+```
+.\loki-windows-amd64.exe --config.file=loki-local-config.yaml
+```
+
+**Linux**
+```
+./promtail-linux-amd64 -config.file=promtail-local-config.yaml
+```
+
+Loki runs and displays Loki logs in your command line and on http://localhost:3100/metrics.
+
+Congratulations, Loki is installed and running! Next, you might want edit the Promtail config file to [get logs into Loki](./getting-started/get-logs-into-loki.md).
+
+## Release binaries - openSUSE Linux only
Every release includes binaries for Loki which can be found on the
[Releases page](https://github.com/grafana/loki/releases).
-
## Community openSUSE Linux packages
The community provides packages of Loki for openSUSE Linux. To install:
@@ -13,35 +45,9 @@ The community provides packages of Loki for openSUSE Linux. To install:
1. Add the repository `https://download.opensuse.org/repositories/security:/logging/`
to your system. For example, if you are using Leap 15.1, run
`sudo zypper ar https://download.opensuse.org/repositories/security:/logging/openSUSE_Leap_15.1/security:logging.repo ; sudo zypper ref`
-2. Install the Loki package with `zypper in loki`
-3. Enable the Loki and Promtail services:
+1. Install the Loki package with `zypper in loki`
+1. Enable the Loki and Promtail services:
- `systemd start loki && systemd enable loki`
- `systemd start promtail && systemd enable promtail`
-4. Modify the configuration files as needed: `/etc/loki/promtail.yaml` and
+1. Modify the configuration files as needed: `/etc/loki/promtail.yaml` and
`/etc/loki/loki.yaml`.
-
-## Manual Build
-
-### Prerequisites
-
-- Go 1.13 or later
-- Make
-- Docker (for updating protobuf files and yacc files)
-
-### Building
-
-Clone Loki to `$GOPATH/src/github.com/grafana/loki`:
-
-```bash
-$ git clone https://github.com/grafana/loki $GOPATH/src/github.com/grafana/loki
-```
-
-Then change into that directory and run `make loki`:
-
-```bash
-$ cd $GOPATH/src/github.com/grafana/loki
-$ make loki
-
-# A file at ./cmd/loki/loki will be created and is the
-# final built binary.
-```
diff --git a/docs/installation/tanka.md b/docs/installation/tanka.md
index 79b1be4e576f9..3a8564e5452a4 100644
--- a/docs/installation/tanka.md
+++ b/docs/installation/tanka.md
@@ -1,4 +1,4 @@
-# Installing Loki with Tanka
+# Install Loki with Tanka
[Tanka](https://tanka.dev) is a reimplementation of
[Ksonnet](https://ksonnet.io) that Grafana Labs created after Ksonnet was
@@ -6,7 +6,7 @@ deprecated. Tanka is used by Grafana Labs to run Loki in production.
## Prerequisites
-Grab the latest version of Tanka (at least version v0.5.0) for the `tk env`
+Install the latest version of Tanka (at least version v0.5.0) for the `tk env`
commands. Prebuilt binaries for Tanka can be found at the [Tanka releases
URL](https://github.com/grafana/tanka/releases).
@@ -23,7 +23,7 @@ tk env add environments/loki --namespace=loki --server=<Kubernetes API server>
## Deploying
-Grab the Loki & Promtail module using `jb`:
+Download and install the Loki and Promtail module using `jb`:
```bash
go get -u github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
@@ -75,8 +75,7 @@ loki + promtail + gateway {
}
```
-Notice that `container_root_path` is your own data root for the Docker Daemon,
-run `docker info | grep "Root Dir"` to get it.
+Notice that `container_root_path` is your own data root for the Docker Daemon.
+Run `docker info | grep "Root Dir"` to get the root path.
-Run `tk show environments/loki` to see the manifests that will be deployed to the cluster and
-finally run `tk apply environments/loki` to deploy it.
+Run `tk show environments/loki` to see the manifests that will be deployed to the cluster. Run `tk apply environments/loki` to deploy the manifests.
|
docs
|
Local install edits (#2220)
|
8788ddb8d6bcc00aada2a8c4231bd04e2d890111
|
2021-06-17 20:18:44
|
Danny Kopping
|
ruler: documentation for recording rules (#3851)
| false
|
diff --git a/docs/sources/configuration/_index.md b/docs/sources/configuration/_index.md
index f09ff3e33cad6..fbac42add351d 100644
--- a/docs/sources/configuration/_index.md
+++ b/docs/sources/configuration/_index.md
@@ -566,6 +566,61 @@ storage:
# CLI flag: -ruler.storage.local.directory
[directory: <filename> | default = ""]
+# Remote-write configuration to send rule samples to a Prometheus remote-write endpoint.
+remote_write:
+ # Enable remote-write functionality.
+ # CLI flag: -ruler.remote-write.enabled
+ [enabled: <boolean> | default = false]
+
+ client:
+ # The URL of the endpoint to send samples to.
+ url: <string>
+
+ # Timeout for requests to the remote write endpoint.
+ [remote_timeout: <duration> | default = 30s]
+
+ # Custom HTTP headers to be sent along with each remote write request.
+ # Be aware that headers that are set by Prometheus itself can't be overwritten.
+ headers:
+ [<string>: <string> ...]
+
+ # HTTP proxy server to use to connect to the targets.
+ [proxy_url: <string>]
+
+ # Sets the `Authorization` header on every remote write request with the
+ # configured username and password.
+ # password and password_file are mutually exclusive.
+ basic_auth:
+ [username: <string>]
+ [password: <secret>]
+ [password_file: <string>]
+
+ # `Authorization` header configuration.
+ authorization:
+ # Sets the authentication type.
+ [type: <string> | default: Bearer]
+ # Sets the credentials. It is mutually exclusive with
+ # `credentials_file`.
+ [credentials: <secret>]
+ # Sets the credentials with the credentials read from the configured file.
+ # It is mutually exclusive with `credentials`.
+ [credentials_file: <filename>]
+
+ tls_config:
+ # CA certificate to validate API server certificate with.
+ [ca_file: <filename>]
+
+ # Certificate and key files for client cert authentication to the server.
+ [cert_file: <filename>]
+ [key_file: <filename>]
+
+ # ServerName extension to indicate the name of the server.
+ # https://tools.ietf.org/html/rfc4366#section-3.1
+ [server_name: <string>]
+
+ # Disable validation of the server certificate.
+ [insecure_skip_verify: <boolean>]
+
# File path to store temporary rule files
# CLI flag: -ruler.rule-path
[rule_path: <filename> | default = "/rules"]
@@ -1754,6 +1809,10 @@ logs in Loki.
# If no rule is matched the `retention_period` is used.
[retention_stream: <array> | default = none]
+# Capacity of remote-write queues; if a queue exceeds its capacity it will evict oldest samples.
+# CLI flag: -ruler.remote-write.queue-capacity
+[ruler_remote_write_queue_capacity: <int> | default = 10000]
+
# Feature renamed to 'runtime configuration', flag deprecated in favor of -runtime-config.file (runtime_config.file in YAML).
# CLI flag: -limits.per-user-override-config
[per_tenant_override_config: <string>]
diff --git a/docs/sources/alerting/_index.md b/docs/sources/rules/_index.md
similarity index 59%
rename from docs/sources/alerting/_index.md
rename to docs/sources/rules/_index.md
index 743eaebc984a3..e09da85caaa2c 100644
--- a/docs/sources/alerting/_index.md
+++ b/docs/sources/rules/_index.md
@@ -1,13 +1,17 @@
---
-title: Alerting
+aliases:
+ - /alerting/
+title: Alerting and Recording Rules
weight: 700
---
-# Alerting
+# Rules and the Ruler
-Loki includes a component called the Ruler, adapted from our upstream project, Cortex. The Ruler is responsible for continually evaluating a set of configurable queries and then alerting when certain conditions happen, e.g. a high percentage of error logs.
+Loki includes a component called the Ruler, adapted from our upstream project, Cortex. The Ruler is responsible for continually evaluating a set of configurable queries and performing an action based on the result.
-First, ensure the Ruler component is enabled. The following is a basic configuration which loads rules from configuration files:
+This example configuration sources rules from a local disk.
+
+[Ruler storage](#ruler-storage) provides further details.
```yaml
ruler:
@@ -24,72 +28,19 @@ ruler:
```
-## Prometheus Compatible
-
-When running the Ruler (which runs by default in the single binary), Loki accepts rules files and then schedules them for continual evaluation. These are _Prometheus compatible_! This means the rules file has the same structure as in [Prometheus' Alerting Rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/), except that the rules specified are in LogQL.
-
-Let's see what that looks like:
-
-The syntax of a rule file is:
-
-```yaml
-groups:
- [ - <rule_group> ]
-```
-
-A simple example file could be:
+We support two kinds of rules: [alerting](#alerting-rules) rules and [recording](#recording-rules) rules.
-```yaml
-groups:
- - name: example
- rules:
- - alert: HighThroughputLogStreams
- expr: sum by(container) (rate({job=~"loki-dev/.*"}[1m])) > 1000
- for: 2m
-```
+## Alerting Rules
-### `<rule_group>`
+We support [Prometheus-compatible](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) alerting rules. From Prometheus' documentation:
-```yaml
-# The name of the group. Must be unique within a file.
-name: <string>
+> Alerting rules allow you to define alert conditions based on Prometheus expression language expressions and to send notifications about firing alerts to an external service.
-# How often rules in the group are evaluated.
-[ interval: <duration> | default = Ruler.evaluation_interval || 1m ]
-
-rules:
- [ - <rule> ... ]
-```
-
-### `<rule>`
-
-The syntax for alerting rules is (see the LogQL [Metric Queries](https://grafana.com/docs/loki/latest/logql/#metric-queries) for more details):
-
-```yaml
-# The name of the alert. Must be a valid label value.
-alert: <string>
-
-# The LogQL expression to evaluate (must be an instant vector). Every evaluation cycle this is
-# evaluated at the current time, and all resultant time series become
-# pending/firing alerts.
-expr: <string>
-
-# Alerts are considered firing once they have been returned for this long.
-# Alerts which have not yet fired for long enough are considered pending.
-[ for: <duration> | default = 0s ]
-
-# Labels to add or overwrite for each alert.
-labels:
- [ <labelname>: <tmpl_string> ]
-
-# Annotations to add to each alert.
-annotations:
- [ <labelname>: <tmpl_string> ]
-```
+Loki alerting rules are exactly the same, except they use LogQL for their expressions.
### Example
-A full-fledged example of a rules file might look like:
+A complete example of a rules file:
```yaml
groups:
@@ -117,25 +68,96 @@ groups:
severity: critical
```
-## Use cases
+## Recording Rules
-The Ruler's Prometheus compatibility further accentuates the marriage between metrics and logs. For those looking to get started alerting based on logs, or wondering why this might be useful, here are a few use cases we think fit very well.
+We support [Prometheus-compatible](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules) recording rules. From Prometheus' documentation:
-### We aren't using metrics yet
+> Recording rules allow you to precompute frequently needed or computationally expensive expressions and save their result as a new set of time series.
+
+> Querying the precomputed result will then often be much faster than executing the original expression every time it is needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh.
+
+Loki allows you to run [_metric queries_](https://grafana.com/docs/loki/latest/logql/#metric-queries) over your logs, which means
+that you can derive a numeric aggregation from your logs, like calculating the number of requests over time from your NGINX access log.
+
+### Example
-Many nascent projects, apps, or even companies may not have a metrics backend yet. We tend to add logging support before metric support, so if you're in this stage, alerting based on logs can help bridge the gap. It's easy to start building Loki alerts for things like _the percentage of error logs_ such as the example from earlier:
```yaml
-- alert: HighPercentageError
- expr: |
- sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
- /
- sum(rate({app="foo", env="production"}[5m])) by (job)
- > 0.05
+name: NginxRules
+interval: 1m
+rules:
+ - record: nginx:requests:rate1m
+ expr: |
+ sum(
+ rate({container="nginx"}[1m])
+ )
+ labels:
+ cluster: "us-central1"
```
+This query (`expr`) will be executed every minute (`interval`), and its result will be stored under the metric
+name we have defined (`record`). This metric, `nginx:requests:rate1m`, can now be sent to Prometheus, where it will be stored
+just like any other metric.
+
+### Remote-Write
+
+With recording rules, you can run these metric queries continually on an interval, and have the resulting metrics written
+to a Prometheus-compatible remote-write endpoint. They produce Prometheus metrics from log entries.
+
+At the time of writing, these are the compatible backends that support this:
+
+- [Prometheus](https://prometheus.io/docs/prometheus/latest/disabled_features/#remote-write-receiver) (`>=v2.25.0`):
+  Prometheus is generally a pull-based system, but since `v2.25.0` it has also allowed metrics to be written directly to it.
+- [Cortex](https://cortexmetrics.io/docs/api/#remote-write)
+- [Thanos (`Receiver`)](https://thanos.io/tip/components/receive.md/)
+
+Here is an example remote-write configuration for sending to a local Prometheus instance:
+
+```yaml
+ruler:
+ ... other settings ...
+
+ remote_write:
+ enabled: true
+ client:
+ url: http://localhost:9090/api/v1/write
+```
+
+Further configuration options can be found under [ruler_config](/configuration#ruler_config).
+
+### Resilience and Durability
+
+Given the above remote-write configuration, one needs to take into account what would happen if the remote-write receiver
+becomes unavailable.
+
+The Ruler component provides some durability guarantees by buffering all outgoing writes in an in-memory queue. This queue
+holds all metric samples that are due to be written to the remote-write receiver, and while that receiver is down, the buffer
+will grow in size.
+
+Once the queue is full, the oldest samples will be evicted from the queue. The size of this queue is controllable globally,
+or on a per-tenant basis, with the [`ruler_remote_write_queue_capacity`](/configuration#limits_config) limit setting. By default, this value is set to 10000 samples.
+
+**NOTE**: this queue only exists in-memory at this time; there is no Write-Ahead Log (WAL) functionality available yet.
+This means that if your Ruler instance crashes, all pending metric samples in the queue that have not yet been written will be lost.
+
+### Operational Considerations
+
+Metrics are available to monitor recording rule evaluations and writes.
+
+| Metric | Description |
+|---|---|
+| `recording_rules_samples_queued_current` | Number of samples queued to be remote-written. |
+| `recording_rules_samples_queued_total` | Total number of samples queued. |
+| `recording_rules_samples_queue_capacity` | Number of samples that can be queued before eviction of the oldest samples occurs. |
+| `recording_rules_samples_evicted_total` | Number of samples evicted from queue because the queue is full. |
+| `recording_rules_remote_write_errors` | Number of samples that failed to be remote-written due to error. |
+
+## Use cases
+
+The Ruler's Prometheus compatibility further accentuates the marriage between metrics and logs. For those looking to get started with metrics and alerts based on logs, or wondering why this might be useful, here are a few use cases we think fit very well.
+
### Black box monitoring
-We don't always control the source code of applications we run. Think load balancers and the myriad components (both open source and closed third-party) that support our applications; it's a common problem that these don't expose a metric you want (or any metrics at all). How then, can we bring them into our observability stack in order to monitor them effectively? Alerting based on logs is a great answer for these problems.
+We don't always control the source code of applications we run. Load balancers and a myriad of other components, both open source and closed third-party, support our applications while they don't expose the metrics we want. Some don't expose any metrics at all. Loki's alerting and recording rules can produce metrics and alert on the state of the system, bringing the components into our observability stack by using the logs. This is an incredibly powerful way to introduce advanced observability into legacy architectures.
### Event alerting
@@ -162,7 +184,7 @@ Creating these alerts in LogQL is attractive because these metrics can be extrac
## Interacting with the Ruler
-Because the rule files are identical to Prometheus rule files, we can interact with the Loki Ruler via [`cortex-tool`](https://github.com/grafana/cortex-tools#rules). The CLI is in early development, but works alongside both Loki and cortex. Make sure to pass the `--backend=loki` argument to commands when using it with Loki.
+Because the rule files are identical to Prometheus rule files, we can interact with the Loki Ruler via [`cortextool`](https://github.com/grafana/cortex-tools#rules). The CLI is in early development, but it works with both Loki and Cortex. Pass the `--backend=loki` option when using it with Loki.
> **Note:** Not all commands in cortextool currently support Loki.
@@ -275,8 +297,8 @@ Yaml files are expected to be [Prometheus compatible](#Prometheus_Compatible) bu
There are a few things coming to increase the robustness of this service. In no particular order:
-- Recording rules.
-- Backend metric stores adapters for generated alert and recording rule data. The first will likely be Cortex, as Loki is built atop it.
+- WAL for recording rules.
+- Backend metric stores adapters for generated alert rule data.
## Misc Details: Metrics backends vs in-memory
| ruler | documentation for recording rules (#3851) |
| e81345ec8b076e06bd43d48c47396d88fef72417 | 2024-08-01 20:59:38 | J Stickler | docs: fix broken links due to Alloy docs reorg (#13715) | false |
diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md
index 3221f5489f917..11401975b83ac 100644
--- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md
+++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md
@@ -392,7 +392,7 @@ Head back to where you started from to continue with the Loki documentation: [Lo
For more information on Grafana Alloy, refer to the following resources:
- [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/)
-- [Grafana Alloy common task examples](https://grafana.com/docs/alloy/latest/tasks/)
+- [Grafana Alloy common task examples](https://grafana.com/docs/alloy/latest/collect/)
- [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/)
## Complete metrics, logs, traces, and profiling example
diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md
index 1161ba160a3d5..fc7c948bdd4ea 100644
--- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md
+++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md
@@ -279,7 +279,7 @@ Head back to where you started from to continue with the Loki documentation: [Lo
For more information on Grafana Alloy, refer to the following resources:
- [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/)
-- [Grafana Alloy common task examples](https://grafana.com/docs/alloy/latest/tasks/)
+- [Grafana Alloy common task examples](https://grafana.com/docs/alloy/latest/collect/)
- [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/)
## Complete metrics, logs, traces, and profiling example
| docs | fix broken links due to Alloy docs reorg (#13715) |
| f2c2a22bdcc6f2488eb326c465cadbe2d3087082 | 2024-11-28 19:24:04 | Paul Rogers | chore: Preparation for incoming static code analysis CI check (#15164) | false |
diff --git a/clients/cmd/docker-driver/driver.go b/clients/cmd/docker-driver/driver.go
index 8783b5b92505f..c5874d63eeba1 100644
--- a/clients/cmd/docker-driver/driver.go
+++ b/clients/cmd/docker-driver/driver.go
@@ -88,7 +88,7 @@ func (d *driver) StartLogging(file string, logCtx logger.Info) error {
var jsonl logger.Logger
if !noFile {
- if err := os.MkdirAll(folder, 0755); err != nil {
+ if err := os.MkdirAll(folder, 0750); err != nil {
return errors.Wrap(err, "error setting up logger dir")
}
diff --git a/clients/cmd/docker-driver/main.go b/clients/cmd/docker-driver/main.go
index 06d90b81bda56..f83cff7407b3b 100644
--- a/clients/cmd/docker-driver/main.go
+++ b/clients/cmd/docker-driver/main.go
@@ -40,7 +40,7 @@ func main() {
pprofPort := os.Getenv("PPROF_PORT")
if pprofPort != "" {
go func() {
- err := http.ListenAndServe(fmt.Sprintf(":%s", pprofPort), nil)
+ err := http.ListenAndServe(fmt.Sprintf(":%s", pprofPort), nil) //#nosec G114 -- This is a debug feature that must be intentionally enabled and is not used in prod, DOS is not a concern.
logger.Log("msg", "http server stopped", "err", err)
}()
}
diff --git a/clients/cmd/fluent-bit/dque.go b/clients/cmd/fluent-bit/dque.go
index 6e5746033254b..d1e9bc7d33809 100644
--- a/clients/cmd/fluent-bit/dque.go
+++ b/clients/cmd/fluent-bit/dque.go
@@ -59,7 +59,7 @@ func newDque(cfg *config, logger log.Logger, metrics *client.Metrics) (client.Cl
logger: log.With(logger, "component", "queue", "name", cfg.bufferConfig.dqueConfig.queueName),
}
- err = os.MkdirAll(cfg.bufferConfig.dqueConfig.queueDir, 0644)
+ err = os.MkdirAll(cfg.bufferConfig.dqueConfig.queueDir, 0640)
if err != nil {
return nil, fmt.Errorf("cannot create queue directory: %s", err)
}
diff --git a/clients/pkg/promtail/promtail.go b/clients/pkg/promtail/promtail.go
index 73e52f21703e1..86c27d55d7727 100644
--- a/clients/pkg/promtail/promtail.go
+++ b/clients/pkg/promtail/promtail.go
@@ -1,7 +1,6 @@
package promtail
import (
- "crypto/md5"
"errors"
"fmt"
"os"
@@ -10,6 +9,8 @@ import (
"syscall"
"time"
+ "golang.org/x/crypto/sha3"
+
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
@@ -130,7 +131,8 @@ func (p *Promtail) reloadConfig(cfg *config.Config) error {
return errConfigNotChange
}
newConf := cfg.String()
- level.Info(p.logger).Log("msg", "Reloading configuration file", "md5sum", fmt.Sprintf("%x", md5.Sum([]byte(newConf))))
+ hash := sha3.Sum256([]byte(newConf))
+ level.Info(p.logger).Log("msg", "Reloading configuration file", "sha3sum", fmt.Sprintf("%x", hash))
if p.targetManagers != nil {
p.targetManagers.Stop()
}
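The md5-to-sha3 swap in `reloadConfig` above can be exercised standalone. Since `golang.org/x/crypto/sha3` is a third-party module, this self-contained sketch substitutes the standard library's `crypto/sha256`; the logging shape is otherwise the same:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// configDigest hashes the rendered configuration so that reloads can
// be logged with a stable fingerprint. The commit itself uses sha3's
// Sum256; sha256 stands in here so the demo needs only the stdlib.
func configDigest(conf string) string {
	hash := sha256.Sum256([]byte(conf))
	return fmt.Sprintf("%x", hash)
}

func main() {
	conf := "server:\n  http_listen_port: 3100\n"
	fmt.Println("msg", "Reloading configuration file", "sha256sum", configDigest(conf))
}
```

Either digest serves the same purpose as the removed md5 sum (change detection in logs), while avoiding flagging by static analyzers that reject weak hash primitives.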
diff --git a/clients/pkg/promtail/targets/kafka/authentication.go b/clients/pkg/promtail/targets/kafka/authentication.go
index a58d55589c629..6e6c70bf417b5 100644
--- a/clients/pkg/promtail/targets/kafka/authentication.go
+++ b/clients/pkg/promtail/targets/kafka/authentication.go
@@ -13,7 +13,7 @@ import (
func createTLSConfig(cfg promconfig.TLSConfig) (*tls.Config, error) {
tc := &tls.Config{
- InsecureSkipVerify: cfg.InsecureSkipVerify,
+ InsecureSkipVerify: cfg.InsecureSkipVerify, //#nosec G402 -- User has explicitly requested to disable TLS
ServerName: cfg.ServerName,
}
// load ca cert
diff --git a/clients/pkg/promtail/targets/testutils/testutils.go b/clients/pkg/promtail/targets/testutils/testutils.go
index b88e87b323dd1..1eaa54396176f 100644
--- a/clients/pkg/promtail/targets/testutils/testutils.go
+++ b/clients/pkg/promtail/targets/testutils/testutils.go
@@ -16,7 +16,7 @@ var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
func RandName() string {
b := make([]rune, 10)
for i := range b {
- b[i] = letters[randomGenerator.Intn(len(letters))]
+ b[i] = letters[randomGenerator.Intn(len(letters))] //#nosec G404 -- Generating random test data, fine.
}
return string(b)
}
diff --git a/cmd/chunks-inspect/main.go b/cmd/chunks-inspect/main.go
index e5d25a20b713b..47fb04123783f 100644
--- a/cmd/chunks-inspect/main.go
+++ b/cmd/chunks-inspect/main.go
@@ -112,7 +112,7 @@ func printFile(filename string, blockDetails, printLines, storeBlocks bool) {
}
func writeBlockToFile(data []byte, blockIndex int, filename string) {
- err := os.WriteFile(filename, data, 0644)
+ err := os.WriteFile(filename, data, 0640) // #nosec G306 -- this is fencing off the "other" permissions
if err != nil {
log.Println("Failed to store block", blockIndex, "to file", filename, "due to error:", err)
} else {
diff --git a/cmd/loki-canary/main.go b/cmd/loki-canary/main.go
index 304bb1b1c3e91..b89a68ef77ac7 100644
--- a/cmd/loki-canary/main.go
+++ b/cmd/loki-canary/main.go
@@ -208,8 +208,16 @@ func main() {
})
http.Handle("/metrics", promhttp.Handler())
go func() {
- err := http.ListenAndServe(":"+strconv.Itoa(*port), nil)
- if err != nil {
+ srv := &http.Server{
+ Addr: ":" + strconv.Itoa(*port),
+ Handler: nil, // uses default mux from http.Handle calls above
+ ReadTimeout: 120 * time.Second,
+ WriteTimeout: 120 * time.Second,
+ IdleTimeout: 120 * time.Second,
+ ReadHeaderTimeout: 120 * time.Second,
+ }
+ err := srv.ListenAndServe()
+ if err != nil && err != http.ErrServerClosed {
panic(err)
}
}()
diff --git a/cmd/migrate/main.go b/cmd/migrate/main.go
index 9b09a462538ed..6ff24256afac1 100644
--- a/cmd/migrate/main.go
+++ b/cmd/migrate/main.go
@@ -53,7 +53,7 @@ func main() {
flag.Parse()
go func() {
- log.Println(http.ListenAndServe("localhost:8080", nil))
+ log.Println(http.ListenAndServe("localhost:8080", nil)) //#nosec G114 -- This is only bound to localhost, not a plausible DOS vector.
}()
// Create a set of defaults
diff --git a/integration/client/client.go b/integration/client/client.go
index fb2a795f910fe..94c5f8e0b508e 100644
--- a/integration/client/client.go
+++ b/integration/client/client.go
@@ -237,7 +237,7 @@ func (c *Client) Get(path string) (*http.Response, error) {
// Get all the metrics
func (c *Client) Metrics() (string, error) {
url := fmt.Sprintf("%s/metrics", c.baseURL)
- res, err := http.Get(url)
+ res, err := http.Get(url) //#nosec G107 -- Intentionally taking user input from config
if err != nil {
return "", err
}
diff --git a/integration/cluster/cluster.go b/integration/cluster/cluster.go
index 57bc182d0c8cb..2d1c92037d05a 100644
--- a/integration/cluster/cluster.go
+++ b/integration/cluster/cluster.go
@@ -176,7 +176,7 @@ func New(logLevel level.Value, opts ...func(*Cluster)) *Cluster {
overridesFile := filepath.Join(sharedPath, "loki-overrides.yaml")
- err = os.WriteFile(overridesFile, []byte(`overrides:`), 0o777)
+ err = os.WriteFile(overridesFile, []byte(`overrides:`), 0640) // #nosec G306 -- this is fencing off the "other" permissions
if err != nil {
panic(fmt.Errorf("error creating overrides file: %w", err))
}
@@ -348,7 +348,7 @@ func (c *Component) writeConfig() error {
return fmt.Errorf("error getting merged config: %w", err)
}
- if err := os.WriteFile(configFile.Name(), mergedConfig, 0o644); err != nil {
+ if err := os.WriteFile(configFile.Name(), mergedConfig, 0640); err != nil { // #nosec G306 -- this is fencing off the "other" permissions
return fmt.Errorf("error writing config file: %w", err)
}
@@ -525,7 +525,7 @@ func (c *Component) SetTenantLimits(tenant string, limits validation.Limits) err
return err
}
- return os.WriteFile(c.overridesFile, config, 0o777)
+ return os.WriteFile(c.overridesFile, config, 0640) // #nosec G306 -- this is fencing off the "other" permissions
}
func (c *Component) GetTenantLimits(tenant string) validation.Limits {
diff --git a/integration/cluster/ruler.go b/integration/cluster/ruler.go
index fd7b1462dd98a..59845fd50324a 100644
--- a/integration/cluster/ruler.go
+++ b/integration/cluster/ruler.go
@@ -33,17 +33,17 @@ func (c *Component) WithTenantRules(tenantFilesMap map[string]map[string]string)
sharedPath := c.ClusterSharedPath()
rulesPath := filepath.Join(sharedPath, "rules")
- if err := os.Mkdir(rulesPath, 0755); err != nil {
+ if err := os.Mkdir(rulesPath, 0750); err != nil {
return fmt.Errorf("error creating rules path: %w", err)
}
for tenant, files := range tenantFilesMap {
for filename, file := range files {
path := filepath.Join(rulesPath, tenant)
- if err := os.Mkdir(path, 0755); err != nil {
+ if err := os.Mkdir(path, 0750); err != nil {
return fmt.Errorf("error creating tenant %s rules path: %w", tenant, err)
}
- if err := os.WriteFile(filepath.Join(path, filename), []byte(strings.TrimSpace(file)), 0644); err != nil {
+ if err := os.WriteFile(filepath.Join(path, filename), []byte(strings.TrimSpace(file)), 0640); err != nil { // #nosec G306 -- this is fencing off the "other" permissions
return fmt.Errorf("error creating rule file at path %s: %w", path, err)
}
}
diff --git a/operator/internal/manifests/storage/var.go b/operator/internal/manifests/storage/var.go
index 108d811412c3d..90def59358e5d 100644
--- a/operator/internal/manifests/storage/var.go
+++ b/operator/internal/manifests/storage/var.go
@@ -10,15 +10,15 @@ const (
// EnvAWSAccessKeyID is the environment variable to specify the AWS client id to access S3.
EnvAWSAccessKeyID = "AWS_ACCESS_KEY_ID"
// EnvAWSAccessKeySecret is the environment variable to specify the AWS client secret to access S3.
- EnvAWSAccessKeySecret = "AWS_ACCESS_KEY_SECRET"
+ EnvAWSAccessKeySecret = "AWS_ACCESS_KEY_SECRET" //#nosec G101 -- False positive
// EnvAWSSseKmsEncryptionContext is the environment variable to specify the AWS KMS encryption context when using type SSE-KMS.
EnvAWSSseKmsEncryptionContext = "AWS_SSE_KMS_ENCRYPTION_CONTEXT"
// EnvAWSRoleArn is the environment variable to specify the AWS role ARN secret for the federated identity workflow.
EnvAWSRoleArn = "AWS_ROLE_ARN"
// EnvAWSWebIdentityTokenFile is the environment variable to specify the path to the web identity token file used in the federated identity workflow.
- EnvAWSWebIdentityTokenFile = "AWS_WEB_IDENTITY_TOKEN_FILE"
+ EnvAWSWebIdentityTokenFile = "AWS_WEB_IDENTITY_TOKEN_FILE" //#nosec G101 -- False positive
// EnvAWSCredentialsFile is the environment variable to specify the path to the shared credentials file
- EnvAWSCredentialsFile = "AWS_SHARED_CREDENTIALS_FILE"
+ EnvAWSCredentialsFile = "AWS_SHARED_CREDENTIALS_FILE" //#nosec G101 -- False positive
// EnvAWSSdkLoadConfig is the environment that enabled the AWS SDK to enable the shared credentials file to be loaded
EnvAWSSdkLoadConfig = "AWS_SDK_LOAD_CONFIG"
// EnvAzureStorageAccountName is the environment variable to specify the Azure storage account name to access the container.
@@ -34,7 +34,7 @@ const (
// EnvAzureFederatedTokenFile is the environment variable used to store the path to the Managed Identity token.
EnvAzureFederatedTokenFile = "AZURE_FEDERATED_TOKEN_FILE"
// EnvGoogleApplicationCredentials is the environment variable to specify path to key.json
- EnvGoogleApplicationCredentials = "GOOGLE_APPLICATION_CREDENTIALS"
+ EnvGoogleApplicationCredentials = "GOOGLE_APPLICATION_CREDENTIALS" //#nosec G101 -- False positive
// EnvSwiftPassword is the environment variable to specify the OpenStack Swift password.
EnvSwiftPassword = "SWIFT_PASSWORD"
// EnvSwiftUsername is the environment variable to specify the OpenStack Swift username.
@@ -52,7 +52,7 @@ const (
// KeyAWSAccessKeyID is the secret data key for the AWS client id to access S3.
KeyAWSAccessKeyID = "access_key_id"
// KeyAWSAccessKeySecret is the secret data key for the AWS client secret to access S3.
- KeyAWSAccessKeySecret = "access_key_secret"
+ KeyAWSAccessKeySecret = "access_key_secret" //#nosec G101 -- False positive
// KeyAWSBucketNames is the secret data key for the AWS S3 bucket names.
KeyAWSBucketNames = "bucketnames"
// KeyAWSEndpoint is the secret data key for the AWS endpoint URL.
@@ -131,16 +131,16 @@ const (
saTokenVolumeName = "bound-sa-token"
saTokenExpiration int64 = 3600
- saTokenVolumeMountPath = "/var/run/secrets/storage/serviceaccount"
+ saTokenVolumeMountPath = "/var/run/secrets/storage/serviceaccount" //#nosec G101 -- False positive
ServiceAccountTokenFilePath = saTokenVolumeMountPath + "/token"
- secretDirectory = "/etc/storage/secrets"
+ secretDirectory = "/etc/storage/secrets" //#nosec G101 -- False positive
storageTLSVolume = "storage-tls"
caDirectory = "/etc/storage/ca"
- tokenAuthConfigVolumeName = "token-auth-config"
- tokenAuthConfigDirectory = "/etc/storage/token-auth"
+ tokenAuthConfigVolumeName = "token-auth-config" //#nosec G101 -- False positive
+ tokenAuthConfigDirectory = "/etc/storage/token-auth" //#nosec G101 -- False positive
awsDefaultAudience = "sts.amazonaws.com"
azureDefaultAudience = "api://AzureADTokenExchange"
diff --git a/operator/internal/manifests/var.go b/operator/internal/manifests/var.go
index 9e501ee72ae99..8813c07454621 100644
--- a/operator/internal/manifests/var.go
+++ b/operator/internal/manifests/var.go
@@ -68,7 +68,7 @@ const (
// PrometheusCAFile declares the path for prometheus CA file for service monitors.
PrometheusCAFile string = "/etc/prometheus/configmaps/serving-certs-ca-bundle/service-ca.crt"
// BearerTokenFile declares the path for bearer token file for service monitors.
- BearerTokenFile string = "/var/run/secrets/kubernetes.io/serviceaccount/token"
+ BearerTokenFile string = "/var/run/secrets/kubernetes.io/serviceaccount/token" //#nosec G101 -- False positive
// labelJobComponent is a ServiceMonitor.Spec.JobLabel.
labelJobComponent string = "loki.grafana.com/component"
@@ -80,7 +80,7 @@ const (
  // AnnotationLokiObjectStoreHash stores the last SHA1 hash of the loki object storage credentials.
AnnotationLokiObjectStoreHash string = "loki.grafana.com/object-store-hash"
// AnnotationLokiTokenCCOAuthHash stores the SHA1 hash of the secret generated by the Cloud Credential Operator.
- AnnotationLokiTokenCCOAuthHash string = "loki.grafana.com/token-cco-auth-hash"
+ AnnotationLokiTokenCCOAuthHash string = "loki.grafana.com/token-cco-auth-hash" //#nosec G101 -- False positive
// LabelCompactorComponent is the label value for the compactor component
LabelCompactorComponent string = "compactor"
diff --git a/pkg/canary/comparator/comparator.go b/pkg/canary/comparator/comparator.go
index 8d72fac6260f9..a141e2c0864f7 100644
--- a/pkg/canary/comparator/comparator.go
+++ b/pkg/canary/comparator/comparator.go
@@ -275,7 +275,7 @@ func (c *Comparator) run() {
t := time.NewTicker(c.pruneInterval)
// Use a random tick up to the interval for the first tick
firstMt := true
- randomGenerator := rand.New(rand.NewSource(time.Now().UnixNano()))
+ randomGenerator := rand.New(rand.NewSource(time.Now().UnixNano())) //#nosec G404 -- Random sampling for health testing purposes, does not require secure random.
mt := time.NewTicker(time.Duration(randomGenerator.Int63n(c.metricTestInterval.Nanoseconds())))
sc := time.NewTicker(c.spotCheckQueryRate)
ct := time.NewTicker(c.cacheTestInterval)
diff --git a/pkg/canary/writer/writer.go b/pkg/canary/writer/writer.go
index 032e7736ea782..32b23507dd718 100644
--- a/pkg/canary/writer/writer.go
+++ b/pkg/canary/writer/writer.go
@@ -82,8 +82,8 @@ func (w *Writer) run() {
select {
case <-t.C:
t := time.Now()
- if i := rand.Intn(100); i < w.outOfOrderPercentage {
- n := rand.Intn(int(w.outOfOrderMax.Seconds()-w.outOfOrderMin.Seconds())) + int(w.outOfOrderMin.Seconds())
+ if i := rand.Intn(100); i < w.outOfOrderPercentage { //#nosec G404 -- Random sampling for testing purposes, does not require secure random.
+ n := rand.Intn(int(w.outOfOrderMax.Seconds()-w.outOfOrderMin.Seconds())) + int(w.outOfOrderMin.Seconds()) //#nosec G404 -- Random sampling for testing purposes, does not require secure random.
t = t.Add(-time.Duration(n) * time.Second)
}
ts := strconv.FormatInt(t.UnixNano(), 10)
diff --git a/pkg/chunkenc/memchunk.go b/pkg/chunkenc/memchunk.go
index 790210d3af8b2..ef816aca52153 100644
--- a/pkg/chunkenc/memchunk.go
+++ b/pkg/chunkenc/memchunk.go
@@ -1332,7 +1332,7 @@ func (hb *headBlock) SampleIterator(ctx context.Context, mint, maxt int64, extra
}
func unsafeGetBytes(s string) []byte {
- return unsafe.Slice(unsafe.StringData(s), len(s))
+ return unsafe.Slice(unsafe.StringData(s), len(s)) // #nosec G103 -- we know the string is not mutated
}
type bufferedIterator struct {
diff --git a/pkg/compactor/deletion/delete_requests_store.go b/pkg/compactor/deletion/delete_requests_store.go
index b7ddfe13a6182..e351a694b0f19 100644
--- a/pkg/compactor/deletion/delete_requests_store.go
+++ b/pkg/compactor/deletion/delete_requests_store.go
@@ -413,7 +413,7 @@ func splitUserIDAndRequestID(rangeValue string) (userID, requestID, seqID string
// unsafeGetString is like yolostring but with a meaningful name
func unsafeGetString(buf []byte) string {
- return *((*string)(unsafe.Pointer(&buf)))
+ return *((*string)(unsafe.Pointer(&buf))) // #nosec G103 -- we know the string is not mutated
}
func generateCacheGenNumber() []byte {
diff --git a/pkg/compactor/retention/retention.go b/pkg/compactor/retention/retention.go
index 96eafcc2a7d58..1e7be7b81d27c 100644
--- a/pkg/compactor/retention/retention.go
+++ b/pkg/compactor/retention/retention.go
@@ -484,7 +484,7 @@ func CopyMarkers(src string, dst string) error {
return fmt.Errorf("read marker file: %w", err)
}
- if err := os.WriteFile(filepath.Join(targetDir, marker.Name()), data, 0o666); err != nil {
+ if err := os.WriteFile(filepath.Join(targetDir, marker.Name()), data, 0640); err != nil { // #nosec G306 -- this is fencing off the "other" permissions
return fmt.Errorf("write marker file: %w", err)
}
}
diff --git a/pkg/compactor/retention/util.go b/pkg/compactor/retention/util.go
index 285fdc1fd553a..535aa8370dae6 100644
--- a/pkg/compactor/retention/util.go
+++ b/pkg/compactor/retention/util.go
@@ -13,7 +13,7 @@ import (
// unsafeGetString is like yolostring but with a meaningful name
func unsafeGetString(buf []byte) string {
- return *((*string)(unsafe.Pointer(&buf)))
+ return *((*string)(unsafe.Pointer(&buf))) // #nosec G103 -- we know the string is not mutated
}
func copyFile(src, dst string) (int64, error) {
diff --git a/pkg/compactor/testutil.go b/pkg/compactor/testutil.go
index 4ebba27f64bfa..feba141ba514d 100644
--- a/pkg/compactor/testutil.go
+++ b/pkg/compactor/testutil.go
@@ -81,7 +81,7 @@ func SetupTable(t *testing.T, path string, commonDBsConfig IndexesConfig, perUse
idx := 0
for filename, content := range commonIndexes {
filePath := filepath.Join(path, strings.TrimSuffix(filename, ".gz"))
- require.NoError(t, os.WriteFile(filePath, []byte(content), 0777))
+ require.NoError(t, os.WriteFile(filePath, []byte(content), 0640)) // #nosec G306 -- this is fencing off the "other" permissions
if strings.HasSuffix(filename, ".gz") {
compressFile(t, filePath)
}
@@ -92,7 +92,7 @@ func SetupTable(t *testing.T, path string, commonDBsConfig IndexesConfig, perUse
require.NoError(t, util.EnsureDirectory(filepath.Join(path, userID)))
for filename, content := range files {
filePath := filepath.Join(path, userID, strings.TrimSuffix(filename, ".gz"))
- require.NoError(t, os.WriteFile(filePath, []byte(content), 0777))
+ require.NoError(t, os.WriteFile(filePath, []byte(content), 0640)) // #nosec G306 -- this is fencing off the "other" permissions
if strings.HasSuffix(filename, ".gz") {
compressFile(t, filePath)
}
diff --git a/pkg/ingester/checkpoint.go b/pkg/ingester/checkpoint.go
index 1f6dcf8eefeee..8fb2e055a7469 100644
--- a/pkg/ingester/checkpoint.go
+++ b/pkg/ingester/checkpoint.go
@@ -344,7 +344,7 @@ func (w *WALCheckpointWriter) Advance() (bool, error) {
}
}
- if err := os.MkdirAll(checkpointDirTemp, 0777); err != nil {
+ if err := os.MkdirAll(checkpointDirTemp, 0750); err != nil {
return false, fmt.Errorf("create checkpoint dir: %w", err)
}
diff --git a/pkg/ingester/ingester.go b/pkg/ingester/ingester.go
index 958e01b8d9907..5659c3b056a39 100644
--- a/pkg/ingester/ingester.go
+++ b/pkg/ingester/ingester.go
@@ -335,7 +335,7 @@ func New(cfg Config, clientConfig client.Config, store Store, limits Limits, con
i.replayController = newReplayController(metrics, cfg.WAL, &replayFlusher{i})
if cfg.WAL.Enabled {
- if err := os.MkdirAll(cfg.WAL.Dir, os.ModePerm); err != nil {
+ if err := os.MkdirAll(cfg.WAL.Dir, 0750); err != nil {
// Best effort try to make path absolute for easier debugging.
path, _ := filepath.Abs(cfg.WAL.Dir)
if path == "" {
@@ -759,7 +759,7 @@ func (i *Ingester) loop() {
// flush at the same time. Flushing at the same time can cause concurrently
// writing the same chunk to object storage, which in AWS S3 leads to being
// rate limited.
- jitter := time.Duration(rand.Int63n(int64(float64(i.cfg.FlushCheckPeriod.Nanoseconds()) * 0.8)))
+ jitter := time.Duration(rand.Int63n(int64(float64(i.cfg.FlushCheckPeriod.Nanoseconds()) * 0.8))) //#nosec G404 -- Jitter does not require a CSPRNG.
initialDelay := time.NewTimer(jitter)
defer initialDelay.Stop()
diff --git a/pkg/kafka/partitionring/partition_ring.go b/pkg/kafka/partitionring/partition_ring.go
index 15dad003dd931..542a4aee80a41 100644
--- a/pkg/kafka/partitionring/partition_ring.go
+++ b/pkg/kafka/partitionring/partition_ring.go
@@ -62,8 +62,9 @@ func ExtractIngesterPartitionID(ingesterID string) (int32, error) {
if len(match) == 0 {
return 0, fmt.Errorf("ingester ID %s doesn't match regular expression %q", ingesterID, ingesterIDRegexp.String())
}
+
// Parse the ingester sequence number.
- ingesterSeq, err := strconv.Atoi(match[1])
+ ingesterSeq, err := strconv.ParseInt(match[1], 10, 32)
if err != nil {
return 0, fmt.Errorf("no ingester sequence number in ingester ID %s", ingesterID)
}
diff --git a/pkg/loghttp/query.go b/pkg/loghttp/query.go
index a2bce462aab80..86644407a8c15 100644
--- a/pkg/loghttp/query.go
+++ b/pkg/loghttp/query.go
@@ -266,7 +266,7 @@ func (s Streams) ToProto() []logproto.Stream {
}
result := make([]logproto.Stream, 0, len(s))
for _, s := range s {
- entries := *(*[]logproto.Entry)(unsafe.Pointer(&s.Entries))
+ entries := *(*[]logproto.Entry)(unsafe.Pointer(&s.Entries)) // #nosec G103 -- the two Entry types have identical layouts and the slice is not mutated
result = append(result, logproto.Stream{
Labels: s.Labels.String(),
Entries: entries,
diff --git a/pkg/logproto/compat.go b/pkg/logproto/compat.go
index 69ffe4dece3ca..b0110c2250b5e 100644
--- a/pkg/logproto/compat.go
+++ b/pkg/logproto/compat.go
@@ -51,14 +51,14 @@ func ToWriteRequest(lbls []labels.Labels, samples []LegacySample, metadata []*Me
// Note: while resulting labels.Labels is supposedly sorted, this function
// doesn't enforce that. If input is not sorted, output will be wrong.
func FromLabelAdaptersToLabels(ls []LabelAdapter) labels.Labels {
- return *(*labels.Labels)(unsafe.Pointer(&ls))
+ return *(*labels.Labels)(unsafe.Pointer(&ls)) // #nosec G103 -- LabelAdapter and labels.Label have identical layouts, and the result is not mutated
}
// FromLabelsToLabelAdapters casts labels.Labels to []LabelAdapter.
// It uses unsafe, but as LabelAdapter == labels.Label this should be safe.
// This allows us to use labels.Labels directly in protos.
func FromLabelsToLabelAdapters(ls labels.Labels) []LabelAdapter {
- return *(*[]LabelAdapter)(unsafe.Pointer(&ls))
+ return *(*[]LabelAdapter)(unsafe.Pointer(&ls)) // #nosec G103 -- LabelAdapter and labels.Label have identical layouts, and the result is not mutated
}
// FromLabelAdaptersToMetric converts []LabelAdapter to a model.Metric.
@@ -155,7 +155,7 @@ func SampleJsoniterDecode(ptr unsafe.Pointer, iter *jsoniter.Iterator) {
}
bs := iter.ReadStringAsSlice()
- ss := *(*string)(unsafe.Pointer(&bs))
+ ss := *(*string)(unsafe.Pointer(&bs)) // #nosec G103 -- we know the string is not mutated
v, err := strconv.ParseFloat(ss, 64)
if err != nil {
iter.ReportError("logproto.LegacySample", err.Error())
diff --git a/pkg/logql/log/pipeline.go b/pkg/logql/log/pipeline.go
index 181947fc07435..fe4828f682a37 100644
--- a/pkg/logql/log/pipeline.go
+++ b/pkg/logql/log/pipeline.go
@@ -380,9 +380,9 @@ func ReduceStages(stages []Stage) Stage {
}
func unsafeGetBytes(s string) []byte {
- return unsafe.Slice(unsafe.StringData(s), len(s))
+ return unsafe.Slice(unsafe.StringData(s), len(s)) // #nosec G103 -- we know the string is not mutated
}
func unsafeGetString(buf []byte) string {
- return *((*string)(unsafe.Pointer(&buf)))
+ return *((*string)(unsafe.Pointer(&buf))) // #nosec G103 -- we know the string is not mutated
}
diff --git a/pkg/logql/sketch/topk.go b/pkg/logql/sketch/topk.go
index 23280154daed8..86b01e4c56638 100644
--- a/pkg/logql/sketch/topk.go
+++ b/pkg/logql/sketch/topk.go
@@ -210,7 +210,7 @@ func (t *Topk) updateBF(removed, added string) {
}
func unsafeGetBytes(s string) []byte {
- return unsafe.Slice(unsafe.StringData(s), len(s))
+ return unsafe.Slice(unsafe.StringData(s), len(s)) // #nosec G103 -- we know the string is not mutated
}
// Observe is our sketch event observation function, which is a bit more complex than the original count min sketch + heap TopK
diff --git a/pkg/logql/test_utils.go b/pkg/logql/test_utils.go
index e7f003327ea01..703842b23f2eb 100644
--- a/pkg/logql/test_utils.go
+++ b/pkg/logql/test_utils.go
@@ -257,7 +257,7 @@ func (m MockDownstreamer) Downstream(ctx context.Context, queries []DownstreamQu
// create nStreams of nEntries with labelNames each where each label value
// with the exception of the "index" label is modulo'd into a shard
func randomStreams(nStreams, nEntries, nShards int, labelNames []string, valueField bool) (streams []logproto.Stream) {
- r := rand.New(rand.NewSource(42))
+ r := rand.New(rand.NewSource(42)) //#nosec G404 -- Generation of test data only, no need for a cryptographic PRNG
for i := 0; i < nStreams; i++ {
// labels
stream := logproto.Stream{}
diff --git a/pkg/lokifrontend/frontend/v2/frontend.go b/pkg/lokifrontend/frontend/v2/frontend.go
index dae27ec94682c..7789b7d8f3710 100644
--- a/pkg/lokifrontend/frontend/v2/frontend.go
+++ b/pkg/lokifrontend/frontend/v2/frontend.go
@@ -154,7 +154,7 @@ func NewFrontend(cfg Config, ring ring.ReadRing, log log.Logger, reg prometheus.
// Randomize to avoid getting responses from queries sent before restart, which could lead to mixing results
// between different queries. Note that frontend verifies the user, so it cannot leak results between tenants.
// This isn't perfect, but better than nothing.
- f.lastQueryID.Store(rand.Uint64())
+ f.lastQueryID.Store(rand.Uint64()) //#nosec G404 -- See above comment, this can't leak data or otherwise result in a vuln, simply very rarely cause confusing behavior. A CSPRNG would not help.
promauto.With(reg).NewGaugeFunc(prometheus.GaugeOpts{
Namespace: metricsNamespace,
diff --git a/pkg/pattern/drain/drain.go b/pkg/pattern/drain/drain.go
index 9308ab99a511d..2082610a7a401 100644
--- a/pkg/pattern/drain/drain.go
+++ b/pkg/pattern/drain/drain.go
@@ -548,9 +548,9 @@ func (d *Drain) createTemplate(tokens, matchClusterTokens []string) []string {
}
func unsafeString(s []byte) string {
- return unsafe.String(unsafe.SliceData(s), len(s))
+ return unsafe.String(unsafe.SliceData(s), len(s)) // #nosec G103 -- we know the string is not mutated
}
func unsafeBytes(s string) []byte {
- return unsafe.Slice(unsafe.StringData(s), len(s))
+ return unsafe.Slice(unsafe.StringData(s), len(s)) // #nosec G103 -- we know the string is not mutated
}
diff --git a/pkg/pattern/ingester.go b/pkg/pattern/ingester.go
index c535ab9ebc295..0e2e1ad1432a0 100644
--- a/pkg/pattern/ingester.go
+++ b/pkg/pattern/ingester.go
@@ -275,7 +275,7 @@ func (i *Ingester) loop() {
// flush at the same time. Flushing at the same time can cause concurrently
// writing the same chunk to object storage, which in AWS S3 leads to being
// rate limited.
- jitter := time.Duration(rand.Int63n(int64(float64(i.cfg.FlushCheckPeriod.Nanoseconds()) * 0.8)))
+ jitter := time.Duration(rand.Int63n(int64(float64(i.cfg.FlushCheckPeriod.Nanoseconds()) * 0.8))) //#nosec G404 -- Jitter does not require a CSPRNG
initialDelay := time.NewTimer(jitter)
defer initialDelay.Stop()
diff --git a/pkg/queue/tenant_queues.go b/pkg/queue/tenant_queues.go
index 46f9f7adccd43..c4c38904bb8b1 100644
--- a/pkg/queue/tenant_queues.go
+++ b/pkg/queue/tenant_queues.go
@@ -336,7 +336,7 @@ func shuffleConsumersForTenants(userSeed int64, consumersToSelect int, allSorted
}
result := make(map[string]struct{}, consumersToSelect)
- rnd := rand.New(rand.NewSource(userSeed))
+ rnd := rand.New(rand.NewSource(userSeed)) //#nosec G404 -- Load spreading does not require CSPRNG
scratchpad = scratchpad[:0]
scratchpad = append(scratchpad, allSortedConsumers...)
diff --git a/pkg/ruler/base/mapper.go b/pkg/ruler/base/mapper.go
index 9da3e43aebad5..38a3e229a6a86 100644
--- a/pkg/ruler/base/mapper.go
+++ b/pkg/ruler/base/mapper.go
@@ -1,7 +1,6 @@
package base
import (
- "crypto/md5"
"net/url"
"os"
"path/filepath"
@@ -11,6 +10,7 @@ import (
"github.com/go-kit/log/level"
"github.com/prometheus/prometheus/model/rulefmt"
"github.com/spf13/afero"
+ "golang.org/x/crypto/sha3"
"gopkg.in/yaml.v3"
)
@@ -148,11 +148,14 @@ func (m *mapper) writeRuleGroupsIfNewer(groups []rulefmt.RuleGroup, filename str
if err != nil {
return false, err
}
- newHash := md5.New()
- currentHash := md5.New()
+ newHash := sha3.New256()
+ currentHash := sha3.New256()
+
+ newHash.Write(d)
+ currentHash.Write(current)
// bailout if there is no update
- if string(currentHash.Sum(current)) == string(newHash.Sum(d)) {
+ if string(currentHash.Sum(nil)) == string(newHash.Sum(nil)) {
return false, nil
}
}
diff --git a/pkg/ruler/base/ruler.go b/pkg/ruler/base/ruler.go
index 2e6c74c759dfb..adb1a7c136def 100644
--- a/pkg/ruler/base/ruler.go
+++ b/pkg/ruler/base/ruler.go
@@ -783,7 +783,7 @@ func cloneGroupWithRule(g *rulespb.RuleGroupDesc, r *rulespb.RuleDesc) *rulespb.
}
// the delimiter is prefixed with ";" since that is what Prometheus uses for its group key
-const ruleTokenDelimiter = ";rule-shard-token"
+const ruleTokenDelimiter = ";rule-shard-token" //#nosec G101 -- False positive
// AddRuleTokenToGroupName adds a rule shard token to a given group's name to make it unique.
// Only relevant when using "by-rule" sharding strategy.
diff --git a/pkg/storage/bloom/v1/archive.go b/pkg/storage/bloom/v1/archive.go
index a7b7232f230d1..0e854efcef5b8 100644
--- a/pkg/storage/bloom/v1/archive.go
+++ b/pkg/storage/bloom/v1/archive.go
@@ -5,6 +5,7 @@ import (
"io"
"os"
"path/filepath"
+ "strings"
"github.com/pkg/errors"
@@ -73,8 +74,17 @@ func UnTarCompress(enc compression.Codec, dst string, r io.Reader) error {
}
func UnTar(dst string, r io.Reader) error {
+ // Add safety checks for destination
+ dst = filepath.Clean(dst)
+ if !filepath.IsAbs(dst) {
+ return errors.New("destination path must be absolute")
+ }
tarballer := tar.NewReader(r)
+ // Track total size to prevent decompression bombs
+ var totalSize int64
+ const maxSize = 20 << 30 // 20GB limit
+
for {
header, err := tarballer.Next()
if err == io.EOF {
@@ -84,7 +94,17 @@ func UnTar(dst string, r io.Reader) error {
return errors.Wrap(err, "error reading tarball header")
}
+ // Check for path traversal
target := filepath.Join(dst, header.Name)
+ if !isWithinDir(target, dst) {
+ return errors.Errorf("invalid path %q: path traversal attempt", header.Name)
+ }
+
+ // Update and check total size
+ totalSize += header.Size
+ if totalSize > maxSize {
+ return errors.New("decompression bomb: extracted content too large")
+ }
// check the file type
switch header.Typeflag {
@@ -92,7 +112,7 @@ func UnTar(dst string, r io.Reader) error {
// if its a dir and it doesn't exist create it
case tar.TypeDir:
if _, err := os.Stat(target); err != nil {
- if err := os.MkdirAll(target, 0755); err != nil {
+ if err := os.MkdirAll(target, 0750); err != nil {
return err
}
}
@@ -104,13 +124,21 @@ func UnTar(dst string, r io.Reader) error {
return errors.Wrapf(err, "error creating file %s", target)
}
- // copy over contents
- if _, err := io.Copy(f, tarballer); err != nil {
+ // Use LimitReader to prevent reading more than declared size
+ limited := io.LimitReader(tarballer, header.Size)
+ written, err := io.Copy(f, limited)
+ if err != nil {
+ f.Close()
return errors.Wrapf(err, "error copying contents of file %s", target)
}
- // manually close here after each file operation; defering would cause each file close
- // to wait until all operations have completed.
+ // Verify the actual bytes written match the header size
+ if written != header.Size {
+ f.Close()
+ return errors.Errorf("size mismatch for %s: header claimed %d bytes but got %d bytes",
+ header.Name, header.Size, written)
+ }
+
if err := f.Close(); err != nil {
return errors.Wrapf(err, "error closing file %s", target)
}
@@ -120,3 +148,16 @@ func UnTar(dst string, r io.Reader) error {
return nil
}
+
+// Helper function to check for path traversal
+func isWithinDir(target, dir string) bool {
+ targetPath := filepath.Clean(target)
+ dirPath := filepath.Clean(dir)
+
+ relative, err := filepath.Rel(dirPath, targetPath)
+ if err != nil {
+ return false
+ }
+
+ return relative != ".." && !strings.HasPrefix(relative, ".."+string(filepath.Separator))
+}
diff --git a/pkg/storage/bloom/v1/bloom_tester.go b/pkg/storage/bloom/v1/bloom_tester.go
index 1682556bf7d60..ed3939e38a76f 100644
--- a/pkg/storage/bloom/v1/bloom_tester.go
+++ b/pkg/storage/bloom/v1/bloom_tester.go
@@ -163,7 +163,7 @@ func (kvm keyValueMatcherTest) Matches(series labels.Labels, bloom filter.Checke
var (
combined = fmt.Sprintf("%s=%s", kvm.matcher.Key, kvm.matcher.Value)
- rawCombined = unsafe.Slice(unsafe.StringData(combined), len(combined))
+ rawCombined = unsafe.Slice(unsafe.StringData(combined), len(combined)) // #nosec G103 -- we know the string is not mutated
)
return kvm.match(series, bloom, rawCombined)
@@ -199,7 +199,7 @@ func (kvm keyValueMatcherTest) match(series labels.Labels, bloom filter.Checker,
// appendToBuf is the equivalent of append(buf[:prefixLen], str). len(buf) must
// be greater than or equal to prefixLen+len(str) to avoid allocations.
func appendToBuf(buf []byte, prefixLen int, str string) []byte {
- rawString := unsafe.Slice(unsafe.StringData(str), len(str))
+ rawString := unsafe.Slice(unsafe.StringData(str), len(str)) // #nosec G103 -- we know the string is not mutated
return append(buf[:prefixLen], rawString...)
}
diff --git a/pkg/storage/chunk/cache/redis_client.go b/pkg/storage/chunk/cache/redis_client.go
index 55731c7b11a4a..a43441d4a12bb 100644
--- a/pkg/storage/chunk/cache/redis_client.go
+++ b/pkg/storage/chunk/cache/redis_client.go
@@ -74,7 +74,7 @@ func NewRedisClient(cfg *RedisConfig) (*RedisClient, error) {
RouteRandomly: cfg.RouteRandomly,
}
if cfg.EnableTLS {
- opt.TLSConfig = &tls.Config{InsecureSkipVerify: cfg.InsecureSkipVerify}
+ opt.TLSConfig = &tls.Config{InsecureSkipVerify: cfg.InsecureSkipVerify} //#nosec G402 -- User has explicitly requested to disable TLS
}
return &RedisClient{
expiration: cfg.Expiration,
@@ -208,7 +208,7 @@ func (c *RedisClient) Close() error {
// StringToBytes converts string to byte slice. (copied from vendor/github.com/go-redis/redis/v8/internal/util/unsafe.go)
func StringToBytes(s string) []byte {
- return *(*[]byte)(unsafe.Pointer(
+ return *(*[]byte)(unsafe.Pointer( // #nosec G103 -- we know the string is not mutated
&struct {
string
Cap int
diff --git a/pkg/storage/chunk/chunk.go b/pkg/storage/chunk/chunk.go
index aadfe6ea937b2..a4cb63a442cb5 100644
--- a/pkg/storage/chunk/chunk.go
+++ b/pkg/storage/chunk/chunk.go
@@ -214,11 +214,11 @@ func readOneHexPart(hex []byte) (part []byte, i int) {
}
func unsafeGetBytes(s string) []byte {
- return unsafe.Slice(unsafe.StringData(s), len(s))
+ return unsafe.Slice(unsafe.StringData(s), len(s)) // #nosec G103 -- we know the string is not mutated
}
func unsafeGetString(buf []byte) string {
- return *((*string)(unsafe.Pointer(&buf)))
+ return *((*string)(unsafe.Pointer(&buf))) // #nosec G103 -- we know the string is not mutated
}
var writerPool = sync.Pool{
diff --git a/pkg/storage/chunk/client/aws/s3_storage_client.go b/pkg/storage/chunk/client/aws/s3_storage_client.go
index 65817f38c9d9f..e7891e4963fc0 100644
--- a/pkg/storage/chunk/client/aws/s3_storage_client.go
+++ b/pkg/storage/chunk/client/aws/s3_storage_client.go
@@ -230,7 +230,7 @@ func buildS3Client(cfg S3Config, hedgingCfg hedging.Config, hedging bool) (*s3.S
}
tlsConfig := &tls.Config{
- InsecureSkipVerify: cfg.HTTPConfig.InsecureSkipVerify,
+ InsecureSkipVerify: cfg.HTTPConfig.InsecureSkipVerify, //#nosec G402 -- User has explicitly requested to disable TLS
}
if cfg.HTTPConfig.CAFile != "" {
diff --git a/pkg/storage/chunk/client/gcp/gcs_object_client.go b/pkg/storage/chunk/client/gcp/gcs_object_client.go
index 1d44659b3f3cc..95b55c3319929 100644
--- a/pkg/storage/chunk/client/gcp/gcs_object_client.go
+++ b/pkg/storage/chunk/client/gcp/gcs_object_client.go
@@ -357,7 +357,7 @@ func gcsTransport(ctx context.Context, scope string, insecure bool, http2 bool,
}
transportOptions := []option.ClientOption{option.WithScopes(scope)}
if insecure {
- customTransport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
+ customTransport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true} //#nosec G402 -- User has explicitly requested to disable TLS
transportOptions = append(transportOptions, option.WithoutAuthentication())
}
if serviceAccount.String() != "" {
diff --git a/pkg/storage/hack/main.go b/pkg/storage/hack/main.go
index 4e6c348ceb3e0..a9e57affa9940 100644
--- a/pkg/storage/hack/main.go
+++ b/pkg/storage/hack/main.go
@@ -143,7 +143,7 @@ const charset = "abcdefghijklmnopqrstuvwxyz" +
func randStringWithCharset(length int, charset string) string {
b := make([]byte, length)
for i := range b {
- b[i] = charset[rand.Intn(len(charset)-1)]
+ b[i] = charset[rand.Intn(len(charset)-1)] //#nosec G404 -- Generation of test data does not require CSPRNG, this is not meant to be secret.
}
return string(b)
}
diff --git a/pkg/storage/stores/series/index/caching_index_client.go b/pkg/storage/stores/series/index/caching_index_client.go
index 40181ba794c71..d0945003f4c92 100644
--- a/pkg/storage/stores/series/index/caching_index_client.go
+++ b/pkg/storage/stores/series/index/caching_index_client.go
@@ -259,7 +259,7 @@ func (s *cachingIndexClient) doQueries(ctx context.Context, queries []Query, cal
}
func yoloString(buf []byte) string {
- return *((*string)(unsafe.Pointer(&buf)))
+ return *((*string)(unsafe.Pointer(&buf))) // #nosec G103 -- we know the string is not mutated
}
// Iterator implements chunk.ReadBatch.
diff --git a/pkg/storage/stores/series/index/table_manager.go b/pkg/storage/stores/series/index/table_manager.go
index 414e08f494c89..820c880c1e00a 100644
--- a/pkg/storage/stores/series/index/table_manager.go
+++ b/pkg/storage/stores/series/index/table_manager.go
@@ -228,7 +228,7 @@ func (m *TableManager) loop(ctx context.Context) error {
// Sleep for a bit to spread the sync load across different times if the tablemanagers are all started at once.
select {
- case <-time.After(time.Duration(rand.Int63n(int64(m.cfg.PollInterval)))):
+ case <-time.After(time.Duration(rand.Int63n(int64(m.cfg.PollInterval)))): //#nosec G404 -- This is also just essentially jitter, no need for CSPRNG.
case <-ctx.Done():
return nil
}
diff --git a/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/util.go b/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/util.go
index 04948c38a17c1..4fedf6f0fffe2 100644
--- a/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/util.go
+++ b/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/util.go
@@ -19,7 +19,7 @@ import (
// unsafeGetString is like yolostring but with a meaningful name
func unsafeGetString(buf []byte) string {
- return *((*string)(unsafe.Pointer(&buf)))
+ return *((*string)(unsafe.Pointer(&buf))) // #nosec G103 -- we know the string is not mutated
}
func createChunk(t testing.TB, chunkFormat byte, headBlockFmt chunkenc.HeadBlockFmt, userID string, lbs labels.Labels, from model.Time, through model.Time) chunk.Chunk {
diff --git a/pkg/storage/stores/shipper/indexshipper/downloads/testutil.go b/pkg/storage/stores/shipper/indexshipper/downloads/testutil.go
index 4f30762bdd244..e942782e546cd 100644
--- a/pkg/storage/stores/shipper/indexshipper/downloads/testutil.go
+++ b/pkg/storage/stores/shipper/indexshipper/downloads/testutil.go
@@ -33,13 +33,13 @@ func (m *mockIndex) Reader() (io.ReadSeeker, error) {
}
func setupIndexesAtPath(t *testing.T, userID, path string, start, end int) []string {
- require.NoError(t, os.MkdirAll(path, 0755))
+ require.NoError(t, os.MkdirAll(path, 0750))
var testIndexes []string
for ; start < end; start++ {
fileName := buildIndexFilename(userID, start)
indexPath := filepath.Join(path, fileName)
- require.NoError(t, os.WriteFile(indexPath, []byte(fileName), 0755))
+ require.NoError(t, os.WriteFile(indexPath, []byte(fileName), 0640)) // #nosec G306 -- this is fencing off the "other" permissions
testIndexes = append(testIndexes, indexPath)
}
diff --git a/pkg/storage/stores/shipper/indexshipper/shipper.go b/pkg/storage/stores/shipper/indexshipper/shipper.go
index 2917b1fc7974f..5eaeaaa08d598 100644
--- a/pkg/storage/stores/shipper/indexshipper/shipper.go
+++ b/pkg/storage/stores/shipper/indexshipper/shipper.go
@@ -112,7 +112,7 @@ func (cfg *Config) GetUniqueUploaderName() (string, error) {
if !os.IsNotExist(err) {
return "", err
}
- if err := os.WriteFile(uploaderFilePath, []byte(uploader), 0o666); err != nil {
+ if err := os.WriteFile(uploaderFilePath, []byte(uploader), 0640); err != nil { // #nosec G306 -- this is fencing off the "other" permissions
return "", err
}
} else {
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/builder.go b/pkg/storage/stores/shipper/indexshipper/tsdb/builder.go
index a86ac392d259e..815888c14586f 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/builder.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/builder.go
@@ -108,7 +108,7 @@ func (b *Builder) Build(
}
// First write tenant/index-bounds-random.staging
- rng := rand.Int63()
+ rng := rand.Int63() //#nosec G404 -- just generating a random filename in a slightly unidiomatic way. Collision resistance is not a concern.
name := fmt.Sprintf("%s-%x.staging", index.IndexFilename, rng)
tmpPath := filepath.Join(scratchDir, name)
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go b/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go
index df8ea85465142..06028ce1d63d2 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go
@@ -409,5 +409,5 @@ func (c *compactedIndex) ToIndexFile() (shipperindex.Index, error) {
}
func getUnsafeBytes(s string) []byte {
- return *((*[]byte)(unsafe.Pointer(&s)))
+ return *((*[]byte)(unsafe.Pointer(&s))) // #nosec G103 -- we know the string is not mutated
}
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go b/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go
index 0766bd058fdf4..0e10f8648a375 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go
@@ -2520,7 +2520,7 @@ func readChunkMetaWithForcedMintime(d *encoding.Decbuf, mint int64, chunkMeta *C
}
func yoloString(b []byte) string {
- return *((*string)(unsafe.Pointer(&b)))
+ return *((*string)(unsafe.Pointer(&b))) // #nosec G103 -- we know the string is not mutated
}
func overlap(from, through, chkFrom, chkThrough int64) bool {
diff --git a/pkg/storage/stores/shipper/indexshipper/util/util.go b/pkg/storage/stores/shipper/indexshipper/util/util.go
index f47cea40d6d7d..8c74342c7e9c0 100644
--- a/pkg/storage/stores/shipper/indexshipper/util/util.go
+++ b/pkg/storage/stores/shipper/indexshipper/util/util.go
@@ -82,11 +82,11 @@ func safeOpenBoltDbFile(path string, ret chan *result) {
// }
func GetUnsafeBytes(s string) []byte {
- return *((*[]byte)(unsafe.Pointer(&s)))
+ return *((*[]byte)(unsafe.Pointer(&s))) // #nosec G103 -- we know the string is not mutated
}
func GetUnsafeString(buf []byte) string {
- return *((*string)(unsafe.Pointer(&buf)))
+ return *((*string)(unsafe.Pointer(&buf))) // #nosec G103 -- we know the string is not mutated
}
func logPanic(p interface{}) {
diff --git a/pkg/tool/commands/rules.go b/pkg/tool/commands/rules.go
index c030e9c24dfc5..b91da0f324abe 100644
--- a/pkg/tool/commands/rules.go
+++ b/pkg/tool/commands/rules.go
@@ -772,7 +772,7 @@ func save(nss map[string]rules.RuleNamespace, i bool) error {
filepath = filepath + ".result"
}
- if err := os.WriteFile(filepath, payload, 0644); err != nil {
+ if err := os.WriteFile(filepath, payload, 0640); err != nil { // #nosec G306 -- this is fencing off the "other" permissions
return err
}
}
diff --git a/pkg/util/conv.go b/pkg/util/conv.go
index e3ff145676e6e..dec752291fbca 100644
--- a/pkg/util/conv.go
+++ b/pkg/util/conv.go
@@ -13,7 +13,7 @@ func ModelLabelSetToMap(m model.LabelSet) map[string]string {
if len(m) == 0 {
return map[string]string{}
}
- return *(*map[string]string)(unsafe.Pointer(&m))
+ return *(*map[string]string)(unsafe.Pointer(&m)) // #nosec G103 -- the two map types have identical layouts, and the map is not mutated
}
// MapToModelLabelSet converts a map into a model.LabelSet
@@ -21,7 +21,7 @@ func MapToModelLabelSet(m map[string]string) model.LabelSet {
if len(m) == 0 {
return model.LabelSet{}
}
- return *(*map[model.LabelName]model.LabelValue)(unsafe.Pointer(&m))
+ return *(*map[model.LabelName]model.LabelValue)(unsafe.Pointer(&m)) // #nosec G103 -- the two map types have identical layouts, and the map is not mutated
}
// RoundToMilliseconds returns milliseconds precision time from nanoseconds.
diff --git a/pkg/util/marshal/query.go b/pkg/util/marshal/query.go
index cbf3b40d94856..901a57daeca51 100644
--- a/pkg/util/marshal/query.go
+++ b/pkg/util/marshal/query.go
@@ -113,7 +113,7 @@ func NewStream(s logproto.Stream) (loghttp.Stream, error) {
// Avoid a nil entries slice to be consistent with the decoding
entries := []loghttp.Entry{}
if len(s.Entries) > 0 {
- entries = *(*[]loghttp.Entry)(unsafe.Pointer(&s.Entries))
+ entries = *(*[]loghttp.Entry)(unsafe.Pointer(&s.Entries)) //#nosec G103 -- Just preventing an allocation, safe. Entry types are the same.
}
ret := loghttp.Stream{
diff --git a/pkg/util/mempool/pool.go b/pkg/util/mempool/pool.go
index e02bc628881ff..1bc5b9a369a15 100644
--- a/pkg/util/mempool/pool.go
+++ b/pkg/util/mempool/pool.go
@@ -42,7 +42,7 @@ func (s *slab) init() {
s.buffer = make(chan unsafe.Pointer, s.count)
for i := 0; i < s.count; i++ {
buf := make([]byte, 0, s.size)
- ptr := unsafe.Pointer(unsafe.SliceData(buf))
+ ptr := unsafe.Pointer(unsafe.SliceData(buf)) //#nosec G103 -- Simple arena allocator implementation, does not appear to allow for any unsafe operations.
s.buffer <- ptr
}
s.metrics.availableBuffersPerSlab.WithLabelValues(s.name).Set(float64(s.count))
@@ -55,7 +55,7 @@ func (s *slab) get(size int) ([]byte, error) {
waitStart := time.Now()
// wait for available buffer on channel
ptr := <-s.buffer
- buf := unsafe.Slice((*byte)(ptr), s.size)
+ buf := unsafe.Slice((*byte)(ptr), s.size) //#nosec G103 -- Simple arena allocator implementation, does not appear to allow for any unsafe operations.
s.metrics.waitDuration.WithLabelValues(s.name).Observe(time.Since(waitStart).Seconds())
return buf[:size], nil
@@ -67,7 +67,8 @@ func (s *slab) put(buf []byte) {
panic("slab is not initialized")
}
- ptr := unsafe.Pointer(unsafe.SliceData(buf))
+ ptr := unsafe.Pointer(unsafe.SliceData(buf)) //#nosec G103 -- Simple arena allocator implementation, does not appear to allow for any unsafe operations.
+ // Note that memory is NOT zero'd on return, but since all allocations are of defined widths and we only ever then read a record of exactly that width into the allocation, it will always be overwritten before use and can't leak.
s.buffer <- ptr
}
diff --git a/pkg/util/shard.go b/pkg/util/shard.go
index 8e7f8e8c6c2e6..404d38b992f18 100644
--- a/pkg/util/shard.go
+++ b/pkg/util/shard.go
@@ -29,7 +29,7 @@ var (
// ShuffleShardSeed returns seed for random number generator, computed from provided identifier.
func ShuffleShardSeed(identifier, zone string) int64 {
// Use the identifier to compute an hash we'll use to seed the random.
- hasher := md5.New()
+ hasher := md5.New() //#nosec G401 -- This does not require collision resistance, this is an intentionally predictable value
hasher.Write(YoloBuf(identifier)) // nolint:errcheck
if zone != "" {
hasher.Write(seedSeparator) // nolint:errcheck
diff --git a/pkg/util/ticker.go b/pkg/util/ticker.go
index e3a8ee244225f..4edde3d7c9d2e 100644
--- a/pkg/util/ticker.go
+++ b/pkg/util/ticker.go
@@ -19,7 +19,7 @@ func NewJitter(b time.Duration, d time.Duration) Jitter {
// Duration returns a random duration from the base duration and +/- jitter
func (j Jitter) Duration() time.Duration {
base := j.base - j.deviation
- jitter := time.Duration(rand.Int63n(int64(float64(2 * j.deviation.Nanoseconds()))))
+ jitter := time.Duration(rand.Int63n(int64(float64(2 * j.deviation.Nanoseconds())))) //#nosec G404 -- Jitter does not require CSPRNG
return base + jitter
}
diff --git a/pkg/util/time.go b/pkg/util/time.go
index 9de06f381c88c..5b620a73d0a86 100644
--- a/pkg/util/time.go
+++ b/pkg/util/time.go
@@ -58,7 +58,7 @@ func DurationWithJitter(input time.Duration, variancePerc float64) time.Duration
}
variance := int64(float64(input) * variancePerc)
- jitter := rand.Int63n(variance*2) - variance
+ jitter := rand.Int63n(variance*2) - variance //#nosec G404 -- Jitter does not require CSPRNG
return input + time.Duration(jitter)
}
@@ -71,7 +71,7 @@ func DurationWithPositiveJitter(input time.Duration, variancePerc float64) time.
}
variance := int64(float64(input) * variancePerc)
- jitter := rand.Int63n(variance)
+ jitter := rand.Int63n(variance) //#nosec G404 -- Jitter does not require CSPRNG
return input + time.Duration(jitter)
}
diff --git a/pkg/util/unmarshal/unmarshal.go b/pkg/util/unmarshal/unmarshal.go
index 4b048d7089c65..ce20a97d373e4 100644
--- a/pkg/util/unmarshal/unmarshal.go
+++ b/pkg/util/unmarshal/unmarshal.go
@@ -19,7 +19,7 @@ func DecodePushRequest(b io.Reader, r *logproto.PushRequest) error {
}
*r = logproto.PushRequest{
- Streams: *(*[]logproto.Stream)(unsafe.Pointer(&request.Streams)),
+ Streams: *(*[]logproto.Stream)(unsafe.Pointer(&request.Streams)), //#nosec G103 -- Just preventing an allocation, safe, there's no chance of an incorrect type cast here.
}
return nil
diff --git a/pkg/util/yolo.go b/pkg/util/yolo.go
index 9870d296ff903..14fd8a5807766 100644
--- a/pkg/util/yolo.go
+++ b/pkg/util/yolo.go
@@ -3,5 +3,5 @@ package util
import "unsafe"
func YoloBuf(s string) []byte {
- return *((*[]byte)(unsafe.Pointer(&s)))
+ return *((*[]byte)(unsafe.Pointer(&s))) //#nosec G103 -- This is used correctly; all uses of this function do not allow the mutable reference to escape
}
diff --git a/tools/lambda-promtail/lambda-promtail/promtail_client.go b/tools/lambda-promtail/lambda-promtail/promtail_client.go
index a322e82452b7a..6aa28ab8063ca 100644
--- a/tools/lambda-promtail/lambda-promtail/promtail_client.go
+++ b/tools/lambda-promtail/lambda-promtail/promtail_client.go
@@ -42,7 +42,7 @@ func NewPromtailClient(cfg *promtailClientConfig, log *log.Logger) *promtailClie
func NewHTTPClient(cfg *httpClientConfig) *http.Client {
transport := http.DefaultTransport
if cfg.skipTlsVerify {
- transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
+ transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} //#nosec G402 -- User has explicitly requested to disable TLS
}
return &http.Client{
Timeout: cfg.timeout,
|
chore
|
Preparation for incoming static code analysis CI check (#15164)
|
1b2f3e0cb2ebe2f012ebaa0731a0623ad6c537e2
|
2023-01-11 22:27:02
|
Robert Jacob
|
operator: Fix status not updating when state of pods changes (#8087)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 11b38a9e1135d..f83797ec1a9e9 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [8087](https://github.com/grafana/loki/pull/8087) **xperimental**: Fix status not updating when state of pods changes
- [8068](https://github.com/grafana/loki/pull/8068) **periklis**: Use lokistack-gateway replicas from size table
- [7839](https://github.com/grafana/loki/pull/7839) **aminesnow**: Configure Alertmanager per-tenant
- [7910](https://github.com/grafana/loki/pull/7910) **periklis**: Update Loki operand to v2.7.1
diff --git a/operator/apis/loki/v1/lokistack_types.go b/operator/apis/loki/v1/lokistack_types.go
index de550b5aae6b7..54f22edabfde2 100644
--- a/operator/apis/loki/v1/lokistack_types.go
+++ b/operator/apis/loki/v1/lokistack_types.go
@@ -739,7 +739,7 @@ const (
// ConditionReady defines the condition that all components in the Loki deployment are ready.
ConditionReady LokiStackConditionType = "Ready"
- // ConditionPending defines the conditioin that some or all components are in pending state.
+ // ConditionPending defines the condition that some or all components are in pending state.
ConditionPending LokiStackConditionType = "Pending"
// ConditionFailed defines the condition that components in the Loki deployment failed to roll out.
diff --git a/operator/controllers/loki/internal/lokistack/certrotation_discovery.go b/operator/controllers/loki/internal/lokistack/certrotation_discovery.go
index c92e0115016c8..cefbfd3254588 100644
--- a/operator/controllers/loki/internal/lokistack/certrotation_discovery.go
+++ b/operator/controllers/loki/internal/lokistack/certrotation_discovery.go
@@ -31,13 +31,8 @@ func AnnotateForRequiredCertRotation(ctx context.Context, k k8s.Client, name, na
}
ss := s.DeepCopy()
- if ss.Annotations == nil {
- ss.Annotations = make(map[string]string)
- }
-
- ss.Annotations[certRotationRequiredAtKey] = time.Now().UTC().Format(time.RFC3339)
-
- if err := k.Update(ctx, ss); err != nil {
+ timeStamp := time.Now().UTC().Format(time.RFC3339)
+ if err := updateAnnotation(ctx, k, ss, certRotationRequiredAtKey, timeStamp); err != nil {
return kverrors.Wrap(err, fmt.Sprintf("failed to update lokistack `%s` annotation", certRotationRequiredAtKey), "key", key)
}
diff --git a/operator/controllers/loki/internal/lokistack/ruler_config_discovery.go b/operator/controllers/loki/internal/lokistack/ruler_config_discovery.go
index f4f2e00d9aef8..ce7195b36893a 100644
--- a/operator/controllers/loki/internal/lokistack/ruler_config_discovery.go
+++ b/operator/controllers/loki/internal/lokistack/ruler_config_discovery.go
@@ -11,6 +11,10 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
)
+const (
+ annotationRulerConfigDiscoveredAt = "loki.grafana.com/rulerConfigDiscoveredAt"
+)
+
// AnnotateForRulerConfig adds/updates the `loki.grafana.com/rulerConfigDiscoveredAt` annotation
// to the named Lokistack in the same namespace of the RulerConfig. If no LokiStack is found, then
// skip reconciliation.
@@ -28,13 +32,8 @@ func AnnotateForRulerConfig(ctx context.Context, k k8s.Client, name, namespace s
}
ss := s.DeepCopy()
- if ss.Annotations == nil {
- ss.Annotations = make(map[string]string)
- }
-
- ss.Annotations["loki.grafana.com/rulerConfigDiscoveredAt"] = time.Now().UTC().Format(time.RFC3339)
-
- if err := k.Update(ctx, ss); err != nil {
+ timeStamp := time.Now().UTC().Format(time.RFC3339)
+ if err := updateAnnotation(ctx, k, ss, annotationRulerConfigDiscoveredAt, timeStamp); err != nil {
return kverrors.Wrap(err, "failed to update lokistack `rulerConfigDiscoveredAt` annotation", "key", key)
}
diff --git a/operator/controllers/loki/internal/lokistack/rules_discovery.go b/operator/controllers/loki/internal/lokistack/rules_discovery.go
index f5082d7884def..fb1a3ce071df0 100644
--- a/operator/controllers/loki/internal/lokistack/rules_discovery.go
+++ b/operator/controllers/loki/internal/lokistack/rules_discovery.go
@@ -11,9 +11,15 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
)
+const (
+ annotationRulesDiscoveredAt = "loki.grafana.com/rulesDiscoveredAt"
+)
+
// AnnotateForDiscoveredRules adds/updates the `loki.grafana.com/rulesDiscoveredAt` annotation
// to all instance of LokiStack on all namespaces to trigger the reconciliation loop.
func AnnotateForDiscoveredRules(ctx context.Context, k k8s.Client) error {
+ timeStamp := time.Now().UTC().Format(time.RFC3339)
+
var stacks lokiv1.LokiStackList
err := k.List(ctx, &stacks, client.MatchingLabelsSelector{Selector: labels.Everything()})
if err != nil {
@@ -22,13 +28,7 @@ func AnnotateForDiscoveredRules(ctx context.Context, k k8s.Client) error {
for _, s := range stacks.Items {
ss := s.DeepCopy()
- if ss.Annotations == nil {
- ss.Annotations = make(map[string]string)
- }
-
- ss.Annotations["loki.grafana.com/rulesDiscoveredAt"] = time.Now().UTC().Format(time.RFC3339)
-
- if err := k.Update(ctx, ss); err != nil {
+ if err := updateAnnotation(ctx, k, ss, annotationRulesDiscoveredAt, timeStamp); err != nil {
return kverrors.Wrap(err, "failed to update lokistack `rulesDiscoveredAt` annotation", "name", ss.Name, "namespace", ss.Namespace)
}
}
diff --git a/operator/controllers/loki/internal/lokistack/update.go b/operator/controllers/loki/internal/lokistack/update.go
new file mode 100644
index 0000000000000..aca04ab855499
--- /dev/null
+++ b/operator/controllers/loki/internal/lokistack/update.go
@@ -0,0 +1,43 @@
+package lokistack
+
+import (
+ "context"
+
+ lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
+ "github.com/grafana/loki/operator/internal/external/k8s"
+ "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/client-go/util/retry"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+func updateAnnotation(ctx context.Context, k k8s.Client, stack *lokiv1.LokiStack, key, value string) error {
+ if stack.Annotations == nil {
+ stack.Annotations = make(map[string]string)
+ }
+ stack.Annotations[key] = value
+
+ err := k.Update(ctx, stack)
+ switch {
+ case err == nil:
+ return nil
+ case errors.IsConflict(err):
+ // break into retry logic below on conflict
+ break
+ case err != nil:
+ return err
+ }
+
+ objectKey := client.ObjectKeyFromObject(stack)
+ return retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ if err := k.Get(ctx, objectKey, stack); err != nil {
+ return err
+ }
+
+ if stack.Annotations == nil {
+ stack.Annotations = make(map[string]string)
+ }
+ stack.Annotations[key] = value
+
+ return k.Update(ctx, stack)
+ })
+}
diff --git a/operator/controllers/loki/lokistack_controller.go b/operator/controllers/loki/lokistack_controller.go
index e60d048b2dd67..c9dd7b58341ec 100644
--- a/operator/controllers/loki/lokistack_controller.go
+++ b/operator/controllers/loki/lokistack_controller.go
@@ -76,6 +76,22 @@ var (
},
GenericFunc: func(e event.GenericEvent) bool { return false },
})
+ updateOrDeleteWithStatusPred = builder.WithPredicates(predicate.Funcs{
+ UpdateFunc: func(e event.UpdateEvent) bool {
+ return e.ObjectOld.GetGeneration() != e.ObjectNew.GetGeneration() || statusDifferent(e)
+ },
+ CreateFunc: func(_ event.CreateEvent) bool {
+ return false
+ },
+ DeleteFunc: func(e event.DeleteEvent) bool {
+ // DeleteStateUnknown evaluates to false only if the object
+ // has been confirmed as deleted by the api server.
+ return !e.DeleteStateUnknown
+ },
+ GenericFunc: func(_ event.GenericEvent) bool {
+ return false
+ },
+ })
)
// LokiStackReconciler reconciles a LokiStack object
@@ -173,8 +189,8 @@ func (r *LokiStackReconciler) buildController(bld k8s.Builder) error {
Owns(&corev1.Secret{}, updateOrDeleteOnlyPred).
Owns(&corev1.ServiceAccount{}, updateOrDeleteOnlyPred).
Owns(&corev1.Service{}, updateOrDeleteOnlyPred).
- Owns(&appsv1.Deployment{}, updateOrDeleteOnlyPred).
- Owns(&appsv1.StatefulSet{}, updateOrDeleteOnlyPred).
+ Owns(&appsv1.Deployment{}, updateOrDeleteWithStatusPred).
+ Owns(&appsv1.StatefulSet{}, updateOrDeleteWithStatusPred).
Owns(&rbacv1.ClusterRole{}, updateOrDeleteOnlyPred).
Owns(&rbacv1.ClusterRoleBinding{}, updateOrDeleteOnlyPred).
Owns(&rbacv1.Role{}, updateOrDeleteOnlyPred).
@@ -224,3 +240,16 @@ func (r *LokiStackReconciler) enqueueAllLokiStacksHandler() handler.EventHandler
return requests
})
}
+
+func statusDifferent(e event.UpdateEvent) bool {
+ switch old := e.ObjectOld.(type) {
+ case *appsv1.Deployment:
+ newObject := e.ObjectNew.(*appsv1.Deployment)
+ return cmp.Diff(old.Status, newObject.Status) != ""
+ case *appsv1.StatefulSet:
+ newObject := e.ObjectNew.(*appsv1.StatefulSet)
+ return cmp.Diff(old.Status, newObject.Status) != ""
+ default:
+ return false
+ }
+}
diff --git a/operator/controllers/loki/lokistack_controller_test.go b/operator/controllers/loki/lokistack_controller_test.go
index dc066218f949f..1bd497c24d5dd 100644
--- a/operator/controllers/loki/lokistack_controller_test.go
+++ b/operator/controllers/loki/lokistack_controller_test.go
@@ -110,13 +110,13 @@ func TestLokiStackController_RegisterOwnedResourcesForUpdateOrDeleteOnly(t *test
obj: &appsv1.Deployment{},
index: 4,
ownCallsCount: 11,
- pred: updateOrDeleteOnlyPred,
+ pred: updateOrDeleteWithStatusPred,
},
{
obj: &appsv1.StatefulSet{},
index: 5,
ownCallsCount: 11,
- pred: updateOrDeleteOnlyPred,
+ pred: updateOrDeleteWithStatusPred,
},
{
obj: &rbacv1.ClusterRole{},
diff --git a/operator/internal/status/lokistack.go b/operator/internal/status/lokistack.go
index 0a8304719feb5..f0d06133720dd 100644
--- a/operator/internal/status/lokistack.go
+++ b/operator/internal/status/lokistack.go
@@ -7,11 +7,17 @@ import (
"github.com/ViaQ/logerr/v2/kverrors"
lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
"github.com/grafana/loki/operator/internal/external/k8s"
+ "k8s.io/client-go/util/retry"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
ctrl "sigs.k8s.io/controller-runtime"
- "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+const (
+ messageReady = "All components ready"
+ messageFailed = "Some LokiStack components failed"
+ messagePending = "Some LokiStack components pending on dependencies"
)
// DegradedError contains information about why the managed LokiStack has an invalid configuration.
@@ -28,183 +34,97 @@ func (e *DegradedError) Error() string {
// SetReadyCondition updates or appends the condition Ready to the lokistack status conditions.
// In addition it resets all other Status conditions to false.
func SetReadyCondition(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- var s lokiv1.LokiStack
- if err := k.Get(ctx, req.NamespacedName, &s); err != nil {
- if apierrors.IsNotFound(err) {
- return nil
- }
- return kverrors.Wrap(err, "failed to lookup lokistack", "name", req.NamespacedName)
- }
-
- for _, cond := range s.Status.Conditions {
- if cond.Type == string(lokiv1.ConditionReady) && cond.Status == metav1.ConditionTrue {
- return nil
- }
- }
-
ready := metav1.Condition{
- Type: string(lokiv1.ConditionReady),
- Status: metav1.ConditionTrue,
- LastTransitionTime: metav1.Now(),
- Message: "All components ready",
- Reason: string(lokiv1.ReasonReadyComponents),
- }
-
- index := -1
- for i := range s.Status.Conditions {
- // Reset all other conditions first
- s.Status.Conditions[i].Status = metav1.ConditionFalse
- s.Status.Conditions[i].LastTransitionTime = metav1.Now()
-
- // Locate existing ready condition if any
- if s.Status.Conditions[i].Type == string(lokiv1.ConditionReady) {
- index = i
- }
+ Type: string(lokiv1.ConditionReady),
+ Message: messageReady,
+ Reason: string(lokiv1.ReasonReadyComponents),
}
- if index == -1 {
- s.Status.Conditions = append(s.Status.Conditions, ready)
- } else {
- s.Status.Conditions[index] = ready
- }
-
- return k.Status().Update(ctx, &s, &client.UpdateOptions{})
+ return updateCondition(ctx, k, req, ready)
}
// SetFailedCondition updates or appends the condition Failed to the lokistack status conditions.
// In addition it resets all other Status conditions to false.
func SetFailedCondition(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- var s lokiv1.LokiStack
- if err := k.Get(ctx, req.NamespacedName, &s); err != nil {
- if apierrors.IsNotFound(err) {
- return nil
- }
- return kverrors.Wrap(err, "failed to lookup lokistack", "name", req.NamespacedName)
- }
-
- for _, cond := range s.Status.Conditions {
- if cond.Type == string(lokiv1.ConditionFailed) && cond.Status == metav1.ConditionTrue {
- return nil
- }
- }
-
failed := metav1.Condition{
- Type: string(lokiv1.ConditionFailed),
- Status: metav1.ConditionTrue,
- LastTransitionTime: metav1.Now(),
- Message: "Some LokiStack components failed",
- Reason: string(lokiv1.ReasonFailedComponents),
- }
-
- index := -1
- for i := range s.Status.Conditions {
- // Reset all other conditions first
- s.Status.Conditions[i].Status = metav1.ConditionFalse
- s.Status.Conditions[i].LastTransitionTime = metav1.Now()
-
- // Locate existing failed condition if any
- if s.Status.Conditions[i].Type == string(lokiv1.ConditionFailed) {
- index = i
- }
- }
-
- if index == -1 {
- s.Status.Conditions = append(s.Status.Conditions, failed)
- } else {
- s.Status.Conditions[index] = failed
+ Type: string(lokiv1.ConditionFailed),
+ Message: messageFailed,
+ Reason: string(lokiv1.ReasonFailedComponents),
}
- return k.Status().Update(ctx, &s, &client.UpdateOptions{})
+ return updateCondition(ctx, k, req, failed)
}
// SetPendingCondition updates or appends the condition Pending to the lokistack status conditions.
// In addition it resets all other Status conditions to false.
func SetPendingCondition(ctx context.Context, k k8s.Client, req ctrl.Request) error {
- var s lokiv1.LokiStack
- if err := k.Get(ctx, req.NamespacedName, &s); err != nil {
- if apierrors.IsNotFound(err) {
- return nil
- }
- return kverrors.Wrap(err, "failed to lookup lokistack", "name", req.NamespacedName)
- }
-
- for _, cond := range s.Status.Conditions {
- if cond.Type == string(lokiv1.ConditionPending) && cond.Status == metav1.ConditionTrue {
- return nil
- }
- }
-
pending := metav1.Condition{
- Type: string(lokiv1.ConditionPending),
- Status: metav1.ConditionTrue,
- LastTransitionTime: metav1.Now(),
- Message: "Some LokiStack components pending on dependendies",
- Reason: string(lokiv1.ReasonPendingComponents),
+ Type: string(lokiv1.ConditionPending),
+ Message: messagePending,
+ Reason: string(lokiv1.ReasonPendingComponents),
}
- index := -1
- for i := range s.Status.Conditions {
- // Reset all other conditions first
- s.Status.Conditions[i].Status = metav1.ConditionFalse
- s.Status.Conditions[i].LastTransitionTime = metav1.Now()
-
- // Locate existing pending condition if any
- if s.Status.Conditions[i].Type == string(lokiv1.ConditionPending) {
- index = i
- }
- }
+ return updateCondition(ctx, k, req, pending)
+}
- if index == -1 {
- s.Status.Conditions = append(s.Status.Conditions, pending)
- } else {
- s.Status.Conditions[index] = pending
+// SetDegradedCondition appends the condition Degraded to the lokistack status conditions.
+func SetDegradedCondition(ctx context.Context, k k8s.Client, req ctrl.Request, msg string, reason lokiv1.LokiStackConditionReason) error {
+ degraded := metav1.Condition{
+ Type: string(lokiv1.ConditionDegraded),
+ Message: msg,
+ Reason: string(reason),
}
- return k.Status().Update(ctx, &s, &client.UpdateOptions{})
+ return updateCondition(ctx, k, req, degraded)
}
-// SetDegradedCondition appends the condition Degraded to the lokistack status conditions.
-func SetDegradedCondition(ctx context.Context, k k8s.Client, req ctrl.Request, msg string, reason lokiv1.LokiStackConditionReason) error {
- var s lokiv1.LokiStack
- if err := k.Get(ctx, req.NamespacedName, &s); err != nil {
+func updateCondition(ctx context.Context, k k8s.Client, req ctrl.Request, condition metav1.Condition) error {
+ var stack lokiv1.LokiStack
+ if err := k.Get(ctx, req.NamespacedName, &stack); err != nil {
if apierrors.IsNotFound(err) {
return nil
}
- return kverrors.Wrap(err, "failed to lookup lokistack", "name", req.NamespacedName)
+ return kverrors.Wrap(err, "failed to lookup LokiStack", "name", req.NamespacedName)
}
- reasonStr := string(reason)
- for _, cond := range s.Status.Conditions {
- if cond.Type == string(lokiv1.ConditionDegraded) && cond.Reason == reasonStr && cond.Status == metav1.ConditionTrue {
+ for _, c := range stack.Status.Conditions {
+ if c.Type == condition.Type &&
+ c.Reason == condition.Reason &&
+ c.Message == condition.Message &&
+ c.Status == metav1.ConditionTrue {
+ // resource already has desired condition
return nil
}
}
- degraded := metav1.Condition{
- Type: string(lokiv1.ConditionDegraded),
- Status: metav1.ConditionTrue,
- LastTransitionTime: metav1.Now(),
- Reason: reasonStr,
- Message: msg,
- }
+ condition.Status = metav1.ConditionTrue
+
+ return retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ if err := k.Get(ctx, req.NamespacedName, &stack); err != nil {
+ return err
+ }
+
+ now := metav1.Now()
+ condition.LastTransitionTime = now
- index := -1
- for i := range s.Status.Conditions {
- // Reset all other conditions first
- s.Status.Conditions[i].Status = metav1.ConditionFalse
- s.Status.Conditions[i].LastTransitionTime = metav1.Now()
+ index := -1
+ for i := range stack.Status.Conditions {
+ // Reset all other conditions first
+ stack.Status.Conditions[i].Status = metav1.ConditionFalse
+ stack.Status.Conditions[i].LastTransitionTime = now
- // Locate existing pending condition if any
- if s.Status.Conditions[i].Type == string(lokiv1.ConditionDegraded) {
- index = i
+ // Locate existing pending condition if any
+ if stack.Status.Conditions[i].Type == condition.Type {
+ index = i
+ }
}
- }
- if index == -1 {
- s.Status.Conditions = append(s.Status.Conditions, degraded)
- } else {
- s.Status.Conditions[index] = degraded
- }
+ if index == -1 {
+ stack.Status.Conditions = append(stack.Status.Conditions, condition)
+ } else {
+ stack.Status.Conditions[index] = condition
+ }
- return k.Status().Update(ctx, &s, &client.UpdateOptions{})
+ return k.Status().Update(ctx, &stack)
+ })
}
diff --git a/operator/internal/status/lokistack_test.go b/operator/internal/status/lokistack_test.go
index 8e507aad2b6bd..4208cd9c2dea9 100644
--- a/operator/internal/status/lokistack_test.go
+++ b/operator/internal/status/lokistack_test.go
@@ -1,4 +1,4 @@
-package status_test
+package status
import (
"context"
@@ -6,7 +6,6 @@ import (
lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
"github.com/grafana/loki/operator/internal/external/k8s/k8sfakes"
- "github.com/grafana/loki/operator/internal/status"
"github.com/stretchr/testify/require"
@@ -18,9 +17,29 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
)
-func TestSetReadyCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
+func setupFakesNoError(t *testing.T, stack *lokiv1.LokiStack) (*k8sfakes.FakeClient, *k8sfakes.FakeStatusWriter) {
+ sw := &k8sfakes.FakeStatusWriter{}
k := &k8sfakes.FakeClient{}
+ k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
+ if name.Name == stack.Name && name.Namespace == stack.Namespace {
+ k.SetClientObject(object, stack)
+ return nil
+ }
+ return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
+ }
+ k.StatusStub = func() client.StatusWriter { return sw }
+
+ sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
+ actual := obj.(*lokiv1.LokiStack)
+ require.NotEmpty(t, actual.Status.Conditions)
+ require.Equal(t, metav1.ConditionTrue, actual.Status.Conditions[0].Status)
+ return nil
+ }
+
+ return k, sw
+}
+func TestSetReadyCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
r := ctrl.Request{
NamespacedName: types.NamespacedName{
Name: "my-stack",
@@ -28,17 +47,16 @@ func TestSetReadyCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewBadRequest("something wasn't found")
}
- err := status.SetReadyCondition(context.TODO(), k, r)
+ err := SetReadyCondition(context.Background(), k, r)
require.Error(t, err)
}
func TestSetReadyCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
r := ctrl.Request{
NamespacedName: types.NamespacedName{
Name: "my-stack",
@@ -46,17 +64,16 @@ func TestSetReadyCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
}
- err := status.SetReadyCondition(context.TODO(), k, r)
+ err := SetReadyCondition(context.Background(), k, r)
require.NoError(t, err)
}
func TestSetReadyCondition_WhenExisting_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -65,8 +82,10 @@ func TestSetReadyCondition_WhenExisting_DoNothing(t *testing.T) {
Status: lokiv1.LokiStackStatus{
Conditions: []metav1.Condition{
{
- Type: string(lokiv1.ConditionReady),
- Status: metav1.ConditionTrue,
+ Type: string(lokiv1.ConditionReady),
+ Message: messageReady,
+ Reason: string(lokiv1.ReasonReadyComponents),
+ Status: metav1.ConditionTrue,
},
},
},
@@ -79,25 +98,14 @@ func TestSetReadyCondition_WhenExisting_DoNothing(t *testing.T) {
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, _ := setupFakesNoError(t, &s)
- err := status.SetReadyCondition(context.TODO(), k, r)
+ err := SetReadyCondition(context.Background(), k, r)
require.NoError(t, err)
require.Zero(t, k.StatusCallCount())
}
func TestSetReadyCondition_WhenExisting_SetReadyConditionTrue(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -120,22 +128,9 @@ func TestSetReadyCondition_WhenExisting_SetReadyConditionTrue(t *testing.T) {
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- require.Equal(t, metav1.ConditionTrue, actual.Status.Conditions[0].Status)
- return nil
- }
+ k, sw := setupFakesNoError(t, &s)
- err := status.SetReadyCondition(context.TODO(), k, r)
+ err := SetReadyCondition(context.Background(), k, r)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
@@ -143,11 +138,6 @@ func TestSetReadyCondition_WhenExisting_SetReadyConditionTrue(t *testing.T) {
}
func TestSetReadyCondition_WhenNoneExisting_AppendReadyCondition(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -162,21 +152,9 @@ func TestSetReadyCondition_WhenNoneExisting_AppendReadyCondition(t *testing.T) {
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, sw := setupFakesNoError(t, &s)
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- return nil
- }
-
- err := status.SetReadyCondition(context.TODO(), k, r)
+ err := SetReadyCondition(context.Background(), k, r)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
@@ -184,8 +162,6 @@ func TestSetReadyCondition_WhenNoneExisting_AppendReadyCondition(t *testing.T) {
}
func TestSetFailedCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
r := ctrl.Request{
NamespacedName: types.NamespacedName{
Name: "my-stack",
@@ -193,17 +169,16 @@ func TestSetFailedCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewBadRequest("something wasn't found")
}
- err := status.SetFailedCondition(context.TODO(), k, r)
+ err := SetFailedCondition(context.Background(), k, r)
require.Error(t, err)
}
func TestSetFailedCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
r := ctrl.Request{
NamespacedName: types.NamespacedName{
Name: "my-stack",
@@ -211,17 +186,16 @@ func TestSetFailedCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
}
- err := status.SetFailedCondition(context.TODO(), k, r)
+ err := SetFailedCondition(context.Background(), k, r)
require.NoError(t, err)
}
func TestSetFailedCondition_WhenExisting_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -230,8 +204,10 @@ func TestSetFailedCondition_WhenExisting_DoNothing(t *testing.T) {
Status: lokiv1.LokiStackStatus{
Conditions: []metav1.Condition{
{
- Type: string(lokiv1.ConditionFailed),
- Status: metav1.ConditionTrue,
+ Type: string(lokiv1.ConditionFailed),
+ Reason: string(lokiv1.ReasonFailedComponents),
+ Message: messageFailed,
+ Status: metav1.ConditionTrue,
},
},
},
@@ -244,25 +220,14 @@ func TestSetFailedCondition_WhenExisting_DoNothing(t *testing.T) {
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, _ := setupFakesNoError(t, &s)
- err := status.SetFailedCondition(context.TODO(), k, r)
+ err := SetFailedCondition(context.Background(), k, r)
require.NoError(t, err)
require.Zero(t, k.StatusCallCount())
}
func TestSetFailedCondition_WhenExisting_SetFailedConditionTrue(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -285,22 +250,9 @@ func TestSetFailedCondition_WhenExisting_SetFailedConditionTrue(t *testing.T) {
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, sw := setupFakesNoError(t, &s)
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- require.Equal(t, metav1.ConditionTrue, actual.Status.Conditions[0].Status)
- return nil
- }
-
- err := status.SetFailedCondition(context.TODO(), k, r)
+ err := SetFailedCondition(context.Background(), k, r)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
@@ -308,11 +260,6 @@ func TestSetFailedCondition_WhenExisting_SetFailedConditionTrue(t *testing.T) {
}
func TestSetFailedCondition_WhenNoneExisting_AppendFailedCondition(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -327,21 +274,9 @@ func TestSetFailedCondition_WhenNoneExisting_AppendFailedCondition(t *testing.T)
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- return nil
- }
+ k, sw := setupFakesNoError(t, &s)
- err := status.SetFailedCondition(context.TODO(), k, r)
+ err := SetFailedCondition(context.Background(), k, r)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
@@ -349,8 +284,6 @@ func TestSetFailedCondition_WhenNoneExisting_AppendFailedCondition(t *testing.T)
}
func TestSetDegradedCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
msg := "tell me nothing"
reason := lokiv1.ReasonMissingObjectStorageSecret
@@ -361,17 +294,16 @@ func TestSetDegradedCondition_WhenGetLokiStackReturnsError_ReturnError(t *testin
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewBadRequest("something wasn't found")
}
- err := status.SetDegradedCondition(context.TODO(), k, r, msg, reason)
+ err := SetDegradedCondition(context.Background(), k, r, msg, reason)
require.Error(t, err)
}
func TestSetPendingCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
r := ctrl.Request{
NamespacedName: types.NamespacedName{
Name: "my-stack",
@@ -379,17 +311,16 @@ func TestSetPendingCondition_WhenGetLokiStackReturnsError_ReturnError(t *testing
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewBadRequest("something wasn't found")
}
- err := status.SetPendingCondition(context.TODO(), k, r)
+ err := SetPendingCondition(context.Background(), k, r)
require.Error(t, err)
}
func TestSetPendingCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
r := ctrl.Request{
NamespacedName: types.NamespacedName{
Name: "my-stack",
@@ -397,17 +328,16 @@ func TestSetPendingCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testin
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
}
- err := status.SetPendingCondition(context.TODO(), k, r)
+ err := SetPendingCondition(context.Background(), k, r)
require.NoError(t, err)
}
func TestSetPendingCondition_WhenExisting_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -416,8 +346,10 @@ func TestSetPendingCondition_WhenExisting_DoNothing(t *testing.T) {
Status: lokiv1.LokiStackStatus{
Conditions: []metav1.Condition{
{
- Type: string(lokiv1.ConditionPending),
- Status: metav1.ConditionTrue,
+ Type: string(lokiv1.ConditionPending),
+ Reason: string(lokiv1.ReasonPendingComponents),
+ Message: messagePending,
+ Status: metav1.ConditionTrue,
},
},
},
@@ -430,25 +362,14 @@ func TestSetPendingCondition_WhenExisting_DoNothing(t *testing.T) {
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, _ := setupFakesNoError(t, &s)
- err := status.SetPendingCondition(context.TODO(), k, r)
+ err := SetPendingCondition(context.Background(), k, r)
require.NoError(t, err)
require.Zero(t, k.StatusCallCount())
}
func TestSetPendingCondition_WhenExisting_SetPendingConditionTrue(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -471,33 +392,15 @@ func TestSetPendingCondition_WhenExisting_SetPendingConditionTrue(t *testing.T)
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, sw := setupFakesNoError(t, &s)
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- require.Equal(t, metav1.ConditionTrue, actual.Status.Conditions[0].Status)
- return nil
- }
-
- err := status.SetPendingCondition(context.TODO(), k, r)
+ err := SetPendingCondition(context.Background(), k, r)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
require.NotZero(t, sw.UpdateCallCount())
}
func TestSetPendingCondition_WhenNoneExisting_AppendPendingCondition(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
s := lokiv1.LokiStack{
ObjectMeta: metav1.ObjectMeta{
Name: "my-stack",
@@ -512,21 +415,9 @@ func TestSetPendingCondition_WhenNoneExisting_AppendPendingCondition(t *testing.
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- return nil
- }
+ k, sw := setupFakesNoError(t, &s)
- err := status.SetPendingCondition(context.TODO(), k, r)
+ err := SetPendingCondition(context.Background(), k, r)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
@@ -534,8 +425,6 @@ func TestSetPendingCondition_WhenNoneExisting_AppendPendingCondition(t *testing.
}
func TestSetDegradedCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
msg := "tell me nothing"
reason := lokiv1.ReasonMissingObjectStorageSecret
@@ -546,17 +435,16 @@ func TestSetDegradedCondition_WhenGetLokiStackReturnsNotFound_DoNothing(t *testi
},
}
+ k := &k8sfakes.FakeClient{}
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
}
- err := status.SetDegradedCondition(context.TODO(), k, r, msg, reason)
+ err := SetDegradedCondition(context.Background(), k, r, msg, reason)
require.NoError(t, err)
}
func TestSetDegradedCondition_WhenExisting_DoNothing(t *testing.T) {
- k := &k8sfakes.FakeClient{}
-
msg := "tell me nothing"
reason := lokiv1.ReasonMissingObjectStorageSecret
s := lokiv1.LokiStack{
@@ -567,9 +455,10 @@ func TestSetDegradedCondition_WhenExisting_DoNothing(t *testing.T) {
Status: lokiv1.LokiStackStatus{
Conditions: []metav1.Condition{
{
- Type: string(lokiv1.ConditionDegraded),
- Reason: string(reason),
- Status: metav1.ConditionTrue,
+ Type: string(lokiv1.ConditionDegraded),
+ Reason: string(reason),
+ Message: msg,
+ Status: metav1.ConditionTrue,
},
},
},
@@ -582,25 +471,14 @@ func TestSetDegradedCondition_WhenExisting_DoNothing(t *testing.T) {
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, _ := setupFakesNoError(t, &s)
- err := status.SetDegradedCondition(context.TODO(), k, r, msg, reason)
+ err := SetDegradedCondition(context.Background(), k, r, msg, reason)
require.NoError(t, err)
require.Zero(t, k.StatusCallCount())
}
func TestSetDegradedCondition_WhenExisting_SetDegradedConditionTrue(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
msg := "tell me something"
reason := lokiv1.ReasonMissingObjectStorageSecret
s := lokiv1.LokiStack{
@@ -626,33 +504,15 @@ func TestSetDegradedCondition_WhenExisting_SetDegradedConditionTrue(t *testing.T
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
+ k, sw := setupFakesNoError(t, &s)
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- require.Equal(t, metav1.ConditionTrue, actual.Status.Conditions[0].Status)
- return nil
- }
-
- err := status.SetDegradedCondition(context.TODO(), k, r, msg, reason)
+ err := SetDegradedCondition(context.Background(), k, r, msg, reason)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
require.NotZero(t, sw.UpdateCallCount())
}
func TestSetDegradedCondition_WhenNoneExisting_AppendDegradedCondition(t *testing.T) {
- sw := &k8sfakes.FakeStatusWriter{}
- k := &k8sfakes.FakeClient{}
-
- k.StatusStub = func() client.StatusWriter { return sw }
-
msg := "tell me something"
reason := lokiv1.ReasonMissingObjectStorageSecret
s := lokiv1.LokiStack{
@@ -669,21 +529,9 @@ func TestSetDegradedCondition_WhenNoneExisting_AppendDegradedCondition(t *testin
},
}
- k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
- k.SetClientObject(object, &s)
- return nil
- }
- return apierrors.NewNotFound(schema.GroupResource{}, "something wasn't found")
- }
-
- sw.UpdateStub = func(_ context.Context, obj client.Object, _ ...client.UpdateOption) error {
- actual := obj.(*lokiv1.LokiStack)
- require.NotEmpty(t, actual.Status.Conditions)
- return nil
- }
+ k, sw := setupFakesNoError(t, &s)
- err := status.SetDegradedCondition(context.TODO(), k, r, msg, reason)
+ err := SetDegradedCondition(context.Background(), k, r, msg, reason)
require.NoError(t, err)
require.NotZero(t, k.StatusCallCount())
|
operator
|
Fix status not updating when state of pods changes (#8087)
|
3d1f39fab607232a944f38577625dd05f7a8fe21
|
2019-08-08 20:46:53
|
Michael Dai
|
pipeline: Fixed labels process test with same objects (#869)
| false
|
diff --git a/pkg/logentry/stages/labels_test.go b/pkg/logentry/stages/labels_test.go
index 944bd7665c382..88781331c9354 100644
--- a/pkg/logentry/stages/labels_test.go
+++ b/pkg/logentry/stages/labels_test.go
@@ -158,7 +158,7 @@ func TestLabelStage_Process(t *testing.T) {
t.Fatal(err)
}
st.Process(test.inputLabels, test.extractedData, nil, nil)
- assert.Equal(t, test.expectedLabels, test.expectedLabels)
+ assert.Equal(t, test.expectedLabels, test.inputLabels)
})
}
}
|
pipeline
|
Fixed labels process test with same objects (#869)
|
292f91170b1caebea5d095cd58cc8a89d7f83ef6
|
2024-08-12 19:16:36
|
renovate[bot]
|
chore(deps): update helm/chart-testing-action action to v2.6.1 (#13855)
| false
|
diff --git a/.github/workflows/helm-ci.yml b/.github/workflows/helm-ci.yml
index f74cac11483e1..cbeaa4f986480 100644
--- a/.github/workflows/helm-ci.yml
+++ b/.github/workflows/helm-ci.yml
@@ -60,7 +60,7 @@ jobs:
python-version: 3.7
- name: Set up chart-testing
- uses: helm/[email protected]
+ uses: helm/[email protected]
- name: Run chart-testing (list-changed)
id: list-changed
|
chore
|
update helm/chart-testing-action action to v2.6.1 (#13855)
|
5195296a175774c6ae319c1aab581fdcf1b92dc3
|
2025-02-13 18:31:52
|
Cyril Tovena
|
fix(dataobj): Fixes timerange predicate (#16245)
| false
|
diff --git a/pkg/dataobj/predicate_test.go b/pkg/dataobj/predicate_test.go
new file mode 100644
index 0000000000000..37363a901aa66
--- /dev/null
+++ b/pkg/dataobj/predicate_test.go
@@ -0,0 +1,432 @@
+package dataobj
+
+import (
+ "testing"
+ "time"
+
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/dataobj/internal/sections/streams"
+)
+
+func TestMatchStreamsTimeRangePredicate(t *testing.T) {
+ now := time.Now()
+
+ tests := []struct {
+ name string
+ stream streams.Stream
+ pred Predicate
+ expected bool
+ }{
+ {
+ name: "stream fully inside range inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(1 * time.Hour),
+ MaxTimestamp: now.Add(2 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream fully inside range exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(1 * time.Hour),
+ MaxTimestamp: now.Add(2 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "stream overlaps start inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(-1 * time.Hour),
+ MaxTimestamp: now.Add(1 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream overlaps start exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(-1 * time.Hour),
+ MaxTimestamp: now.Add(1 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "stream overlaps end inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(2 * time.Hour),
+ MaxTimestamp: now.Add(4 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream overlaps end exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(2 * time.Hour),
+ MaxTimestamp: now.Add(4 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "stream encompasses range inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(-1 * time.Hour),
+ MaxTimestamp: now.Add(4 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream encompasses range exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(-1 * time.Hour),
+ MaxTimestamp: now.Add(4 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "stream before range inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(-2 * time.Hour),
+ MaxTimestamp: now.Add(-1 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: false,
+ },
+ {
+ name: "stream after range inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(4 * time.Hour),
+ MaxTimestamp: now.Add(5 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: false,
+ },
+ {
+ name: "stream exactly at start inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now,
+ MaxTimestamp: now.Add(1 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream exactly at start exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now,
+ MaxTimestamp: now.Add(1 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream exactly at end inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(2 * time.Hour),
+ MaxTimestamp: now.Add(3 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream exactly at end exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(2 * time.Hour),
+ MaxTimestamp: now.Add(3 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "stream end at start inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(1 * time.Hour),
+ MaxTimestamp: now.Add(2 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now.Add(2 * time.Hour),
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream end at start exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(1 * time.Hour),
+ MaxTimestamp: now.Add(2 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now.Add(2 * time.Hour),
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: true,
+ },
+ expected: false,
+ },
+ {
+ name: "stream start at end inclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(3 * time.Hour),
+ MaxTimestamp: now.Add(4 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now.Add(2 * time.Hour),
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "stream start at end exclusive",
+ stream: streams.Stream{
+ MinTimestamp: now.Add(3 * time.Hour),
+ MaxTimestamp: now.Add(4 * time.Hour),
+ },
+ pred: TimeRangePredicate[StreamsPredicate]{
+ StartTime: now.Add(2 * time.Hour),
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: false,
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ result := matchStreamsPredicate(tt.pred, tt.stream)
+ require.Equal(t, tt.expected, result, "matchStreamsPredicate returned unexpected result")
+ })
+ }
+}
+
+func TestMatchTimestamp(t *testing.T) {
+ now := time.Now()
+
+ tests := []struct {
+ name string
+ ts time.Time
+ pred TimeRangePredicate[LogsPredicate]
+ expected bool
+ }{
+ {
+ name: "timestamp inside range inclusive",
+ ts: now.Add(1 * time.Hour),
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "timestamp inside range exclusive",
+ ts: now.Add(1 * time.Hour),
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "timestamp before range inclusive",
+ ts: now.Add(-1 * time.Hour),
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: false,
+ },
+ {
+ name: "timestamp after range inclusive",
+ ts: now.Add(4 * time.Hour),
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: false,
+ },
+ expected: false,
+ },
+ {
+ name: "timestamp exactly at start inclusive",
+ ts: now,
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "timestamp exactly at start exclusive",
+ ts: now,
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: false,
+ },
+ {
+ name: "timestamp exactly at end inclusive",
+ ts: now.Add(3 * time.Hour),
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "timestamp exactly at end exclusive",
+ ts: now.Add(3 * time.Hour),
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: false,
+ },
+ {
+ name: "timestamp exactly at both bounds inclusive",
+ ts: now,
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now,
+ IncludeStart: true,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ {
+ name: "timestamp exactly at both bounds exclusive",
+ ts: now,
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now,
+ IncludeStart: false,
+ IncludeEnd: false,
+ },
+ expected: false,
+ },
+ {
+ name: "timestamp exactly at start with mixed bounds",
+ ts: now,
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: true,
+ IncludeEnd: false,
+ },
+ expected: true,
+ },
+ {
+ name: "timestamp exactly at end with mixed bounds",
+ ts: now.Add(3 * time.Hour),
+ pred: TimeRangePredicate[LogsPredicate]{
+ StartTime: now,
+ EndTime: now.Add(3 * time.Hour),
+ IncludeStart: false,
+ IncludeEnd: true,
+ },
+ expected: true,
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ result := matchTimestamp(tt.pred, tt.ts)
+ require.Equal(t, tt.expected, result, "matchTimestamp returned unexpected result")
+ })
+ }
+}
diff --git a/pkg/dataobj/streams_reader.go b/pkg/dataobj/streams_reader.go
index 77f84a7671e5b..d71e7bb7d90d2 100644
--- a/pkg/dataobj/streams_reader.go
+++ b/pkg/dataobj/streams_reader.go
@@ -192,8 +192,8 @@ func matchStreamsPredicate(p Predicate, stream streams.Stream) bool {
case NotPredicate[StreamsPredicate]:
return !matchStreamsPredicate(p.Inner, stream)
case TimeRangePredicate[StreamsPredicate]:
- // A stream is in range if either its min or max timestamp is in the range.
- return matchTimestamp(p, stream.MinTimestamp) || matchTimestamp(p, stream.MaxTimestamp)
+ // A stream matches if its time range overlaps with the query range
+ return overlapsTimeRange(p, stream.MinTimestamp, stream.MaxTimestamp)
case LabelMatcherPredicate:
return stream.Labels.Get(p.Name) == p.Value
case LabelFilterPredicate:
@@ -205,16 +205,31 @@ func matchStreamsPredicate(p Predicate, stream streams.Stream) bool {
}
}
+func overlapsTimeRange[P Predicate](p TimeRangePredicate[P], start, end time.Time) bool {
+ switch {
+ case p.IncludeStart && p.IncludeEnd:
+ return !end.Before(p.StartTime) && !start.After(p.EndTime)
+ case p.IncludeStart && !p.IncludeEnd:
+ return !end.Before(p.StartTime) && start.Before(p.EndTime)
+ case !p.IncludeStart && p.IncludeEnd:
+ return end.After(p.StartTime) && !start.After(p.EndTime)
+ case !p.IncludeStart && !p.IncludeEnd:
+ return end.After(p.StartTime) && start.Before(p.EndTime)
+ default:
+ panic("unreachable")
+ }
+}
+
func matchTimestamp[P Predicate](p TimeRangePredicate[P], ts time.Time) bool {
switch {
- case p.IncludeStart && p.IncludeEnd: // start <= ts <= end
- return (p.StartTime.Before(ts) || p.StartTime.Equal(ts)) && (ts.Before(p.EndTime) || ts.Equal(p.EndTime))
- case p.IncludeStart && !p.IncludeEnd: // start <= ts < end
- return (p.StartTime.Before(ts) || p.StartTime.Equal(ts)) && ts.Before(p.EndTime)
- case !p.IncludeStart && p.IncludeEnd: // start < ts <= end
- return p.StartTime.Before(ts) && (ts.Before(p.EndTime) || ts.Equal(p.EndTime))
- case !p.IncludeStart && !p.IncludeEnd: // start < ts < end
- return p.StartTime.Before(ts) && ts.Before(p.EndTime)
+ case p.IncludeStart && p.IncludeEnd:
+ return !ts.Before(p.StartTime) && !ts.After(p.EndTime) // ts >= start && ts <= end
+ case p.IncludeStart && !p.IncludeEnd:
+ return !ts.Before(p.StartTime) && ts.Before(p.EndTime) // ts >= start && ts < end
+ case !p.IncludeStart && p.IncludeEnd:
+ return ts.After(p.StartTime) && !ts.After(p.EndTime) // ts > start && ts <= end
+ case !p.IncludeStart && !p.IncludeEnd:
+ return ts.After(p.StartTime) && ts.Before(p.EndTime) // ts > start && ts < end
default:
panic("unreachable")
}
|
fix
|
Fixes timerange predicate (#16245)
|
282e38548ceb96b1c518010c47b8eabf4317e8fd
|
2024-04-24 02:10:33
|
Jay Clifford
|
feat: Update getting started demo to Loki 3.0 (#12723)
| false
|
diff --git a/docs/sources/get-started/quick-start.md b/docs/sources/get-started/quick-start.md
index 16e14be923acc..b08f07a8e7973 100644
--- a/docs/sources/get-started/quick-start.md
+++ b/docs/sources/get-started/quick-start.md
@@ -12,14 +12,15 @@ If you want to experiment with Loki, you can run Loki locally using the Docker C
The Docker Compose configuration instantiates the following components, each in its own container:
- **flog** a sample application which generates log lines. [flog](https://github.com/mingrammer/flog) is a log generator for common log formats.
-- **Promtail** which scrapes the log lines from flog, and pushes them to Loki through the gateway.
+- **Grafana Alloy** which scrapes the log lines from flog, and pushes them to Loki through the gateway.
- **Gateway** (NGINX) which receives requests and redirects them to the appropriate container based on the request's URL.
-- One Loki **read** component.
-- One Loki **write** component.
+- One Loki **read** component (Query Frontend, Querier).
+- One Loki **write** component (Distributor, Ingester).
+- One Loki **backend** component (Index Gateway, Compactor, Ruler, Bloom Compactor (Experimental), Bloom Gateway (Experimental)).
- **Minio** an S3-compatible object store which Loki uses to store its index and chunks.
- **Grafana** which provides visualization of the log lines captured within Loki.
-{{< figure max-width="75%" src="/media/docs/loki/get-started-flog-v2.png" caption="Getting started sample application" alt="Getting started sample application">}}
+{{< figure max-width="75%" src="/media/docs/loki/get-started-flog-v3.png" caption="Getting started sample application" alt="Getting started sample application">}}
## Installing Loki and collecting sample logs
@@ -41,11 +42,11 @@ This quickstart assumes you are running Linux.
cd evaluate-loki
```
-1. Download `loki-config.yaml`, `promtail-local-config.yaml`, and `docker-compose.yaml`:
+1. Download `loki-config.yaml`, `alloy-local-config.yaml`, and `docker-compose.yaml`:
```bash
wget https://raw.githubusercontent.com/grafana/loki/main/examples/getting-started/loki-config.yaml -O loki-config.yaml
- wget https://raw.githubusercontent.com/grafana/loki/main/examples/getting-started/promtail-local-config.yaml -O promtail-local-config.yaml
+ wget https://raw.githubusercontent.com/grafana/loki/main/examples/getting-started/alloy-local-config.yaml -O alloy-local-config.yaml
wget https://raw.githubusercontent.com/grafana/loki/main/examples/getting-started/docker-compose.yaml -O docker-compose.yaml
```
@@ -63,16 +64,20 @@ This quickstart assumes you are running Linux.
✔ Network evaluate-loki_loki Created 0.1s
✔ Container evaluate-loki-minio-1 Started 0.6s
✔ Container evaluate-loki-flog-1 Started 0.6s
+ ✔ Container evaluate-loki-backend-1 Started 0.8s
✔ Container evaluate-loki-write-1 Started 0.8s
✔ Container evaluate-loki-read-1 Started 0.8s
✔ Container evaluate-loki-gateway-1 Started 1.1s
✔ Container evaluate-loki-grafana-1 Started 1.4s
- ✔ Container evaluate-loki-promtail-1 Started 1.4s
+ ✔ Container evaluate-loki-alloy-1 Started 1.4s
```
1. (Optional) Verify that the Loki cluster is up and running.
- The read component returns `ready` when you point a web browser at [http://localhost:3101/ready](http://localhost:3101/ready). The message `Query Frontend not ready: not ready: number of schedulers this worker is connected to is 0` will show prior to the read component being ready.
- The write component returns `ready` when you point a web browser at [http://localhost:3102/ready](http://localhost:3102/ready). The message `Ingester not ready: waiting for 15s after being ready` will show prior to the write component being ready.
+
+1. (Optional) Verify that Grafana Alloy is running.
+ - Grafana Alloy's UI can be accessed at [http://localhost:12345](http://localhost:12345).
## Viewing your logs in Grafana
diff --git a/examples/getting-started/alloy-local-config.yaml b/examples/getting-started/alloy-local-config.yaml
new file mode 100644
index 0000000000000..ff0448ac54353
--- /dev/null
+++ b/examples/getting-started/alloy-local-config.yaml
@@ -0,0 +1,30 @@
+discovery.docker "flog_scrape" {
+ host = "unix:///var/run/docker.sock"
+ refresh_interval = "5s"
+}
+
+discovery.relabel "flog_scrape" {
+ targets = []
+
+ rule {
+ source_labels = ["__meta_docker_container_name"]
+ regex = "/(.*)"
+ target_label = "container"
+ }
+}
+
+loki.source.docker "flog_scrape" {
+ host = "unix:///var/run/docker.sock"
+ targets = discovery.docker.flog_scrape.targets
+ forward_to = [loki.write.default.receiver]
+ relabel_rules = discovery.relabel.flog_scrape.rules
+ refresh_interval = "5s"
+}
+
+loki.write "default" {
+ endpoint {
+ url = "http://gateway:3100/loki/api/v1/push"
+ tenant_id = "tenant1"
+ }
+ external_labels = {}
+}
diff --git a/examples/getting-started/docker-compose.yaml b/examples/getting-started/docker-compose.yaml
index 83dcde94d273e..449fe55f2b6e2 100644
--- a/examples/getting-started/docker-compose.yaml
+++ b/examples/getting-started/docker-compose.yaml
@@ -6,7 +6,7 @@ networks:
services:
read:
- image: grafana/loki:2.9.2
+ image: grafana/loki:3.0.0
command: "-config.file=/etc/loki/config.yaml -target=read"
ports:
- 3101:3100
@@ -27,7 +27,7 @@ services:
- loki
write:
- image: grafana/loki:2.9.2
+ image: grafana/loki:3.0.0
command: "-config.file=/etc/loki/config.yaml -target=write"
ports:
- 3102:3100
@@ -45,12 +45,14 @@ services:
networks:
<<: *loki-dns
- promtail:
- image: grafana/promtail:2.9.2
+ alloy:
+ image: grafana/alloy:latest
volumes:
- - ./promtail-local-config.yaml:/etc/promtail/config.yaml:ro
+ - ./alloy-local-config.yaml:/etc/alloy/config.alloy:ro
- /var/run/docker.sock:/var/run/docker.sock
- command: -config.file=/etc/promtail/config.yaml
+ command: run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
+ ports:
+ - 12345:12345
depends_on:
- gateway
networks:
@@ -118,6 +120,20 @@ services:
networks:
- loki
+ backend:
+ image: grafana/loki:3.0.0
+ volumes:
+ - ./loki-config.yaml:/etc/loki/config.yaml
+ ports:
+ - "3100"
+ - "7946"
+ command: "-config.file=/etc/loki/config.yaml -target=backend -legacy-read-mode=false"
+ depends_on:
+ - gateway
+ networks:
+ - loki
+
+
gateway:
image: nginx:latest
depends_on:
@@ -186,6 +202,7 @@ services:
retries: 5
networks:
- loki
+
flog:
image: mingrammer/flog
diff --git a/examples/getting-started/loki-config.yaml b/examples/getting-started/loki-config.yaml
index 73ca66f78796a..3228092e4e8f4 100644
--- a/examples/getting-started/loki-config.yaml
+++ b/examples/getting-started/loki-config.yaml
@@ -1,9 +1,17 @@
---
server:
+ http_listen_address: 0.0.0.0
http_listen_port: 3100
+
memberlist:
- join_members:
- - loki:7946
+ join_members: ["read", "write", "backend"]
+ dead_node_reclaim_time: 30s
+ gossip_to_dead_nodes_time: 15s
+ left_ingesters_timeout: 30s
+ bind_addr: ['0.0.0.0']
+ bind_port: 7946
+ gossip_interval: 2s
+
schema_config:
configs:
- from: 2021-08-01
@@ -16,6 +24,7 @@ schema_config:
common:
path_prefix: /loki
replication_factor: 1
+ compactor_address: http://backend:3100
storage:
s3:
endpoint: minio:9000
@@ -31,3 +40,6 @@ ruler:
storage:
s3:
bucketnames: loki-ruler
+
+compactor:
+ working_directory: /tmp/compactor
\ No newline at end of file
diff --git a/examples/getting-started/promtail-local-config.yaml b/examples/getting-started/promtail-local-config.yaml
deleted file mode 100644
index dcb2d3eed81a2..0000000000000
--- a/examples/getting-started/promtail-local-config.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-server:
- http_listen_port: 9080
- grpc_listen_port: 0
-
-positions:
- filename: /tmp/positions.yaml
-
-clients:
- - url: http://gateway:3100/loki/api/v1/push
- tenant_id: tenant1
-
-scrape_configs:
- - job_name: flog_scrape
- docker_sd_configs:
- - host: unix:///var/run/docker.sock
- refresh_interval: 5s
- relabel_configs:
- - source_labels: ['__meta_docker_container_name']
- regex: '/(.*)'
- target_label: 'container'
-
|
feat
|
Update getting started demo to Loki 3.0 (#12723)
|
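The memberlist changes in the quick-start diff above let the three Loki targets discover each other by Docker DNS name instead of a single `loki` host. A minimal sketch of the relevant fragment — the service names `read`, `write`, and `backend` are taken from the compose file in the same diff:

```yaml
memberlist:
  join_members: ["read", "write", "backend"]  # compose service names resolve via Docker's embedded DNS
  bind_addr: ['0.0.0.0']
  bind_port: 7946                             # gossip port each target exposes
```

Each container gossips ring state on port 7946, so all three targets join one memberlist cluster without any external coordination service.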
e57af51c37d0100c844ab7435a194890a9d3f349
|
2021-03-02 02:17:34
|
Ed Welch
|
promtail: Add pack stage (#3401)
| false
|
diff --git a/docs/sources/clients/promtail/stages/_index.md b/docs/sources/clients/promtail/stages/_index.md
index 1ee91611b13b1..5b5e2b833a6db 100644
--- a/docs/sources/clients/promtail/stages/_index.md
+++ b/docs/sources/clients/promtail/stages/_index.md
@@ -17,6 +17,7 @@ Parsing stages:
Transform stages:
- [template](template/): Use Go templates to modify extracted data.
+ - [pack](pack/): Packs a log line in a JSON object allowing extracted values and labels to be placed inside the log line.
Action stages:
diff --git a/docs/sources/clients/promtail/stages/pack.md b/docs/sources/clients/promtail/stages/pack.md
new file mode 100644
index 0000000000000..af767603c43d3
--- /dev/null
+++ b/docs/sources/clients/promtail/stages/pack.md
@@ -0,0 +1,84 @@
+---
+title: pack
+---
+# `pack` stage
+
+The `pack` stage is a transform stage which lets you embed extracted values and labels into the log line by packing the log line and labels inside a JSON object.
+
+For example, if you wanted to remove the labels `container` and `pod` but still wanted to keep their values you could use this stage to create the following output:
+
+```json
+{
+ "container": "myapp",
+ "pod": "pod-3223f",
+ "_entry": "original log message"
+}
+```
+
+The original message will be stored under the `_entry` key.
+
+This stage is useful if you have a label or other metadata you would like to keep, but it doesn't make a good label (it isn't useful for querying or is too high in cardinality).
+
+The querying capabilities of Loki make it easy to still access this data and filter/aggregate on it at query time.
+
+## Pack stage schema
+
+```yaml
+pack:
+ # Name from extracted data and/or line labels
+ # Labels provided here are automatically removed from the output labels.
+ labels:
+ - [<string>]
+
+  # Whether the resulting log line should keep any existing timestamp or use time.Now() for when the line was processed.
+  # To avoid out-of-order issues with Loki when combining several log streams (separate source files) into one,
+  # you will want to set a new timestamp on the log line, `ingest_timestamp: true`.
+  # If you are not combining multiple source files, or you know your log lines won't have interlaced timestamps,
+  # you can set this value to false.
+ [ingest_timestamp: <bool> | default = true]
+```
+
+## Examples
+
+Removing the `container` label and embedding it into the log line (Kubernetes pods can have multiple containers):
+
+```yaml
+pack:
+ labels:
+ - container
+```
+
+This would create a log line:
+
+```json
+{
+ "container": "myapp",
+ "_entry": "original log message"
+}
+```
+
+Loki 2.0 has some tools to make querying packed log lines easier as well.
+
+Display the log line as if it were never packed:
+
+```
+{cluster="us-central1", job="myjob"} | json | line_format "{{._entry}}"
+```
+
+Use the packed labels for filtering:
+
+```
+{cluster="us-central1", job="myjob"} | json | container="myapp" | line_format "{{._entry}}"
+```
+
+You can even use the `json` parser twice if your original message was json:
+
+```
+{cluster="us-central1", job="myjob"} | json | container="myapp" | line_format "{{._entry}}" | json | val_from_original_log_json="foo"
+```
+
+Or any other parser:
+
+```
+{cluster="us-central1", job="myjob"} | json | container="myapp" | line_format "{{._entry}}" | logfmt | val_from_original_log_json="foo"
+```
diff --git a/pkg/logentry/stages/pack.go b/pkg/logentry/stages/pack.go
new file mode 100644
index 0000000000000..52aee9254a080
--- /dev/null
+++ b/pkg/logentry/stages/pack.go
@@ -0,0 +1,222 @@
+package stages
+
+import (
+ "bytes"
+ "errors"
+ "fmt"
+ "reflect"
+ "sort"
+ "time"
+
+ "github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
+ json "github.com/json-iterator/go"
+ "github.com/mitchellh/mapstructure"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/common/model"
+)
+
+const (
+ entryKey = "_entry"
+)
+
+var (
+ reallyTrue = true
+ reallyFalse = false
+)
+
+type Packed struct {
+ Labels map[string]string `json:",inline"`
+ Entry string `json:"_entry"`
+}
+
+// UnmarshalJSON populates a Packed struct where every key except the _entry key is added to the Labels field
+func (w *Packed) UnmarshalJSON(data []byte) error {
+ m := &map[string]interface{}{}
+ err := json.Unmarshal(data, m)
+ if err != nil {
+ return err
+ }
+ w.Labels = map[string]string{}
+ for k, v := range *m {
+ // _entry key goes to the Entry field, everything else becomes a label
+ if k == entryKey {
+ if s, ok := v.(string); ok {
+ w.Entry = s
+ } else {
+ return errors.New("failed to unmarshal json, all values must be of type string")
+ }
+ } else {
+ if s, ok := v.(string); ok {
+ w.Labels[k] = s
+ } else {
+ return errors.New("failed to unmarshal json, all values must be of type string")
+ }
+ }
+ }
+ return nil
+}
+
+// MarshalJSON creates a Packed struct as JSON where the Labels are flattened into the top level of the object
+func (w Packed) MarshalJSON() ([]byte, error) {
+
+ // Marshal the entry to properly escape if it's json or contains quotes
+ b, err := json.Marshal(w.Entry)
+ if err != nil {
+ return nil, err
+ }
+
+ // Creating a map and marshalling from a map results in a non-deterministic ordering of the resulting json object
+ // This is functionally ok but really annoying to humans and automated tests.
+ // Instead we will build the json ourselves after sorting all the labels to get a consistent output
+ keys := make([]string, 0, len(w.Labels))
+ for k := range w.Labels {
+ keys = append(keys, k)
+ }
+ sort.Strings(keys)
+
+ var buf bytes.Buffer
+
+ buf.WriteString("{")
+ for i, k := range keys {
+ if i != 0 {
+ buf.WriteString(",")
+ }
+ // marshal key
+ key, err := json.Marshal(k)
+ if err != nil {
+ return nil, err
+ }
+ buf.Write(key)
+ buf.WriteString(":")
+ // marshal value
+ val, err := json.Marshal(w.Labels[k])
+ if err != nil {
+ return nil, err
+ }
+ buf.Write(val)
+ }
+ // Only add the comma if something exists in the buffer other than "{"
+ if buf.Len() > 1 {
+ buf.WriteString(",")
+ }
+ // Add the line entry
+ buf.WriteString("\"" + entryKey + "\":")
+ buf.Write(b)
+
+ buf.WriteString("}")
+ return buf.Bytes(), nil
+}
+
+// PackConfig contains the configuration for a packStage
+type PackConfig struct {
+ Labels []string `mapstructure:"labels"`
+ IngestTimestamp *bool `mapstructure:"ingest_timestamp"`
+}
+
+//nolint:unparam // Always returns nil until someone adds more validation and can remove this.
+// validatePackConfig validates the PackConfig for the packStage
+func validatePackConfig(cfg *PackConfig) error {
+ // Default the IngestTimestamp value to be true
+ if cfg.IngestTimestamp == nil {
+ cfg.IngestTimestamp = &reallyTrue
+ }
+ return nil
+}
+
+// newPackStage creates a packStage from config
+func newPackStage(logger log.Logger, config interface{}, registerer prometheus.Registerer) (Stage, error) {
+ cfg := &PackConfig{}
+ err := mapstructure.WeakDecode(config, cfg)
+ if err != nil {
+ return nil, err
+ }
+ err = validatePackConfig(cfg)
+ if err != nil {
+ return nil, err
+ }
+
+ return &packStage{
+ logger: log.With(logger, "component", "stage", "type", "pack"),
+ cfg: cfg,
+ dropCount: getDropCountMetric(registerer),
+ }, nil
+}
+
+// packStage embeds extracted values and labels into the log line by packing it in a JSON object
+type packStage struct {
+ logger log.Logger
+ cfg *PackConfig
+ dropCount *prometheus.CounterVec
+}
+
+func (m *packStage) Run(in chan Entry) chan Entry {
+ out := make(chan Entry)
+ go func() {
+ defer close(out)
+ for e := range in {
+ out <- m.pack(e)
+ }
+ }()
+ return out
+}
+
+func (m *packStage) pack(e Entry) Entry {
+ lbls := e.Labels
+ packedLabels := make(map[string]string, len(m.cfg.Labels))
+ foundLabels := []model.LabelName{}
+
+ // Iterate through all the extracted map (which also includes all the labels)
+ for lk, lv := range e.Extracted {
+ for _, wl := range m.cfg.Labels {
+ if lk == wl {
+ sv, err := getString(lv)
+ if err != nil {
+ if Debug {
+ level.Debug(m.logger).Log("msg", fmt.Sprintf("value for key: '%s' cannot be converted to a string and cannot be packed", lk), "err", err, "type", reflect.TypeOf(lv))
+ }
+ continue
+ }
+ packedLabels[wl] = sv
+ foundLabels = append(foundLabels, model.LabelName(lk))
+ }
+ }
+ }
+
+ // Embed the extracted labels into the wrapper object
+ w := Packed{
+ Labels: packedLabels,
+ Entry: e.Line,
+ }
+
+ // Marshal to json
+ wl, err := json.Marshal(w)
+ if err != nil {
+ if Debug {
+ level.Debug(m.logger).Log("msg", "pack stage failed to marshal packed object to json, packing will be skipped", "err", err)
+ }
+ return e
+ }
+
+ // Remove anything found which is also a label, do this after the marshalling to not remove labels until
+ // we are sure the line can be successfully packed.
+ for _, fl := range foundLabels {
+ delete(lbls, fl)
+ }
+
+ // Replace the labels and the line with new values
+ e.Labels = lbls
+ e.Line = string(wl)
+
+ // If the config says to re-write the timestamp to the ingested time, do that now
+ if m.cfg.IngestTimestamp != nil && *m.cfg.IngestTimestamp {
+ e.Timestamp = time.Now()
+ }
+
+ return e
+}
+
+// Name implements Stage
+func (m *packStage) Name() string {
+ return StageTypePack
+}
diff --git a/pkg/logentry/stages/pack_test.go b/pkg/logentry/stages/pack_test.go
new file mode 100644
index 0000000000000..cdfc5195d7ce1
--- /dev/null
+++ b/pkg/logentry/stages/pack_test.go
@@ -0,0 +1,368 @@
+package stages
+
+import (
+ "testing"
+ "time"
+
+ util_log "github.com/cortexproject/cortex/pkg/util/log"
+ json "github.com/json-iterator/go"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/common/model"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+ ww "github.com/weaveworks/common/server"
+
+ "github.com/grafana/loki/pkg/logproto"
+ "github.com/grafana/loki/pkg/promtail/api"
+)
+
+// Not all these are tested but are here to make sure the different types marshal without error
+var testPackYaml = `
+pipeline_stages:
+- match:
+ selector: "{container=\"foo\"}"
+ stages:
+ - pack:
+ labels:
+ - pod
+ - container
+ ingest_timestamp: false
+- match:
+ selector: "{container=\"bar\"}"
+ stages:
+ - pack:
+ labels:
+ - pod
+ - container
+ ingest_timestamp: true
+`
+
+// TestPackPipeline is used to verify we properly parse the yaml config and create a working pipeline
+func TestPackPipeline(t *testing.T) {
+ registry := prometheus.NewRegistry()
+ plName := "test_pipeline_deal_with_it_linter"
+ pl, err := NewPipeline(util_log.Logger, loadConfig(testPackYaml), &plName, registry)
+ require.NoError(t, err)
+
+ l1Lbls := model.LabelSet{
+ "pod": "foo-xsfs3",
+ "container": "foo",
+ "namespace": "dev",
+ "cluster": "us-eu-1",
+ }
+
+ l2Lbls := model.LabelSet{
+ "pod": "foo-vvsdded",
+ "container": "bar",
+ "namespace": "dev",
+ "cluster": "us-eu-1",
+ }
+
+ testTime := time.Now()
+
+ // Submit these both separately to get a deterministic output
+ out1 := processEntries(pl, newEntry(nil, l1Lbls, testMatchLogLineApp1, testTime))[0]
+ out2 := processEntries(pl, newEntry(nil, l2Lbls, testRegexLogLine, testTime))[0]
+
+ // Expected labels should remove the packed labels
+ expectedLbls := model.LabelSet{
+ "namespace": "dev",
+ "cluster": "us-eu-1",
+ }
+ assert.Equal(t, expectedLbls, out1.Labels)
+ assert.Equal(t, expectedLbls, out2.Labels)
+
+ // Validate timestamps
+ // Line 1 should use the first matcher and should use the log line timestamp
+ assert.Equal(t, testTime, out1.Timestamp)
+ // Line 2 should use the second matcher and should get timestamp by the pack stage
+ assert.True(t, out2.Timestamp.After(testTime))
+
+ // Unmarshal the packed object and validate line1
+ w := &Packed{}
+ assert.NoError(t, json.Unmarshal([]byte(out1.Entry.Entry.Line), w))
+ expectedPackedLabels := map[string]string{
+ "pod": "foo-xsfs3",
+ "container": "foo",
+ }
+ assert.Equal(t, expectedPackedLabels, w.Labels)
+ assert.Equal(t, testMatchLogLineApp1, w.Entry)
+
+ // Validate line 2
+ w = &Packed{}
+ assert.NoError(t, json.Unmarshal([]byte(out2.Entry.Entry.Line), w))
+ expectedPackedLabels = map[string]string{
+ "pod": "foo-vvsdded",
+ "container": "bar",
+ }
+ assert.Equal(t, expectedPackedLabels, w.Labels)
+ assert.Equal(t, testRegexLogLine, w.Entry)
+}
+
+func Test_packStage_Run(t *testing.T) {
+ // Enable debug logging
+ cfg := &ww.Config{}
+ require.Nil(t, cfg.LogLevel.Set("debug"))
+ util_log.InitLogger(cfg)
+ Debug = true
+
+ tests := []struct {
+ name string
+ config *PackConfig
+ inputEntry Entry
+ expectedEntry Entry
+ }{
+ {
+ name: "no supplied labels list",
+ config: &PackConfig{
+ Labels: nil,
+ IngestTimestamp: &reallyFalse,
+ },
+ inputEntry: Entry{
+ Extracted: map[string]interface{}{},
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "test line 1",
+ },
+ },
+ },
+ expectedEntry: Entry{
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "{\"" + entryKey + "\":\"test line 1\"}",
+ },
+ },
+ },
+ },
+ {
+ name: "match one supplied label",
+ config: &PackConfig{
+ Labels: []string{"foo"},
+ IngestTimestamp: &reallyFalse,
+ },
+ inputEntry: Entry{
+ Extracted: map[string]interface{}{},
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "test line 1",
+ },
+ },
+ },
+ expectedEntry: Entry{
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "{\"foo\":\"bar\",\"" + entryKey + "\":\"test line 1\"}",
+ },
+ },
+ },
+ },
+ {
+ name: "match all supplied labels",
+ config: &PackConfig{
+ Labels: []string{"foo", "bar"},
+ IngestTimestamp: &reallyFalse,
+ },
+ inputEntry: Entry{
+ Extracted: map[string]interface{}{},
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "test line 1",
+ },
+ },
+ },
+ expectedEntry: Entry{
+ Entry: api.Entry{
+ Labels: model.LabelSet{},
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "{\"bar\":\"baz\",\"foo\":\"bar\",\"" + entryKey + "\":\"test line 1\"}",
+ },
+ },
+ },
+ },
+ {
+ name: "match extracted map and labels",
+ config: &PackConfig{
+ Labels: []string{"foo", "extr1"},
+ IngestTimestamp: &reallyFalse,
+ },
+ inputEntry: Entry{
+ Extracted: map[string]interface{}{
+ "extr1": "etr1val",
+ "extr2": "etr2val",
+ },
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "test line 1",
+ },
+ },
+ },
+ expectedEntry: Entry{
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "{\"extr1\":\"etr1val\",\"foo\":\"bar\",\"" + entryKey + "\":\"test line 1\"}",
+ },
+ },
+ },
+ },
+ {
+ name: "extracted map value not convertable to a string",
+ config: &PackConfig{
+ Labels: []string{"foo", "extr2"},
+ IngestTimestamp: &reallyFalse,
+ },
+ inputEntry: Entry{
+ Extracted: map[string]interface{}{
+ "extr1": "etr1val",
+ "extr2": []int{1, 2, 3},
+ },
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "test line 1",
+ },
+ },
+ },
+ expectedEntry: Entry{
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "{\"foo\":\"bar\",\"" + entryKey + "\":\"test line 1\"}",
+ },
+ },
+ },
+ },
+ {
+ name: "escape quotes",
+ config: &PackConfig{
+ Labels: []string{"foo", "ex\"tr2"},
+ IngestTimestamp: &reallyFalse,
+ },
+ inputEntry: Entry{
+ Extracted: map[string]interface{}{
+ "extr1": "etr1val",
+ "ex\"tr2": `"fd"`,
+ },
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "test line 1",
+ },
+ },
+ },
+ expectedEntry: Entry{
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "{\"ex\\\"tr2\":\"\\\"fd\\\"\",\"foo\":\"bar\",\"" + entryKey + "\":\"test line 1\"}",
+ },
+ },
+ },
+ },
+ {
+ name: "ingest timestamp",
+ config: &PackConfig{
+ Labels: nil,
+ IngestTimestamp: &reallyTrue,
+ },
+ inputEntry: Entry{
+ Extracted: map[string]interface{}{},
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0),
+ Line: "test line 1",
+ },
+ },
+ },
+ expectedEntry: Entry{
+ Entry: api.Entry{
+ Labels: model.LabelSet{
+ "foo": "bar",
+ "bar": "baz",
+ },
+ Entry: logproto.Entry{
+ Timestamp: time.Unix(1, 0), // Ignored in test execution below
+ Line: "{\"" + entryKey + "\":\"test line 1\"}",
+ },
+ },
+ },
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ err := validatePackConfig(tt.config)
+ if err != nil {
+ t.Error(err)
+ }
+ m, err := newPackStage(util_log.Logger, tt.config, prometheus.DefaultRegisterer)
+ require.NoError(t, err)
+ // Normal pipeline operation will put all the labels into the extracted map
+ // replicate that here.
+ for labelName, labelValue := range tt.inputEntry.Labels {
+ tt.inputEntry.Extracted[string(labelName)] = string(labelValue)
+ }
+ out := processEntries(m, tt.inputEntry)
+ // Only verify the labels, line, and timestamp, this stage doesn't modify the extracted map
+ // so there is no reason to verify it
+ assert.Equal(t, tt.expectedEntry.Labels, out[0].Labels)
+ assert.Equal(t, tt.expectedEntry.Line, out[0].Line)
+ if *tt.config.IngestTimestamp {
+ assert.True(t, out[0].Timestamp.After(tt.inputEntry.Timestamp))
+ } else {
+ assert.Equal(t, tt.expectedEntry.Timestamp, out[0].Timestamp)
+ }
+
+ })
+ }
+}
diff --git a/pkg/logentry/stages/stage.go b/pkg/logentry/stages/stage.go
index 82c265074ebc1..c3e77f066560d 100644
--- a/pkg/logentry/stages/stage.go
+++ b/pkg/logentry/stages/stage.go
@@ -28,6 +28,7 @@ const (
StageTypeTenant = "tenant"
StageTypeDrop = "drop"
StageTypeMultiline = "multiline"
+ StageTypePack = "pack"
)
// Processor takes an existing set of labels, timestamp and log entry and returns either a possibly mutated
@@ -145,6 +146,11 @@ func New(logger log.Logger, jobName *string, stageType string,
if err != nil {
return nil, err
}
+ case StageTypePack:
+ s, err = newPackStage(logger, cfg, registerer)
+ if err != nil {
+ return nil, err
+ }
default:
return nil, errors.Errorf("Unknown stage type: %s", stageType)
}
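The deterministic marshalling that `Packed.MarshalJSON` performs in the diff above (label keys sorted, values JSON-escaped, the original line written last under the `_entry` key) can be sketched outside the pipeline with only the Go standard library. This is an illustrative re-implementation for clarity, not the promtail code itself; the `packLine` helper name is made up for the example:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"sort"
)

// packLine mimics the pack stage's output format: labels are flattened into
// a single JSON object in sorted key order, and the original log line is
// stored under the reserved "_entry" key.
func packLine(line string, labels map[string]string) (string, error) {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var buf bytes.Buffer
	buf.WriteString("{")
	for _, k := range keys {
		// json.Marshal handles escaping of quotes in keys and values.
		key, err := json.Marshal(k)
		if err != nil {
			return "", err
		}
		val, err := json.Marshal(labels[k])
		if err != nil {
			return "", err
		}
		buf.Write(key)
		buf.WriteString(":")
		buf.Write(val)
		buf.WriteString(",")
	}
	entry, err := json.Marshal(line)
	if err != nil {
		return "", err
	}
	buf.WriteString(`"_entry":`)
	buf.Write(entry)
	buf.WriteString("}")
	return buf.String(), nil
}

func main() {
	out, _ := packLine("original log message", map[string]string{
		"pod":       "pod-3223f",
		"container": "myapp",
	})
	fmt.Println(out)
	// → {"container":"myapp","pod":"pod-3223f","_entry":"original log message"}
}
```

Sorting the keys before writing them is what makes the output stable across runs; marshalling straight from a `map[string]string` would produce a valid but non-deterministic ordering, which the commit notes is annoying for humans and automated tests.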
|
promtail
|
Add pack stage (#3401)
|
41d8f959d4b1ab3a66df0d1629e9fdf2c9f5e0f0
|
2024-11-26 15:19:15
|
renovate[bot]
|
fix(deps): update module github.com/minio/minio-go/v7 to v7.0.81 (#15114)
| false
|
diff --git a/go.mod b/go.mod
index a207f41fbb376..4451bbb6a28cb 100644
--- a/go.mod
+++ b/go.mod
@@ -68,7 +68,7 @@ require (
github.com/klauspost/pgzip v1.2.6
github.com/leodido/go-syslog/v4 v4.2.0
github.com/mattn/go-ieproxy v0.0.12
- github.com/minio/minio-go/v7 v7.0.80
+ github.com/minio/minio-go/v7 v7.0.81
github.com/mitchellh/go-wordwrap v1.0.1
github.com/mitchellh/mapstructure v1.5.0
github.com/modern-go/reflect2 v1.0.2
diff --git a/go.sum b/go.sum
index ab6894ccbbd88..cc118394c5502 100644
--- a/go.sum
+++ b/go.sum
@@ -2164,8 +2164,8 @@ github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcs
github.com/minio/c2goasm v0.0.0-20190812172519-36a3d3bbc4f3/go.mod h1:RagcQ7I8IeTMnF8JTXieKnO4Z6JCsikNEzj0DwauVzE=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
-github.com/minio/minio-go/v7 v7.0.80 h1:2mdUHXEykRdY/BigLt3Iuu1otL0JTogT0Nmltg0wujk=
-github.com/minio/minio-go/v7 v7.0.80/go.mod h1:84gmIilaX4zcvAWWzJ5Z1WI5axN+hAbM5w25xf8xvC0=
+github.com/minio/minio-go/v7 v7.0.81 h1:SzhMN0TQ6T/xSBu6Nvw3M5M8voM+Ht8RH3hE8S7zxaA=
+github.com/minio/minio-go/v7 v7.0.81/go.mod h1:84gmIilaX4zcvAWWzJ5Z1WI5axN+hAbM5w25xf8xvC0=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/cli v1.1.4/go.mod h1:vTLESy5mRhKOs9KDp0/RATawxP1UqBmdrpVRMnpcvKQ=
diff --git a/vendor/github.com/minio/minio-go/v7/api-prompt-object.go b/vendor/github.com/minio/minio-go/v7/api-prompt-object.go
new file mode 100644
index 0000000000000..dac062a75b035
--- /dev/null
+++ b/vendor/github.com/minio/minio-go/v7/api-prompt-object.go
@@ -0,0 +1,78 @@
+/*
+ * MinIO Go Library for Amazon S3 Compatible Cloud Storage
+ * Copyright 2015-2024 MinIO, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package minio
+
+import (
+ "bytes"
+ "context"
+ "io"
+ "net/http"
+
+ "github.com/goccy/go-json"
+ "github.com/minio/minio-go/v7/pkg/s3utils"
+)
+
+// PromptObject performs language model inference with the prompt and referenced object as context.
+// Inference is performed using a Lambda handler that can process the prompt and object.
+// Currently, this functionality is limited to certain MinIO servers.
+func (c *Client) PromptObject(ctx context.Context, bucketName, objectName, prompt string, opts PromptObjectOptions) (io.ReadCloser, error) {
+ // Input validation.
+ if err := s3utils.CheckValidBucketName(bucketName); err != nil {
+ return nil, ErrorResponse{
+ StatusCode: http.StatusBadRequest,
+ Code: "InvalidBucketName",
+ Message: err.Error(),
+ }
+ }
+ if err := s3utils.CheckValidObjectName(objectName); err != nil {
+ return nil, ErrorResponse{
+ StatusCode: http.StatusBadRequest,
+ Code: "XMinioInvalidObjectName",
+ Message: err.Error(),
+ }
+ }
+
+ opts.AddLambdaArnToReqParams(opts.LambdaArn)
+ opts.SetHeader("Content-Type", "application/json")
+ opts.AddPromptArg("prompt", prompt)
+ promptReqBytes, err := json.Marshal(opts.PromptArgs)
+ if err != nil {
+ return nil, err
+ }
+
+ // Execute POST on bucket/object.
+ resp, err := c.executeMethod(ctx, http.MethodPost, requestMetadata{
+ bucketName: bucketName,
+ objectName: objectName,
+ queryValues: opts.toQueryValues(),
+ customHeader: opts.Header(),
+ contentSHA256Hex: sum256Hex(promptReqBytes),
+ contentBody: bytes.NewReader(promptReqBytes),
+ contentLength: int64(len(promptReqBytes)),
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ if resp.StatusCode != http.StatusOK {
+ defer closeResponse(resp)
+ return nil, httpRespToErrorResponse(resp, bucketName, objectName)
+ }
+
+ return resp.Body, nil
+}
diff --git a/vendor/github.com/minio/minio-go/v7/api-prompt-options.go b/vendor/github.com/minio/minio-go/v7/api-prompt-options.go
new file mode 100644
index 0000000000000..4493a75d4c779
--- /dev/null
+++ b/vendor/github.com/minio/minio-go/v7/api-prompt-options.go
@@ -0,0 +1,84 @@
+/*
+ * MinIO Go Library for Amazon S3 Compatible Cloud Storage
+ * Copyright 2015-2024 MinIO, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package minio
+
+import (
+ "net/http"
+ "net/url"
+)
+
+// PromptObjectOptions provides options to PromptObject call.
+// LambdaArn is the ARN of the Prompt Lambda to be invoked.
+// PromptArgs is a map of key-value pairs to be passed to the inference action on the Prompt Lambda.
+// "prompt" is a reserved key and should not be used as a key in PromptArgs.
+type PromptObjectOptions struct {
+ LambdaArn string
+ PromptArgs map[string]any
+ headers map[string]string
+ reqParams url.Values
+}
+
+// Header returns the http.Header representation of the POST options.
+func (o PromptObjectOptions) Header() http.Header {
+ headers := make(http.Header, len(o.headers))
+ for k, v := range o.headers {
+ headers.Set(k, v)
+ }
+ return headers
+}
+
+// AddPromptArg Add a key value pair to the prompt arguments where the key is a string and
+// the value is a JSON serializable.
+func (o *PromptObjectOptions) AddPromptArg(key string, value any) {
+ if o.PromptArgs == nil {
+ o.PromptArgs = make(map[string]any)
+ }
+ o.PromptArgs[key] = value
+}
+
+// AddLambdaArnToReqParams adds the lambdaArn to the request query string parameters.
+func (o *PromptObjectOptions) AddLambdaArnToReqParams(lambdaArn string) {
+ if o.reqParams == nil {
+ o.reqParams = make(url.Values)
+ }
+ o.reqParams.Add("lambdaArn", lambdaArn)
+}
+
+// SetHeader adds a key value pair to the options. The
+// key-value pair will be part of the HTTP POST request
+// headers.
+func (o *PromptObjectOptions) SetHeader(key, value string) {
+ if o.headers == nil {
+ o.headers = make(map[string]string)
+ }
+ o.headers[http.CanonicalHeaderKey(key)] = value
+}
+
+// toQueryValues - Convert the reqParams in Options to query string parameters.
+func (o *PromptObjectOptions) toQueryValues() url.Values {
+ urlValues := make(url.Values)
+ if o.reqParams != nil {
+ for key, values := range o.reqParams {
+ for _, value := range values {
+ urlValues.Add(key, value)
+ }
+ }
+ }
+
+ return urlValues
+}
diff --git a/vendor/github.com/minio/minio-go/v7/api-put-object-fan-out.go b/vendor/github.com/minio/minio-go/v7/api-put-object-fan-out.go
index 0ae9142e1d3c8..3023b949cd444 100644
--- a/vendor/github.com/minio/minio-go/v7/api-put-object-fan-out.go
+++ b/vendor/github.com/minio/minio-go/v7/api-put-object-fan-out.go
@@ -85,7 +85,10 @@ func (c *Client) PutObjectFanOut(ctx context.Context, bucket string, fanOutData
policy.SetEncryption(fanOutReq.SSE)
// Set checksum headers if any.
- policy.SetChecksum(fanOutReq.Checksum)
+ err := policy.SetChecksum(fanOutReq.Checksum)
+ if err != nil {
+ return nil, err
+ }
url, formData, err := c.PresignedPostPolicy(ctx, policy)
if err != nil {
diff --git a/vendor/github.com/minio/minio-go/v7/api.go b/vendor/github.com/minio/minio-go/v7/api.go
index 380ec4fdefe42..88e8d43477785 100644
--- a/vendor/github.com/minio/minio-go/v7/api.go
+++ b/vendor/github.com/minio/minio-go/v7/api.go
@@ -133,7 +133,7 @@ type Options struct {
// Global constants.
const (
libraryName = "minio-go"
- libraryVersion = "v7.0.80"
+ libraryVersion = "v7.0.81"
)
// User Agent should always following the below style.
diff --git a/vendor/github.com/minio/minio-go/v7/functional_tests.go b/vendor/github.com/minio/minio-go/v7/functional_tests.go
index c0180b36b7015..43383d13486b3 100644
--- a/vendor/github.com/minio/minio-go/v7/functional_tests.go
+++ b/vendor/github.com/minio/minio-go/v7/functional_tests.go
@@ -160,7 +160,7 @@ func logError(testName, function string, args map[string]interface{}, startTime
} else {
logFailure(testName, function, args, startTime, alert, message, err)
if !isRunOnFail() {
- panic(err)
+ panic(fmt.Sprintf("Test failed with message: %s, err: %v", message, err))
}
}
}
@@ -393,6 +393,42 @@ func getFuncNameLoc(caller int) string {
return strings.TrimPrefix(runtime.FuncForPC(pc).Name(), "main.")
}
+type ClientConfig struct {
+ // MinIO client configuration
+ TraceOn bool // Turn on tracing of HTTP requests and responses to stderr
+ CredsV2 bool // Use V2 credentials if true, otherwise use v4
+ TrailingHeaders bool // Send trailing headers in requests
+}
+
+func NewClient(config ClientConfig) (*minio.Client, error) {
+ // Instantiate new MinIO client
+ var creds *credentials.Credentials
+ if config.CredsV2 {
+ creds = credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), "")
+ } else {
+ creds = credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), "")
+ }
+ opts := &minio.Options{
+ Creds: creds,
+ Transport: createHTTPTransport(),
+ Secure: mustParseBool(os.Getenv(enableHTTPS)),
+ TrailingHeaders: config.TrailingHeaders,
+ }
+ client, err := minio.New(os.Getenv(serverEndpoint), opts)
+ if err != nil {
+ return nil, err
+ }
+
+ if config.TraceOn {
+ client.TraceOn(os.Stderr)
+ }
+
+ // Set user agent.
+ client.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+
+ return client, nil
+}
+
// Tests bucket re-create errors.
func testMakeBucketError() {
region := "eu-central-1"
@@ -407,27 +443,12 @@ func testMakeBucketError() {
"region": region,
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- Transport: createHTTPTransport(),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -462,20 +483,12 @@ func testMetadataSizeLimit() {
"objectName": "",
"opts.UserMetadata": "",
}
- rand.Seed(startTime.Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- Transport: createHTTPTransport(),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client creation failed", err)
return
}
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -531,27 +544,12 @@ func testMakeBucketRegions() {
"region": region,
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -598,27 +596,12 @@ func testPutObjectReadAt() {
"opts": "objectContentType",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -697,27 +680,12 @@ func testListObjectVersions() {
"recursive": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -817,27 +785,12 @@ func testStatObjectWithVersioning() {
function := "StatObject"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -935,27 +888,12 @@ func testGetObjectWithVersioning() {
function := "GetObject()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -1075,27 +1013,12 @@ func testPutObjectWithVersioning() {
function := "GetObject()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -1223,28 +1146,12 @@ func testListMultipartUpload() {
function := "GetObject()"
args := map[string]interface{}{}
- // Instantiate new minio client object.
- opts := &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- }
- c, err := minio.New(os.Getenv(serverEndpoint), opts)
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- core, err := minio.NewCore(os.Getenv(serverEndpoint), opts)
- if err != nil {
- logError(testName, function, args, startTime, "", "MinIO core client object creation failed", err)
- return
- }
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ core := minio.Core{Client: c}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
@@ -1347,27 +1254,12 @@ func testCopyObjectWithVersioning() {
function := "CopyObject()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -1485,27 +1377,12 @@ func testConcurrentCopyObjectWithVersioning() {
function := "CopyObject()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -1646,27 +1523,12 @@ func testComposeObjectWithVersioning() {
function := "ComposeObject()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -1787,27 +1649,12 @@ func testRemoveObjectWithVersioning() {
function := "DeleteObject()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -1900,27 +1747,12 @@ func testRemoveObjectsWithVersioning() {
function := "DeleteObjects()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -1996,27 +1828,12 @@ func testObjectTaggingWithVersioning() {
function := "{Get,Set,Remove}ObjectTagging()"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -2164,27 +1981,12 @@ func testPutObjectWithChecksums() {
return
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -2230,7 +2032,7 @@ func testPutObjectWithChecksums() {
h := test.cs.Hasher()
h.Reset()
- // Test with Wrong CRC.
+ // Test with a bad CRC - we haven't called h.Write(b), so this is a checksum of empty data
meta[test.cs.Key()] = base64.StdEncoding.EncodeToString(h.Sum(nil))
args["metadata"] = meta
args["range"] = "false"
@@ -2350,28 +2152,12 @@ func testPutObjectWithTrailingChecksums() {
return
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- TrailingHeaders: true,
- })
+ c, err := NewClient(ClientConfig{TrailingHeaders: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -2541,28 +2327,12 @@ func testPutMultipartObjectWithChecksums(trailing bool) {
return
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- TrailingHeaders: trailing,
- })
+ c, err := NewClient(ClientConfig{TrailingHeaders: trailing})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -2620,7 +2390,7 @@ func testPutMultipartObjectWithChecksums(trailing bool) {
cmpChecksum := func(got, want string) {
if want != got {
logError(testName, function, args, startTime, "", "checksum mismatch", fmt.Errorf("want %s, got %s", want, got))
- //fmt.Printf("want %s, got %s\n", want, got)
+ // fmt.Printf("want %s, got %s\n", want, got)
return
}
}
@@ -2741,25 +2511,12 @@ func testTrailingChecksums() {
return
}
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- TrailingHeaders: true,
- })
+ c, err := NewClient(ClientConfig{TrailingHeaders: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -2881,7 +2638,6 @@ func testTrailingChecksums() {
test.ChecksumCRC32C = hashMultiPart(b, int(test.PO.PartSize), test.hasher)
// Set correct CRC.
- // c.TraceOn(os.Stderr)
resp, err := c.PutObject(context.Background(), bucketName, objectName, bytes.NewReader(b), int64(bufSize), test.PO)
if err != nil {
logError(testName, function, args, startTime, "", "PutObject failed", err)
@@ -2933,6 +2689,8 @@ func testTrailingChecksums() {
delete(args, "metadata")
}
+
+ logSuccess(testName, function, args, startTime)
}
// Test PutObject with custom checksums.
@@ -2952,25 +2710,12 @@ func testPutObjectWithAutomaticChecksums() {
return
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- TrailingHeaders: true,
- })
+ c, err := NewClient(ClientConfig{TrailingHeaders: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -2997,8 +2742,6 @@ func testPutObjectWithAutomaticChecksums() {
{header: "x-amz-checksum-crc32c", hasher: crc32.New(crc32.MakeTable(crc32.Castagnoli))},
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
// defer c.TraceOff()
for i, test := range tests {
@@ -3108,20 +2851,12 @@ func testGetObjectAttributes() {
return
}
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- TrailingHeaders: true,
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{TrailingHeaders: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
err = c.MakeBucket(
@@ -3315,19 +3050,12 @@ func testGetObjectAttributesSSECEncryption() {
return
}
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- TrailingHeaders: true,
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- Transport: createHTTPTransport(),
- })
+ c, err := NewClient(ClientConfig{TrailingHeaders: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
err = c.MakeBucket(
@@ -3401,19 +3129,12 @@ func testGetObjectAttributesErrorCases() {
return
}
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- TrailingHeaders: true,
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{TrailingHeaders: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
unknownBucket := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-bucket-")
unknownObject := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-object-")
@@ -3657,27 +3378,12 @@ func testPutObjectWithMetadata() {
return
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -3764,27 +3470,12 @@ func testPutObjectWithContentLanguage() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -3834,27 +3525,12 @@ func testPutObjectStreaming() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -3906,27 +3582,12 @@ func testGetObjectSeekEnd() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4029,27 +3690,12 @@ func testGetObjectClosedTwice() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4120,26 +3766,13 @@ func testRemoveObjectsContext() {
"bucketName": "",
}
- // Seed random based on current tie.
- rand.Seed(time.Now().Unix())
-
// Instantiate new minio client.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
- // Enable tracing, write to stdout.
- // c.TraceOn(os.Stderr)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4217,27 +3850,12 @@ func testRemoveMultipleObjects() {
"bucketName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
- // Enable tracing, write to stdout.
- // c.TraceOn(os.Stderr)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4301,27 +3919,12 @@ func testRemoveMultipleObjectsWithResult() {
"bucketName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
- // Enable tracing, write to stdout.
- // c.TraceOn(os.Stderr)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4437,27 +4040,12 @@ func testFPutObjectMultipart() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4543,27 +4131,12 @@ func testFPutObject() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
location := "us-east-1"
@@ -4713,27 +4286,13 @@ func testFPutObjectContext() {
"fileName": "",
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4814,27 +4373,13 @@ func testFPutObjectContextV2() {
"objectName": "",
"opts": "minio.PutObjectOptions{ContentType:objectContentType}",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4919,24 +4464,12 @@ func testPutObjectContext() {
"opts": "",
}
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Make a new bucket.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -4989,27 +4522,12 @@ func testGetObjectS3Zip() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{"x-minio-extract": true}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -5173,27 +4691,12 @@ func testGetObjectReadSeekFunctional() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -5343,27 +4846,12 @@ func testGetObjectReadAtFunctional() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -5521,27 +5009,12 @@ func testGetObjectReadAtWhenEOFWasReached() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -5641,27 +5114,12 @@ func testPresignedPostPolicy() {
"policy": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
@@ -5689,50 +5147,22 @@ func testPresignedPostPolicy() {
return
}
- // Save the data
- _, err = c.PutObject(context.Background(), bucketName, objectName, bytes.NewReader(buf), int64(len(buf)), minio.PutObjectOptions{ContentType: "binary/octet-stream"})
- if err != nil {
- logError(testName, function, args, startTime, "", "PutObject failed", err)
- return
- }
-
policy := minio.NewPostPolicy()
-
- if err := policy.SetBucket(""); err == nil {
- logError(testName, function, args, startTime, "", "SetBucket did not fail for invalid conditions", err)
- return
- }
- if err := policy.SetKey(""); err == nil {
- logError(testName, function, args, startTime, "", "SetKey did not fail for invalid conditions", err)
- return
- }
- if err := policy.SetExpires(time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)); err == nil {
- logError(testName, function, args, startTime, "", "SetExpires did not fail for invalid conditions", err)
- return
- }
- if err := policy.SetContentType(""); err == nil {
- logError(testName, function, args, startTime, "", "SetContentType did not fail for invalid conditions", err)
- return
- }
- if err := policy.SetContentLengthRange(1024*1024, 1024); err == nil {
- logError(testName, function, args, startTime, "", "SetContentLengthRange did not fail for invalid conditions", err)
- return
- }
- if err := policy.SetUserMetadata("", ""); err == nil {
- logError(testName, function, args, startTime, "", "SetUserMetadata did not fail for invalid conditions", err)
- return
- }
-
policy.SetBucket(bucketName)
policy.SetKey(objectName)
policy.SetExpires(time.Now().UTC().AddDate(0, 0, 10)) // expires in 10 days
policy.SetContentType("binary/octet-stream")
policy.SetContentLengthRange(10, 1024*1024)
policy.SetUserMetadata(metadataKey, metadataValue)
+ policy.SetContentEncoding("gzip")
// Add CRC32C
checksum := minio.ChecksumCRC32C.ChecksumBytes(buf)
- policy.SetChecksum(checksum)
+ err = policy.SetChecksum(checksum)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "SetChecksum failed", err)
+ return
+ }
args["policy"] = policy.String()
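The policy above now pins a CRC32C checksum via `SetChecksum`. The value that ends up in the `x-amz-checksum-crc32c` form field is the big-endian CRC32C (Castagnoli) digest of the object, base64-encoded; a stdlib-only sketch of that encoding (the helper name is mine, not the SDK's):

```go
package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// crc32cBase64 returns the base64-encoded big-endian CRC32C digest of data,
// the wire format S3-compatible servers expect for x-amz-checksum-crc32c.
func crc32cBase64(data []byte) string {
	sum := crc32.Checksum(data, crc32.MakeTable(crc32.Castagnoli))
	buf := make([]byte, 4)
	binary.BigEndian.PutUint32(buf, sum)
	return base64.StdEncoding.EncodeToString(buf)
}

func main() {
	fmt.Println(crc32cBase64([]byte("hello world")))
}
```

Because the digest is 4 bytes, the encoded value is always 8 characters ending in `==`, which matches the shape of the `aHnJMw==` condition value the later assertion looks for.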
@@ -5828,7 +5258,7 @@ func testPresignedPostPolicy() {
expectedLocation := scheme + os.Getenv(serverEndpoint) + "/" + bucketName + "/" + objectName
expectedLocationBucketDNS := scheme + bucketName + "." + os.Getenv(serverEndpoint) + "/" + objectName
- if !strings.Contains(expectedLocation, "s3.amazonaws.com/") {
+ if !strings.Contains(expectedLocation, ".amazonaws.com/") {
// Test when not against AWS S3.
if val, ok := res.Header["Location"]; ok {
if val[0] != expectedLocation && val[0] != expectedLocationBucketDNS {
@@ -5840,9 +5270,194 @@ func testPresignedPostPolicy() {
return
}
}
- want := checksum.Encoded()
- if got := res.Header.Get("X-Amz-Checksum-Crc32c"); got != want {
- logError(testName, function, args, startTime, "", fmt.Sprintf("Want checksum %q, got %q", want, got), nil)
+ wantChecksumCrc32c := checksum.Encoded()
+ if got := res.Header.Get("X-Amz-Checksum-Crc32c"); got != wantChecksumCrc32c {
+ logError(testName, function, args, startTime, "", fmt.Sprintf("Want checksum %q, got %q", wantChecksumCrc32c, got), nil)
+ return
+ }
+
+ // Ensure that when we subsequently GetObject, the checksum is returned
+ gopts := minio.GetObjectOptions{Checksum: true}
+ r, err := c.GetObject(context.Background(), bucketName, objectName, gopts)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "GetObject failed", err)
+ return
+ }
+ st, err := r.Stat()
+ if err != nil {
+ logError(testName, function, args, startTime, "", "Stat failed", err)
+ return
+ }
+ if st.ChecksumCRC32C != wantChecksumCrc32c {
+ logError(testName, function, args, startTime, "", fmt.Sprintf("Want checksum %s, got %s", wantChecksumCrc32c, st.ChecksumCRC32C), nil)
+ return
+ }
+
+ logSuccess(testName, function, args, startTime)
+}
+
+// testPresignedPostPolicyWrongFile tests that when we have a policy with a checksum, we cannot POST the wrong file
+func testPresignedPostPolicyWrongFile() {
+ // initialize logging params
+ startTime := time.Now()
+ testName := getFuncName()
+ function := "PresignedPostPolicy(policy)"
+ args := map[string]interface{}{
+ "policy": "",
+ }
+
+ c, err := NewClient(ClientConfig{})
+ if err != nil {
+ logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
+ return
+ }
+
+ // Generate a new random bucket name.
+ bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
+
+ // Make a new bucket in 'us-east-1' (source bucket).
+ err = c.MakeBucket(context.Background(), bucketName, minio.MakeBucketOptions{Region: "us-east-1"})
+ if err != nil {
+ logError(testName, function, args, startTime, "", "MakeBucket failed", err)
+ return
+ }
+
+ defer cleanupBucket(bucketName, c)
+
+ // Generate 33K of data.
+ reader := getDataReader("datafile-33-kB")
+ defer reader.Close()
+
+ objectName := randString(60, rand.NewSource(time.Now().UnixNano()), "")
+ // Azure requires the key to not start with a number
+ metadataKey := randString(60, rand.NewSource(time.Now().UnixNano()), "user")
+ metadataValue := randString(60, rand.NewSource(time.Now().UnixNano()), "")
+
+ buf, err := io.ReadAll(reader)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "ReadAll failed", err)
+ return
+ }
+
+ policy := minio.NewPostPolicy()
+ policy.SetBucket(bucketName)
+ policy.SetKey(objectName)
+ policy.SetExpires(time.Now().UTC().AddDate(0, 0, 10)) // expires in 10 days
+ policy.SetContentType("binary/octet-stream")
+ policy.SetContentLengthRange(10, 1024*1024)
+ policy.SetUserMetadata(metadataKey, metadataValue)
+
+ // Add CRC32C of the 33kB file that the policy will explicitly allow.
+ checksum := minio.ChecksumCRC32C.ChecksumBytes(buf)
+ err = policy.SetChecksum(checksum)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "SetChecksum failed", err)
+ return
+ }
+
+ args["policy"] = policy.String()
+
+ presignedPostPolicyURL, formData, err := c.PresignedPostPolicy(context.Background(), policy)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "PresignedPostPolicy failed", err)
+ return
+ }
+
+ // At this stage, we have a policy that allows us to upload datafile-33-kB.
+ // Test that uploading datafile-10-kB, with a different checksum, fails as expected
+ filePath := getMintDataDirFilePath("datafile-10-kB")
+ if filePath == "" {
+ // Make a temp file with 10 KB data.
+ file, err := os.CreateTemp(os.TempDir(), "PresignedPostPolicyTest")
+ if err != nil {
+ logError(testName, function, args, startTime, "", "TempFile creation failed", err)
+ return
+ }
+ if _, err = io.Copy(file, getDataReader("datafile-10-kB")); err != nil {
+ logError(testName, function, args, startTime, "", "Copy failed", err)
+ return
+ }
+ if err = file.Close(); err != nil {
+ logError(testName, function, args, startTime, "", "File Close failed", err)
+ return
+ }
+ filePath = file.Name()
+ }
+ fileReader := getDataReader("datafile-10-kB")
+ defer fileReader.Close()
+ buf10k, err := io.ReadAll(fileReader)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "ReadAll failed", err)
+ return
+ }
+ otherChecksum := minio.ChecksumCRC32C.ChecksumBytes(buf10k)
+
+ var formBuf bytes.Buffer
+ writer := multipart.NewWriter(&formBuf)
+ for k, v := range formData {
+ if k == "x-amz-checksum-crc32c" {
+ v = otherChecksum.Encoded()
+ }
+ writer.WriteField(k, v)
+ }
+
+ // Add file to post request
+ f, err := os.Open(filePath)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "File open failed", err)
+ return
+ }
+ defer f.Close()
+ w, err := writer.CreateFormFile("file", filePath)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "CreateFormFile failed", err)
+ return
+ }
+ _, err = io.Copy(w, f)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "Copy failed", err)
+ return
+ }
+ writer.Close()
+
+ httpClient := &http.Client{
+ Timeout: 30 * time.Second,
+ Transport: createHTTPTransport(),
+ }
+ args["url"] = presignedPostPolicyURL.String()
+
+ req, err := http.NewRequest(http.MethodPost, presignedPostPolicyURL.String(), bytes.NewReader(formBuf.Bytes()))
+ if err != nil {
+ logError(testName, function, args, startTime, "", "HTTP request failed", err)
+ return
+ }
+
+ req.Header.Set("Content-Type", writer.FormDataContentType())
+
+ // Make the POST request with the form data.
+ res, err := httpClient.Do(req)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "HTTP request failed", err)
+ return
+ }
+ defer res.Body.Close()
+ if res.StatusCode != http.StatusForbidden {
+ logError(testName, function, args, startTime, "", "HTTP request unexpected status", errors.New(res.Status))
+ return
+ }
+
+ // Read the response body, ensure it has checksum failure message
+ resBody, err := io.ReadAll(res.Body)
+ if err != nil {
+ logError(testName, function, args, startTime, "", "ReadAll failed", err)
+ return
+ }
+
+ // Normalize the response body: S3 quotes the policy condition components
+ // in its error message, while MinIO does not.
+ resBodyStr := strings.ReplaceAll(string(resBody), `"`, "")
+ if !strings.Contains(resBodyStr, "Policy Condition failed: [eq, $x-amz-checksum-crc32c, aHnJMw==]") {
+ logError(testName, function, args, startTime, "", "Unexpected response body", errors.New(resBodyStr))
return
}
@@ -5857,27 +5472,12 @@ func testCopyObject() {
function := "CopyObject(dst, src)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
@@ -6052,27 +5652,12 @@ func testSSECEncryptedGetObjectReadSeekFunctional() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -6235,27 +5820,12 @@ func testSSES3EncryptedGetObjectReadSeekFunctional() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -6416,27 +5986,12 @@ func testSSECEncryptedGetObjectReadAtFunctional() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -6600,27 +6155,12 @@ func testSSES3EncryptedGetObjectReadAtFunctional() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -6785,27 +6325,13 @@ func testSSECEncryptionPutGet() {
"objectName": "",
"sse": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -6895,27 +6421,13 @@ func testSSECEncryptionFPut() {
"contentType": "",
"sse": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -7018,27 +6530,13 @@ func testSSES3EncryptionPutGet() {
"objectName": "",
"sse": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -7126,27 +6624,13 @@ func testSSES3EncryptionFPut() {
"contentType": "",
"sse": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -7255,26 +6739,12 @@ func testBucketNotification() {
return
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable to debug
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
bucketName := os.Getenv("NOTIFY_BUCKET")
args["bucketName"] = bucketName
@@ -7350,26 +6820,12 @@ func testFunctional() {
functionAll := ""
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, nil, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable to debug
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
@@ -8029,24 +7485,12 @@ func testGetObjectModified() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Make a new bucket.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -8125,24 +7569,12 @@ func testPutObjectUploadSeekedObject() {
"contentType": "binary/octet-stream",
}
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Make a new bucket.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -8245,27 +7677,12 @@ func testMakeBucketErrorV2() {
"region": "eu-west-1",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
region := "eu-west-1"
@@ -8305,27 +7722,12 @@ func testGetObjectClosedTwiceV2() {
"region": "eu-west-1",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -8396,27 +7798,12 @@ func testFPutObjectV2() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -8557,27 +7944,12 @@ func testMakeBucketRegionsV2() {
"region": "eu-west-1",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -8620,27 +7992,12 @@ func testGetObjectReadSeekFunctionalV2() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -8775,27 +8132,12 @@ func testGetObjectReadAtFunctionalV2() {
function := "GetObject(bucketName, objectName)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -8937,27 +8279,12 @@ func testCopyObjectV2() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
@@ -9156,13 +8483,7 @@ func testComposeObjectErrorCasesV2() {
function := "ComposeObject(destination, sourceList)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
@@ -9254,13 +8575,7 @@ func testCompose10KSourcesV2() {
function := "ComposeObject(destination, sourceList)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
@@ -9276,13 +8591,7 @@ func testEncryptedEmptyObject() {
function := "PutObject(bucketName, objectName, reader, objectSize, opts)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9430,7 +8739,7 @@ func testEncryptedCopyObjectWrapper(c *minio.Client, bucketName string, sseSrc,
dstEncryption = sseDst
}
// 3. get copied object and check if content is equal
- coreClient := minio.Core{c}
+ coreClient := minio.Core{Client: c}
reader, _, _, err := coreClient.GetObject(context.Background(), bucketName, "dstObject", minio.GetObjectOptions{ServerSideEncryption: dstEncryption})
if err != nil {
logError(testName, function, args, startTime, "", "GetObject failed", err)
@@ -9537,13 +8846,7 @@ func testUnencryptedToSSECCopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9552,7 +8855,6 @@ func testUnencryptedToSSECCopyObject() {
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
sseDst := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"dstObject"))
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, nil, sseDst)
}
@@ -9564,13 +8866,7 @@ func testUnencryptedToSSES3CopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9580,7 +8876,6 @@ func testUnencryptedToSSES3CopyObject() {
var sseSrc encrypt.ServerSide
sseDst := encrypt.NewSSE()
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9592,13 +8887,7 @@ func testUnencryptedToUnencryptedCopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9607,7 +8896,6 @@ func testUnencryptedToUnencryptedCopyObject() {
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
var sseSrc, sseDst encrypt.ServerSide
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9619,13 +8907,7 @@ func testEncryptedSSECToSSECCopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9635,7 +8917,6 @@ func testEncryptedSSECToSSECCopyObject() {
sseSrc := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"srcObject"))
sseDst := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"dstObject"))
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9647,13 +8928,7 @@ func testEncryptedSSECToSSES3CopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9663,7 +8938,6 @@ func testEncryptedSSECToSSES3CopyObject() {
sseSrc := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"srcObject"))
sseDst := encrypt.NewSSE()
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9675,13 +8949,7 @@ func testEncryptedSSECToUnencryptedCopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9691,7 +8959,6 @@ func testEncryptedSSECToUnencryptedCopyObject() {
sseSrc := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"srcObject"))
var sseDst encrypt.ServerSide
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9703,13 +8970,7 @@ func testEncryptedSSES3ToSSECCopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9719,7 +8980,6 @@ func testEncryptedSSES3ToSSECCopyObject() {
sseSrc := encrypt.NewSSE()
sseDst := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"dstObject"))
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9731,13 +8991,7 @@ func testEncryptedSSES3ToSSES3CopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9747,7 +9001,6 @@ func testEncryptedSSES3ToSSES3CopyObject() {
sseSrc := encrypt.NewSSE()
sseDst := encrypt.NewSSE()
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9759,13 +9012,7 @@ func testEncryptedSSES3ToUnencryptedCopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9775,7 +9022,6 @@ func testEncryptedSSES3ToUnencryptedCopyObject() {
sseSrc := encrypt.NewSSE()
var sseDst encrypt.ServerSide
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9787,13 +9033,7 @@ func testEncryptedCopyObjectV2() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
@@ -9803,7 +9043,6 @@ func testEncryptedCopyObjectV2() {
sseSrc := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"srcObject"))
sseDst := encrypt.DefaultPBKDF([]byte("correct horse battery staple"), []byte(bucketName+"dstObject"))
- // c.TraceOn(os.Stderr)
testEncryptedCopyObjectWrapper(c, bucketName, sseSrc, sseDst)
}
@@ -9814,13 +9053,7 @@ func testDecryptedCopyObject() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -9874,26 +9107,14 @@ func testSSECMultipartEncryptedToSSECCopyObjectPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -10072,26 +9293,14 @@ func testSSECEncryptedToSSECCopyObjectPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -10250,26 +9459,14 @@ func testSSECEncryptedToUnencryptedCopyPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -10427,26 +9624,14 @@ func testSSECEncryptedToSSES3CopyObjectPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -10607,26 +9792,14 @@ func testUnencryptedToSSECCopyObjectPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -10782,26 +9955,14 @@ func testUnencryptedToUnencryptedCopyPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -10953,26 +10114,14 @@ func testUnencryptedToSSES3CopyObjectPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -11126,26 +10275,14 @@ func testSSES3EncryptedToSSECCopyObjectPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -11302,26 +10439,14 @@ func testSSES3EncryptedToUnencryptedCopyPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -11474,26 +10599,14 @@ func testSSES3EncryptedToSSES3CopyObjectPart() {
function := "CopyObjectPart(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- client, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ client, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
}
// Instantiate new core client object.
- c := minio.Core{client}
-
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
+ c := minio.Core{Client: client}
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
@@ -11648,19 +10761,12 @@ func testUserMetadataCopying() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // c.TraceOn(os.Stderr)
testUserMetadataCopyingWrapper(c)
}
@@ -11825,19 +10931,12 @@ func testUserMetadataCopyingV2() {
function := "CopyObject(destination, source)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // c.TraceOn(os.Stderr)
testUserMetadataCopyingWrapper(c)
}
@@ -11848,13 +10947,7 @@ func testStorageClassMetadataPutObject() {
args := map[string]interface{}{}
testName := getFuncName()
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -11936,13 +11029,7 @@ func testStorageClassInvalidMetadataPutObject() {
args := map[string]interface{}{}
testName := getFuncName()
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -11979,13 +11066,7 @@ func testStorageClassMetadataCopyObject() {
args := map[string]interface{}{}
testName := getFuncName()
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- Transport: createHTTPTransport(),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO v4 client object creation failed", err)
return
@@ -12106,27 +11187,12 @@ func testPutObjectNoLengthV2() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -12182,27 +11248,12 @@ func testPutObjectsUnknownV2() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -12273,27 +11324,12 @@ func testPutObject0ByteV2() {
"opts": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -12338,13 +11374,7 @@ func testComposeObjectErrorCases() {
function := "ComposeObject(destination, sourceList)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
@@ -12361,13 +11391,7 @@ func testCompose10KSources() {
function := "ComposeObject(destination, sourceList)"
args := map[string]interface{}{}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
@@ -12385,26 +11409,12 @@ func testFunctionalV2() {
functionAll := ""
args := map[string]interface{}{}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
-
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- Transport: createHTTPTransport(),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable to debug
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
location := "us-east-1"
@@ -12838,27 +11848,13 @@ func testGetObjectContext() {
"bucketName": "",
"objectName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -12941,27 +11937,13 @@ func testFGetObjectContext() {
"objectName": "",
"fileName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -13033,24 +12015,12 @@ func testGetObjectRanges() {
defer cancel()
rng := rand.NewSource(time.Now().UnixNano())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rng, "minio-go-test-")
args["bucketName"] = bucketName
@@ -13140,27 +12110,13 @@ func testGetObjectACLContext() {
"bucketName": "",
"objectName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -13318,24 +12274,12 @@ func testPutObjectContextV2() {
"size": "",
"opts": "",
}
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Make a new bucket.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -13390,27 +12334,13 @@ func testGetObjectContextV2() {
"bucketName": "",
"objectName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -13491,27 +12421,13 @@ func testFGetObjectContextV2() {
"objectName": "",
"fileName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV2(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{CredsV2: true})
if err != nil {
- logError(testName, function, args, startTime, "", "MinIO client v2 object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO v2 client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -13580,27 +12496,13 @@ func testListObjects() {
"objectPrefix": "",
"recursive": "true",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -13684,24 +12586,12 @@ func testCors() {
"cors": "",
}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Create or reuse a bucket that will get cors settings applied to it and deleted when done
bucketName := os.Getenv("MINIO_GO_TEST_BUCKET_CORS")
if bucketName == "" {
@@ -14420,24 +13310,12 @@ func testCorsSetGetDelete() {
"cors": "",
}
- // Instantiate new minio client object
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -14519,27 +13397,13 @@ func testRemoveObjects() {
"objectPrefix": "",
"recursive": "true",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -14653,27 +13517,13 @@ func testGetBucketTagging() {
args := map[string]interface{}{
"bucketName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -14709,27 +13559,13 @@ func testSetBucketTagging() {
"bucketName": "",
"tags": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -14795,27 +13631,13 @@ func testRemoveBucketTagging() {
args := map[string]interface{}{
"bucketName": "",
}
- // Seed random based on current time.
- rand.Seed(time.Now().Unix())
- // Instantiate new minio client object.
- c, err := minio.New(os.Getenv(serverEndpoint),
- &minio.Options{
- Creds: credentials.NewStaticV4(os.Getenv(accessKey), os.Getenv(secretKey), ""),
- Transport: createHTTPTransport(),
- Secure: mustParseBool(os.Getenv(enableHTTPS)),
- })
+ c, err := NewClient(ClientConfig{})
if err != nil {
logError(testName, function, args, startTime, "", "MinIO client v4 object creation failed", err)
return
}
- // Enable tracing, write to stderr.
- // c.TraceOn(os.Stderr)
-
- // Set user agent.
- c.SetAppInfo("MinIO-go-FunctionalTest", appVersion)
-
// Generate a new random bucket name.
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
args["bucketName"] = bucketName
@@ -14961,6 +13783,7 @@ func main() {
testGetObjectReadAtFunctional()
testGetObjectReadAtWhenEOFWasReached()
testPresignedPostPolicy()
+ testPresignedPostPolicyWrongFile()
testCopyObject()
testComposeObjectErrorCases()
testCompose10KSources()
diff --git a/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go b/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go
index f1c76c78ea0a3..787f0a38d69fa 100644
--- a/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go
+++ b/vendor/github.com/minio/minio-go/v7/pkg/credentials/sts_web_identity.go
@@ -58,9 +58,10 @@ type WebIdentityResult struct {
// WebIdentityToken - web identity token with expiry.
type WebIdentityToken struct {
- Token string
- AccessToken string
- Expiry int
+ Token string
+ AccessToken string
+ RefreshToken string
+ Expiry int
}
// A STSWebIdentity retrieves credentials from MinIO service, and keeps track if
diff --git a/vendor/github.com/minio/minio-go/v7/post-policy.go b/vendor/github.com/minio/minio-go/v7/post-policy.go
index 19687e027d017..26bf441b56f1c 100644
--- a/vendor/github.com/minio/minio-go/v7/post-policy.go
+++ b/vendor/github.com/minio/minio-go/v7/post-policy.go
@@ -85,7 +85,7 @@ func (p *PostPolicy) SetExpires(t time.Time) error {
// SetKey - Sets an object name for the policy based upload.
func (p *PostPolicy) SetKey(key string) error {
- if strings.TrimSpace(key) == "" || key == "" {
+ if strings.TrimSpace(key) == "" {
return errInvalidArgument("Object name is empty.")
}
policyCond := policyCondition{
@@ -118,7 +118,7 @@ func (p *PostPolicy) SetKeyStartsWith(keyStartsWith string) error {
// SetBucket - Sets bucket at which objects will be uploaded to.
func (p *PostPolicy) SetBucket(bucketName string) error {
- if strings.TrimSpace(bucketName) == "" || bucketName == "" {
+ if strings.TrimSpace(bucketName) == "" {
return errInvalidArgument("Bucket name is empty.")
}
policyCond := policyCondition{
@@ -135,7 +135,7 @@ func (p *PostPolicy) SetBucket(bucketName string) error {
// SetCondition - Sets condition for credentials, date and algorithm
func (p *PostPolicy) SetCondition(matchType, condition, value string) error {
- if strings.TrimSpace(value) == "" || value == "" {
+ if strings.TrimSpace(value) == "" {
return errInvalidArgument("No value specified for condition")
}
@@ -156,7 +156,7 @@ func (p *PostPolicy) SetCondition(matchType, condition, value string) error {
// SetTagging - Sets tagging for the object for this policy based upload.
func (p *PostPolicy) SetTagging(tagging string) error {
- if strings.TrimSpace(tagging) == "" || tagging == "" {
+ if strings.TrimSpace(tagging) == "" {
return errInvalidArgument("No tagging specified.")
}
_, err := tags.ParseObjectXML(strings.NewReader(tagging))
@@ -178,7 +178,7 @@ func (p *PostPolicy) SetTagging(tagging string) error {
// SetContentType - Sets content-type of the object for this policy
// based upload.
func (p *PostPolicy) SetContentType(contentType string) error {
- if strings.TrimSpace(contentType) == "" || contentType == "" {
+ if strings.TrimSpace(contentType) == "" {
return errInvalidArgument("No content type specified.")
}
policyCond := policyCondition{
@@ -211,7 +211,7 @@ func (p *PostPolicy) SetContentTypeStartsWith(contentTypeStartsWith string) erro
// SetContentDisposition - Sets content-disposition of the object for this policy
func (p *PostPolicy) SetContentDisposition(contentDisposition string) error {
- if strings.TrimSpace(contentDisposition) == "" || contentDisposition == "" {
+ if strings.TrimSpace(contentDisposition) == "" {
return errInvalidArgument("No content disposition specified.")
}
policyCond := policyCondition{
@@ -226,27 +226,44 @@ func (p *PostPolicy) SetContentDisposition(contentDisposition string) error {
return nil
}
+// SetContentEncoding - Sets content-encoding of the object for this policy
+func (p *PostPolicy) SetContentEncoding(contentEncoding string) error {
+ if strings.TrimSpace(contentEncoding) == "" {
+ return errInvalidArgument("No content encoding specified.")
+ }
+ policyCond := policyCondition{
+ matchType: "eq",
+ condition: "$Content-Encoding",
+ value: contentEncoding,
+ }
+ if err := p.addNewPolicy(policyCond); err != nil {
+ return err
+ }
+ p.formData["Content-Encoding"] = contentEncoding
+ return nil
+}
+
// SetContentLengthRange - Set new min and max content length
// condition for all incoming uploads.
-func (p *PostPolicy) SetContentLengthRange(min, max int64) error {
- if min > max {
+func (p *PostPolicy) SetContentLengthRange(minLen, maxLen int64) error {
+ if minLen > maxLen {
return errInvalidArgument("Minimum limit is larger than maximum limit.")
}
- if min < 0 {
+ if minLen < 0 {
return errInvalidArgument("Minimum limit cannot be negative.")
}
- if max <= 0 {
+ if maxLen <= 0 {
return errInvalidArgument("Maximum limit cannot be non-positive.")
}
- p.contentLengthRange.min = min
- p.contentLengthRange.max = max
+ p.contentLengthRange.min = minLen
+ p.contentLengthRange.max = maxLen
return nil
}
// SetSuccessActionRedirect - Sets the redirect success url of the object for this policy
// based upload.
func (p *PostPolicy) SetSuccessActionRedirect(redirect string) error {
- if strings.TrimSpace(redirect) == "" || redirect == "" {
+ if strings.TrimSpace(redirect) == "" {
return errInvalidArgument("Redirect is empty")
}
policyCond := policyCondition{
@@ -264,7 +281,7 @@ func (p *PostPolicy) SetSuccessActionRedirect(redirect string) error {
// SetSuccessStatusAction - Sets the status success code of the object for this policy
// based upload.
func (p *PostPolicy) SetSuccessStatusAction(status string) error {
- if strings.TrimSpace(status) == "" || status == "" {
+ if strings.TrimSpace(status) == "" {
return errInvalidArgument("Status is empty")
}
policyCond := policyCondition{
@@ -282,10 +299,10 @@ func (p *PostPolicy) SetSuccessStatusAction(status string) error {
// SetUserMetadata - Set user metadata as a key/value couple.
// Can be retrieved through a HEAD request or an event.
func (p *PostPolicy) SetUserMetadata(key, value string) error {
- if strings.TrimSpace(key) == "" || key == "" {
+ if strings.TrimSpace(key) == "" {
return errInvalidArgument("Key is empty")
}
- if strings.TrimSpace(value) == "" || value == "" {
+ if strings.TrimSpace(value) == "" {
return errInvalidArgument("Value is empty")
}
headerName := fmt.Sprintf("x-amz-meta-%s", key)
@@ -304,7 +321,7 @@ func (p *PostPolicy) SetUserMetadata(key, value string) error {
// SetUserMetadataStartsWith - Set what a user metadata value should start with.
// Can be retrieved through a HEAD request or an event.
func (p *PostPolicy) SetUserMetadataStartsWith(key, value string) error {
- if strings.TrimSpace(key) == "" || key == "" {
+ if strings.TrimSpace(key) == "" {
return errInvalidArgument("Key is empty")
}
headerName := fmt.Sprintf("x-amz-meta-%s", key)
@@ -321,11 +338,29 @@ func (p *PostPolicy) SetUserMetadataStartsWith(key, value string) error {
}
// SetChecksum sets the checksum of the request.
-func (p *PostPolicy) SetChecksum(c Checksum) {
+func (p *PostPolicy) SetChecksum(c Checksum) error {
if c.IsSet() {
p.formData[amzChecksumAlgo] = c.Type.String()
p.formData[c.Type.Key()] = c.Encoded()
+
+ policyCond := policyCondition{
+ matchType: "eq",
+ condition: fmt.Sprintf("$%s", amzChecksumAlgo),
+ value: c.Type.String(),
+ }
+ if err := p.addNewPolicy(policyCond); err != nil {
+ return err
+ }
+ policyCond = policyCondition{
+ matchType: "eq",
+ condition: fmt.Sprintf("$%s", c.Type.Key()),
+ value: c.Encoded(),
+ }
+ if err := p.addNewPolicy(policyCond); err != nil {
+ return err
+ }
}
+ return nil
}
// SetEncryption - sets encryption headers for POST API
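The `SetChecksum` change above pairs each checksum form field with a matching `eq` policy condition, so the signed POST policy actually constrains what the client may upload rather than merely populating form data. The shape of that pairing, with minio-go's unexported types reduced to local stand-ins (illustrative only, not the library code), is roughly:

```go
package main

import "fmt"

// Local stand-in for minio-go's unexported policyCondition; the field
// names mirror the diff but this type is illustrative.
type policyCondition struct {
	matchType, condition, value string
}

type postPolicy struct {
	conditions []policyCondition
	formData   map[string]string
}

// setFormField records a form value AND a matching eq condition — the
// pattern SetChecksum now follows. A form field alone is not covered by
// the policy signature; the condition binds it into the signed policy,
// so a tampered value makes the POST fail validation server-side.
func (p *postPolicy) setFormField(name, value string) {
	p.formData[name] = value
	p.conditions = append(p.conditions, policyCondition{"eq", "$" + name, value})
}

func main() {
	p := &postPolicy{formData: map[string]string{}}
	p.setFormField("x-amz-checksum-algorithm", "CRC32")
	p.setFormField("x-amz-checksum-crc32", "deadbeef")
	for _, c := range p.conditions {
		fmt.Printf("[%s, %s, %s]\n", c.matchType, c.condition, c.value)
	}
}
```

This is also why the method's signature grew an `error` return: adding a condition can now fail, and callers must observe that.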
diff --git a/vendor/github.com/minio/minio-go/v7/retry-continous.go b/vendor/github.com/minio/minio-go/v7/retry-continous.go
index bfeea95f30d63..81fcf16f1b912 100644
--- a/vendor/github.com/minio/minio-go/v7/retry-continous.go
+++ b/vendor/github.com/minio/minio-go/v7/retry-continous.go
@@ -20,7 +20,7 @@ package minio
import "time"
// newRetryTimerContinous creates a timer with exponentially increasing delays forever.
-func (c *Client) newRetryTimerContinous(unit, cap time.Duration, jitter float64, doneCh chan struct{}) <-chan int {
+func (c *Client) newRetryTimerContinous(baseSleep, maxSleep time.Duration, jitter float64, doneCh chan struct{}) <-chan int {
attemptCh := make(chan int)
// normalize jitter to the range [0, 1.0]
@@ -39,10 +39,10 @@ func (c *Client) newRetryTimerContinous(unit, cap time.Duration, jitter float64,
if attempt > maxAttempt {
attempt = maxAttempt
}
- // sleep = random_between(0, min(cap, base * 2 ** attempt))
- sleep := unit * time.Duration(1<<uint(attempt))
- if sleep > cap {
- sleep = cap
+ // sleep = random_between(0, min(maxSleep, base * 2 ** attempt))
+ sleep := baseSleep * time.Duration(1<<uint(attempt))
+ if sleep > maxSleep {
+ sleep = maxSleep
}
if jitter != NoJitter {
sleep -= time.Duration(c.random.Float64() * float64(sleep) * jitter)
diff --git a/vendor/github.com/minio/minio-go/v7/retry.go b/vendor/github.com/minio/minio-go/v7/retry.go
index d15eb59013e38..4cc45920c4acb 100644
--- a/vendor/github.com/minio/minio-go/v7/retry.go
+++ b/vendor/github.com/minio/minio-go/v7/retry.go
@@ -45,7 +45,7 @@ var DefaultRetryCap = time.Second
// newRetryTimer creates a timer with exponentially increasing
// delays until the maximum retry attempts are reached.
-func (c *Client) newRetryTimer(ctx context.Context, maxRetry int, unit, cap time.Duration, jitter float64) <-chan int {
+func (c *Client) newRetryTimer(ctx context.Context, maxRetry int, baseSleep, maxSleep time.Duration, jitter float64) <-chan int {
attemptCh := make(chan int)
// computes the exponential backoff duration according to
@@ -59,10 +59,10 @@ func (c *Client) newRetryTimer(ctx context.Context, maxRetry int, unit, cap time
jitter = MaxJitter
}
- // sleep = random_between(0, min(cap, base * 2 ** attempt))
- sleep := unit * time.Duration(1<<uint(attempt))
- if sleep > cap {
- sleep = cap
+ // sleep = random_between(0, min(maxSleep, base * 2 ** attempt))
+ sleep := baseSleep * time.Duration(1<<uint(attempt))
+ if sleep > maxSleep {
+ sleep = maxSleep
}
if jitter != NoJitter {
sleep -= time.Duration(c.random.Float64() * float64(sleep) * jitter)
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 5ae1cad037db6..ec032fa8876b9 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1232,7 +1232,7 @@ github.com/miekg/dns
# github.com/minio/md5-simd v1.1.2
## explicit; go 1.14
github.com/minio/md5-simd
-# github.com/minio/minio-go/v7 v7.0.80
+# github.com/minio/minio-go/v7 v7.0.81
## explicit; go 1.22
github.com/minio/minio-go/v7
github.com/minio/minio-go/v7/pkg/cors
|
fix
|
update module github.com/minio/minio-go/v7 to v7.0.81 (#15114)
|
8207a9bf5e4aa4e963ed6ca05509ca10f720032f
|
2024-12-30 10:08:03
|
renovate[bot]
|
fix(deps): update module github.com/hashicorp/consul/api to v1.31.0 (#15540)
| false
|
diff --git a/go.mod b/go.mod
index 6b6160384af2f..9bc640de719ee 100644
--- a/go.mod
+++ b/go.mod
@@ -58,7 +58,7 @@ require (
github.com/grafana/tail v0.0.0-20230510142333-77b18831edf0
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.2.0
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645
- github.com/hashicorp/consul/api v1.30.0
+ github.com/hashicorp/consul/api v1.31.0
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/influxdata/telegraf v1.33.0
github.com/jmespath/go-jmespath v0.4.0
diff --git a/go.sum b/go.sum
index 3f6103f5fde38..5144050000b1b 100644
--- a/go.sum
+++ b/go.sum
@@ -646,8 +646,8 @@ github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed/go.mod h1:tMWxXQ9wFIaZeTI9F+hmhFiGpFmhOHzyShyFUhRm0H4=
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
-github.com/hashicorp/consul/api v1.30.0 h1:ArHVMMILb1nQv8vZSGIwwQd2gtc+oSQZ6CalyiyH2XQ=
-github.com/hashicorp/consul/api v1.30.0/go.mod h1:B2uGchvaXVW2JhFoS8nqTxMD5PBykr4ebY4JWHTTeLM=
+github.com/hashicorp/consul/api v1.31.0 h1:32BUNLembeSRek0G/ZAM6WNfdEwYdYo8oQ4+JoqGkNQ=
+github.com/hashicorp/consul/api v1.31.0/go.mod h1:2ZGIiXM3A610NmDULmCHd/aqBJj8CkMfOhswhOafxRg=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/consul/sdk v0.16.1 h1:V8TxTnImoPD5cj0U9Spl0TUxcytjcbbJeADFF07KdHg=
github.com/hashicorp/consul/sdk v0.16.1/go.mod h1:fSXvwxB2hmh1FMZCNl6PwX0Q/1wdWtHJcZ7Ea5tns0s=
diff --git a/vendor/github.com/hashicorp/consul/api/api.go b/vendor/github.com/hashicorp/consul/api/api.go
index d4d853d5d4b1b..27af1ea5697aa 100644
--- a/vendor/github.com/hashicorp/consul/api/api.go
+++ b/vendor/github.com/hashicorp/consul/api/api.go
@@ -1087,8 +1087,23 @@ func (c *Client) doRequest(r *request) (time.Duration, *http.Response, error) {
if err != nil {
return 0, nil, err
}
+
+ contentType := GetContentType(req)
+
+ if req != nil {
+ req.Header.Set(contentTypeHeader, contentType)
+ }
+
start := time.Now()
resp, err := c.config.HttpClient.Do(req)
+
+ if resp != nil {
+ respContentType := resp.Header.Get(contentTypeHeader)
+ if respContentType == "" || respContentType != contentType {
+ resp.Header.Set(contentTypeHeader, contentType)
+ }
+ }
+
diff := time.Since(start)
return diff, resp, err
}
diff --git a/vendor/github.com/hashicorp/consul/api/content_type.go b/vendor/github.com/hashicorp/consul/api/content_type.go
new file mode 100644
index 0000000000000..37c8cf60aaf66
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/api/content_type.go
@@ -0,0 +1,81 @@
+// Copyright (c) HashiCorp, Inc.
+// SPDX-License-Identifier: MPL-2.0
+
+package api
+
+import (
+ "net/http"
+ "strings"
+)
+
+const (
+ contentTypeHeader = "Content-Type"
+ plainContentType = "text/plain; charset=utf-8"
+ octetStream = "application/octet-stream"
+ jsonContentType = "application/json" // Default content type
+)
+
+// ContentTypeRule defines a rule for determining the content type of an HTTP request.
+// This rule is based on the combination of the HTTP path, method, and the desired content type.
+type ContentTypeRule struct {
+ path string
+ httpMethod string
+ contentType string
+}
+
+var ContentTypeRules = []ContentTypeRule{
+ {
+ path: "/v1/snapshot",
+ httpMethod: http.MethodPut,
+ contentType: octetStream,
+ },
+ {
+ path: "/v1/kv",
+ httpMethod: http.MethodPut,
+ contentType: octetStream,
+ },
+ {
+ path: "/v1/event/fire",
+ httpMethod: http.MethodPut,
+ contentType: octetStream,
+ },
+}
+
+// GetContentType returns the content type for a request
+// This function is used as routing logic or middleware to determine and enforce
+// the appropriate content type for HTTP requests.
+func GetContentType(req *http.Request) string {
+ reqContentType := req.Header.Get(contentTypeHeader)
+
+ if isIndexPage(req) {
+ return plainContentType
+ }
+
+ // For GET, DELETE, or internal API paths, ensure a valid Content-Type is returned.
+ if req.Method == http.MethodGet || req.Method == http.MethodDelete || strings.HasPrefix(req.URL.Path, "/v1/internal") {
+ if reqContentType == "" {
+ // Default to JSON Content-Type if no Content-Type is provided.
+ return jsonContentType
+ }
+ // Return the provided Content-Type if it exists.
+ return reqContentType
+ }
+
+ for _, rule := range ContentTypeRules {
+ if matchesRule(req, rule) {
+ return rule.contentType
+ }
+ }
+ return jsonContentType
+}
+
+// matchesRule checks if a request matches a content type rule
+func matchesRule(req *http.Request, rule ContentTypeRule) bool {
+ return strings.HasPrefix(req.URL.Path, rule.path) &&
+ (rule.httpMethod == "" || req.Method == rule.httpMethod)
+}
+
+// isIndexPage checks if the request is for the index page
+func isIndexPage(req *http.Request) bool {
+ return req.URL.Path == "/" || req.URL.Path == "/ui"
+}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 8f36fbd60418e..ed0b059b38bfc 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1072,7 +1072,7 @@ github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc
# github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed
## explicit
github.com/hailocab/go-hostpool
-# github.com/hashicorp/consul/api v1.30.0
+# github.com/hashicorp/consul/api v1.31.0
## explicit; go 1.19
github.com/hashicorp/consul/api
# github.com/hashicorp/errwrap v1.1.0
|
fix
|
update module github.com/hashicorp/consul/api to v1.31.0 (#15540)
|
9e3250f87905c1454a30b533dc278a6d7c5b8c96
|
2025-01-17 20:33:32
|
renovate[bot]
|
fix(deps): update module github.com/baidubce/bce-sdk-go to v0.9.214 (#15815)
| false
|
diff --git a/go.mod b/go.mod
index 173bb5faa41da..a9efc1e1da6d9 100644
--- a/go.mod
+++ b/go.mod
@@ -21,7 +21,7 @@ require (
github.com/alicebob/miniredis/v2 v2.34.0
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible
github.com/aws/aws-sdk-go v1.55.6
- github.com/baidubce/bce-sdk-go v0.9.213
+ github.com/baidubce/bce-sdk-go v0.9.214
github.com/bmatcuk/doublestar/v4 v4.8.0
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500
github.com/cespare/xxhash/v2 v2.3.0
diff --git a/go.sum b/go.sum
index 1131c4ca1577f..75981663cd131 100644
--- a/go.sum
+++ b/go.sum
@@ -224,8 +224,8 @@ github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/axiomhq/hyperloglog v0.2.3 h1:2ZGwz3FGcx77e9/aNjqJijsGhH6RZOlglzxnDpVBCQY=
github.com/axiomhq/hyperloglog v0.2.3/go.mod h1:DLUK9yIzpU5B6YFLjxTIcbHu1g4Y1WQb1m5RH3radaM=
-github.com/baidubce/bce-sdk-go v0.9.213 h1:4IxEiHvtMj5tJ9BCyre87bk7eAY/0TpzB4RVy/eSnos=
-github.com/baidubce/bce-sdk-go v0.9.213/go.mod h1:zbYJMQwE4IZuyrJiFO8tO8NbtYiKTFTbwh4eIsqjVdg=
+github.com/baidubce/bce-sdk-go v0.9.214 h1:bsVfwMh/emI6vreEveUEq9xAr6xtHLycTAGy2K7kvKM=
+github.com/baidubce/bce-sdk-go v0.9.214/go.mod h1:zbYJMQwE4IZuyrJiFO8tO8NbtYiKTFTbwh4eIsqjVdg=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3/go.mod h1:CIWtjkly68+yqLPbvwwR/fjNJA/idrtULjZWh2v1ys0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
diff --git a/vendor/github.com/baidubce/bce-sdk-go/bce/config.go b/vendor/github.com/baidubce/bce-sdk-go/bce/config.go
index 41a7dc993fdde..ae81b72987347 100644
--- a/vendor/github.com/baidubce/bce-sdk-go/bce/config.go
+++ b/vendor/github.com/baidubce/bce-sdk-go/bce/config.go
@@ -26,7 +26,7 @@ import (
// Constants and default values for the package bce
const (
- SDK_VERSION = "0.9.213"
+ SDK_VERSION = "0.9.214"
URI_PREFIX = "/" // now support uri without prefix "v1" so just set root path
DEFAULT_DOMAIN = "baidubce.com"
DEFAULT_PROTOCOL = "http"
diff --git a/vendor/github.com/baidubce/bce-sdk-go/services/bos/api/model.go b/vendor/github.com/baidubce/bce-sdk-go/services/bos/api/model.go
index c59ee8f6bf06d..632a5682f419e 100644
--- a/vendor/github.com/baidubce/bce-sdk-go/services/bos/api/model.go
+++ b/vendor/github.com/baidubce/bce-sdk-go/services/bos/api/model.go
@@ -59,8 +59,8 @@ type PrefixType struct {
}
type PutBucketArgs struct {
- TagList string `json:"-"`
- EnableMultiAz bool `json:"enableMultiAz"`
+ TagList string `json:"-"`
+ EnableMultiAz bool `json:"enableMultiAz"`
}
// ListObjectsResult defines the result structure of ListObjects api.
@@ -98,6 +98,7 @@ type AclRefererType struct {
type AclCondType struct {
IpAddress []string `json:"ipAddress"`
Referer AclRefererType `json:"referer"`
+ VpcId []string `json:"vpcId"`
}
// GrantType defines the grant struct in ACL setting
@@ -285,8 +286,8 @@ type CopyObjectArgs struct {
}
type MultiCopyObjectArgs struct {
- StorageClass string
- ObjectTagging string
+ StorageClass string
+ ObjectTagging string
TaggingDirective string
}
@@ -396,8 +397,8 @@ type EndMessage struct {
// FetchObjectArgs defines the optional arguments structure for the fetch object api.
type FetchObjectArgs struct {
- FetchMode string
- StorageClass string
+ FetchMode string
+ StorageClass string
FetchCallBackAddress string
}
@@ -683,4 +684,4 @@ type BosShareResBody struct {
ShareUrl string `json:"shareUrl"`
LinkExpireTime int64 `json:"linkExpireTime"`
ShareCode string `json:"shareCode"`
-}
\ No newline at end of file
+}
diff --git a/vendor/github.com/baidubce/bce-sdk-go/services/sts/client.go b/vendor/github.com/baidubce/bce-sdk-go/services/sts/client.go
index 8441d3a5265c7..2c5ef91b8171a 100644
--- a/vendor/github.com/baidubce/bce-sdk-go/services/sts/client.go
+++ b/vendor/github.com/baidubce/bce-sdk-go/services/sts/client.go
@@ -23,7 +23,6 @@ import (
"github.com/baidubce/bce-sdk-go/auth"
"github.com/baidubce/bce-sdk-go/bce"
"github.com/baidubce/bce-sdk-go/services/sts/api"
- "github.com/baidubce/bce-sdk-go/util"
)
const DEFAULT_SERVICE_DOMAIN = "sts." + bce.DEFAULT_REGION + "." + bce.DEFAULT_DOMAIN
@@ -57,16 +56,16 @@ func NewStsClient(ak, sk, endpoint string) (*Client, error) {
endpoint = DEFAULT_SERVICE_DOMAIN
}
defaultSignOptions := &auth.SignOptions{
- HeadersToSign: auth.DEFAULT_HEADERS_TO_SIGN,
- Timestamp: util.NowUTCSeconds(),
- ExpireSeconds: auth.DEFAULT_EXPIRE_SECONDS}
+ HeadersToSign: auth.DEFAULT_HEADERS_TO_SIGN,
+ Timestamp: 0,
+ ExpireSeconds: auth.DEFAULT_EXPIRE_SECONDS}
defaultConf := &bce.BceClientConfiguration{
- Endpoint: endpoint,
- Region: bce.DEFAULT_REGION,
- UserAgent: bce.DEFAULT_USER_AGENT,
- Credentials: credentials,
- SignOption: defaultSignOptions,
- Retry: bce.DEFAULT_RETRY_POLICY,
+ Endpoint: endpoint,
+ Region: bce.DEFAULT_REGION,
+ UserAgent: bce.DEFAULT_USER_AGENT,
+ Credentials: credentials,
+ SignOption: defaultSignOptions,
+ Retry: bce.DEFAULT_RETRY_POLICY,
ConnectionTimeoutInMillis: bce.DEFAULT_CONNECTION_TIMEOUT_IN_MILLIS}
v1Signer := &auth.BceV1Signer{}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 8e55cdbd0bbc7..49a5712e06a59 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -490,7 +490,7 @@ github.com/aws/smithy-go/transport/http/internal/io
# github.com/axiomhq/hyperloglog v0.2.3
## explicit; go 1.23
github.com/axiomhq/hyperloglog
-# github.com/baidubce/bce-sdk-go v0.9.213
+# github.com/baidubce/bce-sdk-go v0.9.214
## explicit; go 1.11
github.com/baidubce/bce-sdk-go/auth
github.com/baidubce/bce-sdk-go/bce
|
fix
|
update module github.com/baidubce/bce-sdk-go to v0.9.214 (#15815)
|
4fa5148eb505291a28277edfb7d31e118a62809d
|
2024-02-23 15:11:52
|
Salva Corts
|
refactor: Pass query plan down to bloom gateway (#12037)
| false
|
diff --git a/pkg/bloomgateway/bloomgateway.go b/pkg/bloomgateway/bloomgateway.go
index 0e18a06c93275..d0ac92db59a34 100644
--- a/pkg/bloomgateway/bloomgateway.go
+++ b/pkg/bloomgateway/bloomgateway.go
@@ -58,6 +58,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/grafana/loki/pkg/logproto"
+ "github.com/grafana/loki/pkg/logql/syntax"
"github.com/grafana/loki/pkg/logqlmodel/stats"
"github.com/grafana/loki/pkg/queue"
"github.com/grafana/loki/pkg/storage"
@@ -311,7 +312,7 @@ func (g *Gateway) FilterChunkRefs(ctx context.Context, req *logproto.FilterChunk
}
// Shortcut if request does not contain filters
- if len(req.Filters) == 0 {
+ if len(syntax.ExtractLineFilters(req.Plan.AST)) == 0 {
return &logproto.FilterChunkRefResponse{
ChunkRefs: req.Refs,
}, nil
@@ -332,9 +333,10 @@ func (g *Gateway) FilterChunkRefs(ctx context.Context, req *logproto.FilterChunk
}, nil
}
+ filters := syntax.ExtractLineFilters(req.Plan.AST)
tasks := make([]Task, 0, len(seriesByDay))
for _, seriesWithBounds := range seriesByDay {
- task, err := NewTask(ctx, tenantID, seriesWithBounds, req.Filters)
+ task, err := NewTask(ctx, tenantID, seriesWithBounds, filters)
if err != nil {
return nil, err
}
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index 9a4dea08dba26..f853398894e00 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -17,11 +17,11 @@ import (
"github.com/grafana/dskit/user"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/require"
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logql/syntax"
+ "github.com/grafana/loki/pkg/querier/plan"
"github.com/grafana/loki/pkg/storage"
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/chunk/client/local"
@@ -196,13 +196,14 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
// saturate workers
// then send additional request
for i := 0; i < gw.cfg.WorkerConcurrency+1; i++ {
+ expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ require.NoError(t, err)
+
req := &logproto.FilterChunkRefRequest{
From: now.Add(-24 * time.Hour),
Through: now,
Refs: groupRefs(t, chunkRefs),
- Filters: []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: "does not match"},
- },
+ Plan: plan.QueryPlan{AST: expr},
}
ctx, cancelFn := context.WithTimeout(context.Background(), 10*time.Second)
@@ -243,13 +244,14 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
// saturate workers
// then send additional request
for i := 0; i < gw.cfg.WorkerConcurrency+1; i++ {
+ expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ require.NoError(t, err)
+
req := &logproto.FilterChunkRefRequest{
From: now.Add(-24 * time.Hour),
Through: now,
Refs: groupRefs(t, chunkRefs),
- Filters: []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: "does not match"},
- },
+ Plan: plan.QueryPlan{AST: expr},
}
ctx, cancelFn := context.WithTimeout(context.Background(), 500*time.Millisecond)
@@ -331,13 +333,13 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
Checksum: uint32(idx),
},
}
+ expr, err := syntax.ParseExpr(`{foo="bar"} |= "foo"`)
+ require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
From: now.Add(-24 * time.Hour),
Through: now,
Refs: groupRefs(t, chunkRefs),
- Filters: []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: "foo"},
- },
+ Plan: plan.QueryPlan{AST: expr},
}
ctx := user.InjectOrgID(context.Background(), tenantID)
_, err = gw.FilterChunkRefs(ctx, req)
@@ -371,13 +373,13 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
t.Run("no match - return empty response", func(t *testing.T) {
inputChunkRefs := groupRefs(t, chunkRefs)
+ expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
From: now.Add(-8 * time.Hour),
Through: now,
Refs: inputChunkRefs,
- Filters: []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: "does not match"},
- },
+ Plan: plan.QueryPlan{AST: expr},
}
ctx := user.InjectOrgID(context.Background(), tenantID)
res, err := gw.FilterChunkRefs(ctx, req)
@@ -402,13 +404,14 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
t.Log("x=", x, "fp=", fp, "line=", line)
+ expr, err := syntax.ParseExpr(fmt.Sprintf(`{foo="bar"} |= "%s"`, line))
+ require.NoError(t, err)
+
req := &logproto.FilterChunkRefRequest{
From: now.Add(-8 * time.Hour),
Through: now,
Refs: inputChunkRefs,
- Filters: []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: line},
- },
+ Plan: plan.QueryPlan{AST: expr},
}
ctx := user.InjectOrgID(context.Background(), tenantID)
res, err := gw.FilterChunkRefs(ctx, req)
diff --git a/pkg/bloomgateway/cache_test.go b/pkg/bloomgateway/cache_test.go
index 3ae414cc43c6e..bf1a8dbaa365b 100644
--- a/pkg/bloomgateway/cache_test.go
+++ b/pkg/bloomgateway/cache_test.go
@@ -8,13 +8,13 @@ import (
"github.com/go-kit/log"
"github.com/grafana/dskit/user"
"github.com/prometheus/common/model"
- "github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logql/syntax"
"github.com/grafana/loki/pkg/logqlmodel/stats"
+ "github.com/grafana/loki/pkg/querier/plan"
"github.com/grafana/loki/pkg/storage/chunk/cache"
"github.com/grafana/loki/pkg/storage/chunk/cache/resultscache"
"github.com/grafana/loki/pkg/util/constants"
@@ -382,13 +382,13 @@ func TestCache(t *testing.T) {
Through: 3500,
},
}
+ expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
From: model.Time(2000),
Through: model.Time(3000),
Refs: groupRefs(t, chunkRefs),
- Filters: []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: "foo"},
- },
+ Plan: plan.QueryPlan{AST: expr},
}
expectedRes := &logproto.FilterChunkRefResponse{
ChunkRefs: groupRefs(t, chunkRefs),
diff --git a/pkg/bloomgateway/client.go b/pkg/bloomgateway/client.go
index fe92610824657..d7328c3c8c314 100644
--- a/pkg/bloomgateway/client.go
+++ b/pkg/bloomgateway/client.go
@@ -26,8 +26,8 @@ import (
"github.com/grafana/loki/pkg/bloomutils"
"github.com/grafana/loki/pkg/distributor/clientpool"
"github.com/grafana/loki/pkg/logproto"
- "github.com/grafana/loki/pkg/logql/syntax"
"github.com/grafana/loki/pkg/logqlmodel/stats"
+ "github.com/grafana/loki/pkg/querier/plan"
"github.com/grafana/loki/pkg/queue"
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/chunk/cache"
@@ -142,7 +142,7 @@ func (i *ClientConfig) Validate() error {
}
type Client interface {
- FilterChunks(ctx context.Context, tenant string, from, through model.Time, groups []*logproto.GroupedChunkRefs, filters ...syntax.LineFilter) ([]*logproto.GroupedChunkRefs, error)
+ FilterChunks(ctx context.Context, tenant string, from, through model.Time, groups []*logproto.GroupedChunkRefs, plan plan.QueryPlan) ([]*logproto.GroupedChunkRefs, error)
}
type GatewayClient struct {
@@ -224,7 +224,7 @@ func shuffleAddrs(addrs []string) []string {
}
// FilterChunkRefs implements Client
-func (c *GatewayClient) FilterChunks(ctx context.Context, tenant string, from, through model.Time, groups []*logproto.GroupedChunkRefs, filters ...syntax.LineFilter) ([]*logproto.GroupedChunkRefs, error) {
+func (c *GatewayClient) FilterChunks(ctx context.Context, tenant string, from, through model.Time, groups []*logproto.GroupedChunkRefs, plan plan.QueryPlan) ([]*logproto.GroupedChunkRefs, error) {
if !c.limits.BloomGatewayEnabled(tenant) {
return groups, nil
}
@@ -252,7 +252,7 @@ func (c *GatewayClient) FilterChunks(ctx context.Context, tenant string, from, t
From: from,
Through: through,
Refs: rs.groups,
- Filters: filters,
+ Plan: plan,
}
resp, err := client.FilterChunkRefs(ctx, req)
if err != nil {
diff --git a/pkg/bloomgateway/client_test.go b/pkg/bloomgateway/client_test.go
index 0280007443d80..e4b905c37b12c 100644
--- a/pkg/bloomgateway/client_test.go
+++ b/pkg/bloomgateway/client_test.go
@@ -16,6 +16,8 @@ import (
"github.com/grafana/loki/pkg/bloomutils"
"github.com/grafana/loki/pkg/logproto"
+ "github.com/grafana/loki/pkg/logql/syntax"
+ "github.com/grafana/loki/pkg/querier/plan"
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/validation"
)
@@ -42,7 +44,9 @@ func TestBloomGatewayClient(t *testing.T) {
t.Run("FilterChunks returns response", func(t *testing.T) {
c, err := NewClient(cfg, &mockRing{}, l, reg, logger, "loki", nil, false)
require.NoError(t, err)
- res, err := c.FilterChunks(context.Background(), "tenant", model.Now(), model.Now(), nil)
+ expr, err := syntax.ParseExpr(`{foo="bar"}`)
+ require.NoError(t, err)
+ res, err := c.FilterChunks(context.Background(), "tenant", model.Now(), model.Now(), nil, plan.QueryPlan{AST: expr})
require.NoError(t, err)
require.Equal(t, []*logproto.GroupedChunkRefs{}, res)
})
diff --git a/pkg/bloomgateway/multiplexing.go b/pkg/bloomgateway/multiplexing.go
index c952c9f6b87fd..907f8f111eb1b 100644
--- a/pkg/bloomgateway/multiplexing.go
+++ b/pkg/bloomgateway/multiplexing.go
@@ -63,7 +63,7 @@ type Task struct {
// series of the original request
series []*logproto.GroupedChunkRefs
// filters of the original request
- filters []syntax.LineFilter
+ filters []syntax.LineFilterExpr
// from..through date of the task's chunks
bounds model.Interval
// the context from the request
@@ -76,7 +76,7 @@ type Task struct {
// NewTask returns a new Task that can be enqueued to the task queue.
// In addition, it returns a result and an error channel, as well
// as an error if the instantiation fails.
-func NewTask(ctx context.Context, tenantID string, refs seriesWithBounds, filters []syntax.LineFilter) (Task, error) {
+func NewTask(ctx context.Context, tenantID string, refs seriesWithBounds, filters []syntax.LineFilterExpr) (Task, error) {
key, err := ulid.New(ulid.Now(), entropy)
if err != nil {
return Task{}, err
@@ -140,7 +140,7 @@ func (t Task) Copy(series []*logproto.GroupedChunkRefs) Task {
func (t Task) RequestIter(tokenizer *v1.NGramTokenizer) v1.Iterator[v1.Request] {
return &requestIterator{
series: v1.NewSliceIter(t.series),
- searches: convertToSearches(t.filters, tokenizer),
+ searches: convertToSearches(tokenizer, t.filters...),
channel: t.resCh,
curr: v1.Request{},
}
diff --git a/pkg/bloomgateway/multiplexing_test.go b/pkg/bloomgateway/multiplexing_test.go
index 009c825a7e84a..a6ad0270d96e0 100644
--- a/pkg/bloomgateway/multiplexing_test.go
+++ b/pkg/bloomgateway/multiplexing_test.go
@@ -62,7 +62,7 @@ func TestTask_RequestIterator(t *testing.T) {
bounds: model.Interval{Start: 0, End: math.MaxInt64},
series: []*logproto.GroupedChunkRefs{},
}
- task, _ := NewTask(context.Background(), tenant, swb, []syntax.LineFilter{})
+ task, _ := NewTask(context.Background(), tenant, swb, []syntax.LineFilterExpr{})
it := task.RequestIter(tokenizer)
// nothing to iterate over
require.False(t, it.Next())
diff --git a/pkg/bloomgateway/processor_test.go b/pkg/bloomgateway/processor_test.go
index 27d0068753d5b..84687995833b2 100644
--- a/pkg/bloomgateway/processor_test.go
+++ b/pkg/bloomgateway/processor_test.go
@@ -112,8 +112,13 @@ func TestProcessor(t *testing.T) {
},
table: config.NewDayTime(truncateDay(now)),
}
- filters := []syntax.LineFilter{
- {Ty: 0, Match: "no match"},
+ filters := []syntax.LineFilterExpr{
+ {
+ LineFilter: syntax.LineFilter{
+ Ty: 0,
+ Match: "no match",
+ },
+ },
}
t.Log("series", len(swb.series))
@@ -156,8 +161,13 @@ func TestProcessor(t *testing.T) {
},
table: config.NewDayTime(truncateDay(now)),
}
- filters := []syntax.LineFilter{
- {Ty: 0, Match: "no match"},
+ filters := []syntax.LineFilterExpr{
+ {
+ LineFilter: syntax.LineFilter{
+ Ty: 0,
+ Match: "no match",
+ },
+ },
}
t.Log("series", len(swb.series))
diff --git a/pkg/bloomgateway/querier.go b/pkg/bloomgateway/querier.go
index 799fb691c0e4d..171936d9e39c5 100644
--- a/pkg/bloomgateway/querier.go
+++ b/pkg/bloomgateway/querier.go
@@ -11,6 +11,7 @@ import (
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logql/syntax"
+ "github.com/grafana/loki/pkg/querier/plan"
"github.com/grafana/loki/pkg/util/constants"
)
@@ -70,9 +71,9 @@ func convertToShortRef(ref *logproto.ChunkRef) *logproto.ShortRef {
return &logproto.ShortRef{From: ref.From, Through: ref.Through, Checksum: ref.Checksum}
}
-func (bq *BloomQuerier) FilterChunkRefs(ctx context.Context, tenant string, from, through model.Time, chunkRefs []*logproto.ChunkRef, filters ...syntax.LineFilter) ([]*logproto.ChunkRef, error) {
+func (bq *BloomQuerier) FilterChunkRefs(ctx context.Context, tenant string, from, through model.Time, chunkRefs []*logproto.ChunkRef, queryPlan plan.QueryPlan) ([]*logproto.ChunkRef, error) {
// Shortcut that does not require any filtering
- if len(chunkRefs) == 0 || len(filters) == 0 {
+ if len(chunkRefs) == 0 || len(syntax.ExtractLineFilters(queryPlan.AST)) == 0 {
return chunkRefs, nil
}
@@ -84,7 +85,7 @@ func (bq *BloomQuerier) FilterChunkRefs(ctx context.Context, tenant string, from
preFilterChunks := len(chunkRefs)
preFilterSeries := len(grouped)
- refs, err := bq.c.FilterChunks(ctx, tenant, from, through, grouped, filters...)
+ refs, err := bq.c.FilterChunks(ctx, tenant, from, through, grouped, queryPlan)
if err != nil {
return nil, err
}
diff --git a/pkg/bloomgateway/querier_test.go b/pkg/bloomgateway/querier_test.go
index 57e4d501bb444..0d7872927cc42 100644
--- a/pkg/bloomgateway/querier_test.go
+++ b/pkg/bloomgateway/querier_test.go
@@ -8,11 +8,11 @@ import (
"github.com/go-kit/log"
"github.com/pkg/errors"
"github.com/prometheus/common/model"
- "github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/require"
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logql/syntax"
+ "github.com/grafana/loki/pkg/querier/plan"
)
type noopClient struct {
@@ -21,7 +21,7 @@ type noopClient struct {
}
// FilterChunks implements Client.
-func (c *noopClient) FilterChunks(ctx context.Context, tenant string, from, through model.Time, groups []*logproto.GroupedChunkRefs, filters ...syntax.LineFilter) ([]*logproto.GroupedChunkRefs, error) { // nolint:revive
+func (c *noopClient) FilterChunks(ctx context.Context, tenant string, from, through model.Time, groups []*logproto.GroupedChunkRefs, plan plan.QueryPlan) ([]*logproto.GroupedChunkRefs, error) { // nolint:revive
c.callCount++
return groups, c.err
}
@@ -42,8 +42,9 @@ func TestBloomQuerier(t *testing.T) {
{Fingerprint: 1000, UserID: tenant, Checksum: 2},
{Fingerprint: 2000, UserID: tenant, Checksum: 3},
}
- filters := []syntax.LineFilter{}
- res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, filters...)
+ expr, err := syntax.ParseExpr(`{foo="bar"}`)
+ require.NoError(t, err)
+ res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, plan.QueryPlan{AST: expr})
require.NoError(t, err)
require.Equal(t, chunkRefs, res)
require.Equal(t, 0, c.callCount)
@@ -57,10 +58,9 @@ func TestBloomQuerier(t *testing.T) {
through := model.Now()
from := through.Add(-12 * time.Hour)
chunkRefs := []*logproto.ChunkRef{}
- filters := []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: "uuid"},
- }
- res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, filters...)
+ expr, err := syntax.ParseExpr(`{foo="bar"} |= "uuid"`)
+ require.NoError(t, err)
+ res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, plan.QueryPlan{AST: expr})
require.NoError(t, err)
require.Equal(t, chunkRefs, res)
require.Equal(t, 0, c.callCount)
@@ -78,10 +78,9 @@ func TestBloomQuerier(t *testing.T) {
{Fingerprint: 1000, UserID: tenant, Checksum: 2},
{Fingerprint: 2000, UserID: tenant, Checksum: 3},
}
- filters := []syntax.LineFilter{
- {Ty: labels.MatchEqual, Match: "uuid"},
- }
- res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, filters...)
+ expr, err := syntax.ParseExpr(`{foo="bar"} |= "uuid"`)
+ require.NoError(t, err)
+ res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, plan.QueryPlan{AST: expr})
require.Error(t, err)
require.Nil(t, res)
})
diff --git a/pkg/bloomgateway/util.go b/pkg/bloomgateway/util.go
index 3ab234aaa8ae0..c3ea06a3df53d 100644
--- a/pkg/bloomgateway/util.go
+++ b/pkg/bloomgateway/util.go
@@ -48,9 +48,15 @@ func getFromThrough(refs []*logproto.ShortRef) (model.Time, model.Time) {
// convertToSearches converts a list of line filter expressions to a list of
// byte slices that can be used with the bloom filters.
-func convertToSearches(filters []syntax.LineFilter, t *v1.NGramTokenizer) [][]byte {
+func convertToSearches(t *v1.NGramTokenizer, filters ...syntax.LineFilterExpr) [][]byte {
searches := make([][]byte, 0, (13-t.N)*len(filters))
for _, f := range filters {
+ if f.Left != nil {
+ searches = append(searches, convertToSearches(t, *f.Left)...)
+ }
+ if f.Or != nil {
+ searches = append(searches, convertToSearches(t, *f.Or)...)
+ }
if f.Ty == labels.MatchEqual {
it := t.Tokens(f.Match)
for it.Next() {
diff --git a/pkg/logproto/bloomgateway.pb.go b/pkg/logproto/bloomgateway.pb.go
index e5c57e058bd2f..98a22fd13168f 100644
--- a/pkg/logproto/bloomgateway.pb.go
+++ b/pkg/logproto/bloomgateway.pb.go
@@ -9,6 +9,7 @@ import (
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
github_com_grafana_loki_pkg_logql_syntax "github.com/grafana/loki/pkg/logql/syntax"
+ github_com_grafana_loki_pkg_querier_plan "github.com/grafana/loki/pkg/querier/plan"
github_com_prometheus_common_model "github.com/prometheus/common/model"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
@@ -32,10 +33,12 @@ var _ = math.Inf
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type FilterChunkRefRequest struct {
- From github_com_prometheus_common_model.Time `protobuf:"varint,1,opt,name=from,proto3,customtype=github.com/prometheus/common/model.Time" json:"from"`
- Through github_com_prometheus_common_model.Time `protobuf:"varint,2,opt,name=through,proto3,customtype=github.com/prometheus/common/model.Time" json:"through"`
- Refs []*GroupedChunkRefs `protobuf:"bytes,3,rep,name=refs,proto3" json:"refs,omitempty"`
+ From github_com_prometheus_common_model.Time `protobuf:"varint,1,opt,name=from,proto3,customtype=github.com/prometheus/common/model.Time" json:"from"`
+ Through github_com_prometheus_common_model.Time `protobuf:"varint,2,opt,name=through,proto3,customtype=github.com/prometheus/common/model.Time" json:"through"`
+ Refs []*GroupedChunkRefs `protobuf:"bytes,3,rep,name=refs,proto3" json:"refs,omitempty"`
+ // TODO(salvacorts): Delete this field once the weekly release is done.
Filters []github_com_grafana_loki_pkg_logql_syntax.LineFilter `protobuf:"bytes,4,rep,name=filters,proto3,customtype=github.com/grafana/loki/pkg/logql/syntax.LineFilter" json:"filters"`
+ Plan github_com_grafana_loki_pkg_querier_plan.QueryPlan `protobuf:"bytes,5,opt,name=plan,proto3,customtype=github.com/grafana/loki/pkg/querier/plan.QueryPlan" json:"plan"`
}
func (m *FilterChunkRefRequest) Reset() { *m = FilterChunkRefRequest{} }
@@ -234,37 +237,40 @@ func init() {
func init() { proto.RegisterFile("pkg/logproto/bloomgateway.proto", fileDescriptor_a50b5dd1dbcd1415) }
var fileDescriptor_a50b5dd1dbcd1415 = []byte{
- // 480 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x53, 0xbd, 0x6e, 0xd4, 0x30,
- 0x1c, 0x8f, 0x7b, 0xa7, 0xf6, 0xea, 0x82, 0x40, 0x56, 0xa9, 0xa2, 0x20, 0xf9, 0xa2, 0x08, 0xc1,
- 0x4d, 0x89, 0xd4, 0x2e, 0x48, 0x6c, 0x57, 0x89, 0x0a, 0x89, 0xc9, 0x20, 0x86, 0x6e, 0xb9, 0xd4,
- 0xf9, 0x50, 0x12, 0xff, 0x53, 0xdb, 0x11, 0x74, 0xe3, 0x11, 0x78, 0x0c, 0x9e, 0x80, 0x27, 0x60,
- 0xe8, 0x78, 0x63, 0xc5, 0x50, 0x71, 0xb9, 0x85, 0xb1, 0x8f, 0x80, 0xea, 0x5c, 0x7a, 0x77, 0x15,
- 0xe8, 0x24, 0x26, 0x26, 0x7f, 0xfc, 0xff, 0x3f, 0xfb, 0xf7, 0x61, 0xe3, 0x61, 0x95, 0x27, 0x41,
- 0x01, 0x49, 0x25, 0x41, 0x43, 0x30, 0x29, 0x00, 0xca, 0x24, 0xd4, 0xfc, 0x63, 0x78, 0xe1, 0x9b,
- 0x2d, 0x32, 0xe8, 0x8a, 0xce, 0x7e, 0x02, 0x09, 0xb4, 0x7d, 0xb7, 0xb3, 0xb6, 0xee, 0x3c, 0x5d,
- 0x3b, 0xa0, 0x9b, 0xb4, 0x45, 0xef, 0xfb, 0x16, 0x7e, 0xf2, 0x3a, 0x2b, 0x34, 0x97, 0xc7, 0x69,
- 0x2d, 0x72, 0xc6, 0x63, 0xc6, 0xcf, 0x6b, 0xae, 0x34, 0x39, 0xc6, 0xfd, 0x58, 0x42, 0x69, 0x23,
- 0x17, 0x8d, 0x7a, 0xe3, 0xe0, 0xf2, 0x7a, 0x68, 0xfd, 0xb8, 0x1e, 0xbe, 0x48, 0x32, 0x9d, 0xd6,
- 0x13, 0x3f, 0x82, 0x32, 0xa8, 0x24, 0x94, 0x5c, 0xa7, 0xbc, 0x56, 0x41, 0x04, 0x65, 0x09, 0x22,
- 0x28, 0xe1, 0x8c, 0x17, 0xfe, 0xfb, 0xac, 0xe4, 0xcc, 0x80, 0xc9, 0x1b, 0xbc, 0xa3, 0x53, 0x09,
- 0x75, 0x92, 0xda, 0x5b, 0xff, 0x76, 0x4e, 0x87, 0x27, 0x3e, 0xee, 0x4b, 0x1e, 0x2b, 0xbb, 0xe7,
- 0xf6, 0x46, 0x7b, 0x87, 0x8e, 0x7f, 0x27, 0xe4, 0x44, 0x42, 0x5d, 0xf1, 0xb3, 0x8e, 0xbf, 0x62,
- 0xa6, 0x8f, 0xe4, 0x78, 0x27, 0x36, 0xc2, 0x94, 0xdd, 0x37, 0x90, 0xfd, 0x25, 0xe4, 0x6d, 0x26,
- 0x78, 0xab, 0x7a, 0xfc, 0x6a, 0x41, 0xe8, 0x68, 0x85, 0x50, 0x22, 0xc3, 0x38, 0x14, 0x61, 0x50,
- 0x40, 0x9e, 0x05, 0x0b, 0xf7, 0xce, 0x8b, 0x40, 0x5d, 0x08, 0x1d, 0x7e, 0x5a, 0x01, 0xb3, 0xee,
- 0x06, 0x8f, 0xe1, 0x83, 0xfb, 0x2e, 0xaa, 0x0a, 0x84, 0xe2, 0xe4, 0x25, 0xde, 0x8d, 0x3a, 0x66,
- 0x36, 0xda, 0xc8, 0x7d, 0xd9, 0xec, 0x7d, 0x43, 0x78, 0xf0, 0x2e, 0x05, 0xa9, 0x19, 0x8f, 0xff,
- 0xbb, 0x34, 0x1c, 0x3c, 0x88, 0x52, 0x1e, 0xe5, 0xaa, 0x2e, 0xed, 0x9e, 0x8b, 0x46, 0x0f, 0xd9,
- 0xdd, 0xda, 0xd3, 0xf8, 0xf1, 0x7d, 0x5d, 0xc4, 0xc5, 0x7b, 0x71, 0x26, 0x12, 0x2e, 0x2b, 0x99,
- 0x09, 0x6d, 0x64, 0xf4, 0xd9, 0xea, 0x16, 0x39, 0xc0, 0xdb, 0x9a, 0x8b, 0x50, 0x68, 0xc3, 0x6d,
- 0x97, 0x2d, 0x56, 0xe4, 0xf9, 0x5a, 0xee, 0x64, 0xe9, 0x5d, 0xe7, 0x4d, 0x9b, 0xf7, 0x61, 0x8c,
- 0x1f, 0x8c, 0x6f, 0x3f, 0xc7, 0x49, 0xfb, 0x39, 0xc8, 0x07, 0xfc, 0x68, 0x3d, 0x12, 0x45, 0x86,
- 0x4b, 0xf0, 0x1f, 0xdf, 0xbc, 0xe3, 0xfe, 0xbd, 0xa1, 0x8d, 0xd3, 0xb3, 0xc6, 0xa7, 0xd3, 0x19,
- 0xb5, 0xae, 0x66, 0xd4, 0xba, 0x99, 0x51, 0xf4, 0xb9, 0xa1, 0xe8, 0x6b, 0x43, 0xd1, 0x65, 0x43,
- 0xd1, 0xb4, 0xa1, 0xe8, 0x67, 0x43, 0xd1, 0xaf, 0x86, 0x5a, 0x37, 0x0d, 0x45, 0x5f, 0xe6, 0xd4,
- 0x9a, 0xce, 0xa9, 0x75, 0x35, 0xa7, 0xd6, 0xe9, 0xb3, 0x0d, 0xcf, 0xcb, 0x5c, 0x3a, 0xd9, 0x36,
- 0xc3, 0xd1, 0xef, 0x00, 0x00, 0x00, 0xff, 0xff, 0x6d, 0x30, 0x9d, 0x8e, 0xf4, 0x03, 0x00, 0x00,
+ // 525 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x53, 0xbb, 0x6e, 0x13, 0x41,
+ 0x14, 0xdd, 0xc1, 0x26, 0x8f, 0x31, 0x2f, 0x8d, 0x42, 0xb4, 0x32, 0xd2, 0x78, 0x65, 0x21, 0x70,
+ 0xb5, 0x2b, 0x39, 0x0d, 0x82, 0xce, 0x91, 0x88, 0x90, 0x28, 0x60, 0x40, 0x14, 0x29, 0x90, 0xd6,
+ 0xce, 0xdd, 0x87, 0xbc, 0x3b, 0xb3, 0x9e, 0x99, 0x15, 0xb8, 0xe3, 0x13, 0xf8, 0x08, 0x0a, 0xbe,
+ 0x80, 0x6f, 0x48, 0xe9, 0x32, 0xa2, 0x88, 0xf0, 0xba, 0xa1, 0xcc, 0x27, 0x20, 0xcf, 0x7a, 0xb3,
+ 0x76, 0x04, 0x44, 0xa2, 0xa2, 0x9a, 0xc7, 0xbd, 0xe7, 0x9e, 0x7b, 0xee, 0x03, 0x77, 0xb2, 0x71,
+ 0xe8, 0x25, 0x22, 0xcc, 0xa4, 0xd0, 0xc2, 0x1b, 0x26, 0x42, 0xa4, 0xa1, 0xaf, 0xe1, 0x83, 0x3f,
+ 0x75, 0xcd, 0x17, 0xd9, 0xa9, 0x8c, 0xed, 0xbd, 0x50, 0x84, 0xa2, 0xf4, 0x5b, 0xde, 0x4a, 0x7b,
+ 0xfb, 0xc1, 0x46, 0x80, 0xea, 0x52, 0x1a, 0xbb, 0x5f, 0x1a, 0xf8, 0xfe, 0xf3, 0x38, 0xd1, 0x20,
+ 0x0f, 0xa3, 0x9c, 0x8f, 0x19, 0x04, 0x0c, 0x26, 0x39, 0x28, 0x4d, 0x0e, 0x71, 0x33, 0x90, 0x22,
+ 0xb5, 0x91, 0x83, 0x7a, 0x8d, 0x81, 0x77, 0x7a, 0xde, 0xb1, 0xbe, 0x9f, 0x77, 0x1e, 0x87, 0xb1,
+ 0x8e, 0xf2, 0xa1, 0x3b, 0x12, 0xa9, 0x97, 0x49, 0x91, 0x82, 0x8e, 0x20, 0x57, 0xde, 0x48, 0xa4,
+ 0xa9, 0xe0, 0x5e, 0x2a, 0x4e, 0x20, 0x71, 0xdf, 0xc6, 0x29, 0x30, 0x03, 0x26, 0x2f, 0xf0, 0xb6,
+ 0x8e, 0xa4, 0xc8, 0xc3, 0xc8, 0xbe, 0xf1, 0x6f, 0x71, 0x2a, 0x3c, 0x71, 0x71, 0x53, 0x42, 0xa0,
+ 0xec, 0x86, 0xd3, 0xe8, 0xb5, 0xfa, 0x6d, 0xf7, 0x52, 0xc8, 0x91, 0x14, 0x79, 0x06, 0x27, 0x55,
+ 0xfe, 0x8a, 0x19, 0x3f, 0x32, 0xc6, 0xdb, 0x81, 0x11, 0xa6, 0xec, 0xa6, 0x81, 0xec, 0xd5, 0x90,
+ 0x97, 0x31, 0x87, 0x52, 0xf5, 0xe0, 0xd9, 0x2a, 0xa1, 0x83, 0xb5, 0x84, 0x42, 0xe9, 0x07, 0x3e,
+ 0xf7, 0xbd, 0x44, 0x8c, 0x63, 0x6f, 0x55, 0xbd, 0x49, 0xe2, 0xa9, 0x29, 0xd7, 0xfe, 0xc7, 0x35,
+ 0x30, 0xab, 0x18, 0xc8, 0x7b, 0xdc, 0xcc, 0x12, 0x9f, 0xdb, 0x37, 0x1d, 0xd4, 0x6b, 0xf5, 0xef,
+ 0xd4, 0x4c, 0xaf, 0x12, 0x9f, 0x0f, 0x9e, 0xae, 0x38, 0xfa, 0x7f, 0xe3, 0x98, 0xe4, 0x20, 0x63,
+ 0x90, 0xde, 0x32, 0x8e, 0xfb, 0x3a, 0x07, 0x39, 0x5d, 0x62, 0x99, 0x89, 0xdb, 0x65, 0x78, 0xff,
+ 0x6a, 0x97, 0x54, 0x26, 0xb8, 0x02, 0xf2, 0x04, 0xef, 0x8e, 0x2a, 0xe5, 0x36, 0xba, 0xb6, 0x36,
+ 0xb5, 0x73, 0xf7, 0x1b, 0xc2, 0x3b, 0x6f, 0x22, 0x21, 0x35, 0x83, 0xe0, 0xbf, 0xeb, 0x76, 0x1b,
+ 0xef, 0x8c, 0x22, 0x18, 0x8d, 0x55, 0x9e, 0xda, 0x0d, 0x07, 0xf5, 0x6e, 0xb3, 0xcb, 0x77, 0x57,
+ 0xe3, 0x7b, 0x57, 0x75, 0x11, 0x07, 0xb7, 0x82, 0x98, 0x87, 0x20, 0x33, 0x19, 0x73, 0x6d, 0x64,
+ 0x34, 0xd9, 0xfa, 0x17, 0xd9, 0xc7, 0x5b, 0x1a, 0xb8, 0xcf, 0xb5, 0xc9, 0x6d, 0x97, 0xad, 0x5e,
+ 0xe4, 0xd1, 0xc6, 0x5c, 0x91, 0xba, 0x76, 0x55, 0x6d, 0xca, 0x79, 0xea, 0x07, 0xf8, 0xd6, 0x60,
+ 0xb9, 0x7c, 0x47, 0xe5, 0xf2, 0x91, 0x77, 0xf8, 0xee, 0x66, 0x4b, 0x14, 0xe9, 0xd4, 0xe0, 0xdf,
+ 0xee, 0x54, 0xdb, 0xf9, 0xb3, 0x43, 0xd9, 0xce, 0xae, 0x35, 0x38, 0x9e, 0xcd, 0xa9, 0x75, 0x36,
+ 0xa7, 0xd6, 0xc5, 0x9c, 0xa2, 0x4f, 0x05, 0x45, 0x5f, 0x0b, 0x8a, 0x4e, 0x0b, 0x8a, 0x66, 0x05,
+ 0x45, 0x3f, 0x0a, 0x8a, 0x7e, 0x16, 0xd4, 0xba, 0x28, 0x28, 0xfa, 0xbc, 0xa0, 0xd6, 0x6c, 0x41,
+ 0xad, 0xb3, 0x05, 0xb5, 0x8e, 0x1f, 0x5e, 0x33, 0xbe, 0x86, 0x74, 0xb8, 0x65, 0x8e, 0x83, 0x5f,
+ 0x01, 0x00, 0x00, 0xff, 0xff, 0xbe, 0xe2, 0x64, 0x8a, 0x54, 0x04, 0x00, 0x00,
}
func (this *FilterChunkRefRequest) Equal(that interface{}) bool {
@@ -308,6 +314,9 @@ func (this *FilterChunkRefRequest) Equal(that interface{}) bool {
return false
}
}
+ if !this.Plan.Equal(that1.Plan) {
+ return false
+ }
return true
}
func (this *FilterChunkRefResponse) Equal(that interface{}) bool {
@@ -408,7 +417,7 @@ func (this *FilterChunkRefRequest) GoString() string {
if this == nil {
return "nil"
}
- s := make([]string, 0, 8)
+ s := make([]string, 0, 9)
s = append(s, "&logproto.FilterChunkRefRequest{")
s = append(s, "From: "+fmt.Sprintf("%#v", this.From)+",\n")
s = append(s, "Through: "+fmt.Sprintf("%#v", this.Through)+",\n")
@@ -416,6 +425,7 @@ func (this *FilterChunkRefRequest) GoString() string {
s = append(s, "Refs: "+fmt.Sprintf("%#v", this.Refs)+",\n")
}
s = append(s, "Filters: "+fmt.Sprintf("%#v", this.Filters)+",\n")
+ s = append(s, "Plan: "+fmt.Sprintf("%#v", this.Plan)+",\n")
s = append(s, "}")
return strings.Join(s, "")
}
@@ -566,6 +576,16 @@ func (m *FilterChunkRefRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ {
+ size := m.Plan.Size()
+ i -= size
+ if _, err := m.Plan.MarshalTo(dAtA[i:]); err != nil {
+ return 0, err
+ }
+ i = encodeVarintBloomgateway(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2a
if len(m.Filters) > 0 {
for iNdEx := len(m.Filters) - 1; iNdEx >= 0; iNdEx-- {
{
@@ -766,6 +786,8 @@ func (m *FilterChunkRefRequest) Size() (n int) {
n += 1 + l + sovBloomgateway(uint64(l))
}
}
+ l = m.Plan.Size()
+ n += 1 + l + sovBloomgateway(uint64(l))
return n
}
@@ -844,6 +866,7 @@ func (this *FilterChunkRefRequest) String() string {
`Through:` + fmt.Sprintf("%v", this.Through) + `,`,
`Refs:` + repeatedStringForRefs + `,`,
`Filters:` + fmt.Sprintf("%v", this.Filters) + `,`,
+ `Plan:` + fmt.Sprintf("%v", this.Plan) + `,`,
`}`,
}, "")
return s
@@ -1035,6 +1058,39 @@ func (m *FilterChunkRefRequest) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Plan", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowBloomgateway
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthBloomgateway
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthBloomgateway
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.Plan.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipBloomgateway(dAtA[iNdEx:])
diff --git a/pkg/logproto/bloomgateway.proto b/pkg/logproto/bloomgateway.proto
index 473ecf3f81535..13d5c25e763f6 100644
--- a/pkg/logproto/bloomgateway.proto
+++ b/pkg/logproto/bloomgateway.proto
@@ -17,10 +17,15 @@ message FilterChunkRefRequest {
(gogoproto.nullable) = false
];
repeated GroupedChunkRefs refs = 3;
- repeated logproto.LineFilter filters = 4 [
+ // TODO(salvacorts): Delete this field once the weekly release is done.
+ repeated LineFilter filters = 4 [
(gogoproto.customtype) = "github.com/grafana/loki/pkg/logql/syntax.LineFilter",
(gogoproto.nullable) = false
];
+ Plan plan = 5 [
+ (gogoproto.customtype) = "github.com/grafana/loki/pkg/querier/plan.QueryPlan",
+ (gogoproto.nullable) = false
+ ];
}
message FilterChunkRefResponse {
diff --git a/pkg/logproto/compat.go b/pkg/logproto/compat.go
index ee3d9ce1e003e..0e65a90da02fa 100644
--- a/pkg/logproto/compat.go
+++ b/pkg/logproto/compat.go
@@ -367,24 +367,8 @@ func (m *FilterChunkRefRequest) GetQuery() string {
chunksHash = h.Sum64()
}
- // Short circuit if there are no filters.
- if len(m.Filters) == 0 {
- return fmt.Sprintf("%d", chunksHash)
- }
-
- var sb strings.Builder
- for i, filter := range m.Filters {
- if i > 0 {
- sb.WriteString(",")
- }
- sb.Write(fmt.Appendf(encodeBuf[:0], "%d", filter.Ty))
- sb.WriteString("-")
- sb.WriteString(filter.Match)
- sb.WriteString("-")
- sb.WriteString(filter.Op)
- }
-
- return fmt.Sprintf("%d/%s", chunksHash, sb.String())
+ // TODO(salvacorts): plan.String() will return the whole query. This is not optimal since we are only interested in the filter expressions.
+ return fmt.Sprintf("%d/%d", chunksHash, m.Plan.Hash())
}
// GetCachingOptions returns the caching options.
diff --git a/pkg/logproto/compat_test.go b/pkg/logproto/compat_test.go
index 4cfad825e183d..a066fe65fed1b 100644
--- a/pkg/logproto/compat_test.go
+++ b/pkg/logproto/compat_test.go
@@ -7,12 +7,12 @@ import (
"testing"
"unsafe"
+ "github.com/grafana/loki/pkg/logql/syntax"
+ "github.com/grafana/loki/pkg/querier/plan"
jsoniter "github.com/json-iterator/go"
"github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
-
- "github.com/grafana/loki/pkg/logql/syntax"
)
// This test verifies that jsoninter uses our custom method for marshalling.
@@ -287,7 +287,7 @@ func TestFilterChunkRefRequestGetQuery(t *testing.T) {
}{
{
desc: "empty request",
- expected: `0`,
+ expected: `0/0`,
},
{
desc: "request no filters",
@@ -299,19 +299,16 @@ func TestFilterChunkRefRequestGetQuery(t *testing.T) {
},
},
},
- expected: `9962287286179718960`,
+ expected: `9962287286179718960/0`,
},
{
desc: "request with filters but no chunks",
request: FilterChunkRefRequest{
- Filters: []syntax.LineFilter{
- {
- Ty: 0,
- Match: "uuid",
- },
+ Plan: plan.QueryPlan{
+ AST: syntax.MustParseExpr(`{foo="bar"} |= "uuid"`),
},
},
- expected: `0/0-uuid-`,
+ expected: `0/938557591`,
},
{
desc: "request with filters and chunks",
@@ -326,18 +323,11 @@ func TestFilterChunkRefRequestGetQuery(t *testing.T) {
Tenant: "test",
},
},
- Filters: []syntax.LineFilter{
- {
- Ty: 0,
- Match: "uuid",
- },
- {
- Ty: 1,
- Match: "trace",
- },
+ Plan: plan.QueryPlan{
+ AST: syntax.MustParseExpr(`{foo="bar"} |= "uuid" != "trace"`),
},
},
- expected: `8827404902424034886/0-uuid-,1-trace-`,
+ expected: `8827404902424034886/2710035654`,
},
} {
t.Run(tc.desc, func(t *testing.T) {
diff --git a/pkg/logproto/logproto.pb.go b/pkg/logproto/logproto.pb.go
index f0d826c5df6d1..d50ae7d1e5db4 100644
--- a/pkg/logproto/logproto.pb.go
+++ b/pkg/logproto/logproto.pb.go
@@ -1784,10 +1784,12 @@ func (m *LineFilter) GetRaw() []byte {
}
type GetChunkRefRequest struct {
- From github_com_prometheus_common_model.Time `protobuf:"varint,1,opt,name=from,proto3,customtype=github.com/prometheus/common/model.Time" json:"from"`
- Through github_com_prometheus_common_model.Time `protobuf:"varint,2,opt,name=through,proto3,customtype=github.com/prometheus/common/model.Time" json:"through"`
- Matchers string `protobuf:"bytes,3,opt,name=matchers,proto3" json:"matchers,omitempty"`
- Filters []github_com_grafana_loki_pkg_logql_syntax.LineFilter `protobuf:"bytes,4,rep,name=filters,proto3,customtype=github.com/grafana/loki/pkg/logql/syntax.LineFilter" json:"filters"`
+ From github_com_prometheus_common_model.Time `protobuf:"varint,1,opt,name=from,proto3,customtype=github.com/prometheus/common/model.Time" json:"from"`
+ Through github_com_prometheus_common_model.Time `protobuf:"varint,2,opt,name=through,proto3,customtype=github.com/prometheus/common/model.Time" json:"through"`
+ Matchers string `protobuf:"bytes,3,opt,name=matchers,proto3" json:"matchers,omitempty"`
+ // TODO(salvacorts): Delete this field once the weekly release is done.
+ Filters []github_com_grafana_loki_pkg_logql_syntax.LineFilter `protobuf:"bytes,4,rep,name=filters,proto3,customtype=github.com/grafana/loki/pkg/logql/syntax.LineFilter" json:"filters"`
+ Plan github_com_grafana_loki_pkg_querier_plan.QueryPlan `protobuf:"bytes,5,opt,name=plan,proto3,customtype=github.com/grafana/loki/pkg/querier/plan.QueryPlan" json:"plan"`
}
func (m *GetChunkRefRequest) Reset() { *m = GetChunkRefRequest{} }
@@ -2561,149 +2563,150 @@ func init() {
func init() { proto.RegisterFile("pkg/logproto/logproto.proto", fileDescriptor_c28a5f14f1f4c79a) }
var fileDescriptor_c28a5f14f1f4c79a = []byte{
- // 2265 bytes of a gzipped FileDescriptorProto
+ // 2278 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xd4, 0x19, 0x4d, 0x6f, 0x1b, 0xc7,
0x95, 0x4b, 0x2e, 0xbf, 0x1e, 0x29, 0x59, 0x1e, 0x31, 0x36, 0x41, 0xdb, 0xa4, 0x3c, 0x48, 0x1d,
0xc1, 0x71, 0xc8, 0x58, 0x6e, 0xdc, 0xd4, 0x6e, 0xd0, 0x9a, 0x52, 0xec, 0xc8, 0x96, 0x3f, 0x32,
- 0x72, 0xdd, 0xc2, 0x68, 0x61, 0xac, 0xc4, 0x11, 0x45, 0x88, 0xbb, 0x4b, 0xef, 0x0e, 0x63, 0x0b,
- 0xe8, 0xa1, 0x7f, 0xa0, 0x68, 0x6e, 0x45, 0x2f, 0x45, 0x0f, 0x05, 0x52, 0xa0, 0xc8, 0xa5, 0x3f,
+ 0x72, 0xdd, 0xc2, 0x68, 0x6b, 0xac, 0xc4, 0x11, 0x45, 0x88, 0xbb, 0x4b, 0xef, 0x0e, 0x63, 0x0b,
+ 0xe8, 0xa1, 0x7f, 0x20, 0x68, 0x6e, 0x45, 0x2f, 0x45, 0x0f, 0x05, 0x52, 0xa0, 0xe8, 0xa5, 0x3f,
0xa0, 0xbd, 0xf4, 0xe0, 0xde, 0xdc, 0x5b, 0x90, 0x03, 0x5b, 0xcb, 0x97, 0x42, 0xa7, 0xdc, 0x72,
- 0x2d, 0xe6, 0x6b, 0x77, 0x96, 0xa2, 0xdc, 0xd0, 0x75, 0x11, 0xf8, 0xc2, 0x9d, 0xf7, 0xe6, 0xcd,
- 0x9b, 0xf7, 0x35, 0xef, 0xcd, 0x1b, 0xc2, 0x89, 0xc1, 0x4e, 0xb7, 0xd5, 0xf7, 0xbb, 0x83, 0xc0,
- 0x67, 0x7e, 0x34, 0x68, 0x8a, 0x5f, 0x54, 0xd0, 0x70, 0xad, 0xd2, 0xf5, 0xbb, 0xbe, 0xa4, 0xe1,
- 0x23, 0x39, 0x5f, 0x6b, 0x74, 0x7d, 0xbf, 0xdb, 0xa7, 0x2d, 0x01, 0x6d, 0x0c, 0xb7, 0x5a, 0xac,
- 0xe7, 0xd2, 0x90, 0x39, 0xee, 0x40, 0x11, 0x2c, 0x28, 0xee, 0x0f, 0xfb, 0xae, 0xdf, 0xa1, 0xfd,
- 0x56, 0xc8, 0x1c, 0x16, 0xca, 0x5f, 0x45, 0x31, 0xcf, 0x29, 0x06, 0xc3, 0x70, 0x5b, 0xfc, 0x48,
- 0x24, 0xae, 0x00, 0x5a, 0x67, 0x01, 0x75, 0x5c, 0xe2, 0x30, 0x1a, 0x12, 0xfa, 0x70, 0x48, 0x43,
- 0x86, 0x6f, 0xc2, 0x7c, 0x02, 0x1b, 0x0e, 0x7c, 0x2f, 0xa4, 0xe8, 0x22, 0x94, 0xc2, 0x18, 0x5d,
- 0xb5, 0x16, 0x32, 0x8b, 0xa5, 0xa5, 0x4a, 0x33, 0x52, 0x25, 0x5e, 0x43, 0x4c, 0x42, 0xfc, 0x3b,
- 0x0b, 0x20, 0x9e, 0x43, 0x75, 0x00, 0x39, 0xfb, 0x91, 0x13, 0x6e, 0x57, 0xad, 0x05, 0x6b, 0xd1,
- 0x26, 0x06, 0x06, 0x9d, 0x83, 0xa3, 0x31, 0x74, 0xcb, 0x5f, 0xdf, 0x76, 0x82, 0x4e, 0x35, 0x2d,
- 0xc8, 0x0e, 0x4e, 0x20, 0x04, 0x76, 0xe0, 0x30, 0x5a, 0xcd, 0x2c, 0x58, 0x8b, 0x19, 0x22, 0xc6,
- 0xe8, 0x18, 0xe4, 0x18, 0xf5, 0x1c, 0x8f, 0x55, 0xed, 0x05, 0x6b, 0xb1, 0x48, 0x14, 0xc4, 0xf1,
- 0x5c, 0x77, 0x1a, 0x56, 0xb3, 0x0b, 0xd6, 0xe2, 0x0c, 0x51, 0x10, 0xfe, 0x2c, 0x03, 0xe5, 0x8f,
- 0x87, 0x34, 0xd8, 0x55, 0x06, 0x40, 0x75, 0x28, 0x84, 0xb4, 0x4f, 0x37, 0x99, 0x1f, 0x08, 0x01,
- 0x8b, 0xed, 0x74, 0xd5, 0x22, 0x11, 0x0e, 0x55, 0x20, 0xdb, 0xef, 0xb9, 0x3d, 0x26, 0xc4, 0x9a,
- 0x21, 0x12, 0x40, 0x97, 0x20, 0x1b, 0x32, 0x27, 0x60, 0x42, 0x96, 0xd2, 0x52, 0xad, 0x29, 0x9d,
- 0xd6, 0xd4, 0x4e, 0x6b, 0xde, 0xd5, 0x4e, 0x6b, 0x17, 0x9e, 0x8c, 0x1a, 0xa9, 0x4f, 0xff, 0xd9,
- 0xb0, 0x88, 0x5c, 0x82, 0x2e, 0x42, 0x86, 0x7a, 0x1d, 0x21, 0xef, 0x37, 0x5d, 0xc9, 0x17, 0xa0,
- 0xf3, 0x50, 0xec, 0xf4, 0x02, 0xba, 0xc9, 0x7a, 0xbe, 0x27, 0xb4, 0x9a, 0x5d, 0x9a, 0x8f, 0x3d,
- 0xb2, 0xa2, 0xa7, 0x48, 0x4c, 0x85, 0xce, 0x41, 0x2e, 0xe4, 0xa6, 0x0b, 0xab, 0xf9, 0x85, 0xcc,
- 0x62, 0xb1, 0x5d, 0xd9, 0x1f, 0x35, 0xe6, 0x24, 0xe6, 0x9c, 0xef, 0xf6, 0x18, 0x75, 0x07, 0x6c,
- 0x97, 0x28, 0x1a, 0x74, 0x16, 0xf2, 0x1d, 0xda, 0xa7, 0xdc, 0xe1, 0x05, 0xe1, 0xf0, 0x39, 0x83,
- 0xbd, 0x98, 0x20, 0x9a, 0x00, 0xdd, 0x07, 0x7b, 0xd0, 0x77, 0xbc, 0x6a, 0x51, 0x68, 0x31, 0x1b,
- 0x13, 0xde, 0xe9, 0x3b, 0x5e, 0xfb, 0xe2, 0x97, 0xa3, 0xc6, 0x52, 0xb7, 0xc7, 0xb6, 0x87, 0x1b,
- 0xcd, 0x4d, 0xdf, 0x6d, 0x75, 0x03, 0x67, 0xcb, 0xf1, 0x9c, 0x56, 0xdf, 0xdf, 0xe9, 0xb5, 0x78,
- 0x70, 0x3e, 0x1c, 0xd2, 0xa0, 0x47, 0x83, 0x16, 0xe7, 0xd1, 0x14, 0xfe, 0xe0, 0xeb, 0x88, 0xe0,
- 0x79, 0xdd, 0x2e, 0xe4, 0xe6, 0xf2, 0x78, 0x94, 0x06, 0xb4, 0xee, 0xb8, 0x83, 0x3e, 0x9d, 0xca,
- 0x5f, 0x91, 0x67, 0xd2, 0x2f, 0xed, 0x99, 0xcc, 0xb4, 0x9e, 0x89, 0xcd, 0x6c, 0x4f, 0x67, 0xe6,
- 0xec, 0x37, 0x35, 0x73, 0xee, 0xd5, 0x9b, 0x19, 0x57, 0xc1, 0xe6, 0x10, 0x9a, 0x83, 0x4c, 0xe0,
- 0x3c, 0x12, 0xc6, 0x2c, 0x13, 0x3e, 0xc4, 0x6b, 0x90, 0x93, 0x82, 0xa0, 0xda, 0xb8, 0xb5, 0x93,
- 0x27, 0x23, 0xb6, 0x74, 0x46, 0xdb, 0x70, 0x2e, 0xb6, 0x61, 0x46, 0x58, 0x07, 0xff, 0xde, 0x82,
- 0x19, 0xe5, 0x42, 0x95, 0x5d, 0x36, 0x20, 0x2f, 0x4f, 0xb7, 0xce, 0x2c, 0xc7, 0xc7, 0x33, 0xcb,
- 0x95, 0x8e, 0x33, 0x60, 0x34, 0x68, 0xb7, 0x9e, 0x8c, 0x1a, 0xd6, 0x97, 0xa3, 0xc6, 0x5b, 0x2f,
- 0xd2, 0x52, 0x24, 0x39, 0x95, 0x75, 0x34, 0x63, 0xf4, 0xb6, 0x90, 0x8e, 0x85, 0x2a, 0x0e, 0x8e,
- 0x34, 0x65, 0x82, 0x5c, 0xf5, 0xba, 0x34, 0xe4, 0x9c, 0x6d, 0xee, 0x42, 0x22, 0x69, 0xf0, 0x2f,
- 0x60, 0x3e, 0x11, 0x6a, 0x4a, 0xce, 0xf7, 0x21, 0x17, 0x72, 0x03, 0x6a, 0x31, 0x0d, 0x47, 0xad,
- 0x0b, 0x7c, 0x7b, 0x56, 0xc9, 0x97, 0x93, 0x30, 0x51, 0xf4, 0xd3, 0xed, 0xfe, 0x37, 0x0b, 0xca,
- 0x6b, 0xce, 0x06, 0xed, 0xeb, 0x18, 0x47, 0x60, 0x7b, 0x8e, 0x4b, 0x95, 0xc5, 0xc5, 0x98, 0x27,
- 0xb4, 0x4f, 0x9c, 0xfe, 0x90, 0x4a, 0x96, 0x05, 0xa2, 0xa0, 0x69, 0x33, 0x91, 0xf5, 0xd2, 0x99,
- 0xc8, 0x8a, 0xe3, 0xbd, 0x02, 0x59, 0x1e, 0x59, 0xbb, 0x22, 0x0b, 0x15, 0x89, 0x04, 0xf0, 0x5b,
- 0x30, 0xa3, 0xb4, 0x50, 0xe6, 0x8b, 0x45, 0xe6, 0xe6, 0x2b, 0x6a, 0x91, 0xb1, 0x0b, 0x39, 0x69,
- 0x6d, 0xf4, 0x26, 0x14, 0xa3, 0xea, 0x26, 0xb4, 0xcd, 0xb4, 0x73, 0xfb, 0xa3, 0x46, 0x9a, 0x85,
- 0x24, 0x9e, 0x40, 0x0d, 0xc8, 0x8a, 0x95, 0x42, 0x73, 0xab, 0x5d, 0xdc, 0x1f, 0x35, 0x24, 0x82,
- 0xc8, 0x0f, 0x3a, 0x09, 0xf6, 0x36, 0x2f, 0x30, 0xdc, 0x04, 0x76, 0xbb, 0xb0, 0x3f, 0x6a, 0x08,
- 0x98, 0x88, 0x5f, 0x7c, 0x0d, 0xca, 0x6b, 0xb4, 0xeb, 0x6c, 0xee, 0xaa, 0x4d, 0x2b, 0x9a, 0x1d,
- 0xdf, 0xd0, 0xd2, 0x3c, 0x4e, 0x43, 0x39, 0xda, 0xf1, 0x81, 0x1b, 0xaa, 0xa0, 0x2e, 0x45, 0xb8,
- 0x9b, 0x21, 0xfe, 0xad, 0x05, 0xca, 0xcf, 0x08, 0x43, 0xae, 0xcf, 0x75, 0x0d, 0x55, 0x0e, 0x82,
- 0xfd, 0x51, 0x43, 0x61, 0x88, 0xfa, 0xa2, 0xcb, 0x90, 0x0f, 0xc5, 0x8e, 0x9c, 0xd9, 0x78, 0xf8,
- 0x88, 0x89, 0xf6, 0x11, 0x1e, 0x06, 0xfb, 0xa3, 0x86, 0x26, 0x24, 0x7a, 0x80, 0x9a, 0x89, 0xca,
- 0x29, 0x15, 0x9b, 0xdd, 0x1f, 0x35, 0x0c, 0xac, 0x59, 0x49, 0xf1, 0xd7, 0x16, 0x94, 0xee, 0x3a,
- 0xbd, 0x28, 0x84, 0xaa, 0xda, 0x45, 0x71, 0x8e, 0x94, 0x08, 0x7e, 0xa4, 0x3b, 0xb4, 0xef, 0xec,
- 0x5e, 0xf5, 0x03, 0xc1, 0x77, 0x86, 0x44, 0x70, 0x5c, 0xec, 0xec, 0x89, 0xc5, 0x2e, 0x3b, 0x7d,
- 0x4a, 0xfd, 0x3f, 0x26, 0xb0, 0xeb, 0x76, 0x21, 0x3d, 0x97, 0xc1, 0x9f, 0x5b, 0x50, 0x96, 0x9a,
- 0xab, 0xb0, 0xfb, 0x19, 0xe4, 0xa4, 0x61, 0x84, 0xee, 0x2f, 0x48, 0x2e, 0x6f, 0x4f, 0x93, 0x58,
- 0x14, 0x4f, 0xf4, 0x43, 0x98, 0xed, 0x04, 0xfe, 0x60, 0x40, 0x3b, 0xeb, 0x2a, 0x85, 0xa5, 0xc7,
- 0x53, 0xd8, 0x8a, 0x39, 0x4f, 0xc6, 0xc8, 0xf1, 0xdf, 0x2d, 0x98, 0x51, 0xd9, 0x42, 0xf9, 0x2a,
- 0xb2, 0xaf, 0xf5, 0xd2, 0x25, 0x2b, 0x3d, 0x6d, 0xc9, 0x3a, 0x06, 0xb9, 0x6e, 0xe0, 0x0f, 0x07,
- 0x61, 0x35, 0x23, 0xcf, 0xa6, 0x84, 0xa6, 0x2b, 0x65, 0xf8, 0x3a, 0xcc, 0x6a, 0x55, 0x0e, 0x49,
- 0x99, 0xb5, 0xf1, 0x94, 0xb9, 0xda, 0xa1, 0x1e, 0xeb, 0x6d, 0xf5, 0xa2, 0x24, 0xa8, 0xe8, 0xf1,
- 0xaf, 0x2d, 0x98, 0x1b, 0x27, 0x41, 0x2b, 0xc6, 0x39, 0xe3, 0xec, 0xce, 0x1c, 0xce, 0xae, 0x29,
- 0x92, 0x4f, 0xf8, 0xa1, 0xc7, 0x82, 0x5d, 0xcd, 0x5a, 0xae, 0xad, 0xbd, 0x07, 0x25, 0x63, 0x92,
- 0x97, 0xa8, 0x1d, 0xaa, 0x4e, 0x06, 0xe1, 0xc3, 0x38, 0x25, 0xa4, 0x65, 0x42, 0x13, 0x00, 0xfe,
- 0x8d, 0x05, 0x33, 0x09, 0x5f, 0xa2, 0xf7, 0xc1, 0xde, 0x0a, 0x7c, 0x77, 0x2a, 0x47, 0x89, 0x15,
- 0xe8, 0xbb, 0x90, 0x66, 0xfe, 0x54, 0x6e, 0x4a, 0x33, 0x9f, 0x7b, 0x49, 0xa9, 0x9f, 0x91, 0xb7,
- 0x5b, 0x09, 0xe1, 0xf7, 0xa0, 0x28, 0x14, 0xba, 0xe3, 0xf4, 0x82, 0x89, 0xd5, 0x62, 0xb2, 0x42,
- 0x97, 0xe1, 0x88, 0xcc, 0x84, 0x93, 0x17, 0x97, 0x27, 0x2d, 0x2e, 0xeb, 0xc5, 0x27, 0x20, 0xbb,
- 0xbc, 0x3d, 0xf4, 0x76, 0xf8, 0x92, 0x8e, 0xc3, 0x1c, 0xbd, 0x84, 0x8f, 0xf1, 0x1b, 0x30, 0xcf,
- 0xcf, 0x20, 0x0d, 0xc2, 0x65, 0x7f, 0xe8, 0x31, 0xdd, 0x5d, 0x9c, 0x83, 0x4a, 0x12, 0xad, 0xa2,
- 0xa4, 0x02, 0xd9, 0x4d, 0x8e, 0x10, 0x3c, 0x66, 0x88, 0x04, 0xf0, 0x1f, 0x2c, 0x40, 0xd7, 0x28,
- 0x13, 0xbb, 0xac, 0xae, 0x44, 0xc7, 0xa3, 0x06, 0x05, 0xd7, 0x61, 0x9b, 0xdb, 0x34, 0x08, 0xf5,
- 0x1d, 0x44, 0xc3, 0xdf, 0xc6, 0x6d, 0x0f, 0x9f, 0x87, 0xf9, 0x84, 0x94, 0x4a, 0xa7, 0x1a, 0x14,
- 0x36, 0x15, 0x4e, 0xd5, 0xbb, 0x08, 0xc6, 0x7f, 0x4e, 0x43, 0x41, 0x2c, 0x20, 0x74, 0x0b, 0x9d,
- 0x87, 0xd2, 0x56, 0xcf, 0xeb, 0xd2, 0x60, 0x10, 0xf4, 0x94, 0x09, 0xec, 0xf6, 0x91, 0xfd, 0x51,
- 0xc3, 0x44, 0x13, 0x13, 0x40, 0xef, 0x40, 0x7e, 0x18, 0xd2, 0xe0, 0x41, 0x4f, 0x9e, 0xf4, 0x62,
- 0xbb, 0xb2, 0x37, 0x6a, 0xe4, 0x7e, 0x1c, 0xd2, 0x60, 0x75, 0x85, 0x57, 0x9e, 0xa1, 0x18, 0x11,
- 0xf9, 0xed, 0xa0, 0x1b, 0x2a, 0x4c, 0xc5, 0x25, 0xac, 0xfd, 0x3d, 0x2e, 0xfe, 0x58, 0xaa, 0x1b,
- 0x04, 0xbe, 0x4b, 0xd9, 0x36, 0x1d, 0x86, 0xad, 0x4d, 0xdf, 0x75, 0x7d, 0xaf, 0x25, 0x7a, 0x49,
- 0xa1, 0x34, 0x2f, 0x9f, 0x7c, 0xb9, 0x8a, 0xdc, 0xbb, 0x90, 0x67, 0xdb, 0x81, 0x3f, 0xec, 0x6e,
- 0x8b, 0xaa, 0x90, 0x69, 0x5f, 0x9a, 0x9e, 0x9f, 0xe6, 0x40, 0xf4, 0x00, 0x9d, 0xe6, 0xd6, 0xa2,
- 0x9b, 0x3b, 0xe1, 0xd0, 0x95, 0x1d, 0x5a, 0x3b, 0xbb, 0x3f, 0x6a, 0x58, 0xef, 0x90, 0x08, 0x8d,
- 0x7f, 0x95, 0x86, 0x86, 0x08, 0xd4, 0x7b, 0xe2, 0xda, 0x70, 0xd5, 0x0f, 0x6e, 0x52, 0x16, 0xf4,
- 0x36, 0x6f, 0x39, 0x2e, 0xd5, 0xb1, 0xd1, 0x80, 0x92, 0x2b, 0x90, 0x0f, 0x8c, 0x23, 0x00, 0x6e,
- 0x44, 0x87, 0x4e, 0x01, 0x88, 0x33, 0x23, 0xe7, 0xe5, 0x69, 0x28, 0x0a, 0x8c, 0x98, 0x5e, 0x4e,
- 0x58, 0xaa, 0x35, 0xa5, 0x66, 0xca, 0x42, 0xab, 0xe3, 0x16, 0x9a, 0x9a, 0x4f, 0x64, 0x16, 0x33,
- 0xd6, 0xb3, 0xc9, 0x58, 0xc7, 0xff, 0xb0, 0xa0, 0xbe, 0xa6, 0x25, 0x7f, 0x49, 0x73, 0x68, 0x7d,
- 0xd3, 0xaf, 0x48, 0xdf, 0xcc, 0xff, 0xa6, 0x2f, 0xae, 0x03, 0xac, 0xf5, 0x3c, 0x7a, 0xb5, 0xd7,
- 0x67, 0x34, 0x98, 0xd0, 0x89, 0x7c, 0x9e, 0x8e, 0x53, 0x02, 0xa1, 0x5b, 0x5a, 0xcf, 0x65, 0x23,
- 0x0f, 0xbf, 0x0a, 0x35, 0xd2, 0xaf, 0xd0, 0x6d, 0x99, 0xb1, 0x14, 0xb5, 0x03, 0xf9, 0x2d, 0xa1,
- 0x9e, 0x2c, 0xa9, 0x89, 0x67, 0x94, 0x58, 0xf7, 0xf6, 0x65, 0xb5, 0xf9, 0x85, 0x17, 0x5d, 0x48,
- 0xc4, 0xab, 0x4f, 0x2b, 0xdc, 0xf5, 0x98, 0xf3, 0xd8, 0x58, 0x4c, 0xf4, 0x0e, 0xf8, 0x83, 0x38,
- 0x37, 0x09, 0x73, 0xa9, 0xdc, 0x74, 0x06, 0xec, 0x80, 0x6e, 0xe9, 0x22, 0x8a, 0x62, 0x01, 0x22,
- 0x4a, 0x31, 0x8f, 0xff, 0x62, 0xc1, 0xdc, 0x35, 0xca, 0x92, 0xd7, 0x93, 0xd7, 0xc8, 0xd8, 0xf8,
- 0x23, 0x38, 0x6a, 0xc8, 0xaf, 0xb4, 0xbf, 0x30, 0x76, 0x27, 0x79, 0x23, 0xd6, 0x7f, 0xd5, 0xeb,
- 0xd0, 0xc7, 0xaa, 0x97, 0x4b, 0x5e, 0x47, 0xee, 0x40, 0xc9, 0x98, 0x44, 0x57, 0xc6, 0x2e, 0x22,
- 0xc6, 0xcb, 0x4b, 0x54, 0x4c, 0xdb, 0x15, 0xa5, 0x93, 0xec, 0xe6, 0xd4, 0x35, 0x33, 0x2a, 0xda,
- 0xeb, 0x80, 0xc4, 0x0d, 0x56, 0xb0, 0x35, 0xcb, 0x86, 0xc0, 0xde, 0x88, 0x6e, 0x24, 0x11, 0x8c,
- 0x4e, 0x83, 0x1d, 0xf8, 0x8f, 0xf4, 0x0d, 0x73, 0x26, 0xde, 0x92, 0xf8, 0x8f, 0x88, 0x98, 0xc2,
- 0x97, 0x21, 0x43, 0xfc, 0x47, 0xa8, 0x0e, 0x10, 0x38, 0x5e, 0x97, 0xde, 0x8b, 0x1a, 0x9b, 0x32,
- 0x31, 0x30, 0x87, 0x94, 0xf4, 0x65, 0x38, 0x6a, 0x4a, 0x24, 0xdd, 0xdd, 0x84, 0xfc, 0xc7, 0x43,
- 0xd3, 0x5c, 0x95, 0x31, 0x73, 0xc9, 0x1e, 0x59, 0x13, 0xf1, 0x98, 0x81, 0x18, 0x8f, 0x4e, 0x42,
- 0x91, 0x39, 0x1b, 0x7d, 0x7a, 0x2b, 0x4e, 0x40, 0x31, 0x82, 0xcf, 0xf2, 0x9e, 0xec, 0x9e, 0x71,
- 0x37, 0x89, 0x11, 0xe8, 0x2c, 0xcc, 0xc5, 0x32, 0xdf, 0x09, 0xe8, 0x56, 0xef, 0xb1, 0xf0, 0x70,
- 0x99, 0x1c, 0xc0, 0xa3, 0x45, 0x38, 0x12, 0xe3, 0xd6, 0xc5, 0x1d, 0xc0, 0x16, 0xa4, 0xe3, 0x68,
- 0x6e, 0x1b, 0xa1, 0xee, 0x87, 0x0f, 0x87, 0x4e, 0x5f, 0x64, 0xd5, 0x32, 0x31, 0x30, 0xf8, 0xaf,
- 0x16, 0x1c, 0x95, 0xae, 0xe6, 0xdd, 0xf8, 0xeb, 0x18, 0xf5, 0x9f, 0x59, 0x80, 0x4c, 0x0d, 0x54,
- 0x68, 0x7d, 0xc7, 0x7c, 0x66, 0xe1, 0x97, 0x8c, 0x92, 0x68, 0x35, 0x25, 0x2a, 0x7e, 0x29, 0xc1,
- 0x90, 0x13, 0x17, 0x15, 0xd9, 0xf3, 0xda, 0xb2, 0x97, 0x95, 0x18, 0xa2, 0xbe, 0xbc, 0x05, 0xdf,
- 0xd8, 0x65, 0x34, 0x54, 0x9d, 0xa8, 0x68, 0xc1, 0x05, 0x82, 0xc8, 0x0f, 0xdf, 0x8b, 0x7a, 0x4c,
- 0x44, 0x8d, 0x1d, 0xef, 0xa5, 0x50, 0x44, 0x0f, 0xf0, 0x9f, 0xd2, 0x30, 0x73, 0xcf, 0xef, 0x0f,
- 0xe3, 0x92, 0xf5, 0x3a, 0xa5, 0xf2, 0x44, 0x7b, 0x9c, 0xd5, 0xed, 0x31, 0x02, 0x3b, 0x64, 0x74,
- 0x20, 0x22, 0x2b, 0x43, 0xc4, 0x18, 0x61, 0x28, 0x33, 0x27, 0xe8, 0x52, 0x26, 0xfb, 0x8e, 0x6a,
- 0x4e, 0x5c, 0x08, 0x13, 0x38, 0xb4, 0x00, 0x25, 0xa7, 0xdb, 0x0d, 0x68, 0xd7, 0x61, 0xb4, 0xbd,
- 0x5b, 0xcd, 0x8b, 0xcd, 0x4c, 0x14, 0xfe, 0x29, 0xcc, 0x6a, 0x63, 0x29, 0x97, 0xbe, 0x0b, 0xf9,
- 0x4f, 0x04, 0x66, 0xc2, 0x93, 0x94, 0x24, 0x55, 0x69, 0x4c, 0x93, 0x25, 0xdf, 0xaf, 0xb5, 0xcc,
- 0xf8, 0x3a, 0xe4, 0x24, 0x39, 0x3a, 0x69, 0x76, 0x0f, 0xf2, 0xed, 0x84, 0xc3, 0xaa, 0x15, 0xc0,
- 0x90, 0x93, 0x8c, 0x94, 0xe3, 0x45, 0x6c, 0x48, 0x0c, 0x51, 0xdf, 0xb3, 0x67, 0xa0, 0x18, 0x3d,
- 0x3e, 0xa3, 0x12, 0xe4, 0xaf, 0xde, 0x26, 0x3f, 0xb9, 0x42, 0x56, 0xe6, 0x52, 0xa8, 0x0c, 0x85,
- 0xf6, 0x95, 0xe5, 0x1b, 0x02, 0xb2, 0x96, 0xbe, 0xb6, 0x75, 0x66, 0x09, 0xd0, 0x0f, 0x20, 0x2b,
- 0xd3, 0xc5, 0xb1, 0x58, 0x7e, 0xf3, 0x99, 0xb7, 0x76, 0xfc, 0x00, 0x5e, 0x5a, 0x00, 0xa7, 0xde,
- 0xb5, 0xd0, 0x2d, 0x28, 0x09, 0xa4, 0x7a, 0xd0, 0x39, 0x39, 0xfe, 0xae, 0x92, 0xe0, 0x74, 0xea,
- 0x90, 0x59, 0x83, 0xdf, 0x25, 0xc8, 0x0a, 0x9f, 0x98, 0xd2, 0x98, 0x0f, 0x72, 0xa6, 0x34, 0x89,
- 0x27, 0x2e, 0x9c, 0x42, 0xdf, 0x07, 0x9b, 0xb7, 0x38, 0xc8, 0x28, 0x2a, 0xc6, 0x3b, 0x4c, 0xed,
- 0xd8, 0x38, 0xda, 0xd8, 0xf6, 0x83, 0xe8, 0x39, 0xe9, 0xf8, 0x78, 0x5b, 0xab, 0x97, 0x57, 0x0f,
- 0x4e, 0x44, 0x3b, 0xdf, 0x96, 0xef, 0x1e, 0xba, 0xb9, 0x42, 0xa7, 0x92, 0x5b, 0x8d, 0xf5, 0x62,
- 0xb5, 0xfa, 0x61, 0xd3, 0x11, 0xc3, 0x35, 0x28, 0x19, 0x8d, 0x8d, 0x69, 0xd6, 0x83, 0x5d, 0x99,
- 0x69, 0xd6, 0x09, 0xdd, 0x10, 0x4e, 0xa1, 0x6b, 0x50, 0xe0, 0xa5, 0x98, 0x67, 0x24, 0x74, 0x62,
- 0xbc, 0xe2, 0x1a, 0x99, 0xb6, 0x76, 0x72, 0xf2, 0x64, 0xc4, 0xe8, 0x47, 0x50, 0xbc, 0x46, 0x99,
- 0x0a, 0xd7, 0xe3, 0xe3, 0xf1, 0x3e, 0xc1, 0x52, 0xc9, 0x33, 0x83, 0x53, 0x4b, 0x3f, 0xd7, 0x7f,
- 0x4a, 0xad, 0x38, 0xcc, 0x41, 0xb7, 0x61, 0x56, 0x08, 0x16, 0xfd, 0x6b, 0x95, 0x08, 0xa0, 0x03,
- 0x7f, 0x91, 0x25, 0x02, 0xe8, 0xe0, 0x5f, 0x65, 0x38, 0xd5, 0xbe, 0xff, 0xf4, 0x59, 0x3d, 0xf5,
- 0xc5, 0xb3, 0x7a, 0xea, 0xab, 0x67, 0x75, 0xeb, 0x97, 0x7b, 0x75, 0xeb, 0x8f, 0x7b, 0x75, 0xeb,
- 0xc9, 0x5e, 0xdd, 0x7a, 0xba, 0x57, 0xb7, 0xfe, 0xb5, 0x57, 0xb7, 0xfe, 0xbd, 0x57, 0x4f, 0x7d,
- 0xb5, 0x57, 0xb7, 0x3e, 0x7d, 0x5e, 0x4f, 0x3d, 0x7d, 0x5e, 0x4f, 0x7d, 0xf1, 0xbc, 0x9e, 0xba,
- 0xff, 0xe6, 0x7f, 0xb9, 0xe8, 0xc9, 0x46, 0x34, 0x27, 0x3e, 0x17, 0xfe, 0x13, 0x00, 0x00, 0xff,
- 0xff, 0xb0, 0x19, 0x00, 0xf7, 0x53, 0x1c, 0x00, 0x00,
+ 0x2d, 0xe6, 0x6b, 0x77, 0x96, 0xa2, 0xdd, 0x50, 0x75, 0x51, 0xf8, 0xc2, 0x9d, 0x79, 0xf3, 0xe6,
+ 0xcd, 0xfb, 0x9a, 0xf7, 0x31, 0x84, 0x13, 0x83, 0x9d, 0x6e, 0xab, 0xef, 0x77, 0x07, 0x81, 0xcf,
+ 0xfc, 0x68, 0xd0, 0x14, 0xbf, 0xa8, 0xa0, 0xe7, 0xb5, 0x4a, 0xd7, 0xef, 0xfa, 0x12, 0x87, 0x8f,
+ 0xe4, 0x7a, 0xad, 0xd1, 0xf5, 0xfd, 0x6e, 0x9f, 0xb6, 0xc4, 0x6c, 0x63, 0xb8, 0xd5, 0x62, 0x3d,
+ 0x97, 0x86, 0xcc, 0x71, 0x07, 0x0a, 0x61, 0x41, 0x51, 0x7f, 0xd8, 0x77, 0xfd, 0x0e, 0xed, 0xb7,
+ 0x42, 0xe6, 0xb0, 0x50, 0xfe, 0x2a, 0x8c, 0x79, 0x8e, 0x31, 0x18, 0x86, 0xdb, 0xe2, 0x47, 0x02,
+ 0x71, 0x05, 0xd0, 0x3a, 0x0b, 0xa8, 0xe3, 0x12, 0x87, 0xd1, 0x90, 0xd0, 0x87, 0x43, 0x1a, 0x32,
+ 0x7c, 0x13, 0xe6, 0x13, 0xd0, 0x70, 0xe0, 0x7b, 0x21, 0x45, 0x17, 0xa1, 0x14, 0xc6, 0xe0, 0xaa,
+ 0xb5, 0x90, 0x59, 0x2c, 0x2d, 0x55, 0x9a, 0x91, 0x28, 0xf1, 0x1e, 0x62, 0x22, 0xe2, 0xdf, 0x58,
+ 0x00, 0xf1, 0x1a, 0xaa, 0x03, 0xc8, 0xd5, 0x8f, 0x9c, 0x70, 0xbb, 0x6a, 0x2d, 0x58, 0x8b, 0x36,
+ 0x31, 0x20, 0xe8, 0x1c, 0x1c, 0x8d, 0x67, 0xb7, 0xfc, 0xf5, 0x6d, 0x27, 0xe8, 0x54, 0xd3, 0x02,
+ 0xed, 0xe0, 0x02, 0x42, 0x60, 0x07, 0x0e, 0xa3, 0xd5, 0xcc, 0x82, 0xb5, 0x98, 0x21, 0x62, 0x8c,
+ 0x8e, 0x41, 0x8e, 0x51, 0xcf, 0xf1, 0x58, 0xd5, 0x5e, 0xb0, 0x16, 0x8b, 0x44, 0xcd, 0x38, 0x9c,
+ 0xcb, 0x4e, 0xc3, 0x6a, 0x76, 0xc1, 0x5a, 0x9c, 0x21, 0x6a, 0x86, 0x3f, 0xcf, 0x40, 0xf9, 0xe3,
+ 0x21, 0x0d, 0x76, 0x95, 0x02, 0x50, 0x1d, 0x0a, 0x21, 0xed, 0xd3, 0x4d, 0xe6, 0x07, 0x82, 0xc1,
+ 0x62, 0x3b, 0x5d, 0xb5, 0x48, 0x04, 0x43, 0x15, 0xc8, 0xf6, 0x7b, 0x6e, 0x8f, 0x09, 0xb6, 0x66,
+ 0x88, 0x9c, 0xa0, 0x4b, 0x90, 0x0d, 0x99, 0x13, 0x30, 0xc1, 0x4b, 0x69, 0xa9, 0xd6, 0x94, 0x46,
+ 0x6b, 0x6a, 0xa3, 0x35, 0xef, 0x6a, 0xa3, 0xb5, 0x0b, 0x4f, 0x46, 0x8d, 0xd4, 0x67, 0xff, 0x68,
+ 0x58, 0x44, 0x6e, 0x41, 0x17, 0x21, 0x43, 0xbd, 0x8e, 0xe0, 0xf7, 0x9b, 0xee, 0xe4, 0x1b, 0xd0,
+ 0x79, 0x28, 0x76, 0x7a, 0x01, 0xdd, 0x64, 0x3d, 0xdf, 0x13, 0x52, 0xcd, 0x2e, 0xcd, 0xc7, 0x16,
+ 0x59, 0xd1, 0x4b, 0x24, 0xc6, 0x42, 0xe7, 0x20, 0x17, 0x72, 0xd5, 0x85, 0xd5, 0xfc, 0x42, 0x66,
+ 0xb1, 0xd8, 0xae, 0xec, 0x8f, 0x1a, 0x73, 0x12, 0x72, 0xce, 0x77, 0x7b, 0x8c, 0xba, 0x03, 0xb6,
+ 0x4b, 0x14, 0x0e, 0x3a, 0x0b, 0xf9, 0x0e, 0xed, 0x53, 0x6e, 0xf0, 0x82, 0x30, 0xf8, 0x9c, 0x41,
+ 0x5e, 0x2c, 0x10, 0x8d, 0x80, 0xee, 0x83, 0x3d, 0xe8, 0x3b, 0x5e, 0xb5, 0x28, 0xa4, 0x98, 0x8d,
+ 0x11, 0xef, 0xf4, 0x1d, 0xaf, 0x7d, 0xf1, 0xcb, 0x51, 0x63, 0xa9, 0xdb, 0x63, 0xdb, 0xc3, 0x8d,
+ 0xe6, 0xa6, 0xef, 0xb6, 0xba, 0x81, 0xb3, 0xe5, 0x78, 0x4e, 0xab, 0xef, 0xef, 0xf4, 0x5a, 0xdc,
+ 0x39, 0x1f, 0x0e, 0x69, 0xd0, 0xa3, 0x41, 0x8b, 0xd3, 0x68, 0x0a, 0x7b, 0xf0, 0x7d, 0x44, 0xd0,
+ 0xbc, 0x6e, 0x17, 0x72, 0x73, 0x79, 0x3c, 0x4a, 0x03, 0x5a, 0x77, 0xdc, 0x41, 0x9f, 0x4e, 0x65,
+ 0xaf, 0xc8, 0x32, 0xe9, 0x43, 0x5b, 0x26, 0x33, 0xad, 0x65, 0x62, 0x35, 0xdb, 0xd3, 0xa9, 0x39,
+ 0xfb, 0x4d, 0xd5, 0x9c, 0x7b, 0xf5, 0x6a, 0xc6, 0x55, 0xb0, 0xf9, 0x0c, 0xcd, 0x41, 0x26, 0x70,
+ 0x1e, 0x09, 0x65, 0x96, 0x09, 0x1f, 0xe2, 0x35, 0xc8, 0x49, 0x46, 0x50, 0x6d, 0x5c, 0xdb, 0xc9,
+ 0x9b, 0x11, 0x6b, 0x3a, 0xa3, 0x75, 0x38, 0x17, 0xeb, 0x30, 0x23, 0xb4, 0x83, 0x7f, 0x6b, 0xc1,
+ 0x8c, 0x32, 0xa1, 0x8a, 0x2e, 0x1b, 0x90, 0x97, 0xb7, 0x5b, 0x47, 0x96, 0xe3, 0xe3, 0x91, 0xe5,
+ 0x4a, 0xc7, 0x19, 0x30, 0x1a, 0xb4, 0x5b, 0x4f, 0x46, 0x0d, 0xeb, 0xcb, 0x51, 0xe3, 0xad, 0x97,
+ 0x49, 0x29, 0x82, 0x9c, 0x8a, 0x3a, 0x9a, 0x30, 0x7a, 0x5b, 0x70, 0xc7, 0x42, 0xe5, 0x07, 0x47,
+ 0x9a, 0x32, 0x40, 0xae, 0x7a, 0x5d, 0x1a, 0x72, 0xca, 0x36, 0x37, 0x21, 0x91, 0x38, 0xf8, 0xe7,
+ 0x30, 0x9f, 0x70, 0x35, 0xc5, 0xe7, 0xfb, 0x90, 0x0b, 0xb9, 0x02, 0x35, 0x9b, 0x86, 0xa1, 0xd6,
+ 0x05, 0xbc, 0x3d, 0xab, 0xf8, 0xcb, 0xc9, 0x39, 0x51, 0xf8, 0xd3, 0x9d, 0xfe, 0x57, 0x0b, 0xca,
+ 0x6b, 0xce, 0x06, 0xed, 0x6b, 0x1f, 0x47, 0x60, 0x7b, 0x8e, 0x4b, 0x95, 0xc6, 0xc5, 0x98, 0x07,
+ 0xb4, 0x4f, 0x9c, 0xfe, 0x90, 0x4a, 0x92, 0x05, 0xa2, 0x66, 0xd3, 0x46, 0x22, 0xeb, 0xd0, 0x91,
+ 0xc8, 0x8a, 0xfd, 0xbd, 0x02, 0x59, 0xee, 0x59, 0xbb, 0x22, 0x0a, 0x15, 0x89, 0x9c, 0xe0, 0xb7,
+ 0x60, 0x46, 0x49, 0xa1, 0xd4, 0x17, 0xb3, 0xcc, 0xd5, 0x57, 0xd4, 0x2c, 0x63, 0x17, 0x72, 0x52,
+ 0xdb, 0xe8, 0x4d, 0x28, 0x46, 0xd9, 0x4d, 0x48, 0x9b, 0x69, 0xe7, 0xf6, 0x47, 0x8d, 0x34, 0x0b,
+ 0x49, 0xbc, 0x80, 0x1a, 0x90, 0x15, 0x3b, 0x85, 0xe4, 0x56, 0xbb, 0xb8, 0x3f, 0x6a, 0x48, 0x00,
+ 0x91, 0x1f, 0x74, 0x12, 0xec, 0x6d, 0x9e, 0x60, 0xb8, 0x0a, 0xec, 0x76, 0x61, 0x7f, 0xd4, 0x10,
+ 0x73, 0x22, 0x7e, 0xf1, 0x35, 0x28, 0xaf, 0xd1, 0xae, 0xb3, 0xb9, 0xab, 0x0e, 0xad, 0x68, 0x72,
+ 0xfc, 0x40, 0x4b, 0xd3, 0x38, 0x0d, 0xe5, 0xe8, 0xc4, 0x07, 0x6e, 0xa8, 0x9c, 0xba, 0x14, 0xc1,
+ 0x6e, 0x86, 0xf8, 0xd7, 0x16, 0x28, 0x3b, 0x23, 0x0c, 0xb9, 0x3e, 0x97, 0x35, 0x54, 0x31, 0x08,
+ 0xf6, 0x47, 0x0d, 0x05, 0x21, 0xea, 0x8b, 0x2e, 0x43, 0x3e, 0x14, 0x27, 0x72, 0x62, 0xe3, 0xee,
+ 0x23, 0x16, 0xda, 0x47, 0xb8, 0x1b, 0xec, 0x8f, 0x1a, 0x1a, 0x91, 0xe8, 0x01, 0x6a, 0x26, 0x32,
+ 0xa7, 0x14, 0x6c, 0x76, 0x7f, 0xd4, 0x30, 0xa0, 0x66, 0x26, 0xc5, 0x5f, 0x5b, 0x50, 0xba, 0xeb,
+ 0xf4, 0x22, 0x17, 0xaa, 0x6a, 0x13, 0xc5, 0x31, 0x52, 0x02, 0xf8, 0x95, 0xee, 0xd0, 0xbe, 0xb3,
+ 0x7b, 0xd5, 0x0f, 0x04, 0xdd, 0x19, 0x12, 0xcd, 0xe3, 0x64, 0x67, 0x4f, 0x4c, 0x76, 0xd9, 0xe9,
+ 0x43, 0xea, 0xff, 0x30, 0x80, 0x5d, 0xb7, 0x0b, 0xe9, 0xb9, 0x0c, 0xfe, 0xa3, 0x05, 0x65, 0x29,
+ 0xb9, 0x72, 0xbb, 0x9f, 0x40, 0x4e, 0x2a, 0x46, 0xc8, 0xfe, 0x92, 0xe0, 0xf2, 0xf6, 0x34, 0x81,
+ 0x45, 0xd1, 0x44, 0xdf, 0x87, 0xd9, 0x4e, 0xe0, 0x0f, 0x06, 0xb4, 0xb3, 0xae, 0x42, 0x58, 0x7a,
+ 0x3c, 0x84, 0xad, 0x98, 0xeb, 0x64, 0x0c, 0x1d, 0xff, 0xcd, 0x82, 0x19, 0x15, 0x2d, 0x94, 0xad,
+ 0x22, 0xfd, 0x5a, 0x87, 0x4e, 0x59, 0xe9, 0x69, 0x53, 0xd6, 0x31, 0xc8, 0x75, 0x03, 0x7f, 0x38,
+ 0x08, 0xab, 0x19, 0x79, 0x37, 0xe5, 0x6c, 0xba, 0x54, 0x86, 0xaf, 0xc3, 0xac, 0x16, 0xe5, 0x05,
+ 0x21, 0xb3, 0x36, 0x1e, 0x32, 0x57, 0x3b, 0xd4, 0x63, 0xbd, 0xad, 0x5e, 0x14, 0x04, 0x15, 0x3e,
+ 0xfe, 0xa5, 0x05, 0x73, 0xe3, 0x28, 0x68, 0xc5, 0xb8, 0x67, 0x9c, 0xdc, 0x99, 0x17, 0x93, 0x6b,
+ 0x8a, 0xe0, 0x13, 0x7e, 0xe8, 0xb1, 0x60, 0x57, 0x93, 0x96, 0x7b, 0x6b, 0xef, 0x41, 0xc9, 0x58,
+ 0xe4, 0x29, 0x6a, 0x87, 0xaa, 0x9b, 0x41, 0xf8, 0x30, 0x0e, 0x09, 0x69, 0x19, 0xd0, 0xc4, 0x04,
+ 0xff, 0xca, 0x82, 0x99, 0x84, 0x2d, 0xd1, 0xfb, 0x60, 0x6f, 0x05, 0xbe, 0x3b, 0x95, 0xa1, 0xc4,
+ 0x0e, 0xf4, 0x6d, 0x48, 0x33, 0x7f, 0x2a, 0x33, 0xa5, 0x99, 0xcf, 0xad, 0xa4, 0xc4, 0xcf, 0xc8,
+ 0xea, 0x56, 0xce, 0xf0, 0x7b, 0x50, 0x14, 0x02, 0xdd, 0x71, 0x7a, 0xc1, 0xc4, 0x6c, 0x31, 0x59,
+ 0xa0, 0xcb, 0x70, 0x44, 0x46, 0xc2, 0xc9, 0x9b, 0xcb, 0x93, 0x36, 0x97, 0xf5, 0xe6, 0x13, 0x90,
+ 0x5d, 0xde, 0x1e, 0x7a, 0x3b, 0x7c, 0x4b, 0xc7, 0x61, 0x8e, 0xde, 0xc2, 0xc7, 0xf8, 0x0d, 0x98,
+ 0xe7, 0x77, 0x90, 0x06, 0xe1, 0xb2, 0x3f, 0xf4, 0x98, 0xee, 0x2e, 0xce, 0x41, 0x25, 0x09, 0x56,
+ 0x5e, 0x52, 0x81, 0xec, 0x26, 0x07, 0x08, 0x1a, 0x33, 0x44, 0x4e, 0xf0, 0xef, 0x2c, 0x40, 0xd7,
+ 0x28, 0x13, 0xa7, 0xac, 0xae, 0x44, 0xd7, 0xa3, 0x06, 0x05, 0xd7, 0x61, 0x9b, 0xdb, 0x34, 0x08,
+ 0x75, 0x0d, 0xa2, 0xe7, 0xff, 0x8f, 0x6a, 0x0f, 0x9f, 0x87, 0xf9, 0x04, 0x97, 0x4a, 0xa6, 0x1a,
+ 0x14, 0x36, 0x15, 0x4c, 0xe5, 0xbb, 0x68, 0x8e, 0xff, 0x94, 0x86, 0x82, 0xd8, 0x40, 0xe8, 0x16,
+ 0x3a, 0x0f, 0xa5, 0xad, 0x9e, 0xd7, 0xa5, 0xc1, 0x20, 0xe8, 0x29, 0x15, 0xd8, 0xed, 0x23, 0xfb,
+ 0xa3, 0x86, 0x09, 0x26, 0xe6, 0x04, 0xbd, 0x03, 0xf9, 0x61, 0x48, 0x83, 0x07, 0x3d, 0x79, 0xd3,
+ 0x8b, 0xed, 0xca, 0xde, 0xa8, 0x91, 0xfb, 0x61, 0x48, 0x83, 0xd5, 0x15, 0x9e, 0x79, 0x86, 0x62,
+ 0x44, 0xe4, 0xb7, 0x83, 0x6e, 0x28, 0x37, 0x15, 0x45, 0x58, 0xfb, 0x3b, 0x9c, 0xfd, 0xb1, 0x50,
+ 0x37, 0x08, 0x7c, 0x97, 0xb2, 0x6d, 0x3a, 0x0c, 0x5b, 0x9b, 0xbe, 0xeb, 0xfa, 0x5e, 0x4b, 0xf4,
+ 0x92, 0x42, 0x68, 0x9e, 0x3e, 0xf9, 0x76, 0xe5, 0xb9, 0x77, 0x21, 0xcf, 0xb6, 0x03, 0x7f, 0xd8,
+ 0xdd, 0x16, 0x59, 0x21, 0xd3, 0xbe, 0x34, 0x3d, 0x3d, 0x4d, 0x81, 0xe8, 0x01, 0x3a, 0xcd, 0xb5,
+ 0x45, 0x37, 0x77, 0xc2, 0xa1, 0x2b, 0x3b, 0xb4, 0x76, 0x76, 0x7f, 0xd4, 0xb0, 0xde, 0x21, 0x11,
+ 0x18, 0x7f, 0x9a, 0x86, 0x86, 0x70, 0xd4, 0x7b, 0xa2, 0x6c, 0xb8, 0xea, 0x07, 0x37, 0x29, 0x0b,
+ 0x7a, 0x9b, 0xb7, 0x1c, 0x97, 0x6a, 0xdf, 0x68, 0x40, 0xc9, 0x15, 0xc0, 0x07, 0xc6, 0x15, 0x00,
+ 0x37, 0xc2, 0x43, 0xa7, 0x00, 0xc4, 0x9d, 0x91, 0xeb, 0xf2, 0x36, 0x14, 0x05, 0x44, 0x2c, 0x2f,
+ 0x27, 0x34, 0xd5, 0x9a, 0x52, 0x32, 0xa5, 0xa1, 0xd5, 0x71, 0x0d, 0x4d, 0x4d, 0x27, 0x52, 0x8b,
+ 0xe9, 0xeb, 0xd9, 0xa4, 0xaf, 0xe3, 0xbf, 0x5b, 0x50, 0x5f, 0xd3, 0x9c, 0x1f, 0x52, 0x1d, 0x5a,
+ 0xde, 0xf4, 0x2b, 0x92, 0x37, 0xf3, 0xdf, 0xc9, 0x8b, 0xeb, 0x00, 0x6b, 0x3d, 0x8f, 0x5e, 0xed,
+ 0xf5, 0x19, 0x0d, 0x26, 0x74, 0x22, 0x9f, 0x66, 0xe2, 0x90, 0x40, 0xe8, 0x96, 0x96, 0x73, 0xd9,
+ 0x88, 0xc3, 0xaf, 0x42, 0x8c, 0xf4, 0x2b, 0x34, 0x5b, 0x66, 0x2c, 0x44, 0xed, 0x40, 0x7e, 0x4b,
+ 0x88, 0x27, 0x53, 0x6a, 0xe2, 0x19, 0x25, 0x96, 0xbd, 0x7d, 0x59, 0x1d, 0x7e, 0xe1, 0x65, 0x05,
+ 0x89, 0x78, 0xf5, 0x69, 0x85, 0xbb, 0x1e, 0x73, 0x1e, 0x1b, 0x9b, 0x89, 0x3e, 0x01, 0xfd, 0x4c,
+ 0x95, 0x5b, 0xd9, 0x89, 0xe5, 0x96, 0xbe, 0xb9, 0x87, 0xef, 0x19, 0x3f, 0x88, 0x63, 0x9f, 0x30,
+ 0x87, 0x8a, 0x7d, 0x67, 0xc0, 0x0e, 0xe8, 0x96, 0x4e, 0xd2, 0x28, 0x3e, 0x36, 0xc2, 0x14, 0xeb,
+ 0xf8, 0xcf, 0x16, 0xcc, 0x5d, 0xa3, 0x2c, 0x59, 0xfe, 0xbc, 0x46, 0xc6, 0xc4, 0x1f, 0xc1, 0x51,
+ 0x83, 0x7f, 0x25, 0xfd, 0x85, 0xb1, 0x9a, 0xe7, 0x8d, 0x58, 0xfe, 0x55, 0xaf, 0x43, 0x1f, 0xab,
+ 0x5e, 0x31, 0x59, 0xee, 0xdc, 0x81, 0x92, 0xb1, 0x88, 0xae, 0x8c, 0x15, 0x3a, 0xc6, 0xcb, 0x4e,
+ 0x94, 0xac, 0xdb, 0x15, 0x25, 0x93, 0xec, 0x16, 0x55, 0x19, 0x1b, 0x15, 0x05, 0xeb, 0x80, 0x84,
+ 0xb9, 0x04, 0x59, 0x33, 0x2d, 0x09, 0xe8, 0x8d, 0xa8, 0xe2, 0x89, 0xe6, 0xe8, 0x34, 0xd8, 0x81,
+ 0xff, 0x48, 0x57, 0xb0, 0x33, 0xf1, 0x91, 0xc4, 0x7f, 0x44, 0xc4, 0x12, 0xbe, 0x0c, 0x19, 0xe2,
+ 0x3f, 0x42, 0x75, 0x80, 0xc0, 0xf1, 0xba, 0xf4, 0x5e, 0xd4, 0x38, 0x95, 0x89, 0x01, 0x79, 0x41,
+ 0xc9, 0xb0, 0x0c, 0x47, 0x4d, 0x8e, 0xa4, 0xb9, 0x9b, 0x90, 0xff, 0x78, 0x68, 0xaa, 0xab, 0x32,
+ 0xa6, 0x2e, 0xd9, 0x83, 0x6b, 0x24, 0xee, 0x33, 0x10, 0xc3, 0xd1, 0x49, 0x28, 0x32, 0x67, 0xa3,
+ 0x4f, 0x6f, 0xc5, 0x01, 0x2e, 0x06, 0xf0, 0x55, 0xde, 0xf3, 0xdd, 0x33, 0x6a, 0x9f, 0x18, 0x80,
+ 0xce, 0xc2, 0x5c, 0xcc, 0xf3, 0x9d, 0x80, 0x6e, 0xf5, 0x1e, 0x0b, 0x0b, 0x97, 0xc9, 0x01, 0x38,
+ 0x5a, 0x84, 0x23, 0x31, 0x6c, 0x5d, 0xd4, 0x18, 0xb6, 0x40, 0x1d, 0x07, 0x73, 0xdd, 0x08, 0x71,
+ 0x3f, 0x7c, 0x38, 0x74, 0xfa, 0xe2, 0xe6, 0x95, 0x89, 0x01, 0xc1, 0x7f, 0xb1, 0xe0, 0xa8, 0x34,
+ 0x35, 0xef, 0xf6, 0x5f, 0x47, 0xaf, 0xff, 0xdc, 0x02, 0x64, 0x4a, 0xa0, 0x5c, 0xeb, 0x5b, 0xe6,
+ 0x33, 0x0e, 0x2f, 0x62, 0x4a, 0xa2, 0x95, 0x95, 0xa0, 0xf8, 0x25, 0x06, 0x43, 0x4e, 0x14, 0x42,
+ 0xb2, 0xa7, 0xb6, 0x65, 0xaf, 0x2c, 0x21, 0x44, 0x7d, 0x79, 0x8b, 0xbf, 0xb1, 0xcb, 0x68, 0xa8,
+ 0x3a, 0x5d, 0xd1, 0xe2, 0x0b, 0x00, 0x91, 0x1f, 0x7e, 0x16, 0xf5, 0x98, 0xf0, 0x1a, 0x3b, 0x3e,
+ 0x4b, 0x81, 0x88, 0x1e, 0xe0, 0x3f, 0xa4, 0x61, 0xe6, 0x9e, 0xdf, 0x1f, 0xc6, 0x29, 0xf1, 0x75,
+ 0x4a, 0x15, 0x89, 0xf6, 0x3b, 0xab, 0xdb, 0x6f, 0x04, 0x76, 0xc8, 0xe8, 0x40, 0x78, 0x56, 0x86,
+ 0x88, 0x31, 0xc2, 0x50, 0x66, 0x4e, 0xd0, 0xa5, 0x4c, 0xf6, 0x35, 0xd5, 0x9c, 0x28, 0x38, 0x13,
+ 0x30, 0xb4, 0x00, 0x25, 0xa7, 0xdb, 0x0d, 0x68, 0xd7, 0x61, 0xb4, 0xbd, 0x5b, 0xcd, 0x8b, 0xc3,
+ 0x4c, 0x10, 0xfe, 0x31, 0xcc, 0x6a, 0x65, 0x29, 0x93, 0xbe, 0x0b, 0xf9, 0x4f, 0x04, 0x64, 0xc2,
+ 0x93, 0x97, 0x44, 0x55, 0x61, 0x4c, 0xa3, 0x25, 0xdf, 0xc7, 0x35, 0xcf, 0xf8, 0x3a, 0xe4, 0x24,
+ 0x3a, 0x3a, 0x69, 0x76, 0x27, 0xf2, 0x6d, 0x86, 0xcf, 0x55, 0xab, 0x81, 0x21, 0x27, 0x09, 0x29,
+ 0xc3, 0x0b, 0xdf, 0x90, 0x10, 0xa2, 0xbe, 0x67, 0xcf, 0x40, 0x31, 0x7a, 0xdc, 0x46, 0x25, 0xc8,
+ 0x5f, 0xbd, 0x4d, 0x7e, 0x74, 0x85, 0xac, 0xcc, 0xa5, 0x50, 0x19, 0x0a, 0xed, 0x2b, 0xcb, 0x37,
+ 0xc4, 0xcc, 0x5a, 0xfa, 0xda, 0xd6, 0x91, 0x25, 0x40, 0xdf, 0x83, 0xac, 0x0c, 0x17, 0xc7, 0x62,
+ 0xfe, 0xcd, 0x67, 0xe4, 0xda, 0xf1, 0x03, 0x70, 0xa9, 0x01, 0x9c, 0x7a, 0xd7, 0x42, 0xb7, 0xa0,
+ 0x24, 0x80, 0xea, 0xc1, 0xe8, 0xe4, 0xf8, 0xbb, 0x4d, 0x82, 0xd2, 0xa9, 0x17, 0xac, 0x1a, 0xf4,
+ 0x2e, 0x41, 0x56, 0xd8, 0xc4, 0xe4, 0xc6, 0x7c, 0xf0, 0x33, 0xb9, 0x49, 0x3c, 0xa1, 0xe1, 0x14,
+ 0xfa, 0x2e, 0xd8, 0xbc, 0x85, 0x42, 0x46, 0x52, 0x31, 0xde, 0x79, 0x6a, 0xc7, 0xc6, 0xc1, 0xc6,
+ 0xb1, 0x1f, 0x44, 0xcf, 0x55, 0xc7, 0xc7, 0xdb, 0x66, 0xbd, 0xbd, 0x7a, 0x70, 0x21, 0x3a, 0xf9,
+ 0xb6, 0x7c, 0x57, 0xd1, 0xcd, 0x1b, 0x3a, 0x95, 0x3c, 0x6a, 0xac, 0xd7, 0xab, 0xd5, 0x5f, 0xb4,
+ 0x1c, 0x11, 0x5c, 0x83, 0x92, 0xd1, 0x38, 0x99, 0x6a, 0x3d, 0xd8, 0xf5, 0x99, 0x6a, 0x9d, 0xd0,
+ 0x6d, 0xe1, 0x14, 0xba, 0x06, 0x05, 0x9e, 0x8a, 0x79, 0x44, 0x42, 0x27, 0xc6, 0x33, 0xae, 0x11,
+ 0x69, 0x6b, 0x27, 0x27, 0x2f, 0x46, 0x84, 0x7e, 0x00, 0xc5, 0x6b, 0x94, 0x29, 0x77, 0x3d, 0x3e,
+ 0xee, 0xef, 0x13, 0x34, 0x95, 0xbc, 0x33, 0x38, 0xb5, 0xf4, 0x53, 0xfd, 0xa7, 0xd7, 0x8a, 0xc3,
+ 0x1c, 0x74, 0x1b, 0x66, 0x05, 0x63, 0xd1, 0xbf, 0x62, 0x09, 0x07, 0x3a, 0xf0, 0x17, 0x5c, 0xc2,
+ 0x81, 0x0e, 0xfe, 0x15, 0x87, 0x53, 0xed, 0xfb, 0x4f, 0x9f, 0xd5, 0x53, 0x5f, 0x3c, 0xab, 0xa7,
+ 0xbe, 0x7a, 0x56, 0xb7, 0x7e, 0xb1, 0x57, 0xb7, 0x7e, 0xbf, 0x57, 0xb7, 0x9e, 0xec, 0xd5, 0xad,
+ 0xa7, 0x7b, 0x75, 0xeb, 0x9f, 0x7b, 0x75, 0xeb, 0x5f, 0x7b, 0xf5, 0xd4, 0x57, 0x7b, 0x75, 0xeb,
+ 0xb3, 0xe7, 0xf5, 0xd4, 0xd3, 0xe7, 0xf5, 0xd4, 0x17, 0xcf, 0xeb, 0xa9, 0xfb, 0x6f, 0xfe, 0x87,
+ 0x42, 0x52, 0x36, 0xba, 0x39, 0xf1, 0xb9, 0xf0, 0xef, 0x00, 0x00, 0x00, 0xff, 0xff, 0x3e, 0xbe,
+ 0x5b, 0x4c, 0xb3, 0x1c, 0x00, 0x00,
}
func (x Direction) String() string {
@@ -3772,6 +3775,9 @@ func (this *GetChunkRefRequest) Equal(that interface{}) bool {
return false
}
}
+ if !this.Plan.Equal(that1.Plan) {
+ return false
+ }
return true
}
func (this *GetChunkRefResponse) Equal(that interface{}) bool {
@@ -4586,12 +4592,13 @@ func (this *GetChunkRefRequest) GoString() string {
if this == nil {
return "nil"
}
- s := make([]string, 0, 8)
+ s := make([]string, 0, 9)
s = append(s, "&logproto.GetChunkRefRequest{")
s = append(s, "From: "+fmt.Sprintf("%#v", this.From)+",\n")
s = append(s, "Through: "+fmt.Sprintf("%#v", this.Through)+",\n")
s = append(s, "Matchers: "+fmt.Sprintf("%#v", this.Matchers)+",\n")
s = append(s, "Filters: "+fmt.Sprintf("%#v", this.Filters)+",\n")
+ s = append(s, "Plan: "+fmt.Sprintf("%#v", this.Plan)+",\n")
s = append(s, "}")
return strings.Join(s, "")
}
@@ -6720,6 +6727,16 @@ func (m *GetChunkRefRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ {
+ size := m.Plan.Size()
+ i -= size
+ if _, err := m.Plan.MarshalTo(dAtA[i:]); err != nil {
+ return 0, err
+ }
+ i = encodeVarintLogproto(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2a
if len(m.Filters) > 0 {
for iNdEx := len(m.Filters) - 1; iNdEx >= 0; iNdEx-- {
{
@@ -7942,6 +7959,8 @@ func (m *GetChunkRefRequest) Size() (n int) {
n += 1 + l + sovLogproto(uint64(l))
}
}
+ l = m.Plan.Size()
+ n += 1 + l + sovLogproto(uint64(l))
return n
}
@@ -8620,6 +8639,7 @@ func (this *GetChunkRefRequest) String() string {
`Through:` + fmt.Sprintf("%v", this.Through) + `,`,
`Matchers:` + fmt.Sprintf("%v", this.Matchers) + `,`,
`Filters:` + fmt.Sprintf("%v", this.Filters) + `,`,
+ `Plan:` + fmt.Sprintf("%v", this.Plan) + `,`,
`}`,
}, "")
return s
@@ -13037,6 +13057,39 @@ func (m *GetChunkRefRequest) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Plan", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLogproto
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthLogproto
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthLogproto
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.Plan.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipLogproto(dAtA[iNdEx:])
diff --git a/pkg/logproto/logproto.proto b/pkg/logproto/logproto.proto
index 0fde83d2715d9..bf175168cfd93 100644
--- a/pkg/logproto/logproto.proto
+++ b/pkg/logproto/logproto.proto
@@ -311,10 +311,15 @@ message GetChunkRefRequest {
(gogoproto.nullable) = false
];
string matchers = 3;
+ // TODO(salvacorts): Delete this field once the weekly release is done.
repeated LineFilter filters = 4 [
(gogoproto.customtype) = "github.com/grafana/loki/pkg/logql/syntax.LineFilter",
(gogoproto.nullable) = false
];
+ Plan plan = 5 [
+ (gogoproto.customtype) = "github.com/grafana/loki/pkg/querier/plan.QueryPlan",
+ (gogoproto.nullable) = false
+ ];
}
message GetChunkRefResponse {
diff --git a/pkg/logql/syntax/ast.go b/pkg/logql/syntax/ast.go
index cea41f4d95c5d..26e77779c4b35 100644
--- a/pkg/logql/syntax/ast.go
+++ b/pkg/logql/syntax/ast.go
@@ -54,6 +54,22 @@ func MustClone[T Expr](e T) T {
return copied
}
+func ExtractLineFilters(e Expr) []LineFilterExpr {
+ if e == nil {
+ return nil
+ }
+ var filters []LineFilterExpr
+ visitor := &DepthFirstTraversal{
+ VisitLineFilterFn: func(v RootVisitor, e *LineFilterExpr) {
+ if e != nil {
+ filters = append(filters, *e)
+ }
+ },
+ }
+ e.Accept(visitor)
+ return filters
+}
+
// implicit holds default implementations
type implicit struct{}
diff --git a/pkg/logql/syntax/serialize_test.go b/pkg/logql/syntax/serialize_test.go
index f4051caaf7ea1..2c6bb6f0ef663 100644
--- a/pkg/logql/syntax/serialize_test.go
+++ b/pkg/logql/syntax/serialize_test.go
@@ -30,6 +30,9 @@ func TestJSONSerializationRoundTrip(t *testing.T) {
"regexp": {
query: `{env="prod", app=~"loki.*"} |~ ".*foo.*"`,
},
+ "line filter": {
+ query: `{env="prod", app=~"loki.*"} |= "foo" |= "bar" or "baz" | line_format "blip{{ .foo }}blop" |= "blip"`,
+ },
"vector matching": {
query: `(sum by (cluster)(rate({foo="bar"}[5m])) / ignoring (cluster) count(rate({foo="bar"}[5m])))`,
},
diff --git a/pkg/querier/plan/plan.go b/pkg/querier/plan/plan.go
index 6822932d7b241..d6548537a394c 100644
--- a/pkg/querier/plan/plan.go
+++ b/pkg/querier/plan/plan.go
@@ -4,6 +4,7 @@ import (
"bytes"
"github.com/grafana/loki/pkg/logql/syntax"
+ "github.com/grafana/loki/pkg/util"
)
type QueryPlan struct {
@@ -78,6 +79,20 @@ func (t QueryPlan) Equal(other QueryPlan) bool {
return bytes.Equal(left, right)
}
+func (t QueryPlan) String() string {
+ if t.AST == nil {
+ return ""
+ }
+ return t.AST.String()
+}
+
+func (t *QueryPlan) Hash() uint32 {
+ if t.AST == nil {
+ return 0
+ }
+ return util.HashedQuery(t.AST.String())
+}
+
// countWriter is not writing any bytes. It just counts the bytes that would be
// written.
type countWriter struct {
diff --git a/pkg/storage/chunk/predicate.go b/pkg/storage/chunk/predicate.go
index 9dd769ccc4da1..391b1e9163235 100644
--- a/pkg/storage/chunk/predicate.go
+++ b/pkg/storage/chunk/predicate.go
@@ -1,16 +1,22 @@
package chunk
import (
+ "github.com/grafana/loki/pkg/querier/plan"
"github.com/prometheus/prometheus/model/labels"
-
- "github.com/grafana/loki/pkg/logql/syntax"
)
type Predicate struct {
Matchers []*labels.Matcher
- Filters []syntax.LineFilter
+ plan *plan.QueryPlan
+}
+
+func NewPredicate(m []*labels.Matcher, p *plan.QueryPlan) Predicate {
+ return Predicate{Matchers: m, plan: p}
}
-func NewPredicate(m []*labels.Matcher, f []syntax.LineFilter) Predicate {
- return Predicate{Matchers: m, Filters: f}
+func (p Predicate) Plan() plan.QueryPlan {
+ if p.plan != nil {
+ return *p.plan
+ }
+ return plan.QueryPlan{}
}
diff --git a/pkg/storage/store.go b/pkg/storage/store.go
index de5a60b7f038f..57918066052a7 100644
--- a/pkg/storage/store.go
+++ b/pkg/storage/store.go
@@ -21,10 +21,8 @@ import (
"github.com/grafana/loki/pkg/iter"
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logql"
- "github.com/grafana/loki/pkg/logql/syntax"
"github.com/grafana/loki/pkg/logqlmodel/stats"
"github.com/grafana/loki/pkg/querier/astmapper"
- "github.com/grafana/loki/pkg/querier/plan"
"github.com/grafana/loki/pkg/storage/chunk"
"github.com/grafana/loki/pkg/storage/chunk/cache"
"github.com/grafana/loki/pkg/storage/chunk/client"
@@ -477,34 +475,15 @@ func (s *LokiStore) SelectSeries(ctx context.Context, req logql.SelectLogParams)
return result, nil
}
-func extractLineFilters(p *plan.QueryPlan) []syntax.LineFilter {
- lineFilters := make([]syntax.LineFilter, 0)
- visitor := &syntax.DepthFirstTraversal{
- VisitLineFilterFn: func(v syntax.RootVisitor, e *syntax.LineFilterExpr) {
- if e.Left != nil {
- e.Left.Accept(v)
- }
- if e.Or != nil {
- e.Or.Accept(v)
- }
- lineFilters = append(lineFilters, e.LineFilter)
- },
- }
- p.AST.Accept(visitor)
- return lineFilters
-}
-
// SelectLogs returns an iterator that will query the store for more chunks while iterating instead of fetching all chunks upfront
// for that request.
func (s *LokiStore) SelectLogs(ctx context.Context, req logql.SelectLogParams) (iter.EntryIterator, error) {
- lf := extractLineFilters(req.Plan)
-
matchers, from, through, err := decodeReq(req)
if err != nil {
return nil, err
}
- lazyChunks, err := s.lazyChunks(ctx, from, through, chunk.NewPredicate(matchers, lf))
+ lazyChunks, err := s.lazyChunks(ctx, from, through, chunk.NewPredicate(matchers, req.Plan))
if err != nil {
return nil, err
}
@@ -546,14 +525,12 @@ func (s *LokiStore) SelectLogs(ctx context.Context, req logql.SelectLogParams) (
}
func (s *LokiStore) SelectSamples(ctx context.Context, req logql.SelectSampleParams) (iter.SampleIterator, error) {
- lf := extractLineFilters(req.Plan)
-
matchers, from, through, err := decodeReq(req)
if err != nil {
return nil, err
}
- lazyChunks, err := s.lazyChunks(ctx, from, through, chunk.NewPredicate(matchers, lf))
+ lazyChunks, err := s.lazyChunks(ctx, from, through, chunk.NewPredicate(matchers, req.Plan))
if err != nil {
return nil, err
}
diff --git a/pkg/storage/stores/series/series_index_gateway_store.go b/pkg/storage/stores/series/series_index_gateway_store.go
index d937042275b86..00059fe16c1a3 100644
--- a/pkg/storage/stores/series/series_index_gateway_store.go
+++ b/pkg/storage/stores/series/series_index_gateway_store.go
@@ -33,7 +33,7 @@ func (c *IndexGatewayClientStore) GetChunkRefs(ctx context.Context, _ string, fr
From: from,
Through: through,
Matchers: (&syntax.MatchersExpr{Mts: predicate.Matchers}).String(),
- Filters: predicate.Filters,
+ Plan: predicate.Plan(),
})
if err != nil {
return nil, err
diff --git a/pkg/storage/stores/shipper/indexshipper/indexgateway/gateway.go b/pkg/storage/stores/shipper/indexshipper/indexgateway/gateway.go
index 8b0f186386bdf..25ce68a3bffd8 100644
--- a/pkg/storage/stores/shipper/indexshipper/indexgateway/gateway.go
+++ b/pkg/storage/stores/shipper/indexshipper/indexgateway/gateway.go
@@ -6,9 +6,6 @@ import (
"sort"
"sync"
- "github.com/grafana/loki/pkg/storage/chunk"
- "github.com/grafana/loki/pkg/storage/stores/index/seriesvolume"
-
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/dskit/services"
@@ -20,9 +17,12 @@ import (
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/logql/syntax"
+ "github.com/grafana/loki/pkg/querier/plan"
+ "github.com/grafana/loki/pkg/storage/chunk"
"github.com/grafana/loki/pkg/storage/config"
"github.com/grafana/loki/pkg/storage/stores"
"github.com/grafana/loki/pkg/storage/stores/index"
+ "github.com/grafana/loki/pkg/storage/stores/index/seriesvolume"
seriesindex "github.com/grafana/loki/pkg/storage/stores/series/index"
"github.com/grafana/loki/pkg/util/spanlogger"
)
@@ -49,7 +49,7 @@ type IndexClientWithRange struct {
}
type BloomQuerier interface {
- FilterChunkRefs(ctx context.Context, tenant string, from, through model.Time, chunks []*logproto.ChunkRef, filters ...syntax.LineFilter) ([]*logproto.ChunkRef, error)
+ FilterChunkRefs(ctx context.Context, tenant string, from, through model.Time, chunks []*logproto.ChunkRef, plan plan.QueryPlan) ([]*logproto.ChunkRef, error)
}
type Gateway struct {
@@ -204,7 +204,7 @@ func (g *Gateway) GetChunkRef(ctx context.Context, req *logproto.GetChunkRefRequ
return nil, err
}
- predicate := chunk.NewPredicate(matchers, req.Filters)
+ predicate := chunk.NewPredicate(matchers, &req.Plan)
chunks, _, err := g.indexQuerier.GetChunks(ctx, instanceID, req.From, req.Through, predicate)
if err != nil {
return nil, err
@@ -221,17 +221,20 @@ func (g *Gateway) GetChunkRef(ctx context.Context, req *logproto.GetChunkRefRequ
initialChunkCount := len(result.Refs)
- // Return unfiltered results if there is no bloom querier (Bloom Gateway disabled) or if there are not filters.
- if g.bloomQuerier == nil || len(req.Filters) == 0 {
- level.Info(g.log).Log("msg", "chunk filtering is not enabled or there is no line filter", "filters", len(req.Filters))
+ // Return unfiltered results if there is no bloom querier (Bloom Gateway disabled)
+ if g.bloomQuerier == nil {
+ level.Info(g.log).Log("msg", "chunk filtering is not enabled")
+ return result, nil
+ }
+
+ // Extract LineFiltersExpr from the plan. If there is none, we can short-circuit and return before making a req
+ // to the bloom-gateway (through the g.bloomQuerier)
+ if len(syntax.ExtractLineFilters(req.Plan.AST)) == 0 {
+ level.Info(g.log).Log("msg", "there are no line filters")
return result, nil
}
- // TODO(chaudum): Take the chunks from the index querier's GetChunks()
- // response and send them to the bloom gateway along with the filter
- // expression that we got from the request object.
- // The bloom gateway returns the list of matching ChunkRefs.
- chunkRefs, err := g.bloomQuerier.FilterChunkRefs(ctx, instanceID, req.From, req.Through, result.Refs, req.Filters...)
+ chunkRefs, err := g.bloomQuerier.FilterChunkRefs(ctx, instanceID, req.From, req.Through, result.Refs, req.Plan)
if err != nil {
return nil, err
}
type: refactor
masked_commit_message: Pass query plan down to bloom gateway (#12037)

hash: 2427fab32d42117b37f7f72250ab5491628c134f
date: 2021-10-21 23:28:08
author: Trevor Whitney
commit_message: configuration: add a common config section for object storage (#4473)
is_merge: false
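The diff below introduces a `common.storage` section that is fanned out to the ruler store, the chunk storage config, and the compactor's shared store. As a sketch assembled from the test fixtures in that diff (the bucket name, endpoint, and credentials are placeholder values from the tests, not real settings), a minimal S3-only configuration would look like:

```yaml
# Hypothetical minimal config; field names come from the new
# common.storage section added in this commit, values are the
# placeholders used in its config_wrapper_test.go fixtures.
common:
  storage:
    s3:
      s3: s3://foo-bucket/example
      endpoint: s3://foo-bucket
      region: us-east1
      access_key_id: abc123
      secret_access_key: def789
```

Per the `applyStorageConfig` logic in the diff, setting only this section defaults the ruler store type to `s3` and the compactor's `SharedStoreType` to S3, unless those sections are configured explicitly elsewhere.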
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7c00a49f831d7..c6631fa6bb508 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,7 @@
* [4440](https://github.com/grafana/loki/pull/4440) **DylanGuedes**: Config: Override distributor's default ring KV store
* [4443](https://github.com/grafana/loki/pull/4443) **DylanGuedes**: Loki: Change how push API checks for contentType
* [4415](https://github.com/grafana/loki/pull/4415) **DylanGuedes**: Change default limits to common values
+* [4473](https://github.com/grafana/loki/pull/4473) **trevorwhitney**: Config: add object storage configuration to common config
# 2.3.0 (2021/08/06)
diff --git a/pkg/loki/common/common.go b/pkg/loki/common/common.go
index cce66321362cf..c0e11bf2f2abe 100644
--- a/pkg/loki/common/common.go
+++ b/pkg/loki/common/common.go
@@ -1,6 +1,23 @@
package common
+import (
+ "github.com/grafana/loki/pkg/storage/chunk/aws"
+ "github.com/grafana/loki/pkg/storage/chunk/azure"
+ "github.com/grafana/loki/pkg/storage/chunk/gcp"
+ "github.com/grafana/loki/pkg/storage/chunk/local"
+ "github.com/grafana/loki/pkg/storage/chunk/openstack"
+)
+
// Config holds common config that can be shared between multiple other config sections
type Config struct {
- PathPrefix string `yaml:"path_prefix"`
+ PathPrefix string `yaml:"path_prefix"`
+ Storage Storage `yaml:"storage"`
+}
+
+type Storage struct {
+ S3 *aws.S3Config `yaml:"s3"`
+ GCS *gcp.GCSConfig `yaml:"gcs"`
+ Azure *azure.BlobStorageConfig `yaml:"azure"`
+ Swift *openstack.SwiftConfig `yaml:"swift"`
+ FSConfig *local.FSConfig `yaml:"filesystem"`
}
diff --git a/pkg/loki/config_wrapper.go b/pkg/loki/config_wrapper.go
index d7df5455a475f..ecacd94ce8a52 100644
--- a/pkg/loki/config_wrapper.go
+++ b/pkg/loki/config_wrapper.go
@@ -3,10 +3,12 @@ package loki
import (
"flag"
"fmt"
+ "reflect"
"github.com/grafana/dskit/flagext"
"github.com/pkg/errors"
+ "github.com/grafana/loki/pkg/storage/chunk/storage"
"github.com/grafana/loki/pkg/util/cfg"
)
@@ -72,6 +74,7 @@ func (c *ConfigWrapper) ApplyDynamicConfig() cfg.Source {
}
applyMemberlistConfig(r)
+ applyStorageConfig(r, &defaults)
return nil
}
@@ -89,3 +92,97 @@ func applyMemberlistConfig(r *ConfigWrapper) {
r.Ruler.Ring.KVStore.Store = memberlistStr
}
}
+
+// applyStorageConfig will attempt to apply a common storage config for either
+// s3, gcs, azure, or swift to all the places we create an object storage client.
+// If any specific configs for an object storage client have been provided elsewhere in the
+// configuration file, applyStorageConfig will not override them.
+// If multiple storage configurations are provided, applyStorageConfig will apply
+// all of them, and will set the value for the Ruler's StoreConfig `type` to the
+// last one (alphabetically) that was defined.
+func applyStorageConfig(cfg, defaults *ConfigWrapper) {
+ rulerStoreConfigsToApply := make([]func(*ConfigWrapper), 0, 4)
+ chunkStorageConfigsToApply := make([]func(*ConfigWrapper), 0, 4)
+
+ if cfg.Common.Storage.Azure != nil {
+ rulerStoreConfigsToApply = append(rulerStoreConfigsToApply, func(r *ConfigWrapper) {
+ r.Ruler.StoreConfig.Type = "azure"
+ r.Ruler.StoreConfig.Azure = r.Common.Storage.Azure.ToCortexAzureConfig()
+ })
+
+ chunkStorageConfigsToApply = append(chunkStorageConfigsToApply, func(r *ConfigWrapper) {
+ r.StorageConfig.AzureStorageConfig = *r.Common.Storage.Azure
+ r.CompactorConfig.SharedStoreType = storage.StorageTypeAzure
+ })
+ }
+
+ if cfg.Common.Storage.GCS != nil {
+ rulerStoreConfigsToApply = append(rulerStoreConfigsToApply, func(r *ConfigWrapper) {
+ r.Ruler.StoreConfig.Type = "gcs"
+ r.Ruler.StoreConfig.GCS = r.Common.Storage.GCS.ToCortexGCSConfig()
+ })
+
+ chunkStorageConfigsToApply = append(chunkStorageConfigsToApply, func(r *ConfigWrapper) {
+ r.StorageConfig.GCSConfig = *r.Common.Storage.GCS
+ r.CompactorConfig.SharedStoreType = storage.StorageTypeGCS
+ })
+ }
+
+ if cfg.Common.Storage.FSConfig != nil {
+ rulerStoreConfigsToApply = append(rulerStoreConfigsToApply, func(r *ConfigWrapper) {
+ r.Ruler.StoreConfig.Type = "local"
+ r.Ruler.StoreConfig.Local = r.Common.Storage.FSConfig.ToCortexLocalConfig()
+ })
+
+ chunkStorageConfigsToApply = append(chunkStorageConfigsToApply, func(r *ConfigWrapper) {
+ r.StorageConfig.FSConfig = *r.Common.Storage.FSConfig
+ r.CompactorConfig.SharedStoreType = storage.StorageTypeFileSystem
+ })
+ }
+
+ if cfg.Common.Storage.S3 != nil {
+ rulerStoreConfigsToApply = append(rulerStoreConfigsToApply, func(r *ConfigWrapper) {
+ r.Ruler.StoreConfig.Type = "s3"
+ r.Ruler.StoreConfig.S3 = r.Common.Storage.S3.ToCortexS3Config()
+ })
+
+ chunkStorageConfigsToApply = append(chunkStorageConfigsToApply, func(r *ConfigWrapper) {
+ r.StorageConfig.AWSStorageConfig.S3Config = *r.Common.Storage.S3
+ r.CompactorConfig.SharedStoreType = storage.StorageTypeS3
+ })
+ }
+
+ if cfg.Common.Storage.Swift != nil {
+ rulerStoreConfigsToApply = append(rulerStoreConfigsToApply, func(r *ConfigWrapper) {
+ r.Ruler.StoreConfig.Type = "swift"
+ r.Ruler.StoreConfig.Swift = r.Common.Storage.Swift.ToCortexSwiftConfig()
+ })
+
+ chunkStorageConfigsToApply = append(chunkStorageConfigsToApply, func(r *ConfigWrapper) {
+ r.StorageConfig.Swift = *r.Common.Storage.Swift
+ r.CompactorConfig.SharedStoreType = storage.StorageTypeSwift
+ })
+ }
+
+ // store change funcs in slices and apply all at once, because once we change the
+ // config we can no longer compare it to the default, this allows us to only
+ // do that comparison once
+ applyRulerStoreConfigs(cfg, defaults, rulerStoreConfigsToApply)
+ applyChunkStorageConfigs(cfg, defaults, chunkStorageConfigsToApply)
+}
+
+func applyRulerStoreConfigs(cfg, defaults *ConfigWrapper, apply []func(*ConfigWrapper)) {
+ if reflect.DeepEqual(cfg.Ruler.StoreConfig, defaults.Ruler.StoreConfig) {
+ for _, ap := range apply {
+ ap(cfg)
+ }
+ }
+}
+
+func applyChunkStorageConfigs(cfg, defaults *ConfigWrapper, apply []func(*ConfigWrapper)) {
+ if reflect.DeepEqual(cfg.StorageConfig, defaults.StorageConfig) {
+ for _, ap := range apply {
+ ap(cfg)
+ }
+ }
+}
diff --git a/pkg/loki/config_wrapper_test.go b/pkg/loki/config_wrapper_test.go
index ffaa23039f45a..a8f458ae3eee7 100644
--- a/pkg/loki/config_wrapper_test.go
+++ b/pkg/loki/config_wrapper_test.go
@@ -3,12 +3,21 @@ package loki
import (
"flag"
"io/ioutil"
+ "net/url"
"os"
"testing"
+ "time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
+ cortex_aws "github.com/cortexproject/cortex/pkg/chunk/aws"
+ cortex_azure "github.com/cortexproject/cortex/pkg/chunk/azure"
+ cortex_gcp "github.com/cortexproject/cortex/pkg/chunk/gcp"
+ cortex_local "github.com/cortexproject/cortex/pkg/ruler/rulestore/local"
+ cortex_swift "github.com/cortexproject/cortex/pkg/storage/bucket/swift"
+
+ "github.com/grafana/loki/pkg/storage/chunk/storage"
"github.com/grafana/loki/pkg/util/cfg"
)
@@ -43,10 +52,15 @@ func Test_CommonConfig(t *testing.T) {
return config, defaults
}
+ //the unmarshaller overwrites default values with 0s when a completely empty
+ //config file is passed, so our "empty" config has some non-relevant config in it
+ const emptyConfigString = `---
+server:
+ http_listen_port: 80`
+
t.Run("common path prefix config", func(t *testing.T) {
t.Run("does not override defaults for file paths when not provided", func(t *testing.T) {
- configFileString := `---`
- config, defaults := testContext(configFileString, nil)
+ config, defaults := testContext(emptyConfigString, nil)
assert.EqualValues(t, defaults.Ruler.RulePath, config.Ruler.RulePath)
assert.EqualValues(t, defaults.Ingester.WAL.Dir, config.Ingester.WAL.Dir)
@@ -92,8 +106,7 @@ common:
// * ruler
t.Run("does not automatically configure memberlist when no top-level memberlist config is provided", func(t *testing.T) {
- configFileString := `---`
- config, defaults := testContext(configFileString, nil)
+ config, defaults := testContext(emptyConfigString, nil)
assert.EqualValues(t, defaults.Ingester.LifecyclerConfig.RingConfig.KVStore.Store, config.Ingester.LifecyclerConfig.RingConfig.KVStore.Store)
assert.EqualValues(t, defaults.Distributor.DistributorRing.KVStore.Store, config.Distributor.DistributorRing.KVStore.Store)
@@ -145,6 +158,424 @@ memberlist:
assert.EqualValues(t, memberlistStr, config.Distributor.DistributorRing.KVStore.Store)
})
})
+
+ t.Run("common object store config", func(t *testing.T) {
+ //config file structure
+ //common:
+ // storage:
+ // azure: azure.BlobStorageConfig
+ // gcs: gcp.GCSConfig
+ // s3: aws.S3Config
+ // swift: openstack.SwiftConfig
+
+ t.Run("does not automatically configure cloud object storage", func(t *testing.T) {
+ config, defaults := testContext(emptyConfigString, nil)
+
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Type, config.Ruler.StoreConfig.Type)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Azure, config.Ruler.StoreConfig.Azure)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.GCS, config.Ruler.StoreConfig.GCS)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.S3, config.Ruler.StoreConfig.S3)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Swift, config.Ruler.StoreConfig.Swift)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Local, config.Ruler.StoreConfig.Local)
+
+ assert.EqualValues(t, defaults.StorageConfig.AWSStorageConfig, config.StorageConfig.AWSStorageConfig)
+ assert.EqualValues(t, defaults.StorageConfig.AzureStorageConfig, config.StorageConfig.AzureStorageConfig)
+ assert.EqualValues(t, defaults.StorageConfig.GCSConfig, config.StorageConfig.GCSConfig)
+ assert.EqualValues(t, defaults.StorageConfig.Swift, config.StorageConfig.Swift)
+ assert.EqualValues(t, defaults.StorageConfig.FSConfig, config.StorageConfig.FSConfig)
+ })
+
+ t.Run("when multiple configs are provided, the last (alphabetically) is used as the ruler store type", func(t *testing.T) {
+ multipleConfig := `common:
+ storage:
+ s3:
+ s3: s3://foo-bucket/example
+ endpoint: s3://foo-bucket
+ region: us-east1
+ access_key_id: abc123
+ secret_access_key: def789
+ gcs:
+ bucket_name: foobar
+ chunk_buffer_size: 27
+ request_timeout: 5m`
+
+ config, _ := testContext(multipleConfig, nil)
+ assert.Equal(t, "s3", config.Ruler.StoreConfig.Type)
+
+ assert.Equal(t, "s3://foo-bucket", config.Ruler.StoreConfig.S3.Endpoint)
+ assert.Equal(t, "foobar", config.Ruler.StoreConfig.GCS.BucketName)
+
+ assert.Equal(t, "s3://foo-bucket", config.StorageConfig.AWSStorageConfig.S3Config.Endpoint)
+ assert.Equal(t, "foobar", config.StorageConfig.GCSConfig.BucketName)
+ })
+
+ t.Run("when common s3 storage config is provided, ruler and storage config are defaulted to use it", func(t *testing.T) {
+ s3Config := `common:
+ storage:
+ s3:
+ s3: s3://foo-bucket/example
+ endpoint: s3://foo-bucket
+ region: us-east1
+ access_key_id: abc123
+ secret_access_key: def789
+ insecure: true
+ signature_version: v4
+ http_config:
+ idle_conn_timeout: 5m
+ response_header_timeout: 5m`
+
+ config, defaults := testContext(s3Config, nil)
+
+ expected, err := url.Parse("s3://foo-bucket/example")
+ require.NoError(t, err)
+
+ assert.Equal(t, "s3", config.Ruler.StoreConfig.Type)
+
+ for _, actual := range []cortex_aws.S3Config{
+ config.Ruler.StoreConfig.S3,
+ config.StorageConfig.AWSStorageConfig.S3Config.ToCortexS3Config(),
+ } {
+ require.NotNil(t, actual.S3.URL)
+ assert.Equal(t, *expected, *actual.S3.URL)
+
+ assert.Equal(t, false, actual.S3ForcePathStyle)
+ assert.Equal(t, "s3://foo-bucket", actual.Endpoint)
+ assert.Equal(t, "us-east1", actual.Region)
+ assert.Equal(t, "abc123", actual.AccessKeyID)
+ assert.Equal(t, "def789", actual.SecretAccessKey)
+ assert.Equal(t, true, actual.Insecure)
+ assert.Equal(t, false, actual.SSEEncryption)
+ assert.Equal(t, 5*time.Minute, actual.HTTPConfig.IdleConnTimeout)
+ assert.Equal(t, 5*time.Minute, actual.HTTPConfig.ResponseHeaderTimeout)
+ assert.Equal(t, false, actual.HTTPConfig.InsecureSkipVerify)
+ assert.Equal(t, "v4", actual.SignatureVersion)
+ }
+
+ //should remain empty
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Azure, config.Ruler.StoreConfig.Azure)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.GCS, config.Ruler.StoreConfig.GCS)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Swift, config.Ruler.StoreConfig.Swift)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Local, config.Ruler.StoreConfig.Local)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.StorageConfig.AzureStorageConfig, config.StorageConfig.AzureStorageConfig)
+ assert.EqualValues(t, defaults.StorageConfig.GCSConfig, config.StorageConfig.GCSConfig)
+ assert.EqualValues(t, defaults.StorageConfig.Swift, config.StorageConfig.Swift)
+ assert.EqualValues(t, defaults.StorageConfig.FSConfig, config.StorageConfig.FSConfig)
+ })
+
+ t.Run("when common gcs storage config is provided, ruler and storage config are defaulted to use it", func(t *testing.T) {
+ gcsConfig := `common:
+ storage:
+ gcs:
+ bucket_name: foobar
+ chunk_buffer_size: 27
+ request_timeout: 5m
+ enable_opencensus: true`
+
+ config, defaults := testContext(gcsConfig, nil)
+
+ assert.Equal(t, "gcs", config.Ruler.StoreConfig.Type)
+
+ for _, actual := range []cortex_gcp.GCSConfig{
+ config.Ruler.StoreConfig.GCS,
+ config.StorageConfig.GCSConfig.ToCortexGCSConfig(),
+ } {
+ assert.Equal(t, "foobar", actual.BucketName)
+ assert.Equal(t, 27, actual.ChunkBufferSize)
+ assert.Equal(t, 5*time.Minute, actual.RequestTimeout)
+ assert.Equal(t, true, actual.EnableOpenCensus)
+ }
+
+ //should remain empty
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Azure, config.Ruler.StoreConfig.Azure)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.S3, config.Ruler.StoreConfig.S3)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Swift, config.Ruler.StoreConfig.Swift)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Local, config.Ruler.StoreConfig.Local)
+ //should remain empty
+ assert.EqualValues(t, defaults.StorageConfig.AzureStorageConfig, config.StorageConfig.AzureStorageConfig)
+ assert.EqualValues(t, defaults.StorageConfig.AWSStorageConfig.S3Config, config.StorageConfig.AWSStorageConfig.S3Config)
+ assert.EqualValues(t, defaults.StorageConfig.Swift, config.StorageConfig.Swift)
+ assert.EqualValues(t, defaults.StorageConfig.FSConfig, config.StorageConfig.FSConfig)
+ })
+
+ t.Run("when common azure storage config is provided, ruler and storage config are defaulted to use it", func(t *testing.T) {
+ azureConfig := `common:
+ storage:
+ azure:
+ environment: earth
+ container_name: milkyway
+ account_name: 3rd_planet
+ account_key: water
+ download_buffer_size: 27
+ upload_buffer_size: 42
+ upload_buffer_count: 13
+ request_timeout: 5m
+ max_retries: 3
+ min_retry_delay: 10s
+ max_retry_delay: 10m`
+
+ config, defaults := testContext(azureConfig, nil)
+
+ assert.Equal(t, "azure", config.Ruler.StoreConfig.Type)
+
+ for _, actual := range []cortex_azure.BlobStorageConfig{
+ config.Ruler.StoreConfig.Azure,
+ config.StorageConfig.AzureStorageConfig.ToCortexAzureConfig(),
+ } {
+ assert.Equal(t, "earth", actual.Environment)
+ assert.Equal(t, "milkyway", actual.ContainerName)
+ assert.Equal(t, "3rd_planet", actual.AccountName)
+ assert.Equal(t, "water", actual.AccountKey.Value)
+ assert.Equal(t, 27, actual.DownloadBufferSize)
+ assert.Equal(t, 42, actual.UploadBufferSize)
+ assert.Equal(t, 13, actual.UploadBufferCount)
+ assert.Equal(t, 5*time.Minute, actual.RequestTimeout)
+ assert.Equal(t, 3, actual.MaxRetries)
+ assert.Equal(t, 10*time.Second, actual.MinRetryDelay)
+ assert.Equal(t, 10*time.Minute, actual.MaxRetryDelay)
+ }
+
+ //should remain empty
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.GCS, config.Ruler.StoreConfig.GCS)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.S3, config.Ruler.StoreConfig.S3)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Swift, config.Ruler.StoreConfig.Swift)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Local, config.Ruler.StoreConfig.Local)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.StorageConfig.GCSConfig, config.StorageConfig.GCSConfig)
+ assert.EqualValues(t, defaults.StorageConfig.AWSStorageConfig.S3Config, config.StorageConfig.AWSStorageConfig.S3Config)
+ assert.EqualValues(t, defaults.StorageConfig.Swift, config.StorageConfig.Swift)
+ assert.EqualValues(t, defaults.StorageConfig.FSConfig, config.StorageConfig.FSConfig)
+ })
+
+ t.Run("when common swift storage config is provided, ruler and storage config are defaulted to use it", func(t *testing.T) {
+ swiftConfig := `common:
+ storage:
+ swift:
+ auth_version: 3
+ auth_url: http://example.com
+ username: steve
+ user_domain_name: example.com
+ user_domain_id: 1
+ user_id: 27
+ password: supersecret
+ domain_id: 2
+ domain_name: test.com
+ project_id: 13
+ project_name: tower
+ project_domain_id: 3
+ project_domain_name: tower.com
+ region_name: us-east1
+ container_name: tupperware
+ max_retries: 6
+ connect_timeout: 5m
+ request_timeout: 5s`
+
+ config, defaults := testContext(swiftConfig, nil)
+
+ assert.Equal(t, "swift", config.Ruler.StoreConfig.Type)
+
+ for _, actual := range []cortex_swift.Config{
+ config.Ruler.StoreConfig.Swift.Config,
+ config.StorageConfig.Swift.Config,
+ } {
+ assert.Equal(t, 3, actual.AuthVersion)
+ assert.Equal(t, "http://example.com", actual.AuthURL)
+ assert.Equal(t, "steve", actual.Username)
+ assert.Equal(t, "example.com", actual.UserDomainName)
+ assert.Equal(t, "1", actual.UserDomainID)
+ assert.Equal(t, "27", actual.UserID)
+ assert.Equal(t, "supersecret", actual.Password)
+ assert.Equal(t, "2", actual.DomainID)
+ assert.Equal(t, "test.com", actual.DomainName)
+ assert.Equal(t, "13", actual.ProjectID)
+ assert.Equal(t, "tower", actual.ProjectName)
+ assert.Equal(t, "3", actual.ProjectDomainID)
+ assert.Equal(t, "tower.com", actual.ProjectDomainName)
+ assert.Equal(t, "us-east1", actual.RegionName)
+ assert.Equal(t, "tupperware", actual.ContainerName)
+ assert.Equal(t, 6, actual.MaxRetries)
+ assert.Equal(t, 5*time.Minute, actual.ConnectTimeout)
+ assert.Equal(t, 5*time.Second, actual.RequestTimeout)
+ }
+
+ //should remain empty
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.GCS, config.Ruler.StoreConfig.GCS)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.S3, config.Ruler.StoreConfig.S3)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Azure, config.Ruler.StoreConfig.Azure)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Local, config.Ruler.StoreConfig.Local)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.StorageConfig.GCSConfig, config.StorageConfig.GCSConfig)
+ assert.EqualValues(t, defaults.StorageConfig.AWSStorageConfig.S3Config, config.StorageConfig.AWSStorageConfig.S3Config)
+ assert.EqualValues(t, defaults.StorageConfig.AzureStorageConfig, config.StorageConfig.AzureStorageConfig)
+ assert.EqualValues(t, defaults.StorageConfig.FSConfig, config.StorageConfig.FSConfig)
+ })
+
+ t.Run("when common filesystem/local config is provided, ruler and storage config are defaulted to use it", func(t *testing.T) {
+ fsConfig := `common:
+ storage:
+ filesystem:
+ directory: /tmp/foo`
+
+ config, defaults := testContext(fsConfig, nil)
+
+ assert.Equal(t, "local", config.Ruler.StoreConfig.Type)
+
+ for _, actual := range []cortex_local.Config{
+ config.Ruler.StoreConfig.Local,
+ config.StorageConfig.FSConfig.ToCortexLocalConfig(),
+ } {
+ assert.Equal(t, "/tmp/foo", actual.Directory)
+ }
+
+ //should remain empty
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.GCS, config.Ruler.StoreConfig.GCS)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.S3, config.Ruler.StoreConfig.S3)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Azure, config.Ruler.StoreConfig.Azure)
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.Swift, config.Ruler.StoreConfig.Swift)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.StorageConfig.GCSConfig, config.StorageConfig.GCSConfig)
+ assert.EqualValues(t, defaults.StorageConfig.AWSStorageConfig.S3Config, config.StorageConfig.AWSStorageConfig.S3Config)
+ assert.EqualValues(t, defaults.StorageConfig.AzureStorageConfig, config.StorageConfig.AzureStorageConfig)
+ assert.EqualValues(t, defaults.StorageConfig.Swift, config.StorageConfig.Swift)
+ })
+
+ t.Run("explicit ruler storage object storage configuration provided via config file is preserved", func(t *testing.T) {
+ specificRulerConfig := `common:
+ storage:
+ gcs:
+ bucket_name: foobar
+ chunk_buffer_size: 27
+ request_timeout: 5m
+ruler:
+ storage:
+ type: s3
+ s3:
+ endpoint: s3://foo-bucket
+ region: us-east1
+ access_key_id: abc123
+ secret_access_key: def789`
+ config, defaults := testContext(specificRulerConfig, nil)
+
+ assert.Equal(t, "s3", config.Ruler.StoreConfig.Type)
+ assert.Equal(t, "s3://foo-bucket", config.Ruler.StoreConfig.S3.Endpoint)
+ assert.Equal(t, "us-east1", config.Ruler.StoreConfig.S3.Region)
+ assert.Equal(t, "abc123", config.Ruler.StoreConfig.S3.AccessKeyID)
+ assert.Equal(t, "def789", config.Ruler.StoreConfig.S3.SecretAccessKey)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.GCS, config.Ruler.StoreConfig.GCS)
+
+ //should be set by common config
+ assert.EqualValues(t, "foobar", config.StorageConfig.GCSConfig.BucketName)
+ assert.EqualValues(t, 27, config.StorageConfig.GCSConfig.ChunkBufferSize)
+ assert.EqualValues(t, 5*time.Minute, config.StorageConfig.GCSConfig.RequestTimeout)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.StorageConfig.AWSStorageConfig.S3Config, config.StorageConfig.AWSStorageConfig.S3Config)
+ })
+
+ t.Run("explicit storage config provided via config file is preserved", func(t *testing.T) {
+ specificRulerConfig := `common:
+ storage:
+ gcs:
+ bucket_name: foobar
+ chunk_buffer_size: 27
+ request_timeout: 5m
+storage_config:
+ aws:
+ endpoint: s3://foo-bucket
+ region: us-east1
+ access_key_id: abc123
+ secret_access_key: def789`
+
+ config, defaults := testContext(specificRulerConfig, nil)
+
+ assert.Equal(t, "s3://foo-bucket", config.StorageConfig.AWSStorageConfig.S3Config.Endpoint)
+ assert.Equal(t, "us-east1", config.StorageConfig.AWSStorageConfig.S3Config.Region)
+ assert.Equal(t, "abc123", config.StorageConfig.AWSStorageConfig.S3Config.AccessKeyID)
+ assert.Equal(t, "def789", config.StorageConfig.AWSStorageConfig.S3Config.SecretAccessKey)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.StorageConfig.GCSConfig, config.StorageConfig.GCSConfig)
+
+ //should be set by common config
+ assert.EqualValues(t, "foobar", config.Ruler.StoreConfig.GCS.BucketName)
+ assert.EqualValues(t, 27, config.Ruler.StoreConfig.GCS.ChunkBufferSize)
+ assert.EqualValues(t, 5*time.Minute, config.Ruler.StoreConfig.GCS.RequestTimeout)
+
+ //should remain empty
+ assert.EqualValues(t, defaults.Ruler.StoreConfig.S3, config.Ruler.StoreConfig.S3)
+ })
+
+ t.Run("when common object store config is provided, compactor shared store is defaulted to use it", func(t *testing.T) {
+ for _, tt := range []struct {
+ configString string
+ expected string
+ }{
+ {
+ configString: `common:
+ storage:
+ s3:
+ s3: s3://foo-bucket/example
+ access_key_id: abc123
+ secret_access_key: def789`,
+ expected: storage.StorageTypeS3,
+ },
+ {
+ configString: `common:
+ storage:
+ gcs:
+ bucket_name: foobar`,
+ expected: storage.StorageTypeGCS,
+ },
+ {
+ configString: `common:
+ storage:
+ azure:
+ account_name: 3rd_planet
+ account_key: water`,
+ expected: storage.StorageTypeAzure,
+ },
+ {
+ configString: `common:
+ storage:
+ swift:
+ username: steve
+ password: supersecret`,
+ expected: storage.StorageTypeSwift,
+ },
+ {
+ configString: `common:
+ storage:
+ filesystem:
+ directory: /tmp/foo`,
+ expected: storage.StorageTypeFileSystem,
+ },
+ } {
+ config, _ := testContext(tt.configString, nil)
+
+ assert.Equal(t, tt.expected, config.CompactorConfig.SharedStoreType)
+ }
+ })
+
+ t.Run("explicit compactor shared_store config is preserved", func(t *testing.T) {
+ configString := `common:
+ storage:
+ s3:
+ s3: s3://foo-bucket/example
+ access_key_id: abc123
+ secret_access_key: def789
+compactor:
+ shared_store: gcs`
+ config, _ := testContext(configString, nil)
+
+ assert.Equal(t, "gcs", config.CompactorConfig.SharedStoreType)
+ })
+ })
}
// Can't use a totally empty yaml file or it causes weird behavior in the unmarhsalling
diff --git a/pkg/storage/chunk/aws/s3_storage_client.go b/pkg/storage/chunk/aws/s3_storage_client.go
index d65e4829f7268..38703825dea1c 100644
--- a/pkg/storage/chunk/aws/s3_storage_client.go
+++ b/pkg/storage/chunk/aws/s3_storage_client.go
@@ -28,6 +28,7 @@ import (
awscommon "github.com/weaveworks/common/aws"
"github.com/weaveworks/common/instrument"
+ cortex_aws "github.com/cortexproject/cortex/pkg/chunk/aws"
cortex_s3 "github.com/cortexproject/cortex/pkg/storage/bucket/s3"
"github.com/cortexproject/cortex/pkg/util"
"github.com/grafana/dskit/flagext"
@@ -125,6 +126,32 @@ func (cfg *S3Config) Validate() error {
return nil
}
+func (cfg *S3Config) ToCortexS3Config() cortex_aws.S3Config {
+ return cortex_aws.S3Config{
+ S3: cfg.S3,
+ S3ForcePathStyle: cfg.S3ForcePathStyle,
+ BucketNames: cfg.BucketNames,
+ Endpoint: cfg.Endpoint,
+ Region: cfg.Region,
+ AccessKeyID: cfg.AccessKeyID,
+ SecretAccessKey: cfg.SecretAccessKey,
+ Insecure: cfg.Insecure,
+ SSEEncryption: cfg.SSEEncryption,
+ HTTPConfig: cfg.HTTPConfig.ToCortexHTTPConfig(),
+ SignatureVersion: cfg.SignatureVersion,
+ SSEConfig: cfg.SSEConfig,
+ Inject: cortex_aws.InjectRequestMiddleware(cfg.Inject),
+ }
+}
+
+func (cfg *HTTPConfig) ToCortexHTTPConfig() cortex_aws.HTTPConfig {
+ return cortex_aws.HTTPConfig{
+ IdleConnTimeout: cfg.IdleConnTimeout,
+ ResponseHeaderTimeout: cfg.ResponseHeaderTimeout,
+ InsecureSkipVerify: cfg.InsecureSkipVerify,
+ }
+}
+
type S3ObjectClient struct {
bucketNames []string
S3 s3iface.S3API
diff --git a/pkg/storage/chunk/azure/blob_storage_client.go b/pkg/storage/chunk/azure/blob_storage_client.go
index 8a0cca55df6d8..af4b4e50ab282 100644
--- a/pkg/storage/chunk/azure/blob_storage_client.go
+++ b/pkg/storage/chunk/azure/blob_storage_client.go
@@ -13,6 +13,7 @@ import (
"github.com/Azure/azure-pipeline-go/pipeline"
"github.com/Azure/azure-storage-blob-go/azblob"
+ cortex_azure "github.com/cortexproject/cortex/pkg/chunk/azure"
"github.com/cortexproject/cortex/pkg/util"
"github.com/cortexproject/cortex/pkg/util/log"
"github.com/grafana/dskit/flagext"
@@ -87,6 +88,22 @@ func (c *BlobStorageConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagS
f.DurationVar(&c.MaxRetryDelay, prefix+"azure.max-retry-delay", 500*time.Millisecond, "Maximum time to wait before retrying a request.")
}
+func (c *BlobStorageConfig) ToCortexAzureConfig() cortex_azure.BlobStorageConfig {
+ return cortex_azure.BlobStorageConfig{
+ Environment: c.Environment,
+ ContainerName: c.ContainerName,
+ AccountName: c.AccountName,
+ AccountKey: c.AccountKey,
+ DownloadBufferSize: c.DownloadBufferSize,
+ UploadBufferSize: c.UploadBufferSize,
+ UploadBufferCount: c.UploadBufferCount,
+ RequestTimeout: c.RequestTimeout,
+ MaxRetries: c.MaxRetries,
+ MinRetryDelay: c.MinRetryDelay,
+ MaxRetryDelay: c.MaxRetryDelay,
+ }
+}
+
// BlobStorage is used to interact with azure blob storage for setting or getting time series chunks.
// Implements ObjectStorage
type BlobStorage struct {
diff --git a/pkg/storage/chunk/gcp/gcs_object_client.go b/pkg/storage/chunk/gcp/gcs_object_client.go
index b73c22bd50b49..9878cbe1b9e5e 100644
--- a/pkg/storage/chunk/gcp/gcs_object_client.go
+++ b/pkg/storage/chunk/gcp/gcs_object_client.go
@@ -7,6 +7,7 @@ import (
"time"
"cloud.google.com/go/storage"
+ cortex_gcp "github.com/cortexproject/cortex/pkg/chunk/gcp"
"github.com/pkg/errors"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
@@ -42,6 +43,15 @@ func (cfg *GCSConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.BoolVar(&cfg.EnableOpenCensus, prefix+"gcs.enable-opencensus", true, "Enabled OpenCensus (OC) instrumentation for all requests.")
}
+func (cfg *GCSConfig) ToCortexGCSConfig() cortex_gcp.GCSConfig {
+ return cortex_gcp.GCSConfig{
+ BucketName: cfg.BucketName,
+ ChunkBufferSize: cfg.ChunkBufferSize,
+ RequestTimeout: cfg.RequestTimeout,
+ EnableOpenCensus: cfg.EnableOpenCensus,
+ }
+}
+
// NewGCSObjectClient makes a new chunk.Client that writes chunks to GCS.
func NewGCSObjectClient(ctx context.Context, cfg GCSConfig) (*GCSObjectClient, error) {
var opts []option.ClientOption
diff --git a/pkg/storage/chunk/local/fs_object_client.go b/pkg/storage/chunk/local/fs_object_client.go
index b59607e49a83c..e671b6ff480fc 100644
--- a/pkg/storage/chunk/local/fs_object_client.go
+++ b/pkg/storage/chunk/local/fs_object_client.go
@@ -13,6 +13,7 @@ import (
"github.com/pkg/errors"
"github.com/thanos-io/thanos/pkg/runutil"
+ cortex_local "github.com/cortexproject/cortex/pkg/ruler/rulestore/local"
util_log "github.com/cortexproject/cortex/pkg/util/log"
"github.com/grafana/loki/pkg/storage/chunk"
@@ -34,6 +35,12 @@ func (cfg *FSConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.StringVar(&cfg.Directory, prefix+"local.chunk-directory", "", "Directory to store chunks in.")
}
+func (cfg *FSConfig) ToCortexLocalConfig() cortex_local.Config {
+ return cortex_local.Config{
+ Directory: cfg.Directory,
+ }
+}
+
// FSObjectClient holds config for filesystem as object store
type FSObjectClient struct {
cfg FSConfig
diff --git a/pkg/storage/chunk/openstack/swift_object_client.go b/pkg/storage/chunk/openstack/swift_object_client.go
index e45f58a6c7cd3..e9a71d5e57ae6 100644
--- a/pkg/storage/chunk/openstack/swift_object_client.go
+++ b/pkg/storage/chunk/openstack/swift_object_client.go
@@ -11,6 +11,7 @@ import (
"github.com/ncw/swift"
"github.com/pkg/errors"
+ cortex_openstack "github.com/cortexproject/cortex/pkg/chunk/openstack"
cortex_swift "github.com/cortexproject/cortex/pkg/storage/bucket/swift"
"github.com/cortexproject/cortex/pkg/util/log"
@@ -42,6 +43,12 @@ func (cfg *SwiftConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet)
cfg.Config.RegisterFlagsWithPrefix(prefix, f)
}
+func (cfg *SwiftConfig) ToCortexSwiftConfig() cortex_openstack.SwiftConfig {
+ return cortex_openstack.SwiftConfig{
+ Config: cfg.Config,
+ }
+}
+
// NewSwiftObjectClient makes a new chunk.Client that writes chunks to OpenStack Swift.
func NewSwiftObjectClient(cfg SwiftConfig) (*SwiftObjectClient, error) {
log.WarnExperimentalUse("OpenStack Swift Storage")
|
configuration
|
add a common config section for object storage (#4473)
|
26432c0bf86446cb7450bf724c44115daf6930e5
|
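The `ToCortexS3Config`/`ToCortexGCSConfig`-style helpers in the diff above all follow the same adapter pattern: copy a local config struct field-by-field into the upstream library's struct so the two packages can evolve independently. A minimal standalone sketch of that pattern (the type names here are illustrative, not the real Loki/Cortex structs):

```go
package main

import "fmt"

// LocalS3Config stands in for Loki's own config type.
type LocalS3Config struct {
	Bucket   string
	Endpoint string
	Insecure bool
}

// UpstreamS3Config stands in for the external library's config type.
type UpstreamS3Config struct {
	Bucket   string
	Endpoint string
	Insecure bool
}

// ToUpstream copies fields one-to-one, mirroring ToCortexS3Config and
// friends: no shared struct definition, just an explicit translation.
func (c LocalS3Config) ToUpstream() UpstreamS3Config {
	return UpstreamS3Config{
		Bucket:   c.Bucket,
		Endpoint: c.Endpoint,
		Insecure: c.Insecure,
	}
}

func main() {
	up := LocalS3Config{Bucket: "foo", Endpoint: "s3.local"}.ToUpstream()
	fmt.Println(up.Bucket) // foo
}
```

The explicit copy is verbose but keeps the dependency boundary obvious: a new field in either struct forces a conscious decision at the conversion site.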
2023-12-21 01:19:51
|
Joao Marcal
|
operator: Add serviceaccount per lokistack resource (#11533)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 600bc945a1720..f5fb473b7b8fd 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [11533](https://github.com/grafana/loki/pull/11533) **periklis**: Add serviceaccount per LokiStack resource
- [11158](https://github.com/grafana/loki/pull/11158) **btaani**: operator: Add warning for old schema configuration
- [11473](https://github.com/grafana/loki/pull/11473) **JoaoBraveCoding**: Adds structured metadata dashboards
- [11448](https://github.com/grafana/loki/pull/11448) **periklis**: Update Loki operand to v2.9.3
diff --git a/operator/internal/handlers/lokistack_create_or_update_test.go b/operator/internal/handlers/lokistack_create_or_update_test.go
index 781966ac2ddad..79928b4a82e50 100644
--- a/operator/internal/handlers/lokistack_create_or_update_test.go
+++ b/operator/internal/handlers/lokistack_create_or_update_test.go
@@ -233,7 +233,8 @@ func TestCreateOrUpdateLokiStack_SetsNamespaceOnAllObjects(t *testing.T) {
}
k.GetStub = func(_ context.Context, name types.NamespacedName, out client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := out.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(out, &stack)
return nil
}
@@ -319,7 +320,8 @@ func TestCreateOrUpdateLokiStack_SetsOwnerRefOnAllObjects(t *testing.T) {
// Create looks up the CR first, so we need to return our fake stack
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, &stack)
return nil
}
@@ -410,7 +412,8 @@ func TestCreateOrUpdateLokiStack_WhenSetControllerRefInvalid_ContinueWithOtherOb
// Create looks up the CR first, so we need to return our fake stack
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, &stack)
}
if defaultSecret.Name == name.Name {
@@ -509,7 +512,8 @@ func TestCreateOrUpdateLokiStack_WhenGetReturnsNoError_UpdateObjects(t *testing.
// Create looks up the CR first, so we need to return our fake stack
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, &stack)
}
if defaultSecret.Name == name.Name {
@@ -572,7 +576,8 @@ func TestCreateOrUpdateLokiStack_WhenCreateReturnsError_ContinueWithOtherObjects
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, &stack)
return nil
}
@@ -679,7 +684,8 @@ func TestCreateOrUpdateLokiStack_WhenUpdateReturnsError_ContinueWithOtherObjects
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, &stack)
}
if defaultSecret.Name == name.Name {
@@ -749,7 +755,8 @@ func TestCreateOrUpdateLokiStack_WhenMissingSecret_SetDegraded(t *testing.T) {
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, stack)
return nil
}
@@ -810,7 +817,8 @@ func TestCreateOrUpdateLokiStack_WhenInvalidSecret_SetDegraded(t *testing.T) {
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, stack)
return nil
}
@@ -884,7 +892,8 @@ func TestCreateOrUpdateLokiStack_WithInvalidStorageSchema_SetDegraded(t *testing
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, stack)
return nil
}
@@ -954,7 +963,8 @@ func TestCreateOrUpdateLokiStack_WhenMissingCAConfigMap_SetDegraded(t *testing.T
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, stack)
return nil
}
@@ -1026,7 +1036,8 @@ func TestCreateOrUpdateLokiStack_WhenInvalidCAConfigMap_SetDegraded(t *testing.T
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, stack)
return nil
}
@@ -1115,7 +1126,8 @@ func TestCreateOrUpdateLokiStack_WhenInvalidTenantsConfiguration_SetDegraded(t *
// GetStub looks up the CR first, so we need to return our fake stack
// return NotFound for everything else to trigger create.
k.GetStub = func(_ context.Context, name types.NamespacedName, object client.Object, _ ...client.GetOption) error {
- if r.Name == name.Name && r.Namespace == name.Namespace {
+ _, isLokiStack := object.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(object, stack)
return nil
}
@@ -1523,7 +1535,9 @@ func TestCreateOrUpdateLokiStack_RemovesRulerResourcesWhenDisabled(t *testing.T)
if ok {
return apierrors.NewNotFound(schema.GroupResource{}, "no ruler config")
}
- if r.Name == name.Name && r.Namespace == name.Namespace {
+
+ _, isLokiStack := out.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(out, &stack)
return nil
}
@@ -1550,7 +1564,7 @@ func TestCreateOrUpdateLokiStack_RemovesRulerResourcesWhenDisabled(t *testing.T)
return nil
}
- k.ListStub = func(_ context.Context, list client.ObjectList, _ ...client.ListOption) error {
+ k.ListStub = func(_ context.Context, list client.ObjectList, options ...client.ListOption) error {
switch list.(type) {
case *corev1.ConfigMapList:
k.SetClientObjectList(list, &corev1.ConfigMapList{
@@ -1588,7 +1602,9 @@ func TestCreateOrUpdateLokiStack_RemovesRulerResourcesWhenDisabled(t *testing.T)
k.SetClientObject(out, &rulerSS)
return nil
}
- if r.Name == name.Name && r.Namespace == name.Namespace {
+
+ _, isLokiStack := out.(*lokiv1.LokiStack)
+ if r.Name == name.Name && r.Namespace == name.Namespace && isLokiStack {
k.SetClientObject(out, &stack)
return nil
}
diff --git a/operator/internal/manifests/build.go b/operator/internal/manifests/build.go
index d19bd488f42af..cc0da5771c0c1 100644
--- a/operator/internal/manifests/build.go
+++ b/operator/internal/manifests/build.go
@@ -15,6 +15,8 @@ import (
func BuildAll(opts Options) ([]client.Object, error) {
res := make([]client.Object, 0)
+ sa := BuildServiceAccount(opts)
+
cm, sha1C, mapErr := LokiConfigMap(opts)
if mapErr != nil {
return nil, mapErr
@@ -52,6 +54,7 @@ func BuildAll(opts Options) ([]client.Object, error) {
}
res = append(res, cm)
+ res = append(res, sa)
res = append(res, distributorObjs...)
res = append(res, ingesterObjs...)
res = append(res, querierObjs...)
diff --git a/operator/internal/manifests/compactor.go b/operator/internal/manifests/compactor.go
index 0362b8d40010c..fc2d9cb602f72 100644
--- a/operator/internal/manifests/compactor.go
+++ b/operator/internal/manifests/compactor.go
@@ -69,7 +69,8 @@ func NewCompactorStatefulSet(opts Options) *appsv1.StatefulSet {
l := ComponentLabels(LabelCompactorComponent, opts.Name)
a := commonAnnotations(opts.ConfigSHA1, opts.ObjectStorage.SecretSHA1, opts.CertRotationRequiredAt)
podSpec := corev1.PodSpec{
- Affinity: configureAffinity(LabelCompactorComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Compactor),
+ ServiceAccountName: opts.Name,
+ Affinity: configureAffinity(LabelCompactorComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Compactor),
Volumes: []corev1.Volume{
{
Name: configVolumeName,
diff --git a/operator/internal/manifests/distributor.go b/operator/internal/manifests/distributor.go
index ea856762cb6ac..e92b09c5ba9d0 100644
--- a/operator/internal/manifests/distributor.go
+++ b/operator/internal/manifests/distributor.go
@@ -69,7 +69,8 @@ func NewDistributorDeployment(opts Options) *appsv1.Deployment {
l := ComponentLabels(LabelDistributorComponent, opts.Name)
a := commonAnnotations(opts.ConfigSHA1, opts.ObjectStorage.SecretSHA1, opts.CertRotationRequiredAt)
podSpec := corev1.PodSpec{
- Affinity: configureAffinity(LabelDistributorComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Distributor),
+ ServiceAccountName: opts.Name,
+ Affinity: configureAffinity(LabelDistributorComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Distributor),
Volumes: []corev1.Volume{
{
Name: configVolumeName,
diff --git a/operator/internal/manifests/indexgateway.go b/operator/internal/manifests/indexgateway.go
index 3f43075875108..18ffc6cdc32a7 100644
--- a/operator/internal/manifests/indexgateway.go
+++ b/operator/internal/manifests/indexgateway.go
@@ -75,7 +75,8 @@ func NewIndexGatewayStatefulSet(opts Options) *appsv1.StatefulSet {
l := ComponentLabels(LabelIndexGatewayComponent, opts.Name)
a := commonAnnotations(opts.ConfigSHA1, opts.ObjectStorage.SecretSHA1, opts.CertRotationRequiredAt)
podSpec := corev1.PodSpec{
- Affinity: configureAffinity(LabelIndexGatewayComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.IndexGateway),
+ ServiceAccountName: opts.Name,
+ Affinity: configureAffinity(LabelIndexGatewayComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.IndexGateway),
Volumes: []corev1.Volume{
{
Name: configVolumeName,
diff --git a/operator/internal/manifests/ingester.go b/operator/internal/manifests/ingester.go
index 0f20ce84776fd..50740a6a640d8 100644
--- a/operator/internal/manifests/ingester.go
+++ b/operator/internal/manifests/ingester.go
@@ -75,7 +75,8 @@ func NewIngesterStatefulSet(opts Options) *appsv1.StatefulSet {
l := ComponentLabels(LabelIngesterComponent, opts.Name)
a := commonAnnotations(opts.ConfigSHA1, opts.ObjectStorage.SecretSHA1, opts.CertRotationRequiredAt)
podSpec := corev1.PodSpec{
- Affinity: configureAffinity(LabelIngesterComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Ingester),
+ ServiceAccountName: opts.Name,
+ Affinity: configureAffinity(LabelIngesterComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Ingester),
Volumes: []corev1.Volume{
{
Name: configVolumeName,
diff --git a/operator/internal/manifests/querier.go b/operator/internal/manifests/querier.go
index b75997e4553c5..ad5bd3cdda348 100644
--- a/operator/internal/manifests/querier.go
+++ b/operator/internal/manifests/querier.go
@@ -75,7 +75,8 @@ func NewQuerierDeployment(opts Options) *appsv1.Deployment {
l := ComponentLabels(LabelQuerierComponent, opts.Name)
a := commonAnnotations(opts.ConfigSHA1, opts.ObjectStorage.SecretSHA1, opts.CertRotationRequiredAt)
podSpec := corev1.PodSpec{
- Affinity: configureAffinity(LabelQuerierComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Querier),
+ ServiceAccountName: opts.Name,
+ Affinity: configureAffinity(LabelQuerierComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.Querier),
Volumes: []corev1.Volume{
{
Name: configVolumeName,
diff --git a/operator/internal/manifests/query-frontend.go b/operator/internal/manifests/query-frontend.go
index 119f28f7e4f72..0cacb3076aaef 100644
--- a/operator/internal/manifests/query-frontend.go
+++ b/operator/internal/manifests/query-frontend.go
@@ -69,7 +69,8 @@ func NewQueryFrontendDeployment(opts Options) *appsv1.Deployment {
l := ComponentLabels(LabelQueryFrontendComponent, opts.Name)
a := commonAnnotations(opts.ConfigSHA1, opts.ObjectStorage.SecretSHA1, opts.CertRotationRequiredAt)
podSpec := corev1.PodSpec{
- Affinity: configureAffinity(LabelQueryFrontendComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.QueryFrontend),
+ ServiceAccountName: opts.Name,
+ Affinity: configureAffinity(LabelQueryFrontendComponent, opts.Name, opts.Gates.DefaultNodeAffinity, opts.Stack.Template.QueryFrontend),
Volumes: []corev1.Volume{
{
Name: configVolumeName,
diff --git a/operator/internal/manifests/serviceaccount.go b/operator/internal/manifests/serviceaccount.go
new file mode 100644
index 0000000000000..114af1ab3740b
--- /dev/null
+++ b/operator/internal/manifests/serviceaccount.go
@@ -0,0 +1,25 @@
+package manifests
+
+import (
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/utils/pointer"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+)
+
+// BuildServiceAccount returns a k8s object for the LokiStack
+// serviceaccount.
+func BuildServiceAccount(opts Options) client.Object {
+ return &corev1.ServiceAccount{
+ TypeMeta: metav1.TypeMeta{
+ Kind: "ServiceAccount",
+ APIVersion: corev1.SchemeGroupVersion.String(),
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: opts.Name,
+ Namespace: opts.Namespace,
+ Labels: commonLabels(opts.Name),
+ },
+ AutomountServiceAccountToken: pointer.Bool(true),
+ }
+}
diff --git a/operator/internal/manifests/serviceaccount_test.go b/operator/internal/manifests/serviceaccount_test.go
new file mode 100644
index 0000000000000..dc08b62f700c0
--- /dev/null
+++ b/operator/internal/manifests/serviceaccount_test.go
@@ -0,0 +1,69 @@
+package manifests
+
+import (
+ "testing"
+
+ "github.com/stretchr/testify/assert"
+
+ lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
+ "github.com/grafana/loki/operator/internal/manifests/storage"
+)
+
+func TestServiceAccountName_MatchesPodSpecServiceAccountName(t *testing.T) {
+ opts := Options{
+ Name: "lokistack",
+ Namespace: "ns",
+ Stack: lokiv1.LokiStackSpec{
+ Template: &lokiv1.LokiTemplateSpec{
+ Compactor: &lokiv1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ Distributor: &lokiv1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ Ingester: &lokiv1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ Querier: &lokiv1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ QueryFrontend: &lokiv1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ IndexGateway: &lokiv1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ Ruler: &lokiv1.LokiComponentSpec{
+ Replicas: 1,
+ },
+ },
+ },
+ ObjectStorage: storage.Options{},
+ }
+
+ sa := BuildServiceAccount(opts)
+
+ t.Run("distributor", func(t *testing.T) {
+ assert.Equal(t, sa.GetName(), NewDistributorDeployment(opts).Spec.Template.Spec.ServiceAccountName)
+ })
+
+ t.Run("query_frontend", func(t *testing.T) {
+ assert.Equal(t, sa.GetName(), NewQueryFrontendDeployment(opts).Spec.Template.Spec.ServiceAccountName)
+ })
+
+ t.Run("querier", func(t *testing.T) {
+ assert.Equal(t, sa.GetName(), NewQuerierDeployment(opts).Spec.Template.Spec.ServiceAccountName)
+ })
+
+ t.Run("ingester", func(t *testing.T) {
+ assert.Equal(t, sa.GetName(), NewIngesterStatefulSet(opts).Spec.Template.Spec.ServiceAccountName)
+ })
+
+ t.Run("compactor", func(t *testing.T) {
+ assert.Equal(t, sa.GetName(), NewCompactorStatefulSet(opts).Spec.Template.Spec.ServiceAccountName)
+ })
+
+ t.Run("index_gateway", func(t *testing.T) {
+ assert.Equal(t, sa.GetName(), NewIndexGatewayStatefulSet(opts).Spec.Template.Spec.ServiceAccountName)
+ })
+}
|
operator
|
Add serviceaccount per lokistack resource (#11533)
|
d1fd6b78832e2c4dff1e0adf7bcf1b884ca7c251
|
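The test changes above repeatedly add a comma-ok type assertion (`_, isLokiStack := out.(*lokiv1.LokiStack)`) so the fake `Get` stub only populates the out object when it is the expected concrete type. A small self-contained sketch of that Go idiom, with placeholder types instead of the operator's API objects:

```go
package main

import "fmt"

// Pod and Service are stand-ins for concrete client.Object types.
type Pod struct{ Name string }
type Service struct{ Name string }

// isPod uses the comma-ok form of a type assertion: it reports whether
// the interface value holds a *Pod, without panicking when it does not.
func isPod(obj interface{}) bool {
	_, ok := obj.(*Pod)
	return ok
}

func main() {
	fmt.Println(isPod(&Pod{}), isPod(&Service{})) // true false
}
```

Without the `, ok` form, an assertion on the wrong type panics, which is exactly what the guarded stubs in the test avoid.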
2023-05-26 22:32:14
|
risson
|
helm: gateway: also listen on IPv6 (#9510)
| false
|
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index 84968f856fec7..44363218a0159 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,10 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # (<AUTOMATED_UPDATES_LOCATOR> : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries bellow this line.)
+## 5.5.10
+
+- [CHANGE] Make the gateway listen on IPv6 as well as IPv4
+
## 5.5.9
- [FEATURE] Add `loki.configStorageType` & `loki.externalConfigSecretName` values to chart and templates.
diff --git a/production/helm/loki/Chart.yaml b/production/helm/loki/Chart.yaml
index 8cebd7446dfea..67771698e628b 100644
--- a/production/helm/loki/Chart.yaml
+++ b/production/helm/loki/Chart.yaml
@@ -3,7 +3,7 @@ name: loki
description: Helm chart for Grafana Loki in simple, scalable mode
type: application
appVersion: 2.8.2
-version: 5.5.9
+version: 5.5.10
home: https://grafana.github.io/helm-charts
sources:
- https://github.com/grafana/loki
diff --git a/production/helm/loki/README.md b/production/helm/loki/README.md
index ceafb83f357a4..92b8a21cc7f0b 100644
--- a/production/helm/loki/README.md
+++ b/production/helm/loki/README.md
@@ -1,6 +1,6 @@
# loki
-  
+  
Helm chart for Grafana Loki in simple, scalable mode
diff --git a/production/helm/loki/templates/_helpers.tpl b/production/helm/loki/templates/_helpers.tpl
index 340abdbd2eeff..6c8051dacae2b 100644
--- a/production/helm/loki/templates/_helpers.tpl
+++ b/production/helm/loki/templates/_helpers.tpl
@@ -600,6 +600,7 @@ http {
server {
listen 8080;
+ listen [::]:8080;
{{- if .Values.gateway.basicAuth.enabled }}
auth_basic "Loki";
@@ -793,4 +794,4 @@ enableServiceLinks: false
{{- $schedulerAddress = printf "query-scheduler-discovery.%s.svc.%s.:9095" .Release.Namespace .Values.global.clusterDomain -}}
{{- end -}}
{{- printf "%s" $schedulerAddress }}
-{{- end }}
\ No newline at end of file
+{{- end }}
|
helm
|
gateway: also listen on IPv6 (#9510)
|
3922d3864a8a2af09df236bad34e6bbb053181ea
|
2024-03-29 05:46:42
|
Owen Diehl
|
fix(tsdb): correctly use bit prefix calculation in tsdb shard matching (#12394)
| false
|
diff --git a/pkg/logql/shards.go b/pkg/logql/shards.go
index 7ca7f67cb367d..7d35cea26d761 100644
--- a/pkg/logql/shards.go
+++ b/pkg/logql/shards.go
@@ -159,7 +159,7 @@ func (s *Shard) Match(fp model.Fingerprint) bool {
return v1.BoundsFromProto(s.Bounded.Bounds).Match(fp)
}
- return s.PowerOfTwo.Match(fp)
+ return s.PowerOfTwo.TSDB().Match(fp)
}
func (s *Shard) GetFromThrough() (model.Fingerprint, model.Fingerprint) {
|
fix
|
correctly use bit prefix calculation in tsdb shard matching (#12394)
|
630a491580da45c5f5980958c55c205ea60c08ef
|
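The one-line fix above routes the power-of-two shard through its TSDB representation so matching uses a bit-prefix calculation. The underlying idea: with a power-of-two shard count, a fingerprint belongs to the shard named by its top log2(n) bits. A hypothetical sketch of that scheme (not Loki's actual code):

```go
package main

import (
	"fmt"
	"math/bits"
)

// matchesShard reports whether a 64-bit fingerprint falls into shard
// `shard` out of `of` total shards, where `of` is a power of two: the
// shard index is simply the fingerprint's top log2(of) bits.
func matchesShard(fp, shard, of uint64) bool {
	n := uint(bits.TrailingZeros64(of)) // log2(of) for a power of two
	return fp>>(64-n) == shard          // Go defines x>>64 as 0, so of=1 matches shard 0
}

func main() {
	fmt.Println(matchesShard(0xFFFFFFFFFFFFFFFF, 3, 4)) // top 2 bits are 11 -> true
}
```

Because shards partition the fingerprint space into contiguous high-bit ranges, each shard also corresponds to a contiguous `[from, through)` fingerprint interval, which is what `GetFromThrough` in the surrounding code exposes.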
2022-09-12 19:13:05
|
Periklis Tsirakidis
|
loki: Attach the panic recovery handler on all HTTP handlers (#6780)
| false
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index d8bd5616f1d29..1c1db5999c77c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -20,6 +20,7 @@
##### Fixes
* [6937](https://github.com/grafana/loki/pull/6937) **ssncferreira**: Fix topk and bottomk expressions with parameter <= 0.
+* [6780](https://github.com/grafana/loki/pull/6780) **periklis**: Attach the panic recovery handler on all HTTP handlers
* [6358](https://github.com/grafana/loki/pull/6358) **taharah**: Fixes sigv4 authentication for the Ruler's remote write configuration by allowing both a global and per tenant configuration.
* [6375](https://github.com/grafana/loki/pull/6375) **dannykopping**: Fix bug that prevented users from using the `json` parser after a `line_format` pipeline stage.
* [6505](https://github.com/grafana/loki/pull/6375) **dmitri-lerko** Fixes `failed to receive pubsub messages` error with promtail GCPLog client.
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 80c53429387d7..74aa8436f4667 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -130,17 +130,20 @@ func (t *Loki) initServer() (services.Service, error) {
s := NewServerService(t.Server, servicesToWaitFor)
// Best effort to propagate the org ID from the start.
- t.Server.HTTPServer.Handler = func(next http.Handler) http.Handler {
+ h := func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !t.Cfg.AuthEnabled {
next.ServeHTTP(w, r.WithContext(user.InjectOrgID(r.Context(), "fake")))
return
}
+
_, ctx, _ := user.ExtractOrgIDFromHTTPRequest(r)
next.ServeHTTP(w, r.WithContext(ctx))
})
}(t.Server.HTTPServer.Handler)
+ t.Server.HTTPServer.Handler = middleware.Merge(serverutil.RecoveryHTTPMiddleware).Wrap(h)
+
return s, nil
}
|
loki
|
Attach the panic recovery handler on all HTTP handlers (#6780)
|
9a99b05ee9192632affce77f07bc1fbdac7596aa
|
2024-06-14 18:10:43
|
Javier Garea
|
docs(helm): Fix broken `Values.yaml` link in the examples docs (#13219)
| false
|
diff --git a/production/helm/loki/docs/examples/enterprise/README.md b/production/helm/loki/docs/examples/enterprise/README.md
index d28b48ed985a4..82c0d28a2c876 100644
--- a/production/helm/loki/docs/examples/enterprise/README.md
+++ b/production/helm/loki/docs/examples/enterprise/README.md
@@ -14,7 +14,7 @@ Deploy the secrets file to your k8s cluster with the command:
`kubectl apply -f enterprise-secrets.yaml`
### Configure the Helm Chart
-Open [overrides-enterprise-gcs.yaml](./overrides-enterprise-gcs.yaml) and replace `{YOUR_GCS_BUCKET}` with the name of your GCS bucket. If there are other things you'd like to configure, view the core [Values.yaml file](https://github.com/grafana/helm-charts/blob/main/charts/loki-simple-scalable/values.yaml) and override anything else you need to within the overrides-enterprise-gcs.yaml file.
+Open [overrides-enterprise-gcs.yaml](./overrides-enterprise-gcs.yaml) and replace `{YOUR_GCS_BUCKET}` with the name of your GCS bucket. If there are other things you'd like to configure, view the core [Values.yaml file](https://github.com/grafana/loki/blob/main/production/helm/loki/values.yaml) and override anything else you need to within the overrides-enterprise-gcs.yaml file.
### Install the Helm chart
diff --git a/production/helm/loki/docs/examples/oss/README.md b/production/helm/loki/docs/examples/oss/README.md
index 0326de3f232f0..9a0a410c651ca 100644
--- a/production/helm/loki/docs/examples/oss/README.md
+++ b/production/helm/loki/docs/examples/oss/README.md
@@ -13,7 +13,7 @@ Deploy the secrets file to your k8s cluster.
`kubectl apply -f loki-secrets.yaml`
### Configure the Helm Chart
-Open examples/enterprise/overides-oss-gcs.yaml and replace `{YOUR_GCS_BUCKET}` with the name of your GCS bucket. If there are other things you'd like to configure, view the core [Values.yaml file](https://github.com/grafana/helm-charts/blob/main/charts/loki-simple-scalable/values.yaml) and override anything else you need to within the overrides-enterprise-gcs.yaml file.
+Open examples/enterprise/overides-oss-gcs.yaml and replace `{YOUR_GCS_BUCKET}` with the name of your GCS bucket. If there are other things you'd like to configure, view the core [Values.yaml file](https://github.com/grafana/loki/blob/main/production/helm/loki/values.yaml) and override anything else you need to within the overrides-enterprise-gcs.yaml file.
### Install the Helm chart
|
docs
|
Fix broken `Values.yaml` link in the examples docs (#13219)
|
24fd797c1eae79d1296bcf4d7e53aa230fe07963
|
2025-03-07 21:23:17
|
renovate[bot]
|
chore(deps): update prom/alertmanager docker tag to v0.28.1 (main) (#16633)
| false
|
diff --git a/production/docker/docker-compose.yaml b/production/docker/docker-compose.yaml
index d9148faa6e552..b73d3f3ead6cc 100644
--- a/production/docker/docker-compose.yaml
+++ b/production/docker/docker-compose.yaml
@@ -180,7 +180,7 @@ services:
# alertmanager to enable receiving alerts
alertmanager:
- image: prom/alertmanager:v0.28.0
+ image: prom/alertmanager:v0.28.1
restart: unless-stopped
ports:
- "9093:9093"
|
chore
|
update prom/alertmanager docker tag to v0.28.1 (main) (#16633)
|
1a75bb8e22ae194c93e1018384484a20f2fc01c3
|
2022-07-18 23:44:24
|
Ed Welch
|
loki: Return an __error_details__ label for any line which incurs a __error__ while being processed (#6543)
| false
|
diff --git a/pkg/logql/log/fmt.go b/pkg/logql/log/fmt.go
index 75a3f8a8085b4..04d5f896cd0e1 100644
--- a/pkg/logql/log/fmt.go
+++ b/pkg/logql/log/fmt.go
@@ -137,6 +137,7 @@ func (lf *LineFormatter) Process(ts int64, line []byte, lbs *LabelsBuilder) ([]b
if err := lf.Template.Execute(lf.buf, lbs.Map()); err != nil {
lbs.SetErr(errTemplateFormat)
+ lbs.SetErrorDetails(err.Error())
return line, true
}
return lf.buf.Bytes(), true
@@ -295,6 +296,7 @@ func (lf *LabelsFormatter) Process(_ int64, l []byte, lbs *LabelsBuilder) ([]byt
}
if err := f.tmpl.Execute(lf.buf, data); err != nil {
lbs.SetErr(errTemplateFormat)
+ lbs.SetErrorDetails(err.Error())
continue
}
lbs.Set(f.Name, lf.buf.String())
diff --git a/pkg/logql/log/fmt_test.go b/pkg/logql/log/fmt_test.go
index 697e3f382e08c..f7069833354f2 100644
--- a/pkg/logql/log/fmt_test.go
+++ b/pkg/logql/log/fmt_test.go
@@ -167,7 +167,11 @@ func Test_lineFormatter_Format(t *testing.T) {
labels.Labels{{Name: "foo", Value: "blip"}, {Name: "bar", Value: "blop"}},
0,
nil,
- labels.Labels{{Name: logqlmodel.ErrorLabel, Value: errTemplateFormat}, {Name: "foo", Value: "blip"}, {Name: "bar", Value: "blop"}},
+ labels.Labels{
+ {Name: "__error__", Value: "TemplateFormatErr"},
+ {Name: "foo", Value: "blip"}, {Name: "bar", Value: "blop"},
+ {Name: "__error_details__", Value: "template: line:1:2: executing \"line\" at <.foo>: foo is not a method but has arguments"},
+ },
nil,
},
{
@@ -323,6 +327,20 @@ func Test_lineFormatter_Format(t *testing.T) {
labels.Labels{{Name: "bar", Value: "2"}},
[]byte("1"),
},
+ {
+ "template_error",
+ newMustLineFormatter("{{.foo | now}}"),
+ labels.Labels{{Name: "foo", Value: "blip"}, {Name: "bar", Value: "blop"}},
+ 0,
+ nil,
+ labels.Labels{
+ {Name: "foo", Value: "blip"},
+ {Name: "bar", Value: "blop"},
+ {Name: "__error__", Value: "TemplateFormatErr"},
+ {Name: "__error_details__", Value: "template: line:1:9: executing \"line\" at <now>: wrong number of args for now: want 0 got 1"},
+ },
+ nil,
+ },
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -391,6 +409,17 @@ func Test_labelsFormatter_Format(t *testing.T) {
labels.Labels{{Name: "bar", Value: "blop"}},
labels.Labels{{Name: "blip", Value: "- and blop"}, {Name: "bar", Value: "blop"}},
},
+ {
+ "template error",
+ mustNewLabelsFormatter([]LabelFmt{NewTemplateLabelFmt("bar", "{{replace \"test\" .foo}}")}),
+ labels.Labels{{Name: "foo", Value: "blip"}, {Name: "bar", Value: "blop"}},
+ labels.Labels{
+ {Name: "foo", Value: "blip"},
+ {Name: "bar", Value: "blop"},
+ {Name: "__error__", Value: "TemplateFormatErr"},
+ {Name: "__error_details__", Value: "template: label:1:2: executing \"label\" at <replace>: wrong number of args for replace: want 3 got 2"},
+ },
+ },
}
for _, tt := range tests {
diff --git a/pkg/logql/log/label_filter.go b/pkg/logql/log/label_filter.go
index c5b8f9eebc1f4..00871d44d156e 100644
--- a/pkg/logql/log/label_filter.go
+++ b/pkg/logql/log/label_filter.go
@@ -170,6 +170,7 @@ func (d *BytesLabelFilter) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]
value, err := humanize.ParseBytes(v)
if err != nil {
lbs.SetErr(errLabelFilter)
+ lbs.SetErrorDetails(err.Error())
return line, true
}
switch d.Type {
@@ -234,6 +235,7 @@ func (d *DurationLabelFilter) Process(_ int64, line []byte, lbs *LabelsBuilder)
value, err := time.ParseDuration(v)
if err != nil {
lbs.SetErr(errLabelFilter)
+ lbs.SetErrorDetails(err.Error())
return line, true
}
switch d.Type {
@@ -292,6 +294,7 @@ func (n *NumericLabelFilter) Process(_ int64, line []byte, lbs *LabelsBuilder) (
value, err := strconv.ParseFloat(v, 64)
if err != nil {
lbs.SetErr(errLabelFilter)
+ lbs.SetErrorDetails(err.Error())
return line, true
}
switch n.Type {
diff --git a/pkg/logql/log/label_filter_test.go b/pkg/logql/log/label_filter_test.go
index a0285655c464b..2bc3c232d4081 100644
--- a/pkg/logql/log/label_filter_test.go
+++ b/pkg/logql/log/label_filter_test.go
@@ -148,6 +148,42 @@ func TestBinary_Filter(t *testing.T) {
{Name: "method", Value: "POST"},
},
},
+ {
+ NewDurationLabelFilter(LabelFilterGreaterThan, "duration", 3*time.Second),
+ labels.Labels{
+ {Name: "duration", Value: "2weeeeee"},
+ },
+ true,
+ labels.Labels{
+ {Name: "duration", Value: "2weeeeee"},
+ {Name: "__error__", Value: "LabelFilterErr"},
+ {Name: "__error_details__", Value: "time: unknown unit \"weeeeee\" in duration \"2weeeeee\""},
+ },
+ },
+ {
+ NewBytesLabelFilter(LabelFilterGreaterThan, "bytes", 100),
+ labels.Labels{
+ {Name: "bytes", Value: "2qb"},
+ },
+ true,
+ labels.Labels{
+ {Name: "bytes", Value: "2qb"},
+ {Name: "__error__", Value: "LabelFilterErr"},
+ {Name: "__error_details__", Value: "unhandled size name: qb"},
+ },
+ },
+ {
+ NewNumericLabelFilter(LabelFilterGreaterThan, "number", 100),
+ labels.Labels{
+ {Name: "number", Value: "not_a_number"},
+ },
+ true,
+ labels.Labels{
+ {Name: "number", Value: "not_a_number"},
+ {Name: "__error__", Value: "LabelFilterErr"},
+ {Name: "__error_details__", Value: "strconv.ParseFloat: parsing \"not_a_number\": invalid syntax"},
+ },
+ },
}
for _, tt := range tests {
t.Run(tt.f.String(), func(t *testing.T) {
diff --git a/pkg/logql/log/labels.go b/pkg/logql/log/labels.go
index fdf0474839be4..b432e93ab5816 100644
--- a/pkg/logql/log/labels.go
+++ b/pkg/logql/log/labels.go
@@ -69,6 +69,8 @@ type BaseLabelsBuilder struct {
// nolint:structcheck
// https://github.com/golangci/golangci-lint/issues/826
err string
+ // nolint:structcheck
+ errDetails string
groups []string
parserKeyHints ParserHint // label key hints for metric queries that allows to limit parser extractions to only this list of labels.
@@ -134,6 +136,7 @@ func (b *LabelsBuilder) Reset() {
b.del = b.del[:0]
b.add = b.add[:0]
b.err = ""
+ b.errDetails = ""
}
// ParserLabelHints returns a limited list of expected labels to extract for metric queries.
@@ -158,6 +161,19 @@ func (b *LabelsBuilder) HasErr() bool {
return b.err != ""
}
+func (b *LabelsBuilder) SetErrorDetails(desc string) *LabelsBuilder {
+ b.errDetails = desc
+ return b
+}
+
+func (b *LabelsBuilder) GetErrorDetails() string {
+ return b.errDetails
+}
+
+func (b *LabelsBuilder) HasErrorDetails() bool {
+ return b.errDetails != ""
+}
+
// BaseHas returns the base labels have the given key
func (b *LabelsBuilder) BaseHas(key string) bool {
return b.base.Has(key)
@@ -229,6 +245,9 @@ func (b *LabelsBuilder) unsortedLabels(buf labels.Labels) labels.Labels {
if b.err != "" {
buf = append(buf, labels.Label{Name: logqlmodel.ErrorLabel, Value: b.err})
}
+ if b.errDetails != "" {
+ buf = append(buf, labels.Label{Name: logqlmodel.ErrorDetailsLabel, Value: b.errDetails})
+ }
return buf
}
diff --git a/pkg/logql/log/metrics_extraction.go b/pkg/logql/log/metrics_extraction.go
index 75466007ed336..bebf0791b08af 100644
--- a/pkg/logql/log/metrics_extraction.go
+++ b/pkg/logql/log/metrics_extraction.go
@@ -185,6 +185,7 @@ func (l *streamLabelSampleExtractor) Process(ts int64, line []byte) (float64, La
v, err = l.conversionFn(stringValue)
if err != nil {
l.builder.SetErr(errSampleExtraction)
+ l.builder.SetErrorDetails(err.Error())
}
}
// post filters
diff --git a/pkg/logql/log/metrics_extraction_test.go b/pkg/logql/log/metrics_extraction_test.go
index 0ef32f087e505..f6d456051d853 100644
--- a/pkg/logql/log/metrics_extraction_test.go
+++ b/pkg/logql/log/metrics_extraction_test.go
@@ -109,6 +109,24 @@ func Test_labelSampleExtractor_Extract(t *testing.T) {
},
true,
},
+ {
+ "not convertable",
+ mustSampleExtractor(LabelExtractorWithStages(
+ "foo", ConvertFloat, []string{"bar", "buzz"}, false, false, nil, NoopStage,
+ )),
+ labels.Labels{
+ {Name: "foo", Value: "not_a_number"},
+ {Name: "bar", Value: "foo"},
+ },
+ 0,
+ labels.Labels{
+ {Name: "__error__", Value: "SampleExtractionErr"},
+ {Name: "__error_details__", Value: "strconv.ParseFloat: parsing \"not_a_number\": invalid syntax"},
+ {Name: "bar", Value: "foo"},
+ {Name: "foo", Value: "not_a_number"},
+ },
+ true,
+ },
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
diff --git a/pkg/logql/log/parser.go b/pkg/logql/log/parser.go
index 5cabbed5a23fd..7175937675f37 100644
--- a/pkg/logql/log/parser.go
+++ b/pkg/logql/log/parser.go
@@ -62,6 +62,7 @@ func (j *JSONParser) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]byte,
if err := j.readObject(it); err != nil {
lbs.SetErr(errJSON)
+ lbs.SetErrorDetails(err.Error())
return line, true
}
return line, true
@@ -296,6 +297,7 @@ func (l *LogfmtParser) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]byte
}
if l.dec.Err() != nil {
lbs.SetErr(errLogfmt)
+ lbs.SetErrorDetails(l.dec.Err().Error())
return line, true
}
return line, true
@@ -431,6 +433,7 @@ func (u *UnpackParser) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]byte
entry, err := u.unpack(it, line, lbs)
if err != nil {
lbs.SetErr(errJSON)
+ lbs.SetErrorDetails(err.Error())
return line, true
}
return entry, true
diff --git a/pkg/logql/log/parser_test.go b/pkg/logql/log/parser_test.go
index 740c7ead6cbc0..b33e6501187e0 100644
--- a/pkg/logql/log/parser_test.go
+++ b/pkg/logql/log/parser_test.go
@@ -79,7 +79,8 @@ func Test_jsonParser_Parse(t *testing.T) {
[]byte(`{n}`),
labels.Labels{},
labels.Labels{
- {Name: logqlmodel.ErrorLabel, Value: errJSON},
+ {Name: "__error__", Value: "JSONParserErr"},
+ {Name: "__error_details__", Value: "ReadMapCB: expect \" after {, but found n, error found in #2 byte of ...|{n}|..., bigger context ...|{n}|..."},
},
},
{
@@ -570,7 +571,8 @@ func Test_logfmtParser_Parse(t *testing.T) {
},
labels.Labels{
{Name: "foo", Value: "bar"},
- {Name: logqlmodel.ErrorLabel, Value: errLogfmt},
+ {Name: "__error__", Value: "LogfmtParserErr"},
+ {Name: "__error_details__", Value: "logfmt syntax error at pos 8 : unexpected '='"},
},
},
{
@@ -746,6 +748,7 @@ func Test_unpackParser_Parse(t *testing.T) {
labels.Labels{},
labels.Labels{
{Name: "__error__", Value: "JSONParserErr"},
+ {Name: "__error_details__", Value: "expecting json object(6), but it is not"},
},
[]byte(`"app":"foo","namespace":"prod","_entry":"some message","pod":{"uid":"1"}`),
},
@@ -755,6 +758,7 @@ func Test_unpackParser_Parse(t *testing.T) {
labels.Labels{{Name: "cluster", Value: "us-central1"}},
labels.Labels{
{Name: "__error__", Value: "JSONParserErr"},
+ {Name: "__error_details__", Value: "expecting json object(6), but it is not"},
{Name: "cluster", Value: "us-central1"},
},
[]byte(`["foo","bar"]`),
diff --git a/pkg/logqlmodel/error.go b/pkg/logqlmodel/error.go
index 3d0c5aaebbee0..35078d845a4ab 100644
--- a/pkg/logqlmodel/error.go
+++ b/pkg/logqlmodel/error.go
@@ -10,10 +10,11 @@ import (
// Those errors are useful for comparing error returned by the engine.
// e.g. errors.Is(err,logqlmodel.ErrParse) let you know if this is a ast parsing error.
var (
- ErrParse = errors.New("failed to parse the log query")
- ErrPipeline = errors.New("failed execute pipeline")
- ErrLimit = errors.New("limit reached while evaluating the query")
- ErrorLabel = "__error__"
+ ErrParse = errors.New("failed to parse the log query")
+ ErrPipeline = errors.New("failed execute pipeline")
+ ErrLimit = errors.New("limit reached while evaluating the query")
+ ErrorLabel = "__error__"
+ ErrorDetailsLabel = "__error_details__"
)
// ParseError is what is returned when we failed to parse.
|
loki
|
Return an __error_details__ label for any line which incurs a __error__ while being processed (#6543)
|
4e4d4a962e9d1486b456693a4e951641e3ec6b3a
|
2024-10-30 18:11:38
|
Jay Clifford
|
docs: Deploy Loki Helm on AWS guide (#14517)
| false
|
diff --git a/docs/sources/setup/install/helm/_index.md b/docs/sources/setup/install/helm/_index.md
index ab06ae644a801..5392838e2ca74 100644
--- a/docs/sources/setup/install/helm/_index.md
+++ b/docs/sources/setup/install/helm/_index.md
@@ -22,6 +22,13 @@ This guide references the Loki Helm chart version 3.0 or greater and contains th
If you are installing Grafana Enterprise Logs, follow the [GEL Helm installation](https://grafana.com/docs/enterprise-logs/<ENTERPRISE_LOGS_VERSION>/setup/helm/).
+
+## Cloud Deployment Guides
+
+The following guides provide step-by-step instructions for deploying Loki on cloud providers:
+
+- [Amazon EKS](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/deployment-guides/aws/)
+
## Reference
-[Values reference]({{< relref "./reference" >}})
+[Values reference](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/reference/)
diff --git a/docs/sources/setup/install/helm/deployment-guides/_index.md b/docs/sources/setup/install/helm/deployment-guides/_index.md
new file mode 100644
index 0000000000000..4ff7d5dcaa983
--- /dev/null
+++ b/docs/sources/setup/install/helm/deployment-guides/_index.md
@@ -0,0 +1,13 @@
+---
+title: Cloud Deployment Guides
+menuTitle: Cloud Deployment Guides
+description: Step-by-step instructions for deploying Loki on cloud providers.
+weight: 500
+keywords:
+---
+
+# Cloud Deployment Guides
+
+The following guides provide step-by-step instructions for deploying Loki on cloud providers:
+
+- [Deploy Loki on AWS](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/deployment-guides/aws/)
\ No newline at end of file
diff --git a/docs/sources/setup/install/helm/deployment-guides/aws.md b/docs/sources/setup/install/helm/deployment-guides/aws.md
new file mode 100644
index 0000000000000..bbe80da5e7761
--- /dev/null
+++ b/docs/sources/setup/install/helm/deployment-guides/aws.md
@@ -0,0 +1,676 @@
+---
+title: Deploy the Loki Helm chart on AWS
+menuTitle: Deploy on AWS
+description: Installing the Loki Helm chart on AWS.
+keywords:
+---
+
+# Deploy the Loki Helm chart on AWS
+
+This guide shows how to deploy a minimally viable Loki in **microservice** mode on AWS using the Helm chart. To run through this guide, we expect you to have the necessary tools and permissions to deploy resources on AWS, such as:
+
+- Full access to EKS (Amazon Elastic Kubernetes Service)
+- Full access to S3 (Amazon Simple Storage Service)
+- Sufficient permissions to create IAM (Identity and Access Management) roles and policies
+
+There are two methods for authenticating and connecting Loki to AWS S3. We will guide you through the recommended method of granting access via an IAM role.
+
+## Considerations
+
+{{< admonition type="caution" >}}
+This guide was accurate at the time it was last updated on **21st October, 2024**. As cloud providers frequently update their services and offerings, as a best practice, you should refer to the [AWS S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) before creating your buckets and assigning roles.
+{{< /admonition >}}
+
+- **IAM Role:** The IAM role created in this guide is a basic role that allows Loki to read and write to the S3 bucket. You may wish to add more granular permissions based on your requirements.
+
+- **Authentication:** Grafana Loki comes with a basic authentication layer. The Loki gateway (NGINX) is exposed to the internet using basic authentication in this example. NGINX can also be replaced with other open-source reverse proxies. Refer to [Authentication](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/authentication/) for more information.
+
+- **Retention:** The retention period is set to 28 days in the `values.yaml` file. You may wish to adjust this based on your requirements.
+
+- **Costs:** Running Loki on AWS will incur costs. Make sure to monitor your usage and costs to avoid any unexpected bills. In this guide we have used a simple EKS cluster with 3 nodes and m5.xlarge instances. You may wish to adjust the instance types and number of nodes based on your workload.
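+
+The retention behaviour called out above is driven by two blocks of the Loki configuration; a minimal sketch of the relevant keys, which appear again in the full `values.yaml` later in this guide:
+
```yaml
# Retention is enforced by the compactor; both blocks live under the `loki`
# section of values.yaml. 672h = 28 days -- adjust to your requirements.
limits_config:
  retention_period: 672h
compactor:
  retention_enabled: true
  delete_request_store: s3
```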
+
+## Prerequisites
+
+- Helm 3 or above. Refer to [Installing Helm](https://helm.sh/docs/intro/install/). This should be installed on your local machine.
+- A running Kubernetes cluster on AWS. A simple way to get started is by using EKSctl. Refer to [Getting started with EKSctl](https://eksctl.io/).
+- Kubectl installed on your local machine. Refer to [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
+- (Optional) AWS CLI installed on your local machine. Refer to [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html). This is required if you plan to use EKSctl to create the EKS cluster and modify the IAM roles and policies locally.
+
+### EKS Minimum Requirements
+
+{{< admonition type="caution" >}}
+These EKS requirements are the minimum specification needed to deploy Loki using this guide. You may wish to adjust plugins and instance types based on your AWS environment and workload. **If you choose to do so, we cannot guarantee that this sample configuration will still meet your needs.**
+
+In this guide, we deploy Loki using `m5.xlarge` instances. This is an instance type that should work for most scenarios. However, you can modify the instance types and count based on your specific needs.
+{{< /admonition >}}
+
+The minimum requirements for deploying Loki on EKS are:
+
+- Kubernetes version `1.30` or above.
+- `3` nodes for the EKS cluster.
+- Instance type depends on your workload. A good starting point is `m5.xlarge`.
+
+Here is the EKSctl cluster configuration file used in this guide:
+
+```yaml
+# A simple example of ClusterConfig object:
+---
+apiVersion: eksctl.io/v1alpha5
+kind: ClusterConfig
+
+metadata:
+ name: <INSERT-NAME>
+ region: <INSERT-REGION>
+ version: "1.31"
+
+iam:
+ withOIDC: true
+
+addons:
+ - name: aws-ebs-csi-driver
+ - name: eks-pod-identity-agent
+
+managedNodeGroups:
+ - name: loki-workers
+ instanceType: m5.xlarge
+ desiredCapacity: 3
+ minSize: 2
+ maxSize: 3
+ amiFamily: AmazonLinux2
+ iam:
+ withAddonPolicies:
+ ebs: true
+ volumeSize: 80
+ volumeType: gp2
+ ebsOptimized: true
+
+```
+
+
+The following plugins must also be installed within the EKS cluster:
+- **Amazon EBS CSI Driver**: Enables Kubernetes to dynamically provision and manage EBS volumes as persistent storage for applications. We use this to provision the node volumes for Loki.
+- **Amazon EKS Pod Identity Agent**: Manages AWS IAM roles for pods, allowing fine-grained access control to AWS resources without needing to store credentials in containers. This is how Loki will access the S3 bucket.
+- **CoreDNS**: Provides internal DNS service for Kubernetes clusters, ensuring that services and pods can communicate with each other using DNS names.
+- **kube-proxy**: Maintains network rules on nodes, enabling communication between pods and services within the cluster.
+
+You must also install an **OIDC (OpenID Connect) provider** on the EKS cluster. This is required for the IAM roles and policies to work correctly. If you are using EKSctl, you can install the OIDC provider using the following command:
+
+{{< admonition type="tip" >}}
+If you used the above EKSctl configuration file to create the cluster, you do not need to run this command. The OIDC provider is automatically installed.
+{{< /admonition >}}
+
+```bash
+eksctl utils associate-iam-oidc-provider --cluster loki --approve
+```
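
The trust policy created later in this guide needs the cluster's OIDC ID. One way to recover it is to take the last path segment of the issuer URL; the cluster name `loki` and the issuer value below are hypothetical examples:

```shell
# Fetch the issuer URL for your cluster (hypothetical cluster name "loki"):
#   aws eks describe-cluster --name loki --query "cluster.identity.oidc.issuer" --output text
# Example issuer value -- substitute the real output of the command above:
ISSUER="https://oidc.eks.eu-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# The OIDC ID is the trailing path segment of the issuer URL
OIDC_ID=$(basename "$ISSUER")
echo "$OIDC_ID"   # EXAMPLED539D4633E53DE1B71EXAMPLE
```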
+
+## Create S3 buckets
+
+{{< admonition type="warning" >}}
+ **DO NOT** use the default bucket names: `chunk`, `ruler` and `admin`. Choose a **unique** name for each bucket. For more information, see the following [security update](https://grafana.com/blog/2024/06/27/grafana-security-update-grafana-loki-and-unintended-data-write-attempts-to-amazon-s3-buckets/).
+{{< /admonition >}}
+
+Before deploying Loki, you need to create two S3 buckets: one to store logs (chunks) and a second to store alert rules. You can create the buckets using the AWS Management Console or the AWS CLI. Bucket names must be globally unique.
+
+{{< admonition type="note" >}}
+GEL customers will require a third bucket to store the admin data. This bucket is not required for OSS users.
+{{< /admonition >}}
+
+```bash
+aws s3api create-bucket --bucket <YOUR CHUNK BUCKET NAME e.g. `loki-aws-dev-chunks`> --region <S3 region your account is on, e.g. `eu-west-2`> --create-bucket-configuration LocationConstraint=<S3 region your account is on, e.g. `eu-west-2`>
+aws s3api create-bucket --bucket <YOUR RULER BUCKET NAME e.g. `loki-aws-dev-ruler`> --region <S3 REGION your account is on, e.g. `eu-west-2`> --create-bucket-configuration LocationConstraint=<S3 REGION your account is on, e.g. `eu-west-2`>
+```
+Make sure to replace the `region` and `bucket` name with your desired values. We will revisit the bucket policy later in this guide.
+
+## Defining IAM roles and policies
+
+The recommended method for connecting Loki to AWS S3 is to use an IAM role. This method is more secure than using access keys and secret keys which are directly stored in the Loki configuration. The role and policy can be created using the AWS CLI or the AWS Management Console. The below steps show how to create the role and policy using the AWS CLI.
+
+1. Create a new directory and navigate to it. Make sure to create the files in this directory. All commands in this guide assume you are in this directory.
+
+1. Create a `loki-s3-policy.json` file with the following content:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "LokiStorage",
+ "Effect": "Allow",
+ "Action": [
+ "s3:ListBucket",
+ "s3:PutObject",
+ "s3:GetObject",
+ "s3:DeleteObject"
+ ],
+ "Resource": [
+ "arn:aws:s3:::< CHUNK BUCKET NAME >",
+ "arn:aws:s3:::< CHUNK BUCKET NAME >/*",
+ "arn:aws:s3:::< RULER BUCKET NAME >",
+ "arn:aws:s3:::< RULER BUCKET NAME >/*"
+ ]
+ }
+ ]
+ }
+ ```
+
+ **Make sure to replace the placeholders with the names of the buckets you created earlier.**
+
+1. Create the IAM policy using the AWS CLI:
+
+ ```bash
+ aws iam create-policy --policy-name LokiS3AccessPolicy --policy-document file://loki-s3-policy.json
+ ```
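
   Before calling `create-policy`, it can save a round trip to check that the policy file is well-formed JSON; a small sketch using Python's stdlib formatter (shown here against an inline sample, point it at `loki-s3-policy.json` in practice):

```shell
# Validate JSON locally before handing it to the AWS CLI.
# Replace the echo with: cat loki-s3-policy.json
RESULT=$(echo '{"Version": "2012-10-17", "Statement": []}' | python3 -m json.tool >/dev/null && echo valid || echo invalid)
echo "$RESULT"   # valid
```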
+
+2. Create a trust policy document named `trust-policy.json` with the following content:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Federated": "arn:aws:iam::< ACCOUNT ID >:oidc-provider/oidc.eks.<INSERT REGION>.amazonaws.com/id/< OIDC ID >"
+ },
+ "Action": "sts:AssumeRoleWithWebIdentity",
+ "Condition": {
+ "StringEquals": {
+ "oidc.eks.<INSERT REGION>.amazonaws.com/id/< OIDC ID >:sub": "system:serviceaccount:loki:loki",
+ "oidc.eks.<INSERT REGION>.amazonaws.com/id/< OIDC ID >:aud": "sts.amazonaws.com"
+ }
+ }
+ }
+ ]
+ }
+ ```
+ **Make sure to replace the placeholders with your AWS account ID, region, and the OIDC ID (you can find this in the EKS cluster configuration).**
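
   To make the substitution concrete, the `Federated` principal in the trust policy is assembled from those three values; the values below are hypothetical placeholders:

```shell
# Hypothetical values -- substitute your real account ID, region, and OIDC ID.
ACCOUNT_ID="111122223333"
REGION="eu-west-2"
OIDC_ID="EXAMPLED539D4633E53DE1B71EXAMPLE"

# The Federated principal ARN used in trust-policy.json:
FEDERATED="arn:aws:iam::${ACCOUNT_ID}:oidc-provider/oidc.eks.${REGION}.amazonaws.com/id/${OIDC_ID}"
echo "$FEDERATED"
```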
+
+3. Create the IAM role using the AWS CLI:
+
+ ```bash
+ aws iam create-role --role-name LokiServiceAccountRole --assume-role-policy-document file://trust-policy.json
+ ```
+
+4. Attach the policy to the role:
+
+ ```bash
+ aws iam attach-role-policy --role-name LokiServiceAccountRole --policy-arn arn:aws:iam::<Account ID>:policy/LokiS3AccessPolicy
+ ```
+ **Make sure to replace the placeholder with your AWS account ID.**
+
+### Adding the policy to the S3 buckets
+
+To allow the IAM role to access the S3 buckets, you need to add the policy to the bucket. You can do this using the AWS Management Console or the AWS CLI. The below steps show how to add the policy using the AWS CLI.
+
+1. Create a bucket policy file named `bucket-policy-chunk.json` with the following content:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "Statement1",
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::<ACCOUNT ID>:role/LokiServiceAccountRole"
+ },
+ "Action": [
+ "s3:PutObject",
+ "s3:GetObject",
+ "s3:DeleteObject",
+ "s3:ListBucket"
+ ],
+ "Resource": [
+ "arn:aws:s3:::< CHUNK BUCKET NAME >",
+ "arn:aws:s3:::< CHUNK BUCKET NAME >/*"
+ ]
+ }
+ ]
+ }
+ ```
+ **Make sure to replace the placeholders with your AWS account ID and the bucket names.**
+
+1. Add the policy to the bucket:
+
+ ```bash
+ aws s3api put-bucket-policy --bucket <CHUNK BUCKET NAME eg. `loki-aws-dev-chunks`> --policy file://bucket-policy-chunk.json
+ ```
+1. Create a bucket policy file named `bucket-policy-ruler.json` with the following content:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "Statement1",
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::<ACCOUNT ID>:role/LokiServiceAccountRole"
+ },
+ "Action": [
+ "s3:PutObject",
+ "s3:GetObject",
+ "s3:DeleteObject",
+ "s3:ListBucket"
+ ],
+ "Resource": [
+ "arn:aws:s3:::< RULER BUCKET NAME >",
+ "arn:aws:s3:::< RULER BUCKET NAME >/*"
+ ]
+ }
+ ]
+ }
+ ```
+ **Make sure to replace the placeholders with your AWS account ID and the bucket names.**
+
+1. Add the policy to the bucket:
+
+ ```bash
+ aws s3api put-bucket-policy --bucket <RULER BUCKET NAME eg. `loki-aws-dev-ruler`> --policy file://bucket-policy-ruler.json
+ ```
+
+## Deploying the Helm chart
+
+Before we can deploy the Loki Helm chart, we need to add the Grafana chart repository to Helm. This repository contains the Loki Helm chart.
+
+1. Add the Grafana chart repository to Helm:
+
+ ```bash
+ helm repo add grafana https://grafana.github.io/helm-charts
+ ```
+1. Update the chart repository:
+
+ ```bash
+ helm repo update
+ ```
+1. Create a new namespace for Loki:
+
+ ```bash
+ kubectl create namespace loki
+ ```
+### Loki Basic Authentication
+
+Loki by default does not come with any authentication. Since we will be deploying Loki to AWS and exposing the gateway to the internet, we recommend adding at least basic authentication. In this guide we will give Loki a username and password:
+
+1. To start, we will need to create a `.htpasswd` file with the username and password. You can use the `htpasswd` command to create the file:
+
+ {{< admonition type="tip" >}}
+ If you don't have the `htpasswd` command installed, you can install it using `brew` or `apt-get` or `yum` depending on your OS.
+ {{< /admonition >}}
+
+ ```bash
+ htpasswd -c .htpasswd <username>
+ ```
+ This will create a `.htpasswd` file containing the username you specified. You will be prompted to enter a password.
+
+1. Create a Kubernetes secret with the `.htpasswd` file:
+
+ ```bash
+ kubectl create secret generic loki-basic-auth --from-file=.htpasswd -n loki
+ ```
+
+ This will create a secret called `loki-basic-auth` in the `loki` namespace. We will reference this secret in the Loki Helm chart configuration.
+
+1. Create a `canary-basic-auth` secret for the canary:
+
+ ```bash
+ kubectl create secret generic canary-basic-auth \
+ --from-literal=username=<USERNAME> \
+ --from-literal=password=<PASSWORD> \
+ -n loki
+ ```
+ We create a literal secret with the username and password for Loki canary to authenticate with the Loki gateway.
+ **Make sure to replace the placeholders with your desired username and password.**
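
   As a sanity check, HTTP Basic auth is just a base64-encoded `username:password` pair sent in the `Authorization` header, which is what `curl -u` does for you once the gateway is up. The credentials below are hypothetical:

```shell
# Hypothetical credentials -- use the username/password stored in your secrets.
AUTH=$(printf 'loki:secret' | base64)
echo "Authorization: Basic $AUTH"
# Once Loki is deployed, the equivalent check against the gateway would be:
#   curl -u loki:secret http://<gateway-address>/ready
```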
+
+
+
+### Loki Helm chart configuration
+
+Create a `values.yaml` file choosing the configuration options that best suit your requirements. Below is an example `values.yaml` file for the Loki Helm chart in [microservices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode) mode.
+
+```yaml
+loki:
+ schemaConfig:
+ configs:
+ - from: "2024-04-01"
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ storage_config:
+ aws:
+ region: <S3 BUCKET REGION> # for example, eu-west-2
+ bucketnames: <CHUNK BUCKET NAME> # Your actual S3 bucket name, for example, loki-aws-dev-chunks
+ s3forcepathstyle: false
+ ingester:
+ chunk_encoding: snappy
+ pattern_ingester:
+ enabled: true
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h # 28 days retention
+ compactor:
+ retention_enabled: true
+ delete_request_store: s3
+ ruler:
+ enable_api: true
+ storage:
+ type: s3
+ s3:
+ region: <S3 BUCKET REGION> # for example, eu-west-2
+ bucketnames: <RULER BUCKET NAME> # Your actual S3 bucket name, for example, loki-aws-dev-ruler
+ s3forcepathstyle: false
+ alertmanager_url: http://prom:9093 # The URL of the Alertmanager to send alerts (Prometheus, Mimir, etc.)
+
+ querier:
+ max_concurrent: 4
+
+ storage:
+ type: s3
+ bucketNames:
+ chunks: "<CHUNK BUCKET NAME>" # Your actual S3 bucket name (loki-aws-dev-chunks)
+ ruler: "<RULER BUCKET NAME>" # Your actual S3 bucket name (loki-aws-dev-ruler)
+ # admin: "<Insert s3 bucket name>" # Your actual S3 bucket name (loki-aws-dev-admin) - GEL customers only
+ s3:
+ region: <S3 BUCKET REGION> # eu-west-2
+ #insecure: false
+ # s3forcepathstyle: false
+
+serviceAccount:
+ create: true
+ annotations:
+ "eks.amazonaws.com/role-arn": "arn:aws:iam::<Account ID>:role/LokiServiceAccountRole" # The service role you created
+
+deploymentMode: Distributed
+
+ingester:
+ replicas: 3
+ persistence:
+ storageClass: gp2
+ accessModes:
+ - ReadWriteOnce
+ size: 10Gi
+
+querier:
+ replicas: 3
+ maxUnavailable: 2
+ persistence:
+ storageClass: gp2
+ accessModes:
+ - ReadWriteOnce
+ size: 10Gi
+queryFrontend:
+ replicas: 2
+ maxUnavailable: 1
+queryScheduler:
+ replicas: 2
+distributor:
+ replicas: 3
+ maxUnavailable: 2
+compactor:
+ replicas: 1
+ persistence:
+ storageClass: gp2
+ accessModes:
+ - ReadWriteOnce
+ size: 10Gi
+indexGateway:
+ replicas: 2
+ maxUnavailable: 1
+ persistence:
+ storageClass: gp2
+ accessModes:
+ - ReadWriteOnce
+ size: 10Gi
+ruler:
+ replicas: 1
+ maxUnavailable: 1
+ persistence:
+ storageClass: gp2
+ accessModes:
+ - ReadWriteOnce
+ size: 10Gi
+
+
+# This exposes the Loki gateway so it can be written to and queried externally
+gateway:
+ service:
+ type: LoadBalancer
+ basicAuth:
+ enabled: true
+ existingSecret: loki-basic-auth
+
+# Since we are using basic auth, we need to pass the username and password to the canary
+lokiCanary:
+ extraArgs:
+ - -pass=$(LOKI_PASS)
+ - -user=$(LOKI_USER)
+ extraEnv:
+ - name: LOKI_PASS
+ valueFrom:
+ secretKeyRef:
+ name: canary-basic-auth
+ key: password
+ - name: LOKI_USER
+ valueFrom:
+ secretKeyRef:
+ name: canary-basic-auth
+ key: username
+
+# Disable minio, since we are using S3 for object storage
+minio:
+ enabled: false
+
+backend:
+ replicas: 0
+read:
+ replicas: 0
+write:
+ replicas: 0
+
+singleBinary:
+ replicas: 0
+```
+
+{{< admonition type="caution" >}}
+Make sure to replace the placeholders with your actual values.
+{{< /admonition >}}
+
+It is critical to define a valid `values.yaml` file for the Loki deployment. To reduce the risk of misconfiguration, let's break down the configuration options to keep in mind when deploying to AWS:
+
+- **Loki Config vs. Values Config:**
+ - The `values.yaml` file contains a section called `loki`, which contains a direct representation of the Loki configuration file.
+ - This section defines the Loki configuration, including the schema, storage, and querier configuration.
+ - The key configuration to focus on for chunks is the `storage_config` section, where you define the S3 bucket region and name. This tells Loki where to store the chunks.
+ - The `ruler` section defines the configuration for the ruler, including the S3 bucket region and name. This tells Loki where to store the alert and recording rules.
+ - For the full Loki configuration, refer to the [Loki Configuration](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/) documentation.
+
+- **Storage:**
+ - Defines where the Helm chart stores data.
+ - Set the type to `s3` since we are using Amazon S3.
+ - Configure the bucket names for the chunks and ruler to match the buckets created earlier.
+ - The `s3` section specifies the region of the bucket.
+
+- **Service Account:**
+ - The `serviceAccount` section is used to define the IAM role for the Loki service account.
+ - This is where the IAM role created earlier is linked.
+
+- **Gateway:**
+ - Defines how the Loki gateway will be exposed.
+ - We are using a `LoadBalancer` service type in this configuration.
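+
+For completeness, the second authentication method mentioned at the start of this guide stores static credentials directly in the chart values instead of using an IAM role. A rough sketch is below; the exact key names are an assumption based on the chart's `loki.storage.s3` section, so check them against the Helm chart values reference before use. The IAM role approach remains the recommended one.
+
```yaml
# NOT recommended: static credentials in values.yaml (sketch only).
# Key names assumed from the chart's loki.storage.s3 section.
loki:
  storage:
    s3:
      region: <S3 BUCKET REGION>                # for example, eu-west-2
      accessKeyId: <AWS ACCESS KEY ID>          # placeholder
      secretAccessKey: <AWS SECRET ACCESS KEY>  # placeholder
```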
+
+
+### Deploy Loki
+
+Now that you have created the `values.yaml` file, you can deploy Loki using the Helm chart.
+
+1. Deploy using the newly created `values.yaml` file:
+
+ ```bash
+ helm install --values values.yaml loki grafana/loki -n loki --create-namespace
+ ```
+ **It is important to create a namespace called `loki` as our trust policy is set to allow the IAM role to be used by the `loki` service account in the `loki` namespace. This is configurable but make sure to update your service account.**
+
+1. Verify the deployment:
+
+ ```bash
+ kubectl get pods -n loki
+ ```
+ You should see the Loki pods running.
+
+ ```console
+ NAME READY STATUS RESTARTS AGE
+ loki-canary-crqpg 1/1 Running 0 10m
+ loki-canary-hm26p 1/1 Running 0 10m
+ loki-canary-v9wv9 1/1 Running 0 10m
+ loki-chunks-cache-0 2/2 Running 0 10m
+ loki-compactor-0 1/1 Running 0 10m
+ loki-distributor-78ccdcc9b4-9wlhl 1/1 Running 0 10m
+ loki-distributor-78ccdcc9b4-km6j2 1/1 Running 0 10m
+ loki-distributor-78ccdcc9b4-ptwrb 1/1 Running 0 10m
+ loki-gateway-5f97f78755-hm6mx 1/1 Running 0 10m
+ loki-index-gateway-0 1/1 Running 0 10m
+ loki-index-gateway-1 1/1 Running 0 10m
+ loki-ingester-zone-a-0 1/1 Running 0 10m
+ loki-ingester-zone-b-0 1/1 Running 0 10m
+ loki-ingester-zone-c-0 1/1 Running 0 10m
+ loki-querier-89d4ff448-4vr9b 1/1 Running 0 10m
+ loki-querier-89d4ff448-7nvrf 1/1 Running 0 10m
+ loki-querier-89d4ff448-q89kh 1/1 Running 0 10m
+ loki-query-frontend-678899db5-n5wc4 1/1 Running 0 10m
+ loki-query-frontend-678899db5-tf69b 1/1 Running 0 10m
+ loki-query-scheduler-7d666bf759-9xqb5 1/1 Running 0 10m
+ loki-query-scheduler-7d666bf759-kpb5q 1/1 Running 0 10m
+ loki-results-cache-0 2/2 Running 0 10m
+ loki-ruler-0 1/1 Running 0 10m
+ ```
+
+### Find the Loki Gateway Service
+
+The Loki Gateway service is a LoadBalancer service that exposes the Loki gateway to the internet. This is where you will write logs to and query logs from. By default NGINX is used as the gateway.
+
+{{< admonition type="caution" >}}
+The Loki Gateway service is exposed to the internet. We provide basic authentication using a username and password in this tutorial. Refer to the [Authentication](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/authentication/) documentation for more information.
+{{< /admonition >}}
+
+To find the Loki Gateway service, run the following command:
+
+```bash
+kubectl get svc -n loki
+```
+You should see the Loki Gateway service with an external IP address. This is the address you will use to write to and query Loki.
+
+```console
+NAME           TYPE           CLUSTER-IP      EXTERNAL-IP                                                                PORT(S)        AGE
+loki-gateway   LoadBalancer   10.100.201.74   12345678975675456-1433434453245433545656563.eu-west-2.elb.amazonaws.com   80:30707/TCP   46m
+```
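The gateway expects basic auth, so clients embed the credentials directly in the URL. A minimal sketch of how the base URL is assembled (all values here are placeholders):

```shell
# Placeholder values - substitute your gateway hostname and credentials
USERNAME="loki"
PASSWORD="secret"
EXTERNAL_IP="loki-gateway.example.com"

# Basic auth credentials are embedded directly in the URL
BASE_URL="http://${USERNAME}:${PASSWORD}@${EXTERNAL_IP}"
echo "${BASE_URL}"   # -> http://loki:secret@loki-gateway.example.com
```

The k6 test script later in this guide builds its `BASE_URL` the same way.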
+
+Congratulations! You have successfully deployed Loki on AWS using the Helm chart. Before we finish, let's test the deployment.
+
+## Testing Your Loki Deployment
+
+k6 is one of the fastest ways to test your Loki deployment. It lets you both write logs to and query logs from Loki. To get started with k6, follow the steps below:
+
+1. Install k6 with the Loki extension on your local machine. Refer to [Installing k6 and the xk6-loki extension](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/k6/).
+
+2. Create a `aws-test.js` file with the following content:
+
+ ```javascript
+ import {sleep, check} from 'k6';
+ import loki from 'k6/x/loki';
+
+ /**
+ * URL used for push and query requests
+ * Path is automatically appended by the client
+ * @constant {string}
+ */
+
+ const username = '<USERNAME>';
+ const password = '<PASSWORD>';
+ const external_ip = '<EXTERNAL-IP>';
+
+ const credentials = `${username}:${password}`;
+
+ const BASE_URL = `http://${credentials}@${external_ip}`;
+
+ /**
+ * Helper constant for byte values
+ * @constant {number}
+ */
+ const KB = 1024;
+
+ /**
+ * Helper constant for byte values
+ * @constant {number}
+ */
+ const MB = KB * KB;
+
+ /**
+ * Instantiate config and Loki client
+ */
+
+ const conf = new loki.Config(BASE_URL);
+ const client = new loki.Client(conf);
+
+ /**
+ * Define test scenario
+ */
+ export const options = {
+ vus: 10,
+ iterations: 10,
+ };
+
+    /**
+ * "main" function for each VU iteration
+ */
+ export default () => {
+ // Push request with 10 streams and uncompressed logs between 800KB and 2MB
+ var res = client.pushParameterized(10, 800 * KB, 2 * MB);
+ // Check for successful write
+ check(res, { 'successful write': (res) => res.status == 204 });
+
+ // Pick a random log format from label pool
+ let format = randomChoice(conf.labels["format"]);
+
+ // Execute instant query with limit 1
+ res = client.instantQuery(`count_over_time({format="${format}"}[1m])`, 1)
+ // Check for successful read
+ check(res, { 'successful instant query': (res) => res.status == 200 });
+
+ // Execute range query over last 5m and limit 1000
+ res = client.rangeQuery(`{format="${format}"}`, "5m", 1000)
+ // Check for successful read
+ check(res, { 'successful range query': (res) => res.status == 200 });
+
+ // Wait before next iteration
+ sleep(1);
+ }
+
+ /**
+ * Helper function to get random item from array
+ */
+ function randomChoice(items) {
+ return items[Math.floor(Math.random() * items.length)];
+ }
+ ```
+
+    **Replace `<USERNAME>`, `<PASSWORD>`, and `<EXTERNAL-IP>` with your basic auth credentials and the external IP address of the Loki Gateway service.**
+
+    This script writes logs to Loki and queries them back. Each iteration pushes 10 streams of uncompressed logs between 800KB and 2MB, then runs an instant query and a 5-minute range query against a randomly chosen log format.
+
+3. Run the test:
+
+ ```bash
+ ./k6 run aws-test.js
+ ```
+
+ This will run the test and output the results. You should see the test writing logs to Loki and querying logs from Loki.
+
+Now that you have successfully deployed Loki in microservices mode on AWS, you may wish to explore the following:
+
+- [Sending data to Loki](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/)
+- [Querying Loki](https://grafana.com/docs/loki/<LOKI_VERSION>/query/)
+- [Manage](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/)
diff --git a/docs/sources/setup/install/helm/install-microservices/_index.md b/docs/sources/setup/install/helm/install-microservices/_index.md
index fea617e7bb9df..37ee90d25b291 100644
--- a/docs/sources/setup/install/helm/install-microservices/_index.md
+++ b/docs/sources/setup/install/helm/install-microservices/_index.md
@@ -1,7 +1,7 @@
---
-title: Install the microservice Helm chart
+title: Install the microservice Helm chart
menuTitle: Install microservice Loki
-description: Installing Loki in microservice (distributed) mode using the Helm chart.
+description: Installing Loki in microservice mode using the Helm chart.
weight: 300
keywords:
---
@@ -10,7 +10,7 @@ keywords:
This Helm Chart deploys Grafana Loki on Kubernetes.
-This chart configures Loki to run Loki in [microservice / distributed mode]({{< relref "../../../../get-started/deployment-modes#microservices-mode" >}}). The microservices deployment mode runs components of Loki as distinct processes.
+This Helm chart deploys Loki to run Loki in [microservice mode](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode) within a Kubernetes cluster. The microservices deployment mode runs components of Loki as distinct processes.
The default Helm chart deploys the following components:
- **Compactor component** (1 replica): Compacts and processes stored data.
@@ -21,16 +21,19 @@ The default Helm chart deploys the following components:
- **QueryFrontend component** (2 replicas, maxUnavailable: 1): Manages frontend queries. Up to 1 replica can be unavailable during updates.
- **QueryScheduler component** (2 replicas): Schedules queries.
-It is not recommended to run microservice mode with `filesystem` storage. For the purpose of this guide, we will use MinIO as the object storage to provide a complete example.
+{{< admonition type="note" >}}
+We do not recommend running in microservice mode with `filesystem` storage. For the purpose of this guide, we will use MinIO as the object storage to provide a complete example.
+{{< /admonition >}}
+
-**Prerequisites**
+## Prerequisites
- Helm 3 or above. See [Installing Helm](https://helm.sh/docs/intro/install/).
- A running Kubernetes cluster (must have at least 3 nodes).
-- (Optional) A Memcached deployment for better query performance. For information on configuring Memcached, refer to the [caching section](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/caching/).
-**To deploy Loki in microservice mode (with MinIO):**
+
+## Deploying the Helm chart for development and testing
1. Add [Grafana's chart repository](https://github.com/grafana/helm-charts) to Helm:
@@ -39,48 +42,46 @@ It is not recommended to run microservice mode with `filesystem` storage. For th
helm repo add grafana https://grafana.github.io/helm-charts
```
-2. Update the chart repository:
+1. Update the chart repository:
```bash
helm repo update
```
-3. Create the configuration file `values.yaml`. The example below illustrates how to deploy Loki in test mode using MinIO as storage:
+1. Create the configuration file `values.yaml`. The example below illustrates how to deploy Loki in test mode using MinIO as storage:
```yaml
loki:
- schemaConfig:
- configs:
- - from: "2024-04-01"
- store: tsdb
- object_store: s3
- schema: v13
- index:
- prefix: loki_index_
- period: 24h
- ingester:
- chunk_encoding: snappy
- tracing:
- enabled: true
- querier:
- # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
- max_concurrent: 4
-
- #gateway:
- # ingress:
- # enabled: true
- # hosts:
- # - host: FIXME
- # paths:
- # - path: /
- # pathType: Prefix
+ schemaConfig:
+ configs:
+ - from: "2024-04-01"
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+ querier:
+ # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
+ max_concurrent: 4
+ pattern_ingester:
+ enabled: true
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h
+ compactor:
+ retention_enabled: true
+ delete_request_store: s3
deploymentMode: Distributed
ingester:
- replicas: 3
+ replicas: 3 # To ensure data durability with replication
querier:
- replicas: 3
+ replicas: 3 # Improve query performance via parallelism
maxUnavailable: 2
queryFrontend:
replicas: 2
@@ -88,7 +89,7 @@ It is not recommended to run microservice mode with `filesystem` storage. For th
queryScheduler:
replicas: 2
distributor:
- replicas: 3
+ replicas: 3
maxUnavailable: 2
compactor:
replicas: 1
@@ -102,24 +103,29 @@ It is not recommended to run microservice mode with `filesystem` storage. For th
replicas: 0
bloomGateway:
replicas: 0
-
- # Enable minio for storage
- minio:
- enabled: true
-
- # Zero out replica counts of other deployment modes
+
backend:
- replicas: 0
+ replicas: 0
read:
- replicas: 0
+ replicas: 0
write:
- replicas: 0
+ replicas: 0
singleBinary:
- replicas: 0
+ replicas: 0
+
+  # This exposes the Loki gateway so it can be written to and queried externally
+ gateway:
+ service:
+ type: LoadBalancer
+
+
+ # Enable minio for storage
+ minio:
+ enabled: true
```
-4. Install or upgrade the Loki deployment.
+1. Install or upgrade the Loki deployment.
- To install:
```bash
helm install --values values.yaml loki grafana/loki
@@ -140,7 +146,7 @@ It is not recommended to run microservice mode with `filesystem` storage. For th
loki-canary-8thrx 1/1 Running 0 167m
loki-canary-h965l 1/1 Running 0 167m
loki-canary-th8kb 1/1 Running 0 167m
- loki-chunks-cache-0 0/2 Pending 0 167m
+ loki-chunks-cache-0 2/2 Running 0 167m
loki-compactor-0 1/1 Running 0 167m
loki-compactor-1 1/1 Running 0 167m
loki-distributor-7c9bb8f4dd-bcwc5 1/1 Running 0 167m
@@ -167,59 +173,69 @@ It is not recommended to run microservice mode with `filesystem` storage. For th
## Object Storage Configuration
-After testing Loki with MinIO, it is recommended to configure Loki with an object storage provider. The following examples shows how to configure Loki with different object storage providers:
+After testing Loki with [MinIO](https://min.io/docs/minio/kubernetes/upstream/index.html), we recommend configuring Loki with an object storage provider. The following examples show how to configure Loki with different object storage providers:
{{< admonition type="caution" >}}
When deploying Loki using S3 Storage **DO NOT** use the default bucket names; `chunk`, `ruler` and `admin`. Choose a unique name for each bucket. For more information see the following [security update](https://grafana.com/blog/2024/06/27/grafana-security-update-grafana-loki-and-unintended-data-write-attempts-to-amazon-s3-buckets/). This caution does not apply when you are using MinIO. When using MinIO we recommend using the default bucket names.
{{< /admonition >}}
-{{< code >}}
-```s3
+{{< collapse title="S3" >}}
+
+```yaml
# Example configuration for Loki with S3 storage
- loki:
- schemaConfig:
- configs:
- - from: "2024-04-01"
- store: tsdb
- object_store: s3
- schema: v13
- index:
- prefix: loki_index_
- period: 24h
- ingester:
+loki:
+ schemaConfig:
+ configs:
+ - from: 2024-04-01
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ storage_config:
+ aws:
+ region: <AWS region your bucket is in, for example, `eu-west-2`>
+ bucketnames: <Your AWS bucket for chunk, for example, `aws-loki-dev-chunk`>
+ s3forcepathstyle: false
+ ingester:
chunk_encoding: snappy
- tracing:
+ pattern_ingester:
enabled: true
- querier:
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h # 28 days retention
+ querier:
max_concurrent: 4
- storage:
- type: s3
- bucketNames:
- chunks: "<INSERT BUCKET NAME>"
- ruler: "<INSERT BUCKET NAME>"
- admin: "<INSERT BUCKET NAME>"
- s3:
- # s3 URL can be used to specify the endpoint, access key, secret key, and bucket name
- s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name
- # AWS endpoint URL
- endpoint: <your-endpoint>
- # AWS region where the S3 bucket is located
- region: <your-region>
- # AWS secret access key
- secretAccessKey: <your-secret-access-key>
- # AWS access key ID
- accessKeyId: <your-access-key-id>
- # AWS signature version (e.g., v2 or v4)
- signatureVersion: <your-signature-version>
- # Forces the path style for S3 (true/false)
- s3ForcePathStyle: false
- # Allows insecure (HTTP) connections (true/false)
- insecure: false
- # HTTP configuration settings
- http_config: {}
+ storage:
+ type: s3
+ bucketNames:
+ chunks: <Your AWS bucket for chunk, for example, `aws-loki-dev-chunk`>
+ ruler: <Your AWS bucket for ruler, for example, `aws-loki-dev-ruler`>
+ admin: <Your AWS bucket for admin, for example, `aws-loki-dev-admin`>
+ s3:
+      # The s3 URL can specify the endpoint, access key, secret key, and bucket name in one value. This works well for S3-compatible storage, or if you are hosting Loki on-premises and want to use S3 as the storage backend. Use either the s3 URL or the individual fields below (endpoint, region, keys).
+ s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name
+ # AWS endpoint URL
+ endpoint: <your-endpoint>
+ # AWS region where the S3 bucket is located
+ region: <your-region>
+ # AWS secret access key
+ secretAccessKey: <your-secret-access-key>
+ # AWS access key ID
+ accessKeyId: <your-access-key-id>
+ # AWS signature version (e.g., v2 or v4)
+ signatureVersion: <your-signature-version>
+ # Forces the path style for S3 (true/false)
+ s3ForcePathStyle: false
+ # Allows insecure (HTTP) connections (true/false)
+ insecure: false
+ # HTTP configuration settings
+ http_config: {}
deploymentMode: Distributed
@@ -264,8 +280,11 @@ When deploying Loki using S3 Storage **DO NOT** use the default bucket names; `
replicas: 0
```
+{{< /collapse >}}
+
+{{< collapse title="Azure" >}}
-```azure
+```yaml
# Example configuration for Loki with Azure Blob Storage
loki:
@@ -347,10 +366,19 @@ singleBinary:
replicas: 0
```
-{{< /code >}}
+{{< /collapse >}}
To configure other storage providers, refer to the [Helm Chart Reference]({{< relref "../reference" >}}).
+## Deploying the Loki Helm chart to a Production Environment
+
+{{< admonition type="note" >}}
+We are actively working on providing more guides for deploying Loki in production.
+{{< /admonition >}}
+
+We recommend running Loki at scale within a cloud environment like AWS, Azure, or GCP. The guides below show how to deploy a minimally viable production environment.
+- [Deploy Loki on AWS]({{< relref "../deployment-guides/aws" >}})
+
## Next Steps
* Configure an agent to [send log data to Loki](/docs/loki/<LOKI_VERSION>/send-data/).
* Monitor the Loki deployment using the [Meta Monitoring Helm chart](/docs/loki/<LOKI_VERSION>/setup/install/helm/monitor-and-alert/)
diff --git a/docs/sources/setup/install/helm/install-monolithic/_index.md b/docs/sources/setup/install/helm/install-monolithic/_index.md
index 4373907dcfbb1..fe790b0b6f890 100644
--- a/docs/sources/setup/install/helm/install-monolithic/_index.md
+++ b/docs/sources/setup/install/helm/install-monolithic/_index.md
@@ -10,107 +10,411 @@ weight: 100
# Install the monolithic Helm chart
-This Helm Chart installation runs the Grafana Loki *single binary* within a Kubernetes cluster.
+This Helm Chart installation deploys Grafana Loki in [monolithic mode](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#monolithic-mode) within a Kubernetes cluster.
-If you set the `singleBinary.replicas` value to 1 and set the deployment mode to `SingleBinary`, this chart configures Loki to run the `all` target in a [monolithic mode](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#monolithic-mode), designed to work with the filesystem storage configuration. It will also configure meta-monitoring of metrics and logs.
+## Prerequisites
+
+- Helm 3 or above. See [Installing Helm](https://helm.sh/docs/intro/install/).
+- A running Kubernetes cluster.
+
+## Single Replica or Multiple Replicas
+
+There are two ways to deploy Loki in monolithic mode:
+1. **Single Replica**: Run Loki with a single replica. This mode is useful for testing and development or if you are planning to run Loki as a meta-monitoring system.
+2. **Multiple Replicas**: Run Loki with multiple replicas. This mode provides high availability and is simpler to operate than microservices mode, though less economical. We recommend running at least three replicas for high availability.
+
+Once you have selected how many replicas you would like to deploy, choose the appropriate `values.yaml` configuration file below and then continue with the deployment steps.
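The "at least three replicas" recommendation follows from quorum arithmetic. As a sketch, assuming the standard floor(N/2)+1 write quorum used by replicated Loki deployments:

```shell
# With replication factor N, writes must succeed on a quorum of
# floor(N/2)+1 replicas, so N=3 tolerates the loss of one replica.
RF=3
QUORUM=$(( RF / 2 + 1 ))
TOLERATED=$(( RF - QUORUM ))
echo "quorum=${QUORUM} tolerated_failures=${TOLERATED}"   # -> quorum=2 tolerated_failures=1
```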
+
+### Single Replica
+
+Deploying the Helm chart with a single replica deploys the following components:
+- Loki (1 replica)
+- Loki Canary (1 DaemonSet)
+- Loki Gateway (1 NGINX replica)
+- Loki Chunk and Result Cache (1 DaemonSet)
+- Minio (optional, if `minio.enabled=true`)
+
+Create the configuration file `values.yaml`:
{{< admonition type="note" >}}
You must specify `commonConfig.replication_factor: 1` if you are only using 1 replica, otherwise requests will fail.
{{< /admonition >}}
-If you set the `singleBinary.replicas` value to 2 or more, this chart configures Loki to run a *single binary* in a replicated, highly available mode. When running replicas of a single binary, you must configure object storage.
+```yaml
+loki:
+ commonConfig:
+ replication_factor: 1
+ schemaConfig:
+ configs:
+ - from: "2024-04-01"
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ pattern_ingester:
+ enabled: true
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h # 28 days retention
+ compactor:
+ retention_enabled: true
+ delete_request_store: s3
+ ruler:
+ enable_api: true
-**Before you begin: Software Requirements**
+minio:
+ enabled: true
+
+deploymentMode: SingleBinary
-- Helm 3 or above. See [Installing Helm](https://helm.sh/docs/intro/install/).
-- A running Kubernetes cluster
+singleBinary:
+ replicas: 1
+
+# Zero out replica counts of other deployment modes
+backend:
+ replicas: 0
+read:
+ replicas: 0
+write:
+ replicas: 0
+
+ingester:
+ replicas: 0
+querier:
+ replicas: 0
+queryFrontend:
+ replicas: 0
+queryScheduler:
+ replicas: 0
+distributor:
+ replicas: 0
+compactor:
+ replicas: 0
+indexGateway:
+ replicas: 0
+bloomCompactor:
+ replicas: 0
+bloomGateway:
+ replicas: 0
+```
+
+In this configuration, we deploy Loki with MinIO as the object storage. For production deployments, we recommend configuring object storage via a cloud provider or pointing Loki at a dedicated MinIO cluster.
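The configuration above expresses `retention_period` in hours (`672h` is 28 days). If you prefer to think in days, the conversion is simple (a throwaway helper, not part of the chart):

```shell
# Convert a retention period in days to the hours string Loki expects
RETENTION_DAYS=28
RETENTION_PERIOD="$(( RETENTION_DAYS * 24 ))h"
echo "retention_period: ${RETENTION_PERIOD}"   # -> retention_period: 672h
```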
+
+### Multiple Replicas
+
+Deploying the Helm chart with multiple replicas deploys the following components:
+- Loki (3 replicas)
+- Loki Canary (1 DaemonSet)
+- Loki Gateway (1 NGINX replica)
+- Loki Chunk and Result Cache (1 DaemonSet)
+- Minio (optional, if `minio.enabled=true`)
+
+Create the configuration file `values.yaml`:
-**To deploy Loki in monolithic mode:**
+{{< admonition type="note" >}}
+If you set the `singleBinary.replicas` value to 2 or more, this chart configures Loki to run a *single binary* in a replicated, highly available mode. When running replicas of a single binary, you must configure object storage.
+{{< /admonition >}}
+
+```yaml
+loki:
+ commonConfig:
+ replication_factor: 3
+ schemaConfig:
+ configs:
+ - from: "2024-04-01"
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ pattern_ingester:
+ enabled: true
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h # 28 days retention
+ compactor:
+ retention_enabled: true
+ delete_request_store: s3
+ ruler:
+ enable_api: true
+
+minio:
+ enabled: true
+
+deploymentMode: SingleBinary
+
+singleBinary:
+ replicas: 3
+
+# Zero out replica counts of other deployment modes
+backend:
+ replicas: 0
+read:
+ replicas: 0
+write:
+ replicas: 0
+
+ingester:
+ replicas: 0
+querier:
+ replicas: 0
+queryFrontend:
+ replicas: 0
+queryScheduler:
+ replicas: 0
+distributor:
+ replicas: 0
+compactor:
+ replicas: 0
+indexGateway:
+ replicas: 0
+bloomCompactor:
+ replicas: 0
+bloomGateway:
+ replicas: 0
+```
+
+In this configuration, make sure `commonConfig.replication_factor` and `singleBinary.replicas` are both set to the desired number of replicas. This example deploys Loki with MinIO as the object storage; for production deployments, we recommend configuring object storage via a cloud provider or pointing Loki at a dedicated MinIO cluster.
+
+## Deploying the Helm chart for development and testing
1. Add [Grafana's chart repository](https://github.com/grafana/helm-charts) to Helm:
- ```bash
- helm repo add grafana https://grafana.github.io/helm-charts
- ```
+ ```bash
+ helm repo add grafana https://grafana.github.io/helm-charts
+ ```
1. Update the chart repository:
+ ```bash
+ helm repo update
+ ```
+
+1. Install or upgrade the Loki deployment.
+ - To install:
+ ```bash
+ helm install --values values.yaml loki grafana/loki
+ ```
+ - To upgrade:
+ ```bash
+ helm upgrade --values values.yaml loki grafana/loki
+ ```
+
+1. Verify that Loki is running:
```bash
- helm repo update
+ kubectl get pods -n loki
```
-1. Create the configuration file `values.yaml`:
-
- - If running a single replica of Loki, configure the `filesystem` storage:
-
- ```yaml
- deploymentMode: SingleBinary
- loki:
- commonConfig:
- replication_factor: 1
- storage:
- type: 'filesystem'
- schemaConfig:
- configs:
- - from: "2024-01-01"
- store: tsdb
- index:
- prefix: loki_index_
- period: 24h
- object_store: filesystem # we're storing on filesystem so there's no real persistence here.
- schema: v13
- singleBinary:
- replicas: 1
- read:
- replicas: 0
- backend:
- replicas: 0
- write:
- replicas: 0
- ```
-
- - If running Loki with a replication factor greater than 1, set the desired number replicas and provide object storage credentials:
-
- ```yaml
- loki:
- commonConfig:
- replication_factor: 3
- schemaConfig:
- configs:
- - from: "2024-01-01"
- store: tsdb
- index:
- prefix: loki_index_
- period: 24h
- object_store: s3
- schema: v13
- storage:
- type: 's3'
- bucketNames:
- chunks: loki-chunks
- ruler: loki-ruler
- admin: loki-admin
- s3:
- endpoint: foo.aws.com
- region: <AWS region>
- secretAccessKey: supersecret
- accessKeyId: secret
- s3ForcePathStyle: false
- insecure: false
- singleBinary:
- replicas: 3
- ```
-
-1. Deploy the Loki cluster using one of these commands.
-
- - Deploy with the defined configuration:
+## Object Storage Configuration
- ```bash
- helm install --values values.yaml loki grafana/loki
- ```
+After testing Loki with MinIO, we recommend configuring Loki with an object storage provider. The following examples show how to configure Loki with different object storage providers:
+
+{{< admonition type="caution" >}}
+When deploying Loki using S3 Storage **DO NOT** use the default bucket names; `chunk`, `ruler` and `admin`. Choose a unique name for each bucket. For more information see the following [security update](https://grafana.com/blog/2024/06/27/grafana-security-update-grafana-loki-and-unintended-data-write-attempts-to-amazon-s3-buckets/). This caution does not apply when you are using MinIO. When using MinIO we recommend using the default bucket names.
+{{< /admonition >}}
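For example, a simple naming convention that avoids the default bucket names (the organization suffix is a placeholder; pick your own):

```shell
# Derive unique, non-default bucket names from an organization suffix
SUFFIX="mycorp-dev"
CHUNK_BUCKET="loki-chunk-${SUFFIX}"
RULER_BUCKET="loki-ruler-${SUFFIX}"
ADMIN_BUCKET="loki-admin-${SUFFIX}"
echo "${CHUNK_BUCKET} ${RULER_BUCKET} ${ADMIN_BUCKET}"
```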
- - Deploy with the defined configuration in a custom Kubernetes cluster namespace:
+{{< collapse title="S3" >}}
+
+```yaml
+loki:
+ commonConfig:
+ replication_factor: 3
+ schemaConfig:
+ configs:
+ - from: "2024-04-01"
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ storage_config:
+ aws:
+ region: <AWS region your bucket is in, for example, `eu-west-2`>
+      bucketnames: <Your AWS bucket for chunk, for example, `aws-loki-dev-chunk`>
+ s3forcepathstyle: false
+ pattern_ingester:
+ enabled: true
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h # 28 days retention
+
+ storage:
+ type: s3
+ bucketNames:
+ chunks: <Your AWS bucket for chunk, for example, `aws-loki-dev-chunk`>
+ ruler: <Your AWS bucket for ruler, for example, `aws-loki-dev-ruler`>
+ admin: <Your AWS bucket for admin, for example, `aws-loki-dev-admin`>
+ s3:
+      # The s3 URL can specify the endpoint, access key, secret key, and bucket name in one value. This works well for S3-compatible storage, or if you are hosting Loki on-premises and want to use S3 as the storage backend. Use either the s3 URL or the individual fields below (endpoint, region, keys).
+ s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name
+ # AWS endpoint URL
+ endpoint: <your-endpoint>
+ # AWS region where the S3 bucket is located
+ region: <your-region>
+ # AWS secret access key
+ secretAccessKey: <your-secret-access-key>
+ # AWS access key ID
+ accessKeyId: <your-access-key-id>
+ # AWS signature version (e.g., v2 or v4)
+ signatureVersion: <your-signature-version>
+ # Forces the path style for S3 (true/false)
+ s3ForcePathStyle: false
+ # Allows insecure (HTTP) connections (true/false)
+ insecure: false
+ # HTTP configuration settings
+ http_config: {}
+
+# Disable minio storage
+minio:
+ enabled: false
+
+singleBinary:
+ replicas: 3
+ persistence:
+ storageClass: gp2
+ accessModes:
+ - ReadWriteOnce
+ size: 30Gi
+
+# Zero out replica counts of other deployment modes
+backend:
+ replicas: 0
+read:
+ replicas: 0
+write:
+ replicas: 0
+
+ingester:
+ replicas: 0
+querier:
+ replicas: 0
+queryFrontend:
+ replicas: 0
+queryScheduler:
+ replicas: 0
+distributor:
+ replicas: 0
+compactor:
+ replicas: 0
+indexGateway:
+ replicas: 0
+bloomCompactor:
+ replicas: 0
+bloomGateway:
+ replicas: 0
+```
+
+{{< /collapse >}}
+
+{{< collapse title="Azure" >}}
+
+```yaml
+loki:
+ schemaConfig:
+ configs:
+ - from: "2024-04-01"
+ store: tsdb
+ object_store: azure
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+
+ storage:
+ type: azure
+ azure:
+ # Name of the Azure Blob Storage account
+ accountName: <your-account-name>
+ # Key associated with the Azure Blob Storage account
+ accountKey: <your-account-key>
+ # Comprehensive connection string for Azure Blob Storage account (Can be used to replace endpoint, accountName, and accountKey)
+ connectionString: <your-connection-string>
+ # Flag indicating whether to use Azure Managed Identity for authentication
+ useManagedIdentity: false
+ # Flag indicating whether to use a federated token for authentication
+ useFederatedToken: false
+ # Client ID of the user-assigned managed identity (if applicable)
+ userAssignedId: <your-user-assigned-id>
+ # Timeout duration for requests made to the Azure Blob Storage account (in seconds)
+ requestTimeout: <your-request-timeout>
+ # Domain suffix of the Azure Blob Storage service endpoint (e.g., core.windows.net)
+ endpointSuffix: <your-endpoint-suffix>
+ bucketNames:
+ chunks: "chunks"
+ ruler: "ruler"
+ admin: "admin"
+
+# Disable minio storage
+minio:
+ enabled: false
+
+singleBinary:
+ replicas: 3
+ persistence:
+ storageClass: gp2
+ accessModes:
+ - ReadWriteOnce
+ size: 30Gi
+
+# Zero out replica counts of other deployment modes
+backend:
+ replicas: 0
+read:
+ replicas: 0
+write:
+ replicas: 0
+
+ingester:
+ replicas: 0
+querier:
+ replicas: 0
+queryFrontend:
+ replicas: 0
+queryScheduler:
+ replicas: 0
+distributor:
+ replicas: 0
+compactor:
+ replicas: 0
+indexGateway:
+ replicas: 0
+bloomCompactor:
+ replicas: 0
+bloomGateway:
+ replicas: 0
+
+```
+
+{{< /collapse >}}
+
+
+
+To configure other storage providers, refer to the [Helm Chart Reference](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/reference/).
+
+## Deploying the Loki Helm chart to a Production Environment
+
+{{< admonition type="note" >}}
+We are actively working on providing more guides for deploying Loki in production.
+{{< /admonition >}}
+
+We recommend running Loki at scale within a cloud environment like AWS, Azure, or GCP. The guides below show how to deploy a minimally viable production environment.
+- [Deploy Loki on AWS](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/install/helm/deployment-guides/aws)
+
+
+## Next Steps
+* Configure an agent to [send log data to Loki](/docs/loki/<LOKI_VERSION>/send-data/).
+* Monitor the Loki deployment using the [Meta Monitoring Helm chart](/docs/loki/<LOKI_VERSION>/setup/install/helm/monitor-and-alert/)
- ```bash
- helm install --values values.yaml loki --namespace=loki grafana/loki
- ```
diff --git a/docs/sources/setup/install/helm/install-scalable/_index.md b/docs/sources/setup/install/helm/install-scalable/_index.md
index 3bc6a3c0cad71..31461b33e06f8 100644
--- a/docs/sources/setup/install/helm/install-scalable/_index.md
+++ b/docs/sources/setup/install/helm/install-scalable/_index.md
@@ -11,7 +11,7 @@ keywords:
# Install the simple scalable Helm chart
-This Helm Chart deploys Grafana Loki on Kubernetes.
+This Helm Chart deploys Grafana Loki in [simple scalable mode](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#simple-scalable) within a Kubernetes cluster.
This chart configures Loki to run `read`, `write`, and `backend` targets in a [scalable mode]({{< relref "../../../../get-started/deployment-modes#simple-scalable" >}}). Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets.
@@ -22,17 +22,20 @@ The default Helm chart deploys the following components:
- Loki Canary (1 DaemonSet)
- Gateway (1 NGINX replica)
- Minio (optional, if `minio.enabled=true`)
+- Index and Chunk cache (1 replica)
-It is not recommended to run scalable mode with `filesystem` storage. For the purpose of this guide, we will use MinIO as the object storage to provide a complete example.
+{{< admonition type="note" >}}
+We do not recommend running scalable mode with `filesystem` storage. For the purpose of this guide, we will use MinIO as the object storage to provide a complete example.
+{{< /admonition >}}
-**Prerequisites**
+## Prerequisites
- Helm 3 or above. See [Installing Helm](https://helm.sh/docs/intro/install/).
- A running Kubernetes cluster (must have at least 3 nodes).
-- (Optional) A Memcached deployment for better query performance. For information on configuring Memcached, refer to [caching section]({{< relref "../../../../operations/caching" >}}).
+## Deploying the Helm chart for development and testing
-**To deploy Loki in simple scalable mode:**
+The following steps show how to deploy the Loki Helm chart in simple scalable mode using the included MinIO as the storage backend. We recommend starting here for development and testing, then configuring Loki with an object storage provider when moving to production.
1. Add [Grafana's chart repository](https://github.com/grafana/helm-charts) to Helm:
@@ -49,74 +52,52 @@ It is not recommended to run scalable mode with `filesystem` storage. For the pu
3. Create the configuration file `values.yaml`. The example below illustrates how to deploy Loki in test mode using MinIO as storage:
- ```yaml
- loki:
- schemaConfig:
- configs:
- - from: "2024-04-01"
- store: tsdb
- object_store: s3
- schema: v13
- index:
- prefix: loki_index_
- period: 24h
- ingester:
- chunk_encoding: snappy
- tracing:
- enabled: true
- querier:
- # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
- max_concurrent: 4
-
- #gateway:
- # ingress:
- # enabled: true
- # hosts:
- # - host: FIXME
- # paths:
- # - path: /
- # pathType: Prefix
-
- deploymentMode: SimpleScalable
-
- backend:
- replicas: 3
- read:
- replicas: 3
- write:
- replicas: 3
-
- # Enable minio for storage
- minio:
- enabled: true
-
- # Zero out replica counts of other deployment modes
- singleBinary:
- replicas: 0
-
- ingester:
- replicas: 0
- querier:
- replicas: 0
- queryFrontend:
- replicas: 0
- queryScheduler:
- replicas: 0
- distributor:
- replicas: 0
- compactor:
- replicas: 0
- indexGateway:
- replicas: 0
- bloomPlanner:
- replicas: 0
- bloomBuilder:
- replicas: 0
- bloomGateway:
- replicas: 0
- ```
-
-4. Install or upgrade the Loki deployment.
+ ```yaml
+ loki:
+ schemaConfig:
+ configs:
+ - from: "2024-04-01"
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+ querier:
+ # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
+ max_concurrent: 4
+ pattern_ingester:
+ enabled: true
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h
+ compactor:
+ retention_enabled: true
+ delete_request_store: s3
+
+ deploymentMode: SimpleScalable
+
+ backend:
+ replicas: 2
+ read:
+ replicas: 2
+ write:
+ replicas: 3 # To ensure data durability with replication
+
+ # Enable minio for storage
+ minio:
+ enabled: true
+
+ gateway:
+ service:
+ type: LoadBalancer
+ ```
+
+1. Install or upgrade the Loki deployment.
+
- To install:
```bash
helm install --values values.yaml loki grafana/loki
@@ -128,15 +109,15 @@ It is not recommended to run scalable mode with `filesystem` storage. For the pu
## Object Storage Configuration
-After testing Loki with MinIO, it is recommended to configure Loki with an object storage provider. The following examples shows how to configure Loki with different object storage providers:
+After testing Loki with MinIO, we recommend configuring Loki with an object storage provider. The following examples show how to configure Loki with different object storage providers:
{{< admonition type="caution" >}}
When deploying Loki using S3 Storage **DO NOT** use the default bucket names; `chunk`, `ruler` and `admin`. Choose a unique name for each bucket. For more information see the following [security update](https://grafana.com/blog/2024/06/27/grafana-security-update-grafana-loki-and-unintended-data-write-attempts-to-amazon-s3-buckets/). This caution does not apply when you are using MinIO. When using MinIO we recommend using the default bucket names.
{{< /admonition >}}
-{{< code >}}
+{{< collapse title="S3" >}}
-```s3
+```yaml
loki:
schemaConfig:
configs:
@@ -147,21 +128,28 @@ loki:
index:
prefix: loki_index_
period: 24h
- ingester:
- chunk_encoding: snappy
- tracing:
- enabled: true
+ storage_config:
+ aws:
+ region: <AWS region your bucket is in, for example, `eu-west-2`>
+ bucketnames: <Your AWS bucket for chunk, for example, `aws-loki-dev-chunk`>
+ s3forcepathstyle: false
+ pattern_ingester:
+ enabled: true
+ limits_config:
+ allow_structured_metadata: true
+ volume_enabled: true
+ retention_period: 672h # 28 days retention
querier:
max_concurrent: 4
storage:
type: s3
bucketNames:
- chunks: "<INSERT BUCKET NAME>"
- ruler: "<INSERT BUCKET NAME>"
- admin: "<INSERT BUCKET NAME>"
+ chunks: <Your AWS bucket for chunk, for example, `aws-loki-dev-chunk`>
+ ruler: <Your AWS bucket for ruler, for example, `aws-loki-dev-ruler`>
+ admin: <Your AWS bucket for admin, for example, `aws-loki-dev-admin`>
s3:
- # s3 URL can be used to specify the endpoint, access key, secret key, and bucket name
+      # s3 URL can be used to specify the endpoint, access key, secret key, and bucket name. This works well for S3-compatible storage, or if you are hosting Loki on-premises and want to use S3 as the storage backend. Either use the s3 URL or the individual fields below (AWS endpoint, region, secret).
s3: s3://access_key:secret_access_key@custom_endpoint/bucket_name
# AWS endpoint URL
endpoint: <your-endpoint>
@@ -192,33 +180,14 @@ write:
# Disable minio storage
minio:
enabled: false
-
-# Zero out replica counts of other deployment modes
-singleBinary:
- replicas: 0
-
-ingester:
- replicas: 0
-querier:
- replicas: 0
-queryFrontend:
- replicas: 0
-queryScheduler:
- replicas: 0
-distributor:
- replicas: 0
-compactor:
- replicas: 0
-indexGateway:
- replicas: 0
-bloomPlanner:
- replicas: 0
-bloomBuilder:
- replicas: 0
-bloomGateway:
- replicas: 0
```
-```azure
+
+{{< /collapse >}}
+
+{{< collapse title="Azure" >}}
+
+```yaml
+
loki:
schemaConfig:
configs:
@@ -273,33 +242,9 @@ write:
minio:
enabled: false
-# Zero out replica counts of other deployment modes
-singleBinary:
- replicas: 0
-
-ingester:
- replicas: 0
-querier:
- replicas: 0
-queryFrontend:
- replicas: 0
-queryScheduler:
- replicas: 0
-distributor:
- replicas: 0
-compactor:
- replicas: 0
-indexGateway:
- replicas: 0
-bloomPlanner:
- replicas: 0
-bloomBuilder:
- replicas: 0
-bloomGateway:
- replicas: 0
-```
-{{< /code >}}
+```
+{{< /collapse >}}
To configure other storage providers, refer to the [Helm Chart Reference]({{< relref "../reference" >}}).
|
docs
|
Deploy Loki Helm on AWS guide (#14517)
|
00ee18acc9d1f0a2fa7d9d04f57cee1e05d71da8
|
2025-01-31 18:47:32
|
benclive
|
chore(ksonnet): Add default setting for distributor.no_schedule_const… (#16032)
| false
|
diff --git a/production/ksonnet/loki/config.libsonnet b/production/ksonnet/loki/config.libsonnet
index f2e3c386157cb..fae999b8aded7 100644
--- a/production/ksonnet/loki/config.libsonnet
+++ b/production/ksonnet/loki/config.libsonnet
@@ -79,6 +79,9 @@
ruler_enabled: false,
distributor: {
+  // no_schedule_constraints is false by default, allowing either TopologySpreadConstraints or pod antiAffinity to be configured.
+ // If no_schedule_constraints is set to true, neither of the pod constraints will be applied.
+ no_schedule_constraints: false,
use_topology_spread: true,
topology_spread_max_skew: 1,
},
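Stated as a hedged sketch of how this new flag might be consumed (the import path and overlay layout below are assumptions for illustration, not taken from the diff), an environment could opt out of both scheduling constraints like so:

```jsonnet
// Hypothetical ksonnet environment overlay; 'loki/loki.libsonnet' is an assumed import path.
local loki = import 'loki/loki.libsonnet';

loki {
  _config+:: {
    distributor+: {
      // With this set, neither TopologySpreadConstraints nor pod
      // antiAffinity is applied to the distributor Deployment.
      no_schedule_constraints: true,
    },
  },
}
```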
|
chore
|
Add default setting for distributor.no_schedule_const… (#16032)
|
42c43ecb4213f0a83e016afe5b8a89cd36757aa2
|
2025-01-10 22:13:27
|
renovate[bot]
|
fix(deps): update module github.com/aws/aws-sdk-go-v2 to v1.32.8 (#15680)
| false
|
diff --git a/tools/lambda-promtail/go.mod b/tools/lambda-promtail/go.mod
index 0ad8d1b4afa40..b57ce4ee38b2c 100644
--- a/tools/lambda-promtail/go.mod
+++ b/tools/lambda-promtail/go.mod
@@ -4,7 +4,7 @@ go 1.22
require (
github.com/aws/aws-lambda-go v1.47.0
- github.com/aws/aws-sdk-go-v2 v1.32.7
+ github.com/aws/aws-sdk-go-v2 v1.32.8
github.com/aws/aws-sdk-go-v2/config v1.28.7
github.com/aws/aws-sdk-go-v2/service/s3 v1.72.1
github.com/go-kit/log v0.2.1
diff --git a/tools/lambda-promtail/go.sum b/tools/lambda-promtail/go.sum
index af98383f80ce6..2e5701e7a718c 100644
--- a/tools/lambda-promtail/go.sum
+++ b/tools/lambda-promtail/go.sum
@@ -48,8 +48,8 @@ github.com/aws/aws-lambda-go v1.47.0 h1:0H8s0vumYx/YKs4sE7YM0ktwL2eWse+kfopsRI1s
github.com/aws/aws-lambda-go v1.47.0/go.mod h1:dpMpZgvWx5vuQJfBt0zqBha60q7Dd7RfgJv23DymV8A=
github.com/aws/aws-sdk-go v1.54.19 h1:tyWV+07jagrNiCcGRzRhdtVjQs7Vy41NwsuOcl0IbVI=
github.com/aws/aws-sdk-go v1.54.19/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
-github.com/aws/aws-sdk-go-v2 v1.32.7 h1:ky5o35oENWi0JYWUZkB7WYvVPP+bcRF5/Iq7JWSb5Rw=
-github.com/aws/aws-sdk-go-v2 v1.32.7/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
+github.com/aws/aws-sdk-go-v2 v1.32.8 h1:cZV+NUS/eGxKXMtmyhtYPJ7Z4YLoI/V8bkTdRZfYhGo=
+github.com/aws/aws-sdk-go-v2 v1.32.8/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 h1:lL7IfaFzngfx0ZwUGOZdsFFnQ5uLvR0hWqqhyE7Q9M8=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7/go.mod h1:QraP0UcVlQJsmHfioCrveWOC1nbiWUl3ej08h4mXWoc=
github.com/aws/aws-sdk-go-v2/config v1.28.7 h1:GduUnoTXlhkgnxTD93g1nv4tVPILbdNQOzav+Wpg7AE=
|
fix
|
update module github.com/aws/aws-sdk-go-v2 to v1.32.8 (#15680)
|
8010deac5d77e770af54bca5f4aa5da9612fd62e
|
2021-10-01 03:06:06
|
Karen Miller
|
docs: prominently advertise free Grafana Cloud availability (#4399)
| false
|
diff --git a/docs/sources/_index.md b/docs/sources/_index.md
index df95eaeee171d..36a7baa073279 100644
--- a/docs/sources/_index.md
+++ b/docs/sources/_index.md
@@ -17,3 +17,5 @@ metadata about your logs: labels (just like Prometheus labels). Log data itself
is then compressed and stored in chunks in object stores such as S3 or GCS, or
even locally on the filesystem. A small index and highly compressed chunks
simplifies the operation and significantly lowers the cost of Loki.
+
+> **Note:** You can use [Grafana Cloud](https://grafana.com/products/cloud/features/#cloud-logs) to avoid installing, maintaining, and scaling your own instance of Grafana Loki. The free forever plan includes 50GB of free logs. [Create a free account to get started](https://grafana.com/auth/sign-up/create-user?pg=docs-grafana-install&plcmt=in-text).
diff --git a/docs/sources/getting-started/_index.md b/docs/sources/getting-started/_index.md
index a5fb52d196df2..24b11b709a325 100644
--- a/docs/sources/getting-started/_index.md
+++ b/docs/sources/getting-started/_index.md
@@ -4,6 +4,8 @@ weight: 300
---
# Getting started with Loki
+> **Note:** You can use [Grafana Cloud](https://grafana.com/products/cloud/features/#cloud-logs) to avoid installing, maintaining, and scaling your own instance of Grafana Loki. The free forever plan includes 50GB of free logs. [Create a free account to get started](https://grafana.com/auth/sign-up/create-user?pg=docs-grafana-install&plcmt=in-text).
+
1. [Getting Logs Into Loki](get-logs-into-loki/)
1. [Grafana](grafana/)
1. [LogCLI](logcli/)
diff --git a/docs/sources/installation/_index.md b/docs/sources/installation/_index.md
index e0558b529008f..f308876cee354 100644
--- a/docs/sources/installation/_index.md
+++ b/docs/sources/installation/_index.md
@@ -4,6 +4,8 @@ weight: 200
---
# Installation
+> **Note:** You can use [Grafana Cloud](https://grafana.com/products/cloud/features/#cloud-logs) to avoid installing, maintaining, and scaling your own instance of Grafana Loki. The free forever plan includes 50GB of free logs. [Create a free account to get started](https://grafana.com/auth/sign-up/create-user?pg=docs-grafana-install&plcmt=in-text).
+
## Installation methods
Instructions for different methods of installing Loki and Promtail.
|
docs
|
prominently advertise free Grafana Cloud availability (#4399)
|
db9b863277062abc8e0f47d34c710ee8b8e7b38b
|
2024-11-14 19:07:18
|
renovate[bot]
|
fix(deps): update module go.opentelemetry.io/collector/pdata to v1.19.0 (#14916)
| false
|
diff --git a/go.mod b/go.mod
index a6003e38a7c55..875774ea9b13a 100644
--- a/go.mod
+++ b/go.mod
@@ -145,7 +145,7 @@ require (
github.com/twmb/franz-go/plugin/kotel v1.5.0
github.com/twmb/franz-go/plugin/kprom v1.1.0
github.com/willf/bloom v2.0.3+incompatible
- go.opentelemetry.io/collector/pdata v1.12.0
+ go.opentelemetry.io/collector/pdata v1.19.0
go4.org/netipx v0.0.0-20230125063823-8449b0a6169f
golang.org/x/exp v0.0.0-20240325151524-a685a6edb6d8
golang.org/x/oauth2 v0.23.0
diff --git a/go.sum b/go.sum
index 1d7cdb6a0599a..a4ff5dbc16e2e 100644
--- a/go.sum
+++ b/go.sum
@@ -2733,8 +2733,8 @@ go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
-go.opentelemetry.io/collector/pdata v1.12.0 h1:Xx5VK1p4VO0md8MWm2icwC1MnJ7f8EimKItMWw46BmA=
-go.opentelemetry.io/collector/pdata v1.12.0/go.mod h1:MYeB0MmMAxeM0hstCFrCqWLzdyeYySim2dG6pDT6nYI=
+go.opentelemetry.io/collector/pdata v1.19.0 h1:jmnU5R8TOCbwRr4B8sjdRxM7L5WnEKlQWX1dtLYxIbE=
+go.opentelemetry.io/collector/pdata v1.19.0/go.mod h1:Ox1YVLe87cZDB/TL30i4SUz1cA5s6AM6SpFMfY61ICs=
go.opentelemetry.io/collector/semconv v0.105.0 h1:8p6dZ3JfxFTjbY38d8xlQGB1TQ3nPUvs+D0RERniZ1g=
go.opentelemetry.io/collector/semconv v0.105.0/go.mod h1:yMVUCNoQPZVq/IPfrHrnntZTWsLf5YGZ7qwKulIl5hw=
go.opentelemetry.io/contrib/detectors/gcp v1.29.0 h1:TiaiXB4DpGD3sdzNlYQxruQngn5Apwzi1X0DRhuGvDQ=
diff --git a/vendor/go.opentelemetry.io/collector/pdata/internal/data/profileid.go b/vendor/go.opentelemetry.io/collector/pdata/internal/data/profileid.go
new file mode 100644
index 0000000000000..5b4e6f53ceb03
--- /dev/null
+++ b/vendor/go.opentelemetry.io/collector/pdata/internal/data/profileid.go
@@ -0,0 +1,79 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+package data // import "go.opentelemetry.io/collector/pdata/internal/data"
+
+import (
+ "errors"
+
+ "github.com/gogo/protobuf/proto"
+)
+
+const profileIDSize = 16
+
+var (
+ errMarshalProfileID = errors.New("marshal: invalid buffer length for ProfileID")
+ errUnmarshalProfileID = errors.New("unmarshal: invalid ProfileID length")
+)
+
+// ProfileID is a custom data type that is used for all profile_id fields in OTLP
+// Protobuf messages.
+type ProfileID [profileIDSize]byte
+
+var _ proto.Sizer = (*SpanID)(nil)
+
+// Size returns the size of the data to serialize.
+func (tid ProfileID) Size() int {
+ if tid.IsEmpty() {
+ return 0
+ }
+ return profileIDSize
+}
+
+// IsEmpty returns true if the id contains only zero bytes.
+func (tid ProfileID) IsEmpty() bool {
+ return tid == [profileIDSize]byte{}
+}
+
+// MarshalTo converts profile ID into a binary representation. Called by Protobuf serialization.
+func (tid ProfileID) MarshalTo(data []byte) (n int, err error) {
+ if tid.IsEmpty() {
+ return 0, nil
+ }
+
+ if len(data) < profileIDSize {
+ return 0, errMarshalProfileID
+ }
+
+ return copy(data, tid[:]), nil
+}
+
+// Unmarshal inflates this profile ID from binary representation. Called by Protobuf serialization.
+func (tid *ProfileID) Unmarshal(data []byte) error {
+ if len(data) == 0 {
+ *tid = [profileIDSize]byte{}
+ return nil
+ }
+
+ if len(data) != profileIDSize {
+ return errUnmarshalProfileID
+ }
+
+ copy(tid[:], data)
+ return nil
+}
+
+// MarshalJSON converts profile id into a hex string enclosed in quotes.
+func (tid ProfileID) MarshalJSON() ([]byte, error) {
+ if tid.IsEmpty() {
+ return []byte(`""`), nil
+ }
+ return marshalJSON(tid[:])
+}
+
+// UnmarshalJSON inflates profile id from hex string, possibly enclosed in quotes.
+// Called by Protobuf JSON deserialization.
+func (tid *ProfileID) UnmarshalJSON(data []byte) error {
+ *tid = [profileIDSize]byte{}
+ return unmarshalJSON(tid[:], data)
+}
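The marshalling contract above (an empty ID serializes to zero bytes, a non-empty one to exactly 16) can be exercised with a small standalone sketch. `ID` here is a stand-in type mirroring the pattern, not the vendored `ProfileID` itself:

```go
package main

import (
	"errors"
	"fmt"
)

const idSize = 16

// ID mirrors the fixed-size ProfileID pattern: an empty (all-zero) ID
// serializes to zero bytes, a non-empty one to exactly idSize bytes.
type ID [idSize]byte

// IsEmpty reports whether every byte of the id is zero.
func (id ID) IsEmpty() bool { return id == ID{} }

// MarshalTo writes the id into buf; empty ids produce no output.
func (id ID) MarshalTo(buf []byte) (int, error) {
	if id.IsEmpty() {
		return 0, nil
	}
	if len(buf) < idSize {
		return 0, errors.New("marshal: buffer too small")
	}
	return copy(buf, id[:]), nil
}

// Unmarshal accepts either zero bytes (empty id) or exactly idSize bytes.
func (id *ID) Unmarshal(data []byte) error {
	if len(data) == 0 {
		*id = ID{}
		return nil
	}
	if len(data) != idSize {
		return errors.New("unmarshal: invalid length")
	}
	copy(id[:], data)
	return nil
}

func main() {
	src := ID{0: 0xAB, 15: 0xCD}
	buf := make([]byte, idSize)
	n, err := src.MarshalTo(buf)
	if err != nil {
		panic(err)
	}
	var dst ID
	if err := dst.Unmarshal(buf[:n]); err != nil {
		panic(err)
	}
	fmt.Println(n, dst == src, ID{}.IsEmpty())
}
```

Running the sketch prints `16 true true`: a full 16-byte write, a lossless round-trip, and the zero value reporting itself empty.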
diff --git a/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/pprofextended.pb.go b/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/pprofextended.pb.go
index 64c91b3221ec3..7b90b35373fb1 100644
--- a/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/pprofextended.pb.go
+++ b/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/pprofextended.pb.go
@@ -157,24 +157,24 @@ type Profile struct {
// If one of the values represents the number of events represented
// by the sample, by convention it should be at index 0 and use
// sample_type.unit == "count".
- SampleType []ValueType `protobuf:"bytes,1,rep,name=sample_type,json=sampleType,proto3" json:"sample_type"`
+ SampleType []*ValueType `protobuf:"bytes,1,rep,name=sample_type,json=sampleType,proto3" json:"sample_type,omitempty"`
// The set of samples recorded in this profile.
- Sample []Sample `protobuf:"bytes,2,rep,name=sample,proto3" json:"sample"`
+ Sample []*Sample `protobuf:"bytes,2,rep,name=sample,proto3" json:"sample,omitempty"`
// Mapping from address ranges to the image/binary/library mapped
// into that address range. mapping[0] will be the main binary.
- Mapping []Mapping `protobuf:"bytes,3,rep,name=mapping,proto3" json:"mapping"`
+ Mapping []*Mapping `protobuf:"bytes,3,rep,name=mapping,proto3" json:"mapping,omitempty"`
// Locations referenced by samples via location_indices.
- Location []Location `protobuf:"bytes,4,rep,name=location,proto3" json:"location"`
+ Location []*Location `protobuf:"bytes,4,rep,name=location,proto3" json:"location,omitempty"`
// Array of locations referenced by samples.
LocationIndices []int64 `protobuf:"varint,15,rep,packed,name=location_indices,json=locationIndices,proto3" json:"location_indices,omitempty"`
// Functions referenced by locations.
- Function []Function `protobuf:"bytes,5,rep,name=function,proto3" json:"function"`
+ Function []*Function `protobuf:"bytes,5,rep,name=function,proto3" json:"function,omitempty"`
// Lookup table for attributes.
AttributeTable []v1.KeyValue `protobuf:"bytes,16,rep,name=attribute_table,json=attributeTable,proto3" json:"attribute_table"`
// Represents a mapping between Attribute Keys and Units.
- AttributeUnits []AttributeUnit `protobuf:"bytes,17,rep,name=attribute_units,json=attributeUnits,proto3" json:"attribute_units"`
+ AttributeUnits []*AttributeUnit `protobuf:"bytes,17,rep,name=attribute_units,json=attributeUnits,proto3" json:"attribute_units,omitempty"`
// Lookup table for links.
- LinkTable []Link `protobuf:"bytes,18,rep,name=link_table,json=linkTable,proto3" json:"link_table"`
+ LinkTable []*Link `protobuf:"bytes,18,rep,name=link_table,json=linkTable,proto3" json:"link_table,omitempty"`
// A common table for strings referenced by various messages.
// string_table[0] must always be "".
StringTable []string `protobuf:"bytes,6,rep,name=string_table,json=stringTable,proto3" json:"string_table,omitempty"`
@@ -237,28 +237,28 @@ func (m *Profile) XXX_DiscardUnknown() {
var xxx_messageInfo_Profile proto.InternalMessageInfo
-func (m *Profile) GetSampleType() []ValueType {
+func (m *Profile) GetSampleType() []*ValueType {
if m != nil {
return m.SampleType
}
return nil
}
-func (m *Profile) GetSample() []Sample {
+func (m *Profile) GetSample() []*Sample {
if m != nil {
return m.Sample
}
return nil
}
-func (m *Profile) GetMapping() []Mapping {
+func (m *Profile) GetMapping() []*Mapping {
if m != nil {
return m.Mapping
}
return nil
}
-func (m *Profile) GetLocation() []Location {
+func (m *Profile) GetLocation() []*Location {
if m != nil {
return m.Location
}
@@ -272,7 +272,7 @@ func (m *Profile) GetLocationIndices() []int64 {
return nil
}
-func (m *Profile) GetFunction() []Function {
+func (m *Profile) GetFunction() []*Function {
if m != nil {
return m.Function
}
@@ -286,14 +286,14 @@ func (m *Profile) GetAttributeTable() []v1.KeyValue {
return nil
}
-func (m *Profile) GetAttributeUnits() []AttributeUnit {
+func (m *Profile) GetAttributeUnits() []*AttributeUnit {
if m != nil {
return m.AttributeUnits
}
return nil
}
-func (m *Profile) GetLinkTable() []Link {
+func (m *Profile) GetLinkTable() []*Link {
if m != nil {
return m.LinkTable
}
@@ -554,7 +554,7 @@ type Sample struct {
// discouraged case is having a string label and a numeric label of the same
// name on a sample. Again, possible to express, but should not be used.
// [deprecated, superseded by attributes]
- Label []Label `protobuf:"bytes,3,rep,name=label,proto3" json:"label"`
+ Label []*Label `protobuf:"bytes,3,rep,name=label,proto3" json:"label,omitempty"`
// References to attributes in Profile.attribute_table. [optional]
Attributes []uint64 `protobuf:"varint,10,rep,packed,name=attributes,proto3" json:"attributes,omitempty"`
// Reference to link in Profile.link_table. [optional]
@@ -632,7 +632,7 @@ func (m *Sample) GetValue() []int64 {
return nil
}
-func (m *Sample) GetLabel() []Label {
+func (m *Sample) GetLabel() []*Label {
if m != nil {
return m.Label
}
@@ -907,7 +907,7 @@ type Location struct {
// E.g., if memcpy() is inlined into printf:
// line[0].function_name == "memcpy"
// line[1].function_name == "printf"
- Line []Line `protobuf:"bytes,4,rep,name=line,proto3" json:"line"`
+ Line []*Line `protobuf:"bytes,4,rep,name=line,proto3" json:"line,omitempty"`
// Provides an indication that multiple symbols map to this location's
// address, for example due to identical code folding by the linker. In that
// case the line information above represents one of the multiple
@@ -974,7 +974,7 @@ func (m *Location) GetAddress() uint64 {
return 0
}
-func (m *Location) GetLine() []Line {
+func (m *Location) GetLine() []*Line {
if m != nil {
return m.Line
}
@@ -1170,100 +1170,100 @@ func init() {
}
var fileDescriptor_05f9ce3fdbeb046f = []byte{
- // 1483 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x57, 0xcd, 0x4f, 0x1b, 0x47,
- 0x1b, 0xc7, 0x1f, 0xf8, 0xe3, 0x31, 0x06, 0x33, 0xe1, 0xe5, 0xdd, 0x37, 0xaf, 0x02, 0xc4, 0xa8,
- 0x0d, 0x25, 0x92, 0x29, 0xa4, 0xad, 0xd2, 0xaa, 0x52, 0x6b, 0x82, 0x49, 0x56, 0x38, 0x86, 0x2e,
- 0x86, 0x96, 0x2a, 0xd1, 0x6a, 0xf1, 0x0e, 0x66, 0xc4, 0xee, 0xec, 0x6a, 0x77, 0x8c, 0xb0, 0xd4,
- 0x53, 0x8f, 0x51, 0x0f, 0x3d, 0xf7, 0x4f, 0xe8, 0xad, 0x7f, 0x41, 0xaf, 0x39, 0xe6, 0x52, 0xa9,
- 0xea, 0x21, 0xaa, 0x92, 0xbf, 0xa1, 0xf7, 0x6a, 0x9e, 0x99, 0xb5, 0xcd, 0x47, 0x0e, 0x6e, 0x2f,
- 0x68, 0x9e, 0xdf, 0xfc, 0xe6, 0x37, 0xcf, 0xec, 0xf3, 0x65, 0xe0, 0x8b, 0x20, 0xa4, 0x5c, 0x50,
- 0x8f, 0xfa, 0x54, 0x44, 0xfd, 0xb5, 0x30, 0x0a, 0x44, 0x20, 0xff, 0x9e, 0x30, 0x8f, 0xc6, 0x6b,
- 0xe7, 0xeb, 0xf4, 0x22, 0xa4, 0x11, 0xf3, 0x29, 0x17, 0x8e, 0xb7, 0x16, 0xca, 0x0d, 0x7a, 0x21,
- 0x28, 0x77, 0xa9, 0x5b, 0x43, 0x2e, 0xb9, 0x7f, 0x49, 0x40, 0x81, 0xb5, 0x44, 0xa0, 0x76, 0x59,
- 0xe0, 0xf6, 0x5c, 0x37, 0xe8, 0x06, 0xea, 0x0e, 0xb9, 0x52, 0xec, 0xdb, 0xab, 0x37, 0xf9, 0xd0,
- 0x09, 0x7c, 0x3f, 0xe0, 0x6b, 0xe7, 0xeb, 0x7a, 0xa5, 0xb8, 0xd5, 0xbf, 0x0a, 0x90, 0xdf, 0x53,
- 0xea, 0xe4, 0x39, 0x94, 0x62, 0xc7, 0x0f, 0x3d, 0x6a, 0x8b, 0x7e, 0x48, 0x8d, 0xd4, 0x52, 0x66,
- 0xa5, 0xb4, 0xf1, 0x49, 0x6d, 0x0c, 0x87, 0x6a, 0x87, 0x8e, 0xd7, 0xa3, 0xed, 0x7e, 0x48, 0x37,
- 0xb3, 0x2f, 0x5f, 0x2f, 0x4e, 0x58, 0xa0, 0x04, 0x25, 0x42, 0xbe, 0x82, 0x9c, 0xb2, 0x8c, 0x34,
- 0x2a, 0x3f, 0x18, 0x4b, 0x79, 0x1f, 0x8f, 0x6a, 0x59, 0x2d, 0x44, 0xda, 0x90, 0xf7, 0x9d, 0x30,
- 0x64, 0xbc, 0x6b, 0x64, 0x50, 0xf3, 0xa3, 0xb1, 0x34, 0x9f, 0xaa, 0xb3, 0x5a, 0x34, 0x91, 0x22,
- 0x5f, 0x43, 0xc1, 0x0b, 0x3a, 0x8e, 0x60, 0x01, 0x37, 0xb2, 0x28, 0xfb, 0xf1, 0x58, 0xb2, 0x4d,
- 0x7d, 0x58, 0xeb, 0x0e, 0xc4, 0xc8, 0x07, 0x50, 0x49, 0xd6, 0x36, 0xe3, 0x2e, 0xeb, 0xd0, 0xd8,
- 0x98, 0x59, 0xca, 0xac, 0x64, 0xac, 0x99, 0x04, 0x37, 0x15, 0x2c, 0x7d, 0x38, 0xe9, 0xf1, 0x0e,
- 0xfa, 0x30, 0xf9, 0x0f, 0x7c, 0xd8, 0xd6, 0x87, 0x13, 0x1f, 0x12, 0x31, 0x72, 0x08, 0x33, 0x8e,
- 0x10, 0x11, 0x3b, 0xee, 0x09, 0x6a, 0x0b, 0xe7, 0xd8, 0xa3, 0x46, 0x05, 0xf5, 0xef, 0xdd, 0xa8,
- 0xaf, 0x93, 0xe5, 0x7c, 0xbd, 0xb6, 0x43, 0xfb, 0x18, 0x5d, 0xad, 0x38, 0x3d, 0x50, 0x69, 0x4b,
- 0x11, 0xc2, 0x46, 0x75, 0x7b, 0x9c, 0x89, 0xd8, 0x98, 0x45, 0xdd, 0xcf, 0xc6, 0xf2, 0xbb, 0x9e,
- 0x68, 0x1c, 0x70, 0x26, 0xae, 0x5d, 0x25, 0xc1, 0x98, 0x1c, 0x02, 0x78, 0x8c, 0x9f, 0x69, 0xef,
- 0x09, 0xde, 0xb2, 0x3e, 0x5e, 0x84, 0x18, 0x3f, 0xd3, 0xe2, 0x45, 0x29, 0xa5, 0x9e, 0x70, 0x17,
- 0xa6, 0x62, 0x11, 0x31, 0xde, 0xd5, 0xca, 0xb9, 0xa5, 0xcc, 0x4a, 0xd1, 0x2a, 0x29, 0x4c, 0x51,
- 0x16, 0xa1, 0xe4, 0x46, 0x41, 0x68, 0x9f, 0x44, 0x8e, 0x4f, 0x63, 0x23, 0xbf, 0x94, 0x5a, 0xc9,
- 0x58, 0x20, 0xa1, 0x6d, 0x44, 0x24, 0xe1, 0x8c, 0xd2, 0x01, 0xa1, 0xa0, 0x08, 0x12, 0xd2, 0x84,
- 0x3b, 0x00, 0x82, 0xf9, 0xd4, 0xe6, 0x0e, 0x0f, 0x62, 0xa3, 0x88, 0xfb, 0x45, 0x89, 0xb4, 0x24,
- 0x40, 0xde, 0x83, 0x69, 0xb7, 0x17, 0xa9, 0x14, 0x51, 0x14, 0x40, 0x4a, 0x39, 0x41, 0x15, 0xed,
- 0x39, 0x94, 0xe4, 0x73, 0x02, 0x57, 0x95, 0x6a, 0x69, 0x29, 0xf5, 0xef, 0x4b, 0x55, 0x09, 0x62,
- 0xa9, 0xce, 0x43, 0x4e, 0x59, 0xc6, 0x14, 0xde, 0xae, 0x2d, 0x62, 0x40, 0x5e, 0x26, 0x04, 0xe5,
- 0xc2, 0x28, 0x63, 0xde, 0x26, 0x26, 0xa9, 0xc1, 0x2d, 0x97, 0x9e, 0x38, 0x3d, 0x4f, 0xd8, 0xa3,
- 0x3d, 0x64, 0x1a, 0x8f, 0xcf, 0xea, 0xad, 0xfd, 0x41, 0x33, 0xa8, 0x3e, 0x81, 0xf2, 0xa5, 0x50,
- 0x93, 0x65, 0x28, 0x0f, 0xf3, 0xe7, 0x8c, 0xf6, 0x8d, 0x14, 0x1e, 0x9d, 0x1a, 0x80, 0x3b, 0xb4,
- 0x4f, 0x08, 0x64, 0x65, 0x6a, 0x19, 0x69, 0xdc, 0xc3, 0x75, 0xf5, 0xd7, 0x14, 0x64, 0x65, 0x3c,
- 0xc9, 0x33, 0x28, 0x88, 0xc8, 0xe9, 0x50, 0x9b, 0xb9, 0x78, 0x78, 0x6a, 0xb3, 0x2e, 0x1f, 0xf6,
- 0xc7, 0xeb, 0xc5, 0x4f, 0xbb, 0xc1, 0x95, 0x4f, 0xc3, 0x64, 0x43, 0xf4, 0x3c, 0xda, 0x11, 0x41,
- 0xb4, 0x16, 0xba, 0x8e, 0x70, 0xd6, 0x18, 0x17, 0x34, 0xe2, 0x8e, 0xb7, 0x26, 0xad, 0x5a, 0x5b,
- 0x2a, 0x99, 0x5b, 0x56, 0x1e, 0x25, 0x4d, 0x97, 0x1c, 0x41, 0x3e, 0x0e, 0x1d, 0x2e, 0xc5, 0xd3,
- 0x28, 0xfe, 0xa5, 0x16, 0x7f, 0x38, 0xbe, 0xf8, 0x7e, 0xe8, 0x70, 0x73, 0xcb, 0xca, 0x49, 0x41,
- 0xd3, 0xad, 0xfe, 0x92, 0x82, 0xe2, 0x20, 0x1a, 0xf2, 0x8d, 0xba, 0xfd, 0xe2, 0x1b, 0x85, 0xc6,
- 0xae, 0xbe, 0x9b, 0x7c, 0x07, 0xff, 0x75, 0xba, 0xdd, 0x88, 0x76, 0x55, 0xb2, 0x08, 0xea, 0x87,
- 0x41, 0xe4, 0x78, 0x4c, 0xf4, 0x8d, 0xcc, 0x52, 0x6a, 0x65, 0x7a, 0xe3, 0xd1, 0x78, 0x85, 0x37,
- 0xd4, 0x6a, 0x0f, 0xa5, 0xac, 0x79, 0xe7, 0x46, 0xbc, 0xfa, 0x22, 0x03, 0x39, 0x15, 0x4e, 0x99,
- 0xb2, 0xa3, 0x5d, 0x8d, 0x5e, 0xe0, 0xe4, 0xc8, 0x5a, 0xe5, 0x91, 0x9e, 0x46, 0x2f, 0xc8, 0x06,
- 0xfc, 0x27, 0x01, 0x62, 0x3b, 0x16, 0x4e, 0x24, 0x34, 0x5b, 0x16, 0x51, 0xd6, 0xba, 0x35, 0xd8,
- 0xdc, 0x97, 0x7b, 0xea, 0xcc, 0x48, 0xc3, 0x8c, 0x6d, 0x8f, 0xf2, 0xae, 0x38, 0xc5, 0x92, 0xca,
- 0x0e, 0x1b, 0x66, 0xdc, 0x44, 0x58, 0x26, 0x60, 0x2c, 0x9c, 0xce, 0x59, 0x92, 0x02, 0x5a, 0x5c,
- 0x16, 0x58, 0xd9, 0x9a, 0x1d, 0x6e, 0x99, 0xae, 0x92, 0x9e, 0x83, 0xc9, 0x73, 0xf9, 0xcd, 0x71,
- 0x18, 0x65, 0x2c, 0x65, 0x90, 0x16, 0x4c, 0x7a, 0xce, 0x31, 0xf5, 0xf4, 0x38, 0xd9, 0x18, 0xaf,
- 0xab, 0xc8, 0x93, 0xba, 0x9a, 0x94, 0x0c, 0x59, 0x00, 0x18, 0x24, 0xb0, 0x2c, 0x65, 0xf9, 0x5d,
- 0x46, 0x10, 0x19, 0x58, 0xd9, 0x7f, 0xb0, 0xcc, 0xb2, 0x16, 0xae, 0xc9, 0x87, 0x30, 0x27, 0xfb,
- 0x41, 0x2c, 0x1c, 0x3f, 0x8c, 0x65, 0x2b, 0xbd, 0xc0, 0x4e, 0x80, 0x15, 0x97, 0xb5, 0xc8, 0x70,
- 0xef, 0x80, 0xb3, 0x0b, 0xd9, 0x0e, 0xaa, 0xdf, 0xc0, 0x24, 0xde, 0x4d, 0x2a, 0x90, 0x19, 0x96,
- 0x8e, 0x5c, 0x4a, 0x24, 0x16, 0x91, 0x4e, 0x1c, 0xb9, 0x94, 0x08, 0xef, 0xf9, 0x98, 0x23, 0x19,
- 0x4b, 0x2e, 0xc9, 0xff, 0xa0, 0xc0, 0x7b, 0x3e, 0x36, 0x6d, 0x23, 0x8b, 0x70, 0x9e, 0xf7, 0x7c,
- 0x59, 0x95, 0xd5, 0xdf, 0x32, 0x90, 0xd7, 0x53, 0x92, 0x4c, 0x43, 0x5a, 0x57, 0x56, 0xd6, 0x4a,
- 0x33, 0x57, 0xb6, 0x4b, 0x9f, 0xfa, 0x41, 0xd4, 0x57, 0xd1, 0xc4, 0x3b, 0xb2, 0x56, 0x49, 0x61,
- 0x18, 0xc4, 0x11, 0x8a, 0xc7, 0x7c, 0x26, 0xf0, 0xd2, 0x01, 0xa5, 0x29, 0x21, 0xd9, 0x30, 0xe5,
- 0xc7, 0xb4, 0x83, 0x93, 0x93, 0x98, 0xaa, 0xfb, 0xb3, 0x16, 0x48, 0x68, 0x17, 0x11, 0x72, 0x1b,
- 0x0a, 0xd2, 0xe2, 0x8e, 0x4f, 0x8d, 0x49, 0xf4, 0x6e, 0x60, 0x4b, 0xcf, 0x8f, 0x7b, 0xcc, 0x73,
- 0x65, 0x55, 0xe6, 0x94, 0xe7, 0x68, 0x9b, 0x2e, 0x79, 0x06, 0xe5, 0x64, 0xcb, 0x3e, 0x63, 0xdc,
- 0xc5, 0x1e, 0x39, 0xbd, 0xf1, 0x70, 0xac, 0x88, 0x6e, 0x2a, 0xb1, 0x1d, 0xc6, 0x5d, 0xab, 0x74,
- 0x3c, 0x34, 0xae, 0xc4, 0x75, 0xea, 0x5a, 0x5c, 0x97, 0xa1, 0x7c, 0xea, 0xc4, 0x76, 0x32, 0x75,
- 0xd5, 0xa4, 0x28, 0x58, 0x53, 0xa7, 0x4e, 0x9c, 0x4c, 0xe6, 0x21, 0x49, 0xbf, 0x46, 0x4d, 0x0b,
- 0x4d, 0x4a, 0x30, 0xb2, 0x02, 0x15, 0x49, 0xf2, 0x18, 0xa7, 0x36, 0xef, 0xf9, 0xc7, 0x34, 0x52,
- 0x53, 0xa3, 0x60, 0x4d, 0x9f, 0x3a, 0x71, 0x93, 0x71, 0xda, 0x52, 0x28, 0x59, 0x85, 0x59, 0xc9,
- 0x64, 0x1c, 0xb9, 0x7a, 0x00, 0x01, 0x52, 0x67, 0x4e, 0x9d, 0xd8, 0x44, 0x5c, 0x4d, 0xa1, 0xea,
- 0xf7, 0x69, 0x28, 0x24, 0x3f, 0x53, 0xae, 0x05, 0x76, 0x19, 0xca, 0xfa, 0xa7, 0x90, 0x2e, 0x22,
- 0x15, 0xd9, 0x29, 0x0d, 0xaa, 0xfa, 0x31, 0x20, 0xef, 0xb8, 0x6e, 0x44, 0xe3, 0x58, 0x47, 0x35,
- 0x31, 0xc9, 0x0e, 0xe6, 0x34, 0xd5, 0x3f, 0x9d, 0xc6, 0x1e, 0xcc, 0xc9, 0x3c, 0x42, 0x11, 0xf2,
- 0x7f, 0x28, 0xb2, 0xd8, 0x3e, 0x09, 0x3c, 0x97, 0xba, 0x18, 0xfe, 0x82, 0x55, 0x60, 0xf1, 0x36,
- 0xda, 0x38, 0x4b, 0xfb, 0x21, 0xd5, 0x5e, 0xe6, 0xb0, 0xd4, 0x8b, 0x12, 0x51, 0x2e, 0x5e, 0x0e,
- 0x52, 0xfe, 0x6a, 0x90, 0xaa, 0x47, 0x38, 0x38, 0xb0, 0x81, 0x25, 0x81, 0x1a, 0x34, 0x30, 0xf9,
- 0xa2, 0x72, 0x82, 0x2a, 0x39, 0xa2, 0xdf, 0xa5, 0x9b, 0x30, 0xba, 0x37, 0x0f, 0xb9, 0x4e, 0xe0,
- 0xf5, 0x7c, 0xae, 0xeb, 0x49, 0x5b, 0xd5, 0x17, 0x29, 0x28, 0x24, 0x81, 0xbe, 0xf6, 0x7d, 0x09,
- 0x64, 0x31, 0x9b, 0xb5, 0x10, 0x66, 0xf2, 0x22, 0x94, 0xe2, 0x7e, 0x2c, 0xa8, 0x6f, 0xe3, 0x96,
- 0x52, 0x03, 0x05, 0xb5, 0x24, 0x61, 0xb4, 0x0c, 0xb2, 0x57, 0xca, 0xe0, 0x0e, 0x80, 0x6a, 0xa8,
- 0xe8, 0x9f, 0x2a, 0x92, 0x22, 0x22, 0xf2, 0x7d, 0xab, 0x3f, 0xa4, 0x60, 0xfe, 0xe6, 0xf6, 0x4e,
- 0xee, 0xc1, 0x72, 0xfd, 0xf1, 0x63, 0xab, 0xf1, 0xb8, 0xde, 0x36, 0x77, 0x5b, 0x76, 0xbb, 0xf1,
- 0x74, 0x6f, 0xd7, 0xaa, 0x37, 0xcd, 0xf6, 0x91, 0x7d, 0xd0, 0xda, 0xdf, 0x6b, 0x3c, 0x32, 0xb7,
- 0xcd, 0xc6, 0x56, 0x65, 0x82, 0xdc, 0x85, 0x3b, 0xef, 0x22, 0x6e, 0x35, 0x9a, 0xed, 0x7a, 0x25,
- 0x45, 0xde, 0x87, 0xea, 0xbb, 0x28, 0x8f, 0x0e, 0x9e, 0x1e, 0x34, 0xeb, 0x6d, 0xf3, 0xb0, 0x51,
- 0x49, 0xaf, 0x7e, 0x0e, 0xa5, 0x91, 0xba, 0x22, 0xb7, 0x60, 0x66, 0xf3, 0xc0, 0x6c, 0x6e, 0xd9,
- 0xe6, 0x96, 0xdd, 0x34, 0x5b, 0x3b, 0x0d, 0xab, 0x32, 0x41, 0x0c, 0x98, 0x1b, 0x80, 0x9b, 0x66,
- 0xab, 0x6e, 0x1d, 0xd9, 0x4f, 0xea, 0xfb, 0x4f, 0x2a, 0xa9, 0xcd, 0x9f, 0x52, 0x2f, 0xdf, 0x2c,
- 0xa4, 0x5e, 0xbd, 0x59, 0x48, 0xfd, 0xf9, 0x66, 0x21, 0xf5, 0xe3, 0xdb, 0x85, 0x89, 0x57, 0x6f,
- 0x17, 0x26, 0x7e, 0x7f, 0xbb, 0x30, 0xf1, 0xad, 0x35, 0xf6, 0x24, 0x56, 0xff, 0x1b, 0x75, 0x29,
- 0x7f, 0xd7, 0xbf, 0x68, 0x3f, 0xa7, 0xef, 0xef, 0x86, 0x94, 0xb7, 0x07, 0x8a, 0x7b, 0x98, 0xbe,
- 0x7b, 0x49, 0xfa, 0x1e, 0xae, 0x37, 0x46, 0xd8, 0xc7, 0x39, 0xd4, 0x7b, 0xf0, 0x77, 0x00, 0x00,
- 0x00, 0xff, 0xff, 0xef, 0x03, 0x47, 0x6d, 0x06, 0x0e, 0x00, 0x00,
+ // 1480 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x57, 0x4f, 0x4f, 0x23, 0x47,
+ 0x16, 0xa7, 0xb1, 0xf1, 0x9f, 0x67, 0x0c, 0xa6, 0x86, 0x65, 0x7b, 0x67, 0x35, 0xc0, 0x18, 0xed,
+ 0x0e, 0xcb, 0x48, 0x66, 0x61, 0x76, 0xa3, 0x49, 0x14, 0x29, 0x31, 0x83, 0x19, 0x5a, 0x78, 0x0c,
+ 0x29, 0x0c, 0x09, 0xd1, 0x44, 0xad, 0xc6, 0x5d, 0x98, 0x16, 0xdd, 0xd5, 0xad, 0xee, 0x32, 0xc2,
+ 0x52, 0x8e, 0x39, 0x45, 0x39, 0xe4, 0x9c, 0x8f, 0x90, 0x5b, 0x3e, 0x41, 0xae, 0x23, 0xe5, 0x32,
+ 0x97, 0x48, 0x51, 0x0e, 0xa3, 0x68, 0xe6, 0x6b, 0xe4, 0x10, 0xd5, 0xab, 0x6a, 0xdb, 0x30, 0xcc,
+ 0xc1, 0x73, 0x41, 0xf5, 0x7e, 0xf5, 0xea, 0x57, 0xaf, 0xfa, 0xbd, 0xdf, 0x7b, 0x06, 0x3e, 0x09,
+ 0x23, 0xc6, 0x05, 0xf3, 0x59, 0xc0, 0x44, 0xdc, 0x5f, 0x8f, 0xe2, 0x50, 0x84, 0xf2, 0xef, 0x99,
+ 0xe7, 0xb3, 0x64, 0xfd, 0x72, 0x83, 0x5d, 0x45, 0x2c, 0xf6, 0x02, 0xc6, 0x85, 0xe3, 0xaf, 0x47,
+ 0x72, 0x83, 0x5d, 0x09, 0xc6, 0x5d, 0xe6, 0xd6, 0xd0, 0x97, 0x3c, 0xbc, 0x46, 0xa0, 0xc0, 0x5a,
+ 0x4a, 0x50, 0xbb, 0x4e, 0x70, 0x77, 0xbe, 0x1b, 0x76, 0x43, 0x75, 0x87, 0x5c, 0x29, 0xef, 0xbb,
+ 0x6b, 0xb7, 0xc5, 0xd0, 0x09, 0x83, 0x20, 0xe4, 0xeb, 0x97, 0x1b, 0x7a, 0xa5, 0x7c, 0xab, 0xbf,
+ 0x14, 0x20, 0x7f, 0xa0, 0xd8, 0xc9, 0xe7, 0x50, 0x4a, 0x9c, 0x20, 0xf2, 0x99, 0x2d, 0xfa, 0x11,
+ 0x33, 0x8d, 0xe5, 0xcc, 0x6a, 0x69, 0xf3, 0x83, 0xda, 0x18, 0x01, 0xd5, 0x8e, 0x1d, 0xbf, 0xc7,
+ 0xda, 0xfd, 0x88, 0x51, 0x50, 0x54, 0x72, 0x4d, 0xf6, 0x20, 0xa7, 0x2c, 0x73, 0x12, 0x39, 0x1f,
+ 0x8d, 0xc5, 0x79, 0x88, 0x47, 0xa9, 0xa6, 0x20, 0x2d, 0xc8, 0x07, 0x4e, 0x14, 0x79, 0xbc, 0x6b,
+ 0x66, 0x90, 0xed, 0x7f, 0x63, 0xb1, 0x3d, 0x53, 0x67, 0x69, 0x4a, 0x42, 0x3e, 0x83, 0x82, 0x1f,
+ 0x76, 0x1c, 0xe1, 0x85, 0xdc, 0xcc, 0x22, 0xe1, 0xff, 0xc7, 0x22, 0x6c, 0xea, 0xc3, 0x74, 0x40,
+ 0x43, 0xfe, 0x03, 0x95, 0x74, 0x6d, 0x7b, 0xdc, 0xf5, 0x3a, 0x2c, 0x31, 0x67, 0x97, 0x33, 0xab,
+ 0x19, 0x3a, 0x9b, 0xe2, 0x96, 0x82, 0xe5, 0xed, 0x67, 0x3d, 0xde, 0xc1, 0xdb, 0xa7, 0xde, 0xe3,
+ 0xf6, 0x1d, 0x7d, 0x98, 0x0e, 0x68, 0xc8, 0x31, 0xcc, 0x3a, 0x42, 0xc4, 0xde, 0x69, 0x4f, 0x30,
+ 0x5b, 0x38, 0xa7, 0x3e, 0x33, 0x2b, 0xc8, 0xfc, 0xe0, 0x56, 0x66, 0x5d, 0x0e, 0x97, 0x1b, 0xb5,
+ 0x3d, 0xd6, 0xc7, 0xfc, 0x6d, 0x65, 0x5f, 0xbc, 0x5a, 0x9a, 0xa0, 0x33, 0x03, 0x96, 0xb6, 0x24,
+ 0x21, 0x9d, 0x51, 0xde, 0x1e, 0xf7, 0x44, 0x62, 0xce, 0x21, 0xef, 0x47, 0x63, 0x45, 0x5c, 0x4f,
+ 0x39, 0x8e, 0xb8, 0x27, 0x46, 0x2e, 0x91, 0x66, 0x42, 0x0e, 0x00, 0x7c, 0x8f, 0x5f, 0xe8, 0xb8,
+ 0x09, 0xf2, 0x6f, 0x8c, 0x97, 0x0f, 0x8f, 0x5f, 0xd0, 0xa2, 0x24, 0x51, 0x61, 0xdf, 0x87, 0xe9,
+ 0x44, 0xc4, 0x1e, 0xef, 0x6a, 0xce, 0xdc, 0x72, 0x66, 0xb5, 0x48, 0x4b, 0x0a, 0x53, 0x2e, 0x4b,
+ 0x50, 0x72, 0xe3, 0x30, 0xb2, 0xcf, 0x62, 0x27, 0x60, 0x89, 0x99, 0x5f, 0x36, 0x56, 0x33, 0x14,
+ 0x24, 0xb4, 0x83, 0x88, 0x74, 0xb8, 0x60, 0x6c, 0xe0, 0x50, 0x50, 0x0e, 0x12, 0xd2, 0x0e, 0xf7,
+ 0x00, 0x84, 0x17, 0x30, 0x9b, 0x3b, 0x3c, 0x4c, 0xcc, 0x22, 0xee, 0x17, 0x25, 0xd2, 0x92, 0x00,
+ 0xf9, 0x17, 0xcc, 0xb8, 0xbd, 0x58, 0x15, 0x84, 0x72, 0x01, 0x74, 0x29, 0xa7, 0xa8, 0x72, 0xfb,
+ 0x0a, 0x4a, 0xf2, 0x21, 0xa1, 0xab, 0x04, 0x58, 0x5a, 0x36, 0xde, 0x5f, 0x80, 0x3a, 0x89, 0xa0,
+ 0x08, 0x51, 0x86, 0x0b, 0x90, 0x53, 0x96, 0x39, 0x8d, 0xb7, 0x6b, 0x8b, 0x98, 0x90, 0x97, 0x45,
+ 0xc0, 0xb8, 0x30, 0xcb, 0x58, 0xa5, 0xa9, 0x49, 0x6a, 0x70, 0xc7, 0x65, 0x67, 0x4e, 0xcf, 0x17,
+ 0xf6, 0x68, 0x67, 0x98, 0xc1, 0xe3, 0x73, 0x7a, 0xeb, 0x70, 0x20, 0xf4, 0xea, 0x2e, 0x94, 0xaf,
+ 0xa5, 0x97, 0xac, 0x40, 0x79, 0x58, 0x33, 0x17, 0xac, 0x6f, 0x1a, 0x78, 0x74, 0x7a, 0x00, 0xee,
+ 0xb1, 0x3e, 0x21, 0x90, 0x95, 0xe5, 0x64, 0x4e, 0xe2, 0x1e, 0xae, 0xab, 0x3f, 0x1b, 0x90, 0x95,
+ 0x99, 0x24, 0xcf, 0xa1, 0x20, 0x62, 0xa7, 0xc3, 0x6c, 0xcf, 0xc5, 0xc3, 0xd3, 0x5b, 0x75, 0xf9,
+ 0xb0, 0xdf, 0x5f, 0x2d, 0x7d, 0xd8, 0x0d, 0x6f, 0x7c, 0x1a, 0x4f, 0xb6, 0x39, 0xdf, 0x67, 0x1d,
+ 0x11, 0xc6, 0xeb, 0x91, 0xeb, 0x08, 0x67, 0xdd, 0xe3, 0x82, 0xc5, 0xdc, 0xf1, 0xd7, 0xa5, 0x55,
+ 0x6b, 0x4b, 0x26, 0x6b, 0x9b, 0xe6, 0x91, 0xd2, 0x72, 0xc9, 0x09, 0xe4, 0x93, 0xc8, 0xe1, 0x92,
+ 0x7c, 0x12, 0xc9, 0x3f, 0xd5, 0xe4, 0x8f, 0xc7, 0x27, 0x3f, 0x8c, 0x1c, 0x6e, 0x6d, 0xd3, 0x9c,
+ 0x24, 0xb4, 0xdc, 0xea, 0x4f, 0x06, 0x14, 0x07, 0xd9, 0x90, 0x6f, 0xd4, 0x4d, 0x15, 0xdf, 0x28,
+ 0x34, 0x76, 0xf3, 0xdd, 0xe4, 0x6b, 0xf8, 0xbb, 0xd3, 0xed, 0xc6, 0xac, 0xab, 0x8a, 0x45, 0xb0,
+ 0x20, 0x0a, 0x63, 0xc7, 0xf7, 0x44, 0xdf, 0xcc, 0x2c, 0x1b, 0xab, 0x33, 0x9b, 0x4f, 0xc6, 0x13,
+ 0xdb, 0x90, 0xab, 0x3d, 0xa4, 0xa2, 0x0b, 0xce, 0xad, 0x78, 0xf5, 0x9b, 0x0c, 0xe4, 0x54, 0x3a,
+ 0x65, 0xc9, 0x8e, 0xf6, 0x30, 0x76, 0x85, 0xf3, 0x20, 0x4b, 0xcb, 0x23, 0x1d, 0x8c, 0x5d, 0x91,
+ 0x4d, 0xf8, 0x5b, 0x0a, 0x24, 0x76, 0x22, 0x9c, 0x58, 0x68, 0x6f, 0x29, 0xa2, 0x2c, 0xbd, 0x33,
+ 0xd8, 0x3c, 0x94, 0x7b, 0xea, 0xcc, 0x48, 0x7b, 0x4c, 0x6c, 0x9f, 0xf1, 0xae, 0x38, 0x47, 0x49,
+ 0x65, 0x87, 0xed, 0x31, 0x69, 0x22, 0x2c, 0x0b, 0x30, 0x11, 0x4e, 0xe7, 0x22, 0x2d, 0x01, 0x4d,
+ 0x2e, 0x05, 0x56, 0xa6, 0x73, 0xc3, 0x2d, 0xcb, 0x55, 0xd4, 0xf3, 0x30, 0x75, 0x29, 0xbf, 0x39,
+ 0x0e, 0x9a, 0x0c, 0x55, 0x06, 0xd9, 0x85, 0x29, 0xdf, 0x39, 0x65, 0xbe, 0x1e, 0x18, 0x9b, 0xe3,
+ 0xf5, 0x13, 0x79, 0x92, 0x2a, 0x02, 0xb2, 0x08, 0x30, 0x28, 0x5d, 0x29, 0x62, 0xf9, 0x45, 0x46,
+ 0x10, 0x99, 0x52, 0xd9, 0x79, 0x50, 0x60, 0x59, 0x8a, 0x6b, 0xf2, 0x5f, 0x98, 0x97, 0x9d, 0x20,
+ 0x11, 0x4e, 0x10, 0x25, 0xb2, 0x71, 0x5e, 0x61, 0x0f, 0x40, 0xad, 0x65, 0x29, 0x19, 0xee, 0x1d,
+ 0x71, 0xef, 0x4a, 0x36, 0x82, 0xea, 0x17, 0x30, 0x85, 0xb7, 0x92, 0x0a, 0x64, 0x86, 0xa2, 0x91,
+ 0x4b, 0x89, 0x24, 0x22, 0xd6, 0x25, 0x23, 0x97, 0x12, 0xe1, 0xbd, 0x00, 0xab, 0x23, 0x43, 0xe5,
+ 0x92, 0xfc, 0x03, 0x0a, 0xbc, 0x17, 0x60, 0x8b, 0x36, 0xb3, 0x08, 0xe7, 0x79, 0x2f, 0x90, 0x7a,
+ 0xac, 0xfe, 0x9a, 0x81, 0xbc, 0x9e, 0x80, 0x64, 0x06, 0x26, 0xb5, 0xa6, 0xb2, 0x74, 0xd2, 0x73,
+ 0x65, 0xa3, 0x0c, 0x58, 0x10, 0xc6, 0x7d, 0x95, 0x47, 0xbc, 0x23, 0x4b, 0x4b, 0x0a, 0xc3, 0xf4,
+ 0x8d, 0xb8, 0xf8, 0x5e, 0xe0, 0x09, 0xbc, 0x74, 0xe0, 0xd2, 0x94, 0x90, 0x6c, 0x95, 0xf2, 0x33,
+ 0xda, 0xe1, 0xd9, 0x59, 0xc2, 0xd4, 0xfd, 0x59, 0x0a, 0x12, 0xda, 0x47, 0x84, 0xdc, 0x85, 0x82,
+ 0xb4, 0xb8, 0x13, 0x30, 0x73, 0x0a, 0xa3, 0x1b, 0xd8, 0x32, 0xf2, 0xd3, 0x9e, 0xe7, 0xbb, 0x52,
+ 0x8f, 0x39, 0x15, 0x39, 0xda, 0x96, 0x4b, 0x9e, 0x43, 0x39, 0xdd, 0xb2, 0x2f, 0x3c, 0xee, 0x62,
+ 0x77, 0x9c, 0xd9, 0x7c, 0x3c, 0x56, 0x2e, 0xb7, 0x14, 0xd9, 0x9e, 0xc7, 0x5d, 0x5a, 0x3a, 0x1d,
+ 0x1a, 0x37, 0xf2, 0x3a, 0xfd, 0x56, 0x5e, 0x57, 0xa0, 0x7c, 0xee, 0x24, 0x76, 0x3a, 0x63, 0xd5,
+ 0x8c, 0x28, 0xd0, 0xe9, 0x73, 0x27, 0x49, 0x27, 0xf0, 0xd0, 0x49, 0xbf, 0x46, 0xcd, 0x09, 0xed,
+ 0x94, 0x62, 0x64, 0x15, 0x2a, 0xd2, 0xc9, 0xf7, 0x38, 0xb3, 0x79, 0x2f, 0x38, 0x65, 0xb1, 0x9a,
+ 0x17, 0x05, 0x3a, 0x73, 0xee, 0x24, 0x4d, 0x8f, 0xb3, 0x96, 0x42, 0xc9, 0x1a, 0xcc, 0x49, 0x4f,
+ 0x8f, 0xa3, 0xaf, 0x1e, 0x3d, 0x80, 0xae, 0xb3, 0xe7, 0x4e, 0x62, 0x21, 0xae, 0xe6, 0x4f, 0xf5,
+ 0x4f, 0x03, 0x0a, 0xe9, 0x0f, 0x91, 0xb7, 0x12, 0xbb, 0x02, 0x65, 0xfd, 0x63, 0x47, 0xcb, 0x47,
+ 0x65, 0x76, 0x5a, 0x83, 0x4a, 0x39, 0x26, 0xe4, 0x1d, 0xd7, 0x8d, 0x59, 0x92, 0xe8, 0xac, 0xa6,
+ 0x26, 0x69, 0x60, 0x4d, 0x33, 0xfd, 0xe3, 0x68, 0xec, 0x61, 0xcc, 0x50, 0x06, 0x8c, 0xfc, 0x13,
+ 0x8a, 0x5e, 0x62, 0x9f, 0x85, 0xbe, 0xcb, 0x5c, 0x4c, 0x7c, 0x81, 0x16, 0xbc, 0x64, 0x07, 0x6d,
+ 0x9c, 0x9f, 0xfd, 0x88, 0xe9, 0xf8, 0x72, 0x28, 0xef, 0xa2, 0x44, 0x54, 0x70, 0xd7, 0xd3, 0x93,
+ 0xbf, 0x99, 0x9e, 0xea, 0x09, 0x0e, 0x0b, 0x6c, 0x5a, 0x69, 0x8a, 0x06, 0x4d, 0x4b, 0xbe, 0xa5,
+ 0x9c, 0xa2, 0x8a, 0x8e, 0xe8, 0x17, 0xe9, 0xc6, 0x8b, 0xe1, 0x2d, 0x40, 0xae, 0x13, 0xfa, 0xbd,
+ 0x80, 0x6b, 0x25, 0x69, 0xab, 0xfa, 0xad, 0x01, 0x85, 0x34, 0xc5, 0x6f, 0x7d, 0x59, 0x02, 0x59,
+ 0xac, 0x63, 0x4d, 0x84, 0x35, 0xbc, 0x04, 0xa5, 0xa4, 0x9f, 0x08, 0x16, 0xd8, 0xb8, 0xa5, 0xd8,
+ 0x40, 0x41, 0x2d, 0xe9, 0x30, 0x2a, 0x80, 0xec, 0x0d, 0x01, 0xdc, 0x03, 0x50, 0x4d, 0x14, 0xe3,
+ 0x53, 0xf2, 0x28, 0x22, 0x22, 0xdf, 0xb7, 0xf6, 0x9d, 0x01, 0x0b, 0xb7, 0xb7, 0x74, 0xf2, 0x00,
+ 0x56, 0xea, 0x4f, 0x9f, 0xd2, 0xc6, 0xd3, 0x7a, 0xdb, 0xda, 0x6f, 0xd9, 0xed, 0xc6, 0xb3, 0x83,
+ 0x7d, 0x5a, 0x6f, 0x5a, 0xed, 0x13, 0xfb, 0xa8, 0x75, 0x78, 0xd0, 0x78, 0x62, 0xed, 0x58, 0x8d,
+ 0xed, 0xca, 0x04, 0xb9, 0x0f, 0xf7, 0xde, 0xe5, 0xb8, 0xdd, 0x68, 0xb6, 0xeb, 0x15, 0x83, 0xfc,
+ 0x1b, 0xaa, 0xef, 0x72, 0x79, 0x72, 0xf4, 0xec, 0xa8, 0x59, 0x6f, 0x5b, 0xc7, 0x8d, 0xca, 0xe4,
+ 0xda, 0xc7, 0x50, 0x1a, 0x51, 0x14, 0xb9, 0x03, 0xb3, 0x5b, 0x47, 0x56, 0x73, 0xdb, 0xb6, 0xb6,
+ 0xed, 0xa6, 0xd5, 0xda, 0x6b, 0xd0, 0xca, 0x04, 0x31, 0x61, 0x7e, 0x00, 0x6e, 0x59, 0xad, 0x3a,
+ 0x3d, 0xb1, 0x77, 0xeb, 0x87, 0xbb, 0x15, 0x63, 0xeb, 0x07, 0xe3, 0xc5, 0xeb, 0x45, 0xe3, 0xe5,
+ 0xeb, 0x45, 0xe3, 0x8f, 0xd7, 0x8b, 0xc6, 0xf7, 0x6f, 0x16, 0x27, 0x5e, 0xbe, 0x59, 0x9c, 0xf8,
+ 0xed, 0xcd, 0xe2, 0xc4, 0x97, 0x74, 0xec, 0xe9, 0xab, 0xfe, 0xcb, 0xe9, 0x32, 0xfe, 0xae, 0x7f,
+ 0xb6, 0x7e, 0x9c, 0x7c, 0xb8, 0x1f, 0x31, 0xde, 0x1e, 0x30, 0x1e, 0x60, 0xe1, 0x1e, 0xa4, 0x85,
+ 0x7b, 0xbc, 0xd1, 0x18, 0xf1, 0x3e, 0xcd, 0x21, 0xdf, 0xa3, 0xbf, 0x02, 0x00, 0x00, 0xff, 0xff,
+ 0x92, 0x79, 0x01, 0x14, 0xd0, 0x0d, 0x00, 0x00,
}
func (m *Profile) Marshal() (dAtA []byte, err error) {
@@ -2490,7 +2490,7 @@ func (m *Profile) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.SampleType = append(m.SampleType, ValueType{})
+ m.SampleType = append(m.SampleType, &ValueType{})
if err := m.SampleType[len(m.SampleType)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -2524,7 +2524,7 @@ func (m *Profile) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Sample = append(m.Sample, Sample{})
+ m.Sample = append(m.Sample, &Sample{})
if err := m.Sample[len(m.Sample)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -2558,7 +2558,7 @@ func (m *Profile) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Mapping = append(m.Mapping, Mapping{})
+ m.Mapping = append(m.Mapping, &Mapping{})
if err := m.Mapping[len(m.Mapping)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -2592,7 +2592,7 @@ func (m *Profile) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Location = append(m.Location, Location{})
+ m.Location = append(m.Location, &Location{})
if err := m.Location[len(m.Location)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -2626,7 +2626,7 @@ func (m *Profile) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Function = append(m.Function, Function{})
+ m.Function = append(m.Function, &Function{})
if err := m.Function[len(m.Function)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -3025,7 +3025,7 @@ func (m *Profile) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.AttributeUnits = append(m.AttributeUnits, AttributeUnit{})
+ m.AttributeUnits = append(m.AttributeUnits, &AttributeUnit{})
if err := m.AttributeUnits[len(m.AttributeUnits)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -3059,7 +3059,7 @@ func (m *Profile) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.LinkTable = append(m.LinkTable, Link{})
+ m.LinkTable = append(m.LinkTable, &Link{})
if err := m.LinkTable[len(m.LinkTable)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -3606,7 +3606,7 @@ func (m *Sample) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Label = append(m.Label, Label{})
+ m.Label = append(m.Label, &Label{})
if err := m.Label[len(m.Label)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -4440,7 +4440,7 @@ func (m *Location) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Line = append(m.Line, Line{})
+ m.Line = append(m.Line, &Line{})
if err := m.Line[len(m.Line)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
diff --git a/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/profiles.pb.go b/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/profiles.pb.go
index 6e4662c248ba3..cd1c215adb441 100644
--- a/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/profiles.pb.go
+++ b/vendor/go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental/profiles.pb.go
@@ -13,6 +13,7 @@ import (
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
+ go_opentelemetry_io_collector_pdata_internal_data "go.opentelemetry.io/collector/pdata/internal/data"
v11 "go.opentelemetry.io/collector/pdata/internal/data/protogen/common/v1"
v1 "go.opentelemetry.io/collector/pdata/internal/data/protogen/resource/v1"
)
@@ -231,7 +232,7 @@ type ProfileContainer struct {
// all zeroes is considered invalid.
//
// This field is required.
- ProfileId []byte `protobuf:"bytes,1,opt,name=profile_id,json=profileId,proto3" json:"profile_id,omitempty"`
+ ProfileId go_opentelemetry_io_collector_pdata_internal_data.ProfileID `protobuf:"bytes,1,opt,name=profile_id,json=profileId,proto3,customtype=go.opentelemetry.io/collector/pdata/internal/data.ProfileID" json:"profile_id"`
// start_time_unix_nano is the start time of the profile.
// Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January 1970.
//
@@ -304,13 +305,6 @@ func (m *ProfileContainer) XXX_DiscardUnknown() {
var xxx_messageInfo_ProfileContainer proto.InternalMessageInfo
-func (m *ProfileContainer) GetProfileId() []byte {
- if m != nil {
- return m.ProfileId
- }
- return nil
-}
-
func (m *ProfileContainer) GetStartTimeUnixNano() uint64 {
if m != nil {
return m.StartTimeUnixNano
@@ -372,48 +366,49 @@ func init() {
}
var fileDescriptor_394731f2296acea3 = []byte{
- // 652 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x94, 0x51, 0x6b, 0xdb, 0x3a,
- 0x14, 0xc7, 0xe3, 0xa6, 0x4d, 0x52, 0xb5, 0xb9, 0x4d, 0x45, 0xef, 0xbd, 0xa6, 0x70, 0x73, 0x43,
- 0x5e, 0x96, 0xae, 0x60, 0x93, 0x76, 0x8c, 0x51, 0x18, 0x63, 0xed, 0x36, 0xe8, 0xca, 0xd6, 0xe0,
- 0xb5, 0x85, 0xed, 0xc5, 0xa8, 0xf1, 0x69, 0xa6, 0x61, 0x4b, 0x46, 0x96, 0x43, 0xba, 0x4f, 0xb1,
- 0xcf, 0xb1, 0x4f, 0xd2, 0xc7, 0xee, 0x6d, 0x6c, 0x30, 0x46, 0xfb, 0xb2, 0x7e, 0x8b, 0x61, 0x59,
- 0xf6, 0x12, 0x93, 0x51, 0xb2, 0x17, 0x23, 0x9f, 0xf3, 0x3f, 0xbf, 0xa3, 0xff, 0x91, 0x10, 0xda,
- 0xe1, 0x21, 0x30, 0x09, 0x3e, 0x04, 0x20, 0xc5, 0xb9, 0x1d, 0x0a, 0x2e, 0x79, 0xf2, 0x3d, 0xa3,
- 0x3e, 0x44, 0xf6, 0xb0, 0x0b, 0xa3, 0x10, 0x04, 0x0d, 0x80, 0x49, 0xe2, 0xe7, 0x71, 0x4b, 0xc9,
- 0xf0, 0xe6, 0x44, 0x6d, 0x1a, 0xb4, 0x72, 0xcd, 0x64, 0xed, 0xfa, 0xda, 0x80, 0x0f, 0x78, 0x8a,
- 0x4f, 0x56, 0xa9, 0x7a, 0xfd, 0xee, 0xb4, 0xf6, 0x7d, 0x1e, 0x04, 0x9c, 0xd9, 0xc3, 0xae, 0x5e,
- 0x69, 0xad, 0x35, 0x4d, 0x2b, 0x20, 0xe2, 0xb1, 0xe8, 0x43, 0xa2, 0xce, 0xd6, 0x5a, 0xff, 0x68,
- 0x26, 0x6b, 0x49, 0x02, 0x46, 0x12, 0x98, 0x07, 0x5e, 0x0a, 0x68, 0xbf, 0x47, 0xcb, 0x3d, 0x2d,
- 0x7f, 0x42, 0x24, 0xc1, 0xef, 0xd0, 0x6a, 0xd6, 0xc2, 0xcd, 0x38, 0xa6, 0xd1, 0x2a, 0x77, 0x96,
- 0xb6, 0x1e, 0x5a, 0x33, 0xcc, 0xc2, 0x72, 0x34, 0x25, 0xa3, 0x3b, 0x0d, 0x51, 0x88, 0xb4, 0x6f,
- 0x0c, 0xd4, 0x28, 0xca, 0xf0, 0x01, 0xaa, 0x65, 0x42, 0xd3, 0x68, 0x19, 0x9d, 0xa5, 0xad, 0x8d,
- 0xa9, 0x7d, 0xf3, 0x41, 0x0c, 0xbb, 0x79, 0xaf, 0xdd, 0xf9, 0x8b, 0x6f, 0xff, 0x97, 0x9c, 0x1c,
- 0x80, 0x09, 0xfa, 0x2b, 0xea, 0xf3, 0x70, 0xcc, 0xca, 0x9c, 0xb2, 0xb2, 0x33, 0x93, 0x95, 0x57,
- 0x09, 0x22, 0xf7, 0x51, 0x8f, 0xc6, 0x7f, 0xf1, 0x7f, 0x08, 0x45, 0xfd, 0xb7, 0x10, 0x10, 0x37,
- 0x16, 0xbe, 0x59, 0x6e, 0x19, 0x9d, 0x45, 0x67, 0x31, 0x8d, 0x1c, 0x0b, 0xff, 0x79, 0xa5, 0xf6,
- 0xa3, 0xda, 0xb8, 0xa9, 0xb6, 0xbf, 0x18, 0xa8, 0x3e, 0xc1, 0xc1, 0x87, 0x68, 0x41, 0x91, 0xb4,
- 0xcb, 0xed, 0xa9, 0x5b, 0xd2, 0x97, 0x63, 0xd8, 0xb5, 0xf6, 0x59, 0x24, 0x45, 0xac, 0x76, 0x24,
- 0x29, 0x67, 0x8a, 0xa5, 0xfd, 0xa6, 0x1c, 0xfc, 0x1a, 0xd5, 0x0a, 0x36, 0x67, 0x3b, 0x31, 0xbd,
- 0xb3, 0x3d, 0xce, 0x24, 0xa1, 0x0c, 0x84, 0x93, 0xe3, 0x6e, 0x31, 0xd9, 0xfe, 0x54, 0x46, 0x8d,
- 0x62, 0x75, 0x52, 0xa3, 0xeb, 0x5d, 0xea, 0x29, 0x93, 0xcb, 0xce, 0xa2, 0x8e, 0xec, 0x7b, 0xd8,
- 0x46, 0x6b, 0x91, 0x24, 0x42, 0xba, 0x92, 0x06, 0xe0, 0xc6, 0x8c, 0x8e, 0x5c, 0x46, 0x18, 0x37,
- 0xe7, 0x5a, 0x46, 0xa7, 0xe2, 0xac, 0xaa, 0xdc, 0x11, 0x0d, 0xe0, 0x98, 0xd1, 0xd1, 0x4b, 0xc2,
- 0x38, 0xde, 0x44, 0x18, 0x98, 0x57, 0x94, 0x97, 0x95, 0x7c, 0x05, 0x98, 0x37, 0x21, 0x7e, 0x81,
- 0x10, 0x91, 0x52, 0xd0, 0xd3, 0x58, 0x42, 0x64, 0xce, 0xab, 0x69, 0xdc, 0xb9, 0x65, 0xc2, 0x07,
- 0x70, 0x7e, 0x42, 0xfc, 0x38, 0x9b, 0xea, 0x18, 0x00, 0x3f, 0x40, 0xa6, 0x27, 0x78, 0x18, 0x82,
- 0xe7, 0xfe, 0x8a, 0xba, 0x7d, 0x1e, 0x33, 0x69, 0x2e, 0xb4, 0x8c, 0x4e, 0xdd, 0xf9, 0x47, 0xe7,
- 0x1f, 0xe7, 0xe9, 0xbd, 0x24, 0x8b, 0xef, 0xa3, 0x7f, 0xb9, 0xa0, 0x03, 0xca, 0x88, 0xef, 0x86,
- 0xe4, 0xdc, 0xe7, 0xc4, 0x73, 0xcf, 0xb8, 0x08, 0x88, 0x34, 0x2b, 0x6a, 0x8c, 0x7f, 0x67, 0xe9,
- 0x5e, 0x9a, 0x7d, 0xa6, 0x92, 0x78, 0x03, 0x35, 0x8a, 0x75, 0x66, 0x55, 0xcd, 0x70, 0xa5, 0x50,
- 0x80, 0x8f, 0x50, 0x55, 0x8f, 0xd5, 0xac, 0xa9, 0xab, 0x74, 0xef, 0x4f, 0x8e, 0x5d, 0xbb, 0xce,
- 0x50, 0xbb, 0x5f, 0x8d, 0x8b, 0xab, 0xa6, 0x71, 0x79, 0xd5, 0x34, 0xbe, 0x5f, 0x35, 0x8d, 0x0f,
- 0xd7, 0xcd, 0xd2, 0xe5, 0x75, 0xb3, 0xf4, 0xf9, 0xba, 0x59, 0x42, 0x16, 0xe5, 0xb3, 0x74, 0xd8,
- 0xad, 0x67, 0x77, 0xbe, 0x97, 0xc8, 0x7a, 0xc6, 0x1b, 0x67, 0x50, 0x04, 0xd0, 0xe4, 0x45, 0xf4,
- 0x7d, 0xe8, 0x4b, 0x2e, 0xec, 0xd0, 0x23, 0x92, 0xd8, 0x94, 0x49, 0x10, 0x8c, 0xf8, 0xb6, 0xfa,
- 0x53, 0x1d, 0x06, 0xc0, 0x7e, 0xf7, 0xb8, 0x7d, 0x9c, 0xdb, 0x3c, 0x0c, 0x81, 0x1d, 0xe5, 0x44,
- 0xd5, 0x2b, 0x33, 0x17, 0x59, 0x27, 0xdd, 0xa7, 0x63, 0xea, 0xd3, 0x8a, 0xe2, 0x6d, 0xff, 0x0c,
- 0x00, 0x00, 0xff, 0xff, 0xe1, 0x89, 0x49, 0xa4, 0x1b, 0x06, 0x00, 0x00,
+ // 671 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x94, 0xdf, 0x6e, 0xd3, 0x30,
+ 0x14, 0xc6, 0xeb, 0xfd, 0x69, 0x3b, 0x6f, 0x65, 0x9d, 0x35, 0x20, 0x9a, 0x44, 0x57, 0xf5, 0x86,
+ 0x8e, 0x49, 0x89, 0xba, 0x21, 0x84, 0x86, 0x10, 0xa2, 0x1b, 0x48, 0x63, 0x82, 0x55, 0x61, 0x9b,
+ 0x04, 0x37, 0x91, 0xd7, 0x78, 0xc5, 0x28, 0xb1, 0x23, 0xc7, 0xa9, 0x3a, 0x9e, 0x82, 0x2b, 0x1e,
+ 0x82, 0x27, 0xd9, 0xe5, 0x2e, 0xd1, 0x90, 0x26, 0xb4, 0xdd, 0xb0, 0xb7, 0x40, 0x71, 0x9c, 0xd0,
+ 0x46, 0x45, 0x53, 0xb9, 0x89, 0x1c, 0x9f, 0xef, 0xfc, 0xce, 0xf9, 0x7c, 0x9c, 0xc0, 0x2d, 0x1e,
+ 0x10, 0x26, 0x89, 0x47, 0x7c, 0x22, 0xc5, 0xa9, 0x15, 0x08, 0x2e, 0x79, 0xfc, 0x3c, 0xa1, 0x1e,
+ 0x09, 0xad, 0x7e, 0x8b, 0x0c, 0x02, 0x22, 0xa8, 0x4f, 0x98, 0xc4, 0x5e, 0xb6, 0x6f, 0x2a, 0x19,
+ 0x5a, 0x1f, 0xc9, 0x4d, 0x36, 0xcd, 0x4c, 0x33, 0x9a, 0xbb, 0xb2, 0xdc, 0xe3, 0x3d, 0x9e, 0xe0,
+ 0xe3, 0x55, 0xa2, 0x5e, 0x79, 0x34, 0xae, 0x7c, 0x97, 0xfb, 0x3e, 0x67, 0x56, 0xbf, 0xa5, 0x57,
+ 0x5a, 0x6b, 0x8e, 0xd3, 0x0a, 0x12, 0xf2, 0x48, 0x74, 0x49, 0xac, 0x4e, 0xd7, 0x5a, 0xff, 0x62,
+ 0x22, 0x6b, 0x71, 0x80, 0x0c, 0x24, 0x61, 0x2e, 0x71, 0x13, 0x40, 0xe3, 0x0b, 0x5c, 0xe8, 0x68,
+ 0xf9, 0x0e, 0x96, 0x18, 0x7d, 0x86, 0x4b, 0x69, 0x09, 0x27, 0xe5, 0x18, 0xa0, 0x3e, 0xdd, 0x9c,
+ 0xdf, 0x78, 0x6e, 0x4e, 0x70, 0x16, 0xa6, 0xad, 0x29, 0x29, 0xdd, 0xae, 0x8a, 0xdc, 0x4e, 0xe3,
+ 0x06, 0xc0, 0x6a, 0x5e, 0x86, 0xf6, 0x60, 0x39, 0x15, 0x1a, 0xa0, 0x0e, 0x9a, 0xf3, 0x1b, 0x6b,
+ 0x63, 0xeb, 0x66, 0x07, 0xd1, 0x6f, 0x65, 0xb5, 0xda, 0x33, 0x67, 0x97, 0xab, 0x05, 0x3b, 0x03,
+ 0x20, 0x0c, 0xef, 0x84, 0x5d, 0x1e, 0x0c, 0x59, 0x99, 0x52, 0x56, 0xb6, 0x26, 0xb2, 0xf2, 0x3e,
+ 0x46, 0x64, 0x3e, 0x2a, 0xe1, 0xf0, 0x2b, 0x7a, 0x00, 0x61, 0xd8, 0xfd, 0x44, 0x7c, 0xec, 0x44,
+ 0xc2, 0x33, 0xa6, 0xeb, 0xa0, 0x39, 0x67, 0xcf, 0x25, 0x3b, 0x87, 0xc2, 0x7b, 0x53, 0x2c, 0xff,
+ 0x2e, 0x55, 0x6f, 0x4a, 0x8d, 0x0b, 0x00, 0x2b, 0x23, 0x1c, 0xb4, 0x0f, 0x67, 0x15, 0x49, 0xbb,
+ 0xdc, 0x1c, 0xdb, 0x92, 0xbe, 0x1c, 0xfd, 0x96, 0xb9, 0xcb, 0x42, 0x29, 0x22, 0xd5, 0x91, 0xa4,
+ 0x9c, 0x29, 0x96, 0xf6, 0x9b, 0x70, 0xd0, 0x07, 0x58, 0xce, 0xd9, 0x9c, 0x6c, 0x62, 0xba, 0xb3,
+ 0x6d, 0xce, 0x24, 0xa6, 0x8c, 0x08, 0x3b, 0xc3, 0xdd, 0x62, 0xb2, 0xf1, 0x6d, 0x06, 0x56, 0xf3,
+ 0xd9, 0xe8, 0x18, 0x42, 0x9d, 0xef, 0x50, 0x57, 0x99, 0x5c, 0x68, 0x6f, 0xc7, 0xfd, 0x5e, 0x5c,
+ 0xae, 0x3e, 0xeb, 0xf1, 0x5c, 0x6b, 0x34, 0xfe, 0x24, 0x3c, 0x8f, 0x74, 0x25, 0x17, 0x56, 0xe0,
+ 0x62, 0x89, 0x2d, 0xca, 0x24, 0x11, 0x0c, 0x7b, 0x56, 0xfc, 0x96, 0x76, 0xb7, 0xbb, 0x63, 0xcf,
+ 0x69, 0xec, 0xae, 0x8b, 0x2c, 0xb8, 0x1c, 0x4a, 0x2c, 0xa4, 0x23, 0xa9, 0x4f, 0x9c, 0x88, 0xd1,
+ 0x81, 0xc3, 0x30, 0xe3, 0xc6, 0x54, 0x1d, 0x34, 0x8b, 0xf6, 0x92, 0x8a, 0x1d, 0x50, 0x9f, 0x1c,
+ 0x32, 0x3a, 0x78, 0x87, 0x19, 0x47, 0xeb, 0x10, 0x11, 0xe6, 0xe6, 0xe5, 0xd3, 0x4a, 0xbe, 0x48,
+ 0x98, 0x3b, 0x22, 0x7e, 0x0b, 0x21, 0x96, 0x52, 0xd0, 0xe3, 0x48, 0x92, 0xd0, 0x98, 0x51, 0x47,
+ 0xfa, 0xf0, 0x96, 0x31, 0xed, 0x91, 0xd3, 0x23, 0xec, 0x45, 0xe9, 0x68, 0x86, 0x00, 0xe8, 0x29,
+ 0x34, 0x5c, 0xc1, 0x83, 0x80, 0xb8, 0xce, 0xdf, 0x5d, 0xa7, 0xcb, 0x23, 0x26, 0x8d, 0xd9, 0x3a,
+ 0x68, 0x56, 0xec, 0x7b, 0x3a, 0xfe, 0x32, 0x0b, 0x6f, 0xc7, 0x51, 0xf4, 0x04, 0xde, 0xe7, 0x82,
+ 0xf6, 0x28, 0xc3, 0x9e, 0x13, 0xe0, 0x53, 0x8f, 0x63, 0xd7, 0x39, 0xe1, 0xc2, 0xc7, 0xd2, 0x28,
+ 0xaa, 0x59, 0xdc, 0x4d, 0xc3, 0x9d, 0x24, 0xfa, 0x5a, 0x05, 0xd1, 0x1a, 0xac, 0xe6, 0xf3, 0x8c,
+ 0x52, 0x3c, 0x08, 0x7b, 0x31, 0x97, 0x80, 0x0e, 0x60, 0x49, 0x1f, 0xab, 0x51, 0x56, 0xf7, 0xf1,
+ 0xf1, 0xff, 0xdc, 0x1d, 0xed, 0x3a, 0x45, 0xb5, 0x7f, 0x82, 0xb3, 0xab, 0x1a, 0x38, 0xbf, 0xaa,
+ 0x81, 0x5f, 0x57, 0x35, 0xf0, 0xf5, 0xba, 0x56, 0x38, 0xbf, 0xae, 0x15, 0x7e, 0x5c, 0xd7, 0x0a,
+ 0xd0, 0xa4, 0x7c, 0x92, 0x0a, 0xed, 0x4a, 0xfa, 0xe1, 0x74, 0x62, 0x59, 0x07, 0x7c, 0xb4, 0x27,
+ 0xbe, 0x43, 0xc9, 0xef, 0xb1, 0x47, 0xd8, 0xbf, 0xfe, 0x90, 0xdf, 0xa7, 0xd6, 0xf7, 0x03, 0xc2,
+ 0x0e, 0x32, 0xa2, 0xaa, 0x95, 0x9a, 0x0b, 0xcd, 0xa3, 0xd6, 0xab, 0x21, 0xf5, 0x71, 0x51, 0xf1,
+ 0x36, 0xff, 0x04, 0x00, 0x00, 0xff, 0xff, 0x73, 0x8f, 0x2e, 0xdd, 0x60, 0x06, 0x00, 0x00,
}
func (m *ProfilesData) Marshal() (dAtA []byte, err error) {
@@ -636,13 +631,16 @@ func (m *ProfileContainer) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i--
dAtA[i] = 0x11
}
- if len(m.ProfileId) > 0 {
- i -= len(m.ProfileId)
- copy(dAtA[i:], m.ProfileId)
- i = encodeVarintProfiles(dAtA, i, uint64(len(m.ProfileId)))
- i--
- dAtA[i] = 0xa
+ {
+ size := m.ProfileId.Size()
+ i -= size
+ if _, err := m.ProfileId.MarshalTo(dAtA[i:]); err != nil {
+ return 0, err
+ }
+ i = encodeVarintProfiles(dAtA, i, uint64(size))
}
+ i--
+ dAtA[i] = 0xa
return len(dAtA) - i, nil
}
@@ -720,10 +718,8 @@ func (m *ProfileContainer) Size() (n int) {
}
var l int
_ = l
- l = len(m.ProfileId)
- if l > 0 {
- n += 1 + l + sovProfiles(uint64(l))
- }
+ l = m.ProfileId.Size()
+ n += 1 + l + sovProfiles(uint64(l))
if m.StartTimeUnixNano != 0 {
n += 9
}
@@ -1198,9 +1194,8 @@ func (m *ProfileContainer) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.ProfileId = append(m.ProfileId[:0], dAtA[iNdEx:postIndex]...)
- if m.ProfileId == nil {
- m.ProfileId = []byte{}
+ if err := m.ProfileId.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
}
iNdEx = postIndex
case 2:
diff --git a/vendor/go.opentelemetry.io/collector/pdata/internal/json/enum.go b/vendor/go.opentelemetry.io/collector/pdata/internal/json/enum.go
index 4fbe193430eb3..02dd2b7768c6b 100644
--- a/vendor/go.opentelemetry.io/collector/pdata/internal/json/enum.go
+++ b/vendor/go.opentelemetry.io/collector/pdata/internal/json/enum.go
@@ -15,7 +15,7 @@ func ReadEnumValue(iter *jsoniter.Iterator, valueMap map[string]int32) int32 {
return iter.ReadInt32()
case jsoniter.StringValue:
val, ok := valueMap[iter.ReadString()]
- // Same behavior with official protbuf JSON decoder,
+ // Same behavior with official protobuf JSON decoder,
// see https://github.com/open-telemetry/opentelemetry-proto-go/pull/81
if !ok {
iter.ReportError("ReadEnumValue", "unknown string value")
diff --git a/vendor/go.opentelemetry.io/collector/pdata/internal/otlp/profiles.go b/vendor/go.opentelemetry.io/collector/pdata/internal/otlp/profiles.go
new file mode 100644
index 0000000000000..d134ccf9c0630
--- /dev/null
+++ b/vendor/go.opentelemetry.io/collector/pdata/internal/otlp/profiles.go
@@ -0,0 +1,12 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+package otlp // import "go.opentelemetry.io/collector/pdata/internal/otlp"
+
+import (
+ otlpprofiles "go.opentelemetry.io/collector/pdata/internal/data/protogen/profiles/v1experimental"
+)
+
+// MigrateProfiles implements any translation needed due to deprecation in OTLP profiles protocol.
+// Any pprofile.Unmarshaler implementation from OTLP (proto/json) MUST call this, and the gRPC Server implementation.
+func MigrateProfiles(_ []*otlpprofiles.ResourceProfiles) {}
diff --git a/vendor/go.opentelemetry.io/collector/pdata/pcommon/map.go b/vendor/go.opentelemetry.io/collector/pdata/pcommon/map.go
index 5bbfab962b08e..91b803922a383 100644
--- a/vendor/go.opentelemetry.io/collector/pdata/pcommon/map.go
+++ b/vendor/go.opentelemetry.io/collector/pdata/pcommon/map.go
@@ -225,6 +225,15 @@ func (m Map) Range(f func(k string, v Value) bool) {
}
}
+// MoveTo moves all key/values from the current map overriding the destination and
+// resetting the current instance to its zero value
+func (m Map) MoveTo(dest Map) {
+ m.getState().AssertMutable()
+ dest.getState().AssertMutable()
+ *dest.getOrig() = *m.getOrig()
+ *m.getOrig() = nil
+}
+
// CopyTo copies all elements from the current map overriding the destination.
func (m Map) CopyTo(dest Map) {
dest.getState().AssertMutable()
diff --git a/vendor/go.opentelemetry.io/collector/pdata/pcommon/timestamp.go b/vendor/go.opentelemetry.io/collector/pdata/pcommon/timestamp.go
index 5fd1758b1bea5..666f86f43f649 100644
--- a/vendor/go.opentelemetry.io/collector/pdata/pcommon/timestamp.go
+++ b/vendor/go.opentelemetry.io/collector/pdata/pcommon/timestamp.go
@@ -13,11 +13,13 @@ type Timestamp uint64
// NewTimestampFromTime constructs a new Timestamp from the provided time.Time.
func NewTimestampFromTime(t time.Time) Timestamp {
+ // nolint:gosec
return Timestamp(uint64(t.UnixNano()))
}
// AsTime converts this to a time.Time.
func (ts Timestamp) AsTime() time.Time {
+ // nolint:gosec
return time.Unix(0, int64(ts)).UTC()
}
diff --git a/vendor/go.opentelemetry.io/collector/pdata/pcommon/value.go b/vendor/go.opentelemetry.io/collector/pdata/pcommon/value.go
index 77a84e517582d..286b9c928e338 100644
--- a/vendor/go.opentelemetry.io/collector/pdata/pcommon/value.go
+++ b/vendor/go.opentelemetry.io/collector/pdata/pcommon/value.go
@@ -148,6 +148,7 @@ func (v Value) FromRaw(iv any) error {
case int64:
v.SetInt(tv)
case uint:
+ // nolint:gosec
v.SetInt(int64(tv))
case uint8:
v.SetInt(int64(tv))
@@ -156,6 +157,7 @@ func (v Value) FromRaw(iv any) error {
case uint32:
v.SetInt(int64(tv))
case uint64:
+ // nolint:gosec
v.SetInt(int64(tv))
case float32:
v.SetDouble(float64(tv))
diff --git a/vendor/modules.txt b/vendor/modules.txt
index afc150678af4f..448a879acdda0 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1730,8 +1730,8 @@ go.opencensus.io/trace
go.opencensus.io/trace/internal
go.opencensus.io/trace/propagation
go.opencensus.io/trace/tracestate
-# go.opentelemetry.io/collector/pdata v1.12.0
-## explicit; go 1.21.0
+# go.opentelemetry.io/collector/pdata v1.19.0
+## explicit; go 1.22.0
go.opentelemetry.io/collector/pdata/internal
go.opentelemetry.io/collector/pdata/internal/data
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1
|
fix
|
update module go.opentelemetry.io/collector/pdata to v1.19.0 (#14916)
|
ea13730e63505afab1e63a9f7f664f02b491ce89
|
2025-03-05 00:47:46
|
Dylan Guedes
|
feat: Snapshot stream overrides values on each request (#16523)
| false
|
diff --git a/clients/pkg/promtail/targets/lokipush/pushtarget.go b/clients/pkg/promtail/targets/lokipush/pushtarget.go
index e1ebafc1bab2e..1ab28df0c9b2b 100644
--- a/clients/pkg/promtail/targets/lokipush/pushtarget.go
+++ b/clients/pkg/promtail/targets/lokipush/pushtarget.go
@@ -111,7 +111,7 @@ func (t *PushTarget) run() error {
func (t *PushTarget) handleLoki(w http.ResponseWriter, r *http.Request) {
logger := util_log.WithContext(r.Context(), util_log.Logger)
userID, _ := tenant.TenantID(r.Context())
- req, err := push.ParseRequest(logger, userID, r, nil, push.EmptyLimits{}, push.ParseLokiRequest, nil, nil, false)
+ req, err := push.ParseRequest(logger, userID, r, push.EmptyLimits{}, push.ParseLokiRequest, nil, nil, false)
if err != nil {
level.Warn(t.logger).Log("msg", "failed to parse incoming push request", "err", err.Error())
http.Error(w, err.Error(), http.StatusBadRequest)
diff --git a/pkg/compactor/retention/expiration.go b/pkg/compactor/retention/expiration.go
index 392999a0d6908..a1d2415aceb95 100644
--- a/pkg/compactor/retention/expiration.go
+++ b/pkg/compactor/retention/expiration.go
@@ -134,19 +134,40 @@ func NewTenantsRetention(l Limits) *TenantsRetention {
}
func (tr *TenantsRetention) RetentionHoursFor(userID string, lbs labels.Labels) string {
- period := tr.RetentionPeriodFor(userID, lbs)
- return util.RetentionHours(period)
+ return NewTenantRetentionSnapshot(tr.limits, userID).RetentionHoursFor(lbs)
}
func (tr *TenantsRetention) RetentionPeriodFor(userID string, lbs labels.Labels) time.Duration {
- streamRetentions := tr.limits.StreamRetention(userID)
- globalRetention := tr.limits.RetentionPeriod(userID)
+ return NewTenantRetentionSnapshot(tr.limits, userID).RetentionPeriodFor(lbs)
+}
+
+// TenantRetentionSnapshot is a snapshot of retention rules for a tenant.
+// The underlying retention rules may change on the original limits object passed to
+// NewTenantRetentionSnapshot, but the snapshot is immutable.
+type TenantRetentionSnapshot struct {
+ streamRetentions []validation.StreamRetention
+ globalRetention time.Duration
+}
+
+func NewTenantRetentionSnapshot(limits Limits, userID string) *TenantRetentionSnapshot {
+ return &TenantRetentionSnapshot{
+ streamRetentions: limits.StreamRetention(userID),
+ globalRetention: limits.RetentionPeriod(userID),
+ }
+}
+
+func (r *TenantRetentionSnapshot) RetentionHoursFor(lbs labels.Labels) string {
+ period := r.RetentionPeriodFor(lbs)
+ return util.RetentionHours(period)
+}
+
+func (r *TenantRetentionSnapshot) RetentionPeriodFor(lbs labels.Labels) time.Duration {
var (
matchedRule validation.StreamRetention
found bool
)
Outer:
- for _, streamRetention := range streamRetentions {
+ for _, streamRetention := range r.streamRetentions {
for _, m := range streamRetention.Matchers {
if !m.Matches(lbs.Get(m.Name)) {
continue Outer
@@ -166,10 +187,12 @@ Outer:
found = true
matchedRule = streamRetention
}
+
if found {
return time.Duration(matchedRule.Period)
}
- return globalRetention
+
+ return r.globalRetention
}
type latestRetentionStartTime struct {
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index bd8bae6217196..a2efc06c40525 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -180,7 +180,6 @@ type Distributor struct {
streamShardCount prometheus.Counter
tenantPushSanitizedStructuredMetadata *prometheus.CounterVec
- policyResolver push.PolicyResolver
usageTracker push.UsageTracker
ingesterTasks chan pushIngesterTask
ingesterTaskWg sync.WaitGroup
@@ -224,11 +223,6 @@ func New(
return client.New(internalCfg, addr)
}
- policyResolver := push.PolicyResolver(func(userID string, lbs labels.Labels) string {
- mappings := overrides.PoliciesStreamMapping(userID)
- return getPolicy(userID, lbs, mappings, logger)
- })
-
validator, err := NewValidator(overrides, usageTracker)
if err != nil {
return nil, err
@@ -286,7 +280,6 @@ func New(
healthyInstancesCount: atomic.NewUint32(0),
rateLimitStrat: rateLimitStrat,
tee: tee,
- policyResolver: policyResolver,
usageTracker: usageTracker,
ingesterTasks: make(chan pushIngesterTask),
ingesterAppends: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
@@ -460,9 +453,17 @@ func (p *pushTracker) doneWithResult(err error) {
}
}
+func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*logproto.PushResponse, error) {
+ tenantID, err := tenant.TenantID(ctx)
+ if err != nil {
+ return nil, err
+ }
+ return d.PushWithResolver(ctx, req, newRequestScopedStreamResolver(tenantID, d.validator.Limits, d.logger))
+}
+
// Push a set of streams.
// The returned error is the last one seen.
-func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*logproto.PushResponse, error) {
+func (d *Distributor) PushWithResolver(ctx context.Context, req *logproto.PushRequest, streamResolver *requestScopedStreamResolver) (*logproto.PushResponse, error) {
tenantID, err := tenant.TenantID(ctx)
if err != nil {
return nil, err
@@ -538,7 +539,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
var lbs labels.Labels
var retentionHours, policy string
- lbs, stream.Labels, stream.Hash, retentionHours, policy, err = d.parseStreamLabels(validationContext, stream.Labels, stream)
+ lbs, stream.Labels, stream.Hash, retentionHours, policy, err = d.parseStreamLabels(validationContext, stream.Labels, stream, streamResolver)
if err != nil {
d.writeFailuresManager.Log(tenantID, err)
validationErrors.Add(err)
@@ -661,7 +662,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
}
if !d.ingestionRateLimiter.AllowN(now, tenantID, validationContext.validationMetrics.aggregatedPushStats.lineSize) {
- d.trackDiscardedData(ctx, req, validationContext, tenantID, validationContext.validationMetrics, validation.RateLimited)
+ d.trackDiscardedData(ctx, req, validationContext, tenantID, validationContext.validationMetrics, validation.RateLimited, streamResolver)
err = fmt.Errorf(validation.RateLimitedErrorMsg, tenantID, int(d.ingestionRateLimiter.Limit(now, tenantID)), validationContext.validationMetrics.aggregatedPushStats.lineCount, validationContext.validationMetrics.aggregatedPushStats.lineSize)
d.writeFailuresManager.Log(tenantID, err)
@@ -810,6 +811,7 @@ func (d *Distributor) trackDiscardedData(
tenantID string,
validationMetrics validationMetrics,
reason string,
+ streamResolver push.StreamResolver,
) {
for policy, retentionToStats := range validationMetrics.policyPushStats {
for retentionHours, stats := range retentionToStats {
@@ -820,7 +822,7 @@ func (d *Distributor) trackDiscardedData(
if d.usageTracker != nil {
for _, stream := range req.Streams {
- lbs, _, _, _, _, err := d.parseStreamLabels(validationContext, stream.Labels, stream)
+ lbs, _, _, _, _, err := d.parseStreamLabels(validationContext, stream.Labels, stream, streamResolver)
if err != nil {
continue
}
@@ -1199,11 +1201,11 @@ type labelData struct {
hash uint64
}
-func (d *Distributor) parseStreamLabels(vContext validationContext, key string, stream logproto.Stream) (labels.Labels, string, uint64, string, string, error) {
- mapping := d.validator.Limits.PoliciesStreamMapping(vContext.userID)
+// parseStreamLabels parses stream labels using a request-scoped policy resolver
+func (d *Distributor) parseStreamLabels(vContext validationContext, key string, stream logproto.Stream, streamResolver push.StreamResolver) (labels.Labels, string, uint64, string, string, error) {
if val, ok := d.labelCache.Get(key); ok {
- retentionHours := d.tenantsRetention.RetentionHoursFor(vContext.userID, val.ls)
- policy := getPolicy(vContext.userID, val.ls, mapping, d.logger)
+ retentionHours := streamResolver.RetentionHoursFor(val.ls)
+ policy := streamResolver.PolicyFor(val.ls)
return val.ls, val.ls.String(), val.hash, retentionHours, policy, nil
}
@@ -1214,7 +1216,7 @@ func (d *Distributor) parseStreamLabels(vContext validationContext, key string,
return nil, "", 0, retentionHours, "", fmt.Errorf(validation.InvalidLabelsErrorMsg, key, err)
}
- policy := getPolicy(vContext.userID, ls, mapping, d.logger)
+ policy := streamResolver.PolicyFor(ls)
retentionHours := d.tenantsRetention.RetentionHoursFor(vContext.userID, ls)
if err := d.validator.ValidateLabels(vContext, ls, stream, retentionHours, policy); err != nil {
@@ -1311,16 +1313,41 @@ func (d *Distributor) HealthyInstancesCount() int {
return int(d.healthyInstancesCount.Load())
}
-func getPolicy(userID string, lbs labels.Labels, mapping validation.PolicyStreamMapping, logger log.Logger) string {
- policies := mapping.PolicyFor(lbs)
+type requestScopedStreamResolver struct {
+ userID string
+ policyStreamMappings validation.PolicyStreamMapping
+ retention *retention.TenantRetentionSnapshot
+
+ logger log.Logger
+}
+
+func newRequestScopedStreamResolver(userID string, overrides Limits, logger log.Logger) *requestScopedStreamResolver {
+ return &requestScopedStreamResolver{
+ userID: userID,
+ policyStreamMappings: overrides.PoliciesStreamMapping(userID),
+ retention: retention.NewTenantRetentionSnapshot(overrides, userID),
+ logger: logger,
+ }
+}
+
+func (r requestScopedStreamResolver) RetentionPeriodFor(lbs labels.Labels) time.Duration {
+ return r.retention.RetentionPeriodFor(lbs)
+}
+
+func (r requestScopedStreamResolver) RetentionHoursFor(lbs labels.Labels) string {
+ return r.retention.RetentionHoursFor(lbs)
+}
+
+func (r requestScopedStreamResolver) PolicyFor(lbs labels.Labels) string {
+ policies := r.policyStreamMappings.PolicyFor(lbs)
var policy string
if len(policies) > 0 {
policy = policies[0]
if len(policies) > 1 {
- level.Warn(logger).Log(
+ level.Warn(r.logger).Log(
"msg", "multiple policies matched for the same stream",
- "org_id", userID,
+ "org_id", r.userID,
"stream", lbs.String(),
"policy", policy,
"policies", strings.Join(policies, ","),
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index c39c0a645034b..2e09951e90573 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -1259,11 +1259,12 @@ func Benchmark_SortLabelsOnPush(b *testing.B) {
distributors, _ := prepare(&testing.T{}, 1, 5, limits, nil)
d := distributors[0]
request := makeWriteRequest(10, 10)
+ streamResolver := newRequestScopedStreamResolver("123", d.validator.Limits, nil)
vCtx := d.validator.getValidationContextForTime(testTime, "123")
for n := 0; n < b.N; n++ {
stream := request.Streams[0]
stream.Labels = `{buzz="f", a="b"}`
- _, _, _, _, _, err := d.parseStreamLabels(vCtx, stream.Labels, stream)
+ _, _, _, _, _, err := d.parseStreamLabels(vCtx, stream.Labels, stream, streamResolver)
if err != nil {
panic("parseStreamLabels fail,err:" + err.Error())
}
@@ -1307,11 +1308,11 @@ func TestParseStreamLabels(t *testing.T) {
d := distributors[0]
vCtx := d.validator.getValidationContextForTime(testTime, "123")
-
+ streamResolver := newRequestScopedStreamResolver("123", d.validator.Limits, nil)
t.Run(tc.name, func(t *testing.T) {
lbs, lbsString, hash, _, _, err := d.parseStreamLabels(vCtx, tc.origLabels, logproto.Stream{
Labels: tc.origLabels,
- })
+ }, streamResolver)
if tc.expectedErr != nil {
require.Equal(t, tc.expectedErr, err)
return
@@ -2257,3 +2258,108 @@ func BenchmarkDistributor_PushWithPolicies(b *testing.B) {
})
}
}
+
+func TestRequestScopedStreamResolver(t *testing.T) {
+ limits := &validation.Limits{}
+ flagext.DefaultValues(limits)
+
+ limits.RetentionPeriod = model.Duration(24 * time.Hour)
+ limits.StreamRetention = []validation.StreamRetention{
+ {
+ Period: model.Duration(48 * time.Hour),
+ Selector: `{env="prod"}`,
+ },
+ }
+ limits.PolicyStreamMapping = validation.PolicyStreamMapping{
+ "policy0": []*validation.PriorityStream{
+ {
+ Selector: `{env="prod"}`,
+ },
+ },
+ }
+
+ // Load matchers
+ require.NoError(t, limits.Validate())
+
+ overrides, err := validation.NewOverrides(*limits, nil)
+ require.NoError(t, err)
+
+ resolver := newRequestScopedStreamResolver("123", overrides, nil)
+
+ retentionHours := resolver.RetentionHoursFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, "48", retentionHours)
+ retentionPeriod := resolver.RetentionPeriodFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, 48*time.Hour, retentionPeriod)
+
+ retentionHours = resolver.RetentionHoursFor(labels.FromStrings("env", "dev"))
+ require.Equal(t, "24", retentionHours)
+ retentionPeriod = resolver.RetentionPeriodFor(labels.FromStrings("env", "dev"))
+ require.Equal(t, 24*time.Hour, retentionPeriod)
+
+ policy := resolver.PolicyFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, "policy0", policy)
+
+ policy = resolver.PolicyFor(labels.FromStrings("env", "dev"))
+ require.Empty(t, policy)
+
+ // We now modify the underlying limits to test that the resolver is not affected by changes to the limits
+ limits.RetentionPeriod = model.Duration(36 * time.Hour)
+ limits.StreamRetention = []validation.StreamRetention{
+ {
+ Period: model.Duration(72 * time.Hour),
+ Selector: `{env="dev"}`,
+ },
+ }
+ limits.PolicyStreamMapping = validation.PolicyStreamMapping{
+ "policy1": []*validation.PriorityStream{
+ {
+ Selector: `{env="dev"}`,
+ },
+ },
+ }
+
+ // Load matchers
+ require.NoError(t, limits.Validate())
+
+ newOverrides, err := validation.NewOverrides(*limits, nil)
+ require.NoError(t, err)
+
+	// overwrite the overrides we passed to the resolver with the new ones
+ *overrides = *newOverrides
+
+ // All should be the same as before
+ retentionHours = resolver.RetentionHoursFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, "48", retentionHours)
+ retentionPeriod = resolver.RetentionPeriodFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, 48*time.Hour, retentionPeriod)
+
+ retentionHours = resolver.RetentionHoursFor(labels.FromStrings("env", "dev"))
+ require.Equal(t, "24", retentionHours)
+ retentionPeriod = resolver.RetentionPeriodFor(labels.FromStrings("env", "dev"))
+ require.Equal(t, 24*time.Hour, retentionPeriod)
+
+ policy = resolver.PolicyFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, "policy0", policy)
+
+ policy = resolver.PolicyFor(labels.FromStrings("env", "dev"))
+ require.Empty(t, policy)
+
+ // But a new resolver should return the new values
+ newResolver := newRequestScopedStreamResolver("123", overrides, nil)
+
+ retentionHours = newResolver.RetentionHoursFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, "36", retentionHours)
+ retentionPeriod = newResolver.RetentionPeriodFor(labels.FromStrings("env", "prod"))
+ require.Equal(t, 36*time.Hour, retentionPeriod)
+
+ retentionHours = newResolver.RetentionHoursFor(labels.FromStrings("env", "dev"))
+ require.Equal(t, "72", retentionHours)
+ retentionPeriod = newResolver.RetentionPeriodFor(labels.FromStrings("env", "dev"))
+ require.Equal(t, 72*time.Hour, retentionPeriod)
+
+ policy = newResolver.PolicyFor(labels.FromStrings("env", "prod"))
+ require.Empty(t, policy)
+
+ policy = newResolver.PolicyFor(labels.FromStrings("env", "dev"))
+ require.Equal(t, "policy1", policy)
+}
diff --git a/pkg/distributor/http.go b/pkg/distributor/http.go
index c6c87dbc74454..81d76f7dd1386 100644
--- a/pkg/distributor/http.go
+++ b/pkg/distributor/http.go
@@ -40,8 +40,12 @@ func (d *Distributor) pushHandler(w http.ResponseWriter, r *http.Request, pushRe
pushRequestParser = d.RequestParserWrapper(pushRequestParser)
}
+ // Create a request-scoped policy and retention resolver that will ensure consistent policy and retention resolution
+ // across all parsers for this HTTP request.
+ streamResolver := newRequestScopedStreamResolver(tenantID, d.validator.Limits, logger)
+
logPushRequestStreams := d.tenantConfigs.LogPushRequestStreams(tenantID)
- req, err := push.ParseRequest(logger, tenantID, r, d.tenantsRetention, d.validator.Limits, pushRequestParser, d.usageTracker, d.policyResolver, logPushRequestStreams)
+ req, err := push.ParseRequest(logger, tenantID, r, d.validator.Limits, pushRequestParser, d.usageTracker, streamResolver, logPushRequestStreams)
if err != nil {
if !errors.Is(err, push.ErrAllLogsFiltered) {
if d.tenantConfigs.LogPushRequest(tenantID) {
@@ -77,7 +81,7 @@ func (d *Distributor) pushHandler(w http.ResponseWriter, r *http.Request, pushRe
)
}
- _, err = d.Push(r.Context(), req)
+ _, err = d.PushWithResolver(r.Context(), req, streamResolver)
if err == nil {
if d.tenantConfigs.LogPushRequest(tenantID) {
level.Debug(logger).Log(
diff --git a/pkg/distributor/http_test.go b/pkg/distributor/http_test.go
index a73a73fa5e2ab..b264bca1f8370 100644
--- a/pkg/distributor/http_test.go
+++ b/pkg/distributor/http_test.go
@@ -125,10 +125,9 @@ func newFakeParser() *fakeParser {
func (p *fakeParser) parseRequest(
_ string,
_ *http.Request,
- _ push.TenantsRetention,
_ push.Limits,
_ push.UsageTracker,
- _ push.PolicyResolver,
+ _ push.StreamResolver,
_ bool,
_ log.Logger,
) (*logproto.PushRequest, *push.Stats, error) {
diff --git a/pkg/loghttp/push/otlp.go b/pkg/loghttp/push/otlp.go
index 3151e1decbd43..b75a51dd6f47e 100644
--- a/pkg/loghttp/push/otlp.go
+++ b/pkg/loghttp/push/otlp.go
@@ -35,14 +35,14 @@ const (
OTLPSeverityNumber = "severity_number"
)
-func ParseOTLPRequest(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker, policyResolver PolicyResolver, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error) {
+func ParseOTLPRequest(userID string, r *http.Request, limits Limits, tracker UsageTracker, streamResolver StreamResolver, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error) {
stats := NewPushStats()
otlpLogs, err := extractLogs(r, stats)
if err != nil {
return nil, nil, err
}
- req := otlpToLokiPushRequest(r.Context(), otlpLogs, userID, tenantsRetention, limits.OTLPConfig(userID), limits.DiscoverServiceName(userID), tracker, stats, logPushRequestStreams, logger, policyResolver)
+ req := otlpToLokiPushRequest(r.Context(), otlpLogs, userID, limits.OTLPConfig(userID), limits.DiscoverServiceName(userID), tracker, stats, logPushRequestStreams, logger, streamResolver)
return req, stats, nil
}
@@ -93,7 +93,7 @@ func extractLogs(r *http.Request, pushStats *Stats) (plog.Logs, error) {
return req.Logs(), nil
}
-func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, tenantsRetention TenantsRetention, otlpConfig OTLPConfig, discoverServiceName []string, tracker UsageTracker, stats *Stats, logPushRequestStreams bool, logger log.Logger, policyForResolver PolicyResolver) *logproto.PushRequest {
+func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, otlpConfig OTLPConfig, discoverServiceName []string, tracker UsageTracker, stats *Stats, logPushRequestStreams bool, logger log.Logger, streamResolver StreamResolver) *logproto.PushRequest {
if ld.LogRecordCount() == 0 {
return &logproto.PushRequest{}
}
@@ -187,8 +187,8 @@ func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, ten
}
resourceAttributesAsStructuredMetadataSize := loki_util.StructuredMetadataSize(resourceAttributesAsStructuredMetadata)
- retentionPeriodForUser := tenantsRetention.RetentionPeriodFor(userID, lbs)
- policy := policyForResolver(userID, lbs)
+ retentionPeriodForUser := streamResolver.RetentionPeriodFor(lbs)
+ policy := streamResolver.PolicyFor(lbs)
if _, ok := stats.StructuredMetadataBytes[policy]; !ok {
stats.StructuredMetadataBytes[policy] = make(map[time.Duration]int64)
diff --git a/pkg/loghttp/push/otlp_test.go b/pkg/loghttp/push/otlp_test.go
index c1c6d65f56c2c..4c6e85cd51aef 100644
--- a/pkg/loghttp/push/otlp_test.go
+++ b/pkg/loghttp/push/otlp_test.go
@@ -561,24 +561,25 @@ func TestOTLPToLokiPushRequest(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
stats := NewPushStats()
tracker := NewMockTracker()
+ streamResolver := newMockStreamResolver("fake", &fakeLimits{})
+ streamResolver.policyForOverride = func(lbs labels.Labels) string {
+ if lbs.Get("service_name") == "service-1" {
+ return "service-1-policy"
+ }
+ return "others"
+ }
pushReq := otlpToLokiPushRequest(
context.Background(),
tc.generateLogs(),
"foo",
- fakeRetention{},
tc.otlpConfig,
defaultServiceDetection,
tracker,
stats,
false,
log.NewNopLogger(),
- func(_ string, lbs labels.Labels) string {
- if lbs.Get("service_name") == "service-1" {
- return "service-1-policy"
- }
- return "others"
- },
+ streamResolver,
)
require.Equal(t, tc.expectedPushRequest, *pushReq)
require.Equal(t, tc.expectedStats, *stats)
diff --git a/pkg/loghttp/push/push.go b/pkg/loghttp/push/push.go
index 60d297a317e17..dccbe75ce8645 100644
--- a/pkg/loghttp/push/push.go
+++ b/pkg/loghttp/push/push.go
@@ -94,11 +94,18 @@ func (EmptyLimits) PolicyFor(_ string, _ labels.Labels) string {
return ""
}
+// StreamResolver is a request-scoped interface that provides retention period and policy for a given stream.
+// The values returned by the resolver will not change throughout the handling of the request.
+type StreamResolver interface {
+ RetentionPeriodFor(lbs labels.Labels) time.Duration
+ RetentionHoursFor(lbs labels.Labels) string
+ PolicyFor(lbs labels.Labels) string
+}
+
type (
- RequestParser func(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker, policyResolver PolicyResolver, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error)
+ RequestParser func(userID string, r *http.Request, limits Limits, tracker UsageTracker, streamResolver StreamResolver, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error)
RequestParserWrapper func(inner RequestParser) RequestParser
ErrorWriter func(w http.ResponseWriter, error string, code int, logger log.Logger)
- PolicyResolver func(userID string, lbs labels.Labels) string
)
type PolicyWithRetentionWithBytes map[string]map[time.Duration]int64
@@ -130,8 +137,8 @@ type Stats struct {
IsAggregatedMetric bool
}
-func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, pushRequestParser RequestParser, tracker UsageTracker, policyResolver PolicyResolver, logPushRequestStreams bool) (*logproto.PushRequest, error) {
- req, pushStats, err := pushRequestParser(userID, r, tenantsRetention, limits, tracker, policyResolver, logPushRequestStreams, logger)
+func ParseRequest(logger log.Logger, userID string, r *http.Request, limits Limits, pushRequestParser RequestParser, tracker UsageTracker, streamResolver StreamResolver, logPushRequestStreams bool) (*logproto.PushRequest, error) {
+ req, pushStats, err := pushRequestParser(userID, r, limits, tracker, streamResolver, logPushRequestStreams, logger)
if err != nil && !errors.Is(err, ErrAllLogsFiltered) {
return nil, err
}
@@ -196,7 +203,7 @@ func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRete
return req, err
}
-func ParseLokiRequest(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker, policyResolver PolicyResolver, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error) {
+func ParseLokiRequest(userID string, r *http.Request, limits Limits, tracker UsageTracker, streamResolver StreamResolver, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error) {
// Body
var body io.Reader
// bodySize should always reflect the compressed size of the request body
@@ -306,14 +313,12 @@ func ParseLokiRequest(userID string, r *http.Request, tenantsRetention TenantsRe
)
}
+ var totalBytesReceived int64
var retentionPeriod time.Duration
- if tenantsRetention != nil {
- retentionPeriod = tenantsRetention.RetentionPeriodFor(userID, lbs)
- }
- totalBytesReceived := int64(0)
var policy string
- if policyResolver != nil {
- policy = policyResolver(userID, lbs)
+ if streamResolver != nil {
+ retentionPeriod = streamResolver.RetentionPeriodFor(lbs)
+ policy = streamResolver.PolicyFor(lbs)
}
if _, ok := pushStats.LogLinesBytes[policy]; !ok {
diff --git a/pkg/loghttp/push/push_test.go b/pkg/loghttp/push/push_test.go
index 60b421b5a5b7f..87b9f52d71e71 100644
--- a/pkg/loghttp/push/push_test.go
+++ b/pkg/loghttp/push/push_test.go
@@ -279,6 +279,8 @@ func TestParseRequest(t *testing.T) {
},
} {
t.Run(fmt.Sprintf("test %d", index), func(t *testing.T) {
+ streamResolver := newMockStreamResolver("fake", test.fakeLimits)
+
structuredMetadataBytesIngested.Reset()
bytesIngested.Reset()
linesIngested.Reset()
@@ -299,11 +301,10 @@ func TestParseRequest(t *testing.T) {
util_log.Logger,
"fake",
request,
- nil,
test.fakeLimits,
ParseLokiRequest,
tracker,
- test.fakeLimits.PolicyFor,
+ streamResolver,
false,
)
@@ -341,7 +342,7 @@ func TestParseRequest(t *testing.T) {
require.Equal(
t,
float64(bytes),
- testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "", fmt.Sprintf("%t", test.aggregatedMetric), policyName)),
+ testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "1" /* We use "1" here because fakeLimits.RetentionHoursFor returns "1" */, fmt.Sprintf("%t", test.aggregatedMetric), policyName)),
)
}
@@ -352,7 +353,7 @@ func TestParseRequest(t *testing.T) {
testutil.ToFloat64(
bytesIngested.WithLabelValues(
"fake",
- "",
+ "1", // We use "1" here because fakeLimits.RetentionHoursFor returns "1"
fmt.Sprintf("%t", test.aggregatedMetric),
policyName,
),
@@ -388,8 +389,8 @@ func TestParseRequest(t *testing.T) {
require.Equal(t, 0, bytesReceived)
require.Equal(t, 0, linesReceived)
policy := ""
- require.Equal(t, float64(0), testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "", fmt.Sprintf("%t", test.aggregatedMetric), policy)))
- require.Equal(t, float64(0), testutil.ToFloat64(bytesIngested.WithLabelValues("fake", "", fmt.Sprintf("%t", test.aggregatedMetric), policy)))
+ require.Equal(t, float64(0), testutil.ToFloat64(structuredMetadataBytesIngested.WithLabelValues("fake", "1" /* We use "1" here because fakeLimits.RetentionHoursFor returns "1" */, fmt.Sprintf("%t", test.aggregatedMetric), policy)))
+ require.Equal(t, float64(0), testutil.ToFloat64(bytesIngested.WithLabelValues("fake", "1" /* We use "1" here because fakeLimits.RetentionHoursFor returns "1" */, fmt.Sprintf("%t", test.aggregatedMetric), policy)))
require.Equal(t, float64(0), testutil.ToFloat64(linesIngested.WithLabelValues("fake", fmt.Sprintf("%t", test.aggregatedMetric), policy)))
}
})
@@ -431,7 +432,8 @@ func Test_ServiceDetection(t *testing.T) {
request := createRequest("/loki/api/v1/push", strings.NewReader(body))
limits := &fakeLimits{enabled: true, labels: []string{"foo"}}
- data, err := ParseRequest(util_log.Logger, "fake", request, nil, limits, ParseLokiRequest, tracker, limits.PolicyFor, false)
+ streamResolver := newMockStreamResolver("fake", limits)
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, ParseLokiRequest, tracker, streamResolver, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings("foo", "bar", LabelServiceName, "bar").String(), data.Streams[0].Labels)
@@ -442,7 +444,8 @@ func Test_ServiceDetection(t *testing.T) {
request := createRequest("/otlp/v1/push", bytes.NewReader(body))
limits := &fakeLimits{enabled: true}
- data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker, limits.PolicyFor, false)
+ streamResolver := newMockStreamResolver("fake", limits)
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, ParseOTLPRequest, tracker, streamResolver, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings("k8s_job_name", "bar", LabelServiceName, "bar").String(), data.Streams[0].Labels)
})
@@ -456,7 +459,8 @@ func Test_ServiceDetection(t *testing.T) {
labels: []string{"special"},
indexAttributes: []string{"special"},
}
- data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker, limits.PolicyFor, false)
+ streamResolver := newMockStreamResolver("fake", limits)
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, ParseOTLPRequest, tracker, streamResolver, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings("special", "sauce", LabelServiceName, "sauce").String(), data.Streams[0].Labels)
})
@@ -470,7 +474,8 @@ func Test_ServiceDetection(t *testing.T) {
labels: []string{"special"},
indexAttributes: []string{},
}
- data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker, limits.PolicyFor, false)
+ streamResolver := newMockStreamResolver("fake", limits)
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, ParseOTLPRequest, tracker, streamResolver, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings(LabelServiceName, ServiceUnknown).String(), data.Streams[0].Labels)
})
@@ -573,6 +578,10 @@ func (f *fakeLimits) RetentionPeriodFor(_ string, _ labels.Labels) time.Duration
return time.Hour
}
+func (f *fakeLimits) RetentionHoursFor(_ string, _ labels.Labels) string {
+ return "1"
+}
+
func (f *fakeLimits) OTLPConfig(_ string) OTLPConfig {
if len(f.indexAttributes) > 0 {
return OTLPConfig{
@@ -621,6 +630,36 @@ func (f *fakeLimits) DiscoverServiceName(_ string) []string {
}
}
+type mockStreamResolver struct {
+ tenant string
+ limits *fakeLimits
+
+ policyForOverride func(lbs labels.Labels) string
+}
+
+func newMockStreamResolver(tenant string, limits *fakeLimits) *mockStreamResolver {
+ return &mockStreamResolver{
+ tenant: tenant,
+ limits: limits,
+ }
+}
+
+func (m mockStreamResolver) RetentionPeriodFor(lbs labels.Labels) time.Duration {
+ return m.limits.RetentionPeriodFor(m.tenant, lbs)
+}
+
+func (m mockStreamResolver) RetentionHoursFor(lbs labels.Labels) string {
+ return m.limits.RetentionHoursFor(m.tenant, lbs)
+}
+
+func (m mockStreamResolver) PolicyFor(lbs labels.Labels) string {
+ if m.policyForOverride != nil {
+ return m.policyForOverride(lbs)
+ }
+
+ return m.limits.PolicyFor(m.tenant, lbs)
+}
+
type MockCustomTracker struct {
receivedBytes map[string]float64
discardedBytes map[string]float64
diff --git a/pkg/validation/ingestion_policies.go b/pkg/validation/ingestion_policies.go
index d5734f6324e5c..9276439a36eae 100644
--- a/pkg/validation/ingestion_policies.go
+++ b/pkg/validation/ingestion_policies.go
@@ -14,7 +14,7 @@ const (
)
type PriorityStream struct {
- Priority int `yaml:"priority" json:"priority" doc:"description=The larger the value, the higher the priority."`
+ Priority int `yaml:"priority" json:"priority" doc:"description=The bigger the value, the higher the priority."`
Selector string `yaml:"selector" json:"selector" doc:"description=Stream selector expression."`
Matchers []*labels.Matcher `yaml:"-" json:"-"` // populated during validation.
}
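The snapshot semantics exercised by `TestRequestScopedStreamResolver` in the diff above can be sketched outside Loki's types. This is a minimal, hypothetical illustration of the pattern, not Loki's actual API: per-tenant limits may be hot-reloaded at any time, so the handler copies the values it needs once, when the request starts, and every later lookup for that request sees the same snapshot.

```go
package main

import (
	"fmt"
	"sync"
)

// limits stands in for the mutable, hot-reloadable per-tenant overrides.
type limits struct {
	mu        sync.RWMutex
	retention map[string]int // tenant -> retention in hours (illustrative shape)
}

func (l *limits) retentionFor(tenant string) int {
	l.mu.RLock()
	defer l.mu.RUnlock()
	return l.retention[tenant]
}

// requestScopedResolver copies the tenant's value at construction time,
// so a concurrent reload cannot change what an in-flight request observes.
type requestScopedResolver struct {
	retentionHours int
}

func newRequestScopedResolver(l *limits, tenant string) *requestScopedResolver {
	return &requestScopedResolver{retentionHours: l.retentionFor(tenant)}
}

func (r *requestScopedResolver) RetentionHours() int { return r.retentionHours }

func main() {
	l := &limits{retention: map[string]int{"tenant-1": 24}}
	res := newRequestScopedResolver(l, "tenant-1")

	// A reload in the middle of the request...
	l.mu.Lock()
	l.retention["tenant-1"] = 48
	l.mu.Unlock()

	// ...does not affect the existing snapshot, only new resolvers.
	fmt.Println(res.RetentionHours())                                     // prints 24
	fmt.Println(newRequestScopedResolver(l, "tenant-1").RetentionHours()) // prints 48
}
```

This is the same trade-off the test asserts: consistency within one request is preferred over picking up a mid-request limits change.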
type: feat
masked_commit_message: Snapshot stream overrides values on each request (#16523)

hash: da7acb49e56d5b053da3a54b0c3263679e45c6f1
date: 2022-10-05 20:32:20
author: Dylan Guedes
commit_message: loki: Add sharding support for negative/zeroed desired rate (#7342)
is_merge: false
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index 7dbd0ab497fe4..158e84c946238 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -9,6 +9,7 @@ import (
"strings"
"time"
+ "github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/prometheus/model/labels"
@@ -411,14 +412,16 @@ func min(x1, x2 int) int {
// N is the sharding size for the given stream. shardSteam returns the smaller
// streams and their associated keys for hashing to ingesters.
func (d *Distributor) shardStream(stream logproto.Stream, streamSize int, userID string) ([]uint32, []streamTracker) {
- shardCount := d.shardCountFor(&stream, streamSize, d.cfg.ShardStreams.DesiredRate.Val(), d.rateStore)
+ logger := log.With(util_log.WithUserID(userID, util_log.Logger), "stream", stream.Labels)
+
+ shardCount := d.shardCountFor(logger, &stream, streamSize, d.cfg.ShardStreams.DesiredRate.Val(), d.rateStore)
if shardCount <= 1 {
return []uint32{util.TokenFor(userID, stream.Labels)}, []streamTracker{{stream: stream}}
}
if d.cfg.ShardStreams.LoggingEnabled {
- level.Info(util_log.Logger).Log("msg", "sharding request", "stream", stream.Labels, "shard_count", shardCount)
+ level.Info(logger).Log("msg", "sharding request", "shard_count", shardCount)
}
streamLabels := labelTemplate(stream.Labels)
@@ -429,7 +432,7 @@ func (d *Distributor) shardStream(stream logproto.Stream, streamSize int, userID
for i := 0; i < shardCount; i++ {
shard, ok := d.createShard(stream, streamLabels, streamPattern, shardCount, i)
if !ok {
- level.Error(util_log.Logger).Log("msg", "couldn't create shard", "stream", stream.Labels, "idx", i)
+ level.Error(logger).Log("msg", "couldn't create shard", "idx", i)
continue
}
@@ -598,12 +601,19 @@ func (d *Distributor) parseStreamLabels(vContext validationContext, key string,
// based on the rate stored in the rate store and will store the new evaluated number of shards.
//
// desiredRate is expected to be given in bytes.
-func (d *Distributor) shardCountFor(stream *logproto.Stream, streamSize, desiredRate int, rateStore RateStore) int {
+func (d *Distributor) shardCountFor(logger log.Logger, stream *logproto.Stream, streamSize, desiredRate int, rateStore RateStore) int {
+ if desiredRate <= 0 {
+ if d.cfg.ShardStreams.LoggingEnabled {
+ level.Error(logger).Log("msg", "invalid desired rate", "desired_rate", desiredRate)
+ }
+ return 1
+ }
+
rate, err := rateStore.RateFor(stream)
if err != nil {
d.streamShardingFailures.WithLabelValues("rate_not_found").Inc()
if d.cfg.ShardStreams.LoggingEnabled {
- level.Error(util_log.Logger).Log("msg", "couldn't shard stream because rate wasn't found", "stream", stream.Labels)
+ level.Error(logger).Log("msg", "couldn't shard stream because rate store returned error", "err", err)
}
return 1
}
@@ -612,7 +622,7 @@ func (d *Distributor) shardCountFor(stream *logproto.Stream, streamSize, desired
if shards > len(stream.Entries) {
d.streamShardingFailures.WithLabelValues("too_many_shards").Inc()
if d.cfg.ShardStreams.LoggingEnabled {
- level.Error(util_log.Logger).Log("msg", "number of shards bigger than number of entries", "stream", stream.Labels, "shards", shards, "entries", len(stream.Entries))
+ level.Error(logger).Log("msg", "number of shards bigger than number of entries", "shards", shards, "entries", len(stream.Entries))
}
return len(stream.Entries)
}
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index 64ffdd27914fe..e8b5e9c9ef1b4 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -37,6 +37,7 @@ import (
"github.com/grafana/loki/pkg/runtime"
fe "github.com/grafana/loki/pkg/util/flagext"
loki_flagext "github.com/grafana/loki/pkg/util/flagext"
+ util_log "github.com/grafana/loki/pkg/util/log"
loki_net "github.com/grafana/loki/pkg/util/net"
"github.com/grafana/loki/pkg/util/test"
"github.com/grafana/loki/pkg/validation"
@@ -870,6 +871,24 @@ func TestShardCountFor(t *testing.T) {
wantShards int
wantErr bool
}{
+ {
+ name: "2 entries with zero rate and desired rate < 0, return 1 shard",
+ stream: &logproto.Stream{Hash: 1},
+ rate: 0,
+ desiredRate: -5, // in bytes
+ wantStreamSize: 2, // in bytes
+ wantShards: 1,
+ wantErr: false,
+ },
+ {
+ name: "2 entries with zero rate and desired rate == 0, return 1 shard",
+ stream: &logproto.Stream{Hash: 1},
+ rate: 0,
+ desiredRate: 0, // in bytes
+ wantStreamSize: 2, // in bytes
+ wantShards: 1,
+ wantErr: false,
+ },
{
name: "0 entries, return 0 shards always",
stream: &logproto.Stream{Hash: 1},
@@ -938,7 +957,7 @@ func TestShardCountFor(t *testing.T) {
d := &Distributor{
streamShardingFailures: shardingFailureMetric,
}
- got := d.shardCountFor(tc.stream, tc.wantStreamSize, tc.desiredRate, &noopRateStore{tc.rate})
+ got := d.shardCountFor(util_log.Logger, tc.stream, tc.wantStreamSize, tc.desiredRate, &noopRateStore{tc.rate})
require.Equal(t, tc.wantShards, got)
})
}
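The guard this commit adds to `shardCountFor` can be reduced to a few lines. The function below is an illustrative sketch of the invariant the new test cases check, not Loki's implementation: a non-positive desired rate is invalid configuration, so the stream is left in a single shard instead of being divided by zero or a negative number.

```go
package main

import "fmt"

// shardCountFor returns how many shards a stream should be split into,
// given its observed rate and the configured desired per-shard rate
// (both in bytes). Names and signature are illustrative only.
func shardCountFor(streamRate, desiredRate int) int {
	if desiredRate <= 0 {
		// Invalid config: never divide by a zero or negative rate.
		return 1
	}
	shards := streamRate / desiredRate
	if shards < 1 {
		return 1 // a stream always occupies at least one shard
	}
	return shards
}

func main() {
	fmt.Println(shardCountFor(3000, 0))    // 1: zero desired rate, sharding disabled
	fmt.Println(shardCountFor(3000, -5))   // 1: negative desired rate, sharding disabled
	fmt.Println(shardCountFor(3000, 1000)) // 3: normal case
}
```

Returning 1 (rather than erroring) mirrors the diff's behavior: the push path keeps working, and the invalid configuration is only logged when shard-stream logging is enabled.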
type: loki
masked_commit_message: Add sharding support for negative/zeroed desired rate (#7342)

hash: 94cfb90e7c34eb8c7cdf84f52dd623603ac9a408
date: 2025-01-11 02:10:39
author: renovate[bot]
commit_message: fix(deps): update module github.com/spf13/afero to v1.12.0 (#15696)
is_merge: false
diff --git a/go.mod b/go.mod
index a39120258638a..c40eee7c4f206 100644
--- a/go.mod
+++ b/go.mod
@@ -91,7 +91,7 @@ require (
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c
github.com/shurcooL/vfsgen v0.0.0-20200824052919-0d455de96546
github.com/sony/gobreaker/v2 v2.1.0
- github.com/spf13/afero v1.11.0
+ github.com/spf13/afero v1.12.0
github.com/stretchr/testify v1.10.0
github.com/uber/jaeger-client-go v2.30.0+incompatible
github.com/xdg-go/scram v1.1.2
@@ -103,7 +103,7 @@ require (
golang.org/x/sync v0.10.0
golang.org/x/sys v0.29.0
golang.org/x/time v0.9.0
- google.golang.org/api v0.214.0
+ google.golang.org/api v0.215.0
google.golang.org/grpc v1.68.1
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
@@ -291,7 +291,7 @@ require (
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect
github.com/google/s2a-go v0.1.8 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect
- github.com/googleapis/gax-go/v2 v2.14.0 // indirect
+ github.com/googleapis/gax-go/v2 v2.14.1 // indirect
github.com/gophercloud/gophercloud v1.14.0 // indirect
github.com/grafana/pyroscope-go/godeltaprof v0.1.8 // indirect
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed // indirect
@@ -372,8 +372,8 @@ require (
golang.org/x/term v0.28.0 // indirect
golang.org/x/tools v0.26.0 // indirect
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 // indirect
- google.golang.org/genproto/googleapis/api v0.0.0-20241118233622-e639e219e697 // indirect
- google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576
+ google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 // indirect
+ google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8
gopkg.in/fsnotify/fsnotify.v1 v1.4.7 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
diff --git a/go.sum b/go.sum
index 9b41e9471f490..d3d7af1a8eaa1 100644
--- a/go.sum
+++ b/go.sum
@@ -595,8 +595,8 @@ github.com/googleapis/enterprise-certificate-proxy v0.3.4 h1:XYIDZApgAnrN1c855gT
github.com/googleapis/enterprise-certificate-proxy v0.3.4/go.mod h1:YKe7cfqYXjKGpGvmSg28/fFvhNzinZQm8DGnaburhGA=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
-github.com/googleapis/gax-go/v2 v2.14.0 h1:f+jMrjBPl+DL9nI4IQzLUxMq7XrAqFYB7hBPqMNIe8o=
-github.com/googleapis/gax-go/v2 v2.14.0/go.mod h1:lhBCnjdLrWRaPvLWhmc8IS24m9mr07qSYnHncrgo+zk=
+github.com/googleapis/gax-go/v2 v2.14.1 h1:hb0FFeiPaQskmvakKu5EbCbpntQn48jyHuvrkurSS/Q=
+github.com/googleapis/gax-go/v2 v2.14.1/go.mod h1:Hb/NubMaVM88SrNkvl8X/o8XWwDJEPqouaLeN2IUxoA=
github.com/gophercloud/gophercloud v1.14.0 h1:Bt9zQDhPrbd4qX7EILGmy+i7GP35cc+AAL2+wIJpUE8=
github.com/gophercloud/gophercloud v1.14.0/go.mod h1:aAVqcocTSXh2vYFZ1JTvx4EQmfgzxRcNupUfxZbBNDM=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
@@ -1087,8 +1087,8 @@ github.com/sony/gobreaker/v2 v2.1.0 h1:av2BnjtRmVPWBvy5gSFPytm1J8BmN5AGhq875FfGK
github.com/sony/gobreaker/v2 v2.1.0/go.mod h1:dO3Q/nCzxZj6ICjH6J/gM0r4oAwBMVLY8YAQf+NTtUg=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
-github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
-github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
+github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs=
+github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
@@ -1567,8 +1567,8 @@ google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0M
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
-google.golang.org/api v0.214.0 h1:h2Gkq07OYi6kusGOaT/9rnNljuXmqPnaig7WGPmKbwA=
-google.golang.org/api v0.214.0/go.mod h1:bYPpLG8AyeMWwDU6NXoB00xC0DFkikVvd5MfwoxjLqE=
+google.golang.org/api v0.215.0 h1:jdYF4qnyczlEz2ReWIsosNLDuzXyvFHJtI5gcr0J7t0=
+google.golang.org/api v0.215.0/go.mod h1:fta3CVtuJYOEdugLNWm6WodzOS8KdFckABwN4I40hzY=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1611,10 +1611,10 @@ google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 h1:ToEetK57OidYuqD4Q5w+vfEnPvPpuTwedCNVohYJfNk=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697/go.mod h1:JJrvXBWRZaFMxBufik1a4RpFw4HhgVtBBWQeQgUj2cc=
-google.golang.org/genproto/googleapis/api v0.0.0-20241118233622-e639e219e697 h1:pgr/4QbFyktUv9CtQ/Fq4gzEE6/Xs7iCXbktaGzLHbQ=
-google.golang.org/genproto/googleapis/api v0.0.0-20241118233622-e639e219e697/go.mod h1:+D9ySVjN8nY8YCVjc5O7PZDIdZporIDY3KaGfJunh88=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576 h1:8ZmaLZE4XWrtU3MyClkYqqtl6Oegr3235h7jxsDyqCY=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576/go.mod h1:5uTbfoYQed2U9p3KIj2/Zzm02PYhndfdmML0qC3q3FU=
+google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 h1:CkkIfIt50+lT6NHAVoRYEyAvQGFM7xEwXUUywFvEb3Q=
+google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576/go.mod h1:1R3kvZ1dtP3+4p4d3G8uJ8rFk/fWlScl38vanWACI08=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 h1:TqExAhdPaB60Ux47Cn0oLV07rGnxZzIsaRhQaqS666A=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8/go.mod h1:lcTa1sDdWEIHMWlITnIczmw5w60CF9ffkb8Z+DVmmjA=
google.golang.org/grpc v1.12.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
diff --git a/vendor/github.com/googleapis/gax-go/v2/.release-please-manifest.json b/vendor/github.com/googleapis/gax-go/v2/.release-please-manifest.json
index 29a5900c7da8a..a8c082dd61eb3 100644
--- a/vendor/github.com/googleapis/gax-go/v2/.release-please-manifest.json
+++ b/vendor/github.com/googleapis/gax-go/v2/.release-please-manifest.json
@@ -1,3 +1,3 @@
{
- "v2": "2.14.0"
+ "v2": "2.14.1"
}
diff --git a/vendor/github.com/googleapis/gax-go/v2/CHANGES.md b/vendor/github.com/googleapis/gax-go/v2/CHANGES.md
index 9fb9035908de5..17cced15eca61 100644
--- a/vendor/github.com/googleapis/gax-go/v2/CHANGES.md
+++ b/vendor/github.com/googleapis/gax-go/v2/CHANGES.md
@@ -1,5 +1,17 @@
# Changelog
+## [2.14.1](https://github.com/googleapis/gax-go/compare/v2.14.0...v2.14.1) (2024-12-19)
+
+
+### Bug Fixes
+
+* update golang.org/x/net to v0.33.0 ([#391](https://github.com/googleapis/gax-go/issues/391)) ([547a5b4](https://github.com/googleapis/gax-go/commit/547a5b43aa6f376f71242da9f18e65fbdfb342f6))
+
+
+### Documentation
+
+* fix godoc to refer to the proper envvar ([#387](https://github.com/googleapis/gax-go/issues/387)) ([dc6baf7](https://github.com/googleapis/gax-go/commit/dc6baf75c1a737233739630b5af6c9759f08abcd))
+
## [2.14.0](https://github.com/googleapis/gax-go/compare/v2.13.0...v2.14.0) (2024-11-13)
diff --git a/vendor/github.com/googleapis/gax-go/v2/internal/version.go b/vendor/github.com/googleapis/gax-go/v2/internal/version.go
index 8828893454926..2b284a24a482b 100644
--- a/vendor/github.com/googleapis/gax-go/v2/internal/version.go
+++ b/vendor/github.com/googleapis/gax-go/v2/internal/version.go
@@ -30,4 +30,4 @@
package internal
// Version is the current tagged release of the library.
-const Version = "2.14.0"
+const Version = "2.14.1"
diff --git a/vendor/github.com/googleapis/gax-go/v2/internallog/internallog.go b/vendor/github.com/googleapis/gax-go/v2/internallog/internallog.go
index 91b648a6a4c72..e47ab32acc29d 100644
--- a/vendor/github.com/googleapis/gax-go/v2/internallog/internallog.go
+++ b/vendor/github.com/googleapis/gax-go/v2/internallog/internallog.go
@@ -44,7 +44,7 @@ import (
// New returns a new [slog.Logger] default logger, or the provided logger if
// non-nil. The returned logger will be a no-op logger unless the environment
-// variable GOOGLE_SDK_DEBUG_LOGGING is set.
+// variable GOOGLE_SDK_GO_LOGGING_LEVEL is set.
func New(l *slog.Logger) *slog.Logger {
if l != nil {
return l
diff --git a/vendor/github.com/spf13/afero/.editorconfig b/vendor/github.com/spf13/afero/.editorconfig
new file mode 100644
index 0000000000000..4492e9f9fe15b
--- /dev/null
+++ b/vendor/github.com/spf13/afero/.editorconfig
@@ -0,0 +1,12 @@
+root = true
+
+[*]
+charset = utf-8
+end_of_line = lf
+indent_size = 4
+indent_style = space
+insert_final_newline = true
+trim_trailing_whitespace = true
+
+[*.go]
+indent_style = tab
diff --git a/vendor/github.com/spf13/afero/.golangci.yaml b/vendor/github.com/spf13/afero/.golangci.yaml
new file mode 100644
index 0000000000000..806289a25075f
--- /dev/null
+++ b/vendor/github.com/spf13/afero/.golangci.yaml
@@ -0,0 +1,18 @@
+linters-settings:
+ gci:
+ sections:
+ - standard
+ - default
+ - prefix(github.com/spf13/afero)
+
+linters:
+ disable-all: true
+ enable:
+ - gci
+ - gofmt
+ - gofumpt
+ - staticcheck
+
+issues:
+ exclude-dirs:
+ - gcsfs/internal/stiface
diff --git a/vendor/github.com/spf13/afero/README.md b/vendor/github.com/spf13/afero/README.md
index 3bafbfdfcaf04..619af574f38bd 100644
--- a/vendor/github.com/spf13/afero/README.md
+++ b/vendor/github.com/spf13/afero/README.md
@@ -12,7 +12,7 @@ types and methods. Afero has an exceptionally clean interface and simple design
without needless constructors or initialization methods.
Afero is also a library providing a base set of interoperable backend
-filesystems that make it easy to work with afero while retaining all the power
+filesystems that make it easy to work with, while retaining all the power
and benefit of the os and ioutil packages.
Afero provides significant improvements over using the os package alone, most
diff --git a/vendor/github.com/spf13/afero/iofs.go b/vendor/github.com/spf13/afero/iofs.go
index 938b9316e6b85..b13155ca4a9ce 100644
--- a/vendor/github.com/spf13/afero/iofs.go
+++ b/vendor/github.com/spf13/afero/iofs.go
@@ -255,7 +255,6 @@ func (f fromIOFSFile) Readdir(count int) ([]os.FileInfo, error) {
ret := make([]os.FileInfo, len(entries))
for i := range entries {
ret[i], err = entries[i].Info()
-
if err != nil {
return nil, err
}
diff --git a/vendor/github.com/spf13/afero/memmap.go b/vendor/github.com/spf13/afero/memmap.go
index d6c744e8d568d..ed92f5649dafd 100644
--- a/vendor/github.com/spf13/afero/memmap.go
+++ b/vendor/github.com/spf13/afero/memmap.go
@@ -16,11 +16,9 @@ package afero
import (
"fmt"
"io"
-
"log"
"os"
"path/filepath"
-
"sort"
"strings"
"sync"
diff --git a/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go b/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go
index a896306a036a2..ace3ea34d8385 100644
--- a/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go
+++ b/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go
@@ -1,4 +1,4 @@
-// Copyright 2024 Google LLC.
+// Copyright 2025 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/api/compute/v1/compute-gen.go b/vendor/google.golang.org/api/compute/v1/compute-gen.go
index 13c64502d8401..d097ac480c8ff 100644
--- a/vendor/google.golang.org/api/compute/v1/compute-gen.go
+++ b/vendor/google.golang.org/api/compute/v1/compute-gen.go
@@ -1,4 +1,4 @@
-// Copyright 2024 Google LLC.
+// Copyright 2025 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/api/compute/v1/compute2-gen.go b/vendor/google.golang.org/api/compute/v1/compute2-gen.go
index 7b25c7a8bfadd..370ae7ee2e45b 100644
--- a/vendor/google.golang.org/api/compute/v1/compute2-gen.go
+++ b/vendor/google.golang.org/api/compute/v1/compute2-gen.go
@@ -1,4 +1,4 @@
-// Copyright 2024 Google LLC.
+// Copyright 2025 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/api/compute/v1/compute3-gen.go b/vendor/google.golang.org/api/compute/v1/compute3-gen.go
index 4777c61d8a0d0..5a5a0d4eb746a 100644
--- a/vendor/google.golang.org/api/compute/v1/compute3-gen.go
+++ b/vendor/google.golang.org/api/compute/v1/compute3-gen.go
@@ -1,4 +1,4 @@
-// Copyright 2024 Google LLC.
+// Copyright 2025 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/api/iamcredentials/v1/iamcredentials-gen.go b/vendor/google.golang.org/api/iamcredentials/v1/iamcredentials-gen.go
index 85ba75d08f521..559cab1385b61 100644
--- a/vendor/google.golang.org/api/iamcredentials/v1/iamcredentials-gen.go
+++ b/vendor/google.golang.org/api/iamcredentials/v1/iamcredentials-gen.go
@@ -1,4 +1,4 @@
-// Copyright 2024 Google LLC.
+// Copyright 2025 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/api/internal/version.go b/vendor/google.golang.org/api/internal/version.go
index 551a90770eb83..63449651ff8df 100644
--- a/vendor/google.golang.org/api/internal/version.go
+++ b/vendor/google.golang.org/api/internal/version.go
@@ -5,4 +5,4 @@
package internal
// Version is the current tagged release of the library.
-const Version = "0.214.0"
+const Version = "0.215.0"
diff --git a/vendor/google.golang.org/api/storage/v1/storage-gen.go b/vendor/google.golang.org/api/storage/v1/storage-gen.go
index 474fbb49846f1..89f08a8d98b26 100644
--- a/vendor/google.golang.org/api/storage/v1/storage-gen.go
+++ b/vendor/google.golang.org/api/storage/v1/storage-gen.go
@@ -1,4 +1,4 @@
-// Copyright 2024 Google LLC.
+// Copyright 2025 Google LLC.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go b/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go
index aa69fb4d509ff..4a9fce53c444f 100644
--- a/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go
+++ b/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go
@@ -180,6 +180,8 @@ type CommonLanguageSettings struct {
ReferenceDocsUri string `protobuf:"bytes,1,opt,name=reference_docs_uri,json=referenceDocsUri,proto3" json:"reference_docs_uri,omitempty"`
// The destination where API teams want this client library to be published.
Destinations []ClientLibraryDestination `protobuf:"varint,2,rep,packed,name=destinations,proto3,enum=google.api.ClientLibraryDestination" json:"destinations,omitempty"`
+ // Configuration for which RPCs should be generated in the GAPIC client.
+ SelectiveGapicGeneration *SelectiveGapicGeneration `protobuf:"bytes,3,opt,name=selective_gapic_generation,json=selectiveGapicGeneration,proto3" json:"selective_gapic_generation,omitempty"`
}
func (x *CommonLanguageSettings) Reset() {
@@ -229,6 +231,13 @@ func (x *CommonLanguageSettings) GetDestinations() []ClientLibraryDestination {
return nil
}
+func (x *CommonLanguageSettings) GetSelectiveGapicGeneration() *SelectiveGapicGeneration {
+ if x != nil {
+ return x.SelectiveGapicGeneration
+ }
+ return nil
+}
+
// Details about how and where to publish client libraries.
type ClientLibrarySettings struct {
state protoimpl.MessageState
@@ -984,6 +993,16 @@ type GoSettings struct {
// Some settings.
Common *CommonLanguageSettings `protobuf:"bytes,1,opt,name=common,proto3" json:"common,omitempty"`
+ // Map of service names to renamed services. Keys are the package relative
+ // service names and values are the name to be used for the service client
+ // and call options.
+ //
+ // publishing:
+ //
+ // go_settings:
+ // renamed_services:
+ // Publisher: TopicAdmin
+ RenamedServices map[string]string `protobuf:"bytes,2,rep,name=renamed_services,json=renamedServices,proto3" json:"renamed_services,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
}
func (x *GoSettings) Reset() {
@@ -1025,6 +1044,13 @@ func (x *GoSettings) GetCommon() *CommonLanguageSettings {
return nil
}
+func (x *GoSettings) GetRenamedServices() map[string]string {
+ if x != nil {
+ return x.RenamedServices
+ }
+ return nil
+}
+
// Describes the generator configuration for a method.
type MethodSettings struct {
state protoimpl.MessageState
@@ -1123,6 +1149,57 @@ func (x *MethodSettings) GetAutoPopulatedFields() []string {
return nil
}
+// This message is used to configure the generation of a subset of the RPCs in
+// a service for client libraries.
+type SelectiveGapicGeneration struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // An allowlist of the fully qualified names of RPCs that should be included
+ // on public client surfaces.
+ Methods []string `protobuf:"bytes,1,rep,name=methods,proto3" json:"methods,omitempty"`
+}
+
+func (x *SelectiveGapicGeneration) Reset() {
+ *x = SelectiveGapicGeneration{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_google_api_client_proto_msgTypes[12]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
+}
+
+func (x *SelectiveGapicGeneration) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*SelectiveGapicGeneration) ProtoMessage() {}
+
+func (x *SelectiveGapicGeneration) ProtoReflect() protoreflect.Message {
+ mi := &file_google_api_client_proto_msgTypes[12]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use SelectiveGapicGeneration.ProtoReflect.Descriptor instead.
+func (*SelectiveGapicGeneration) Descriptor() ([]byte, []int) {
+ return file_google_api_client_proto_rawDescGZIP(), []int{12}
+}
+
+func (x *SelectiveGapicGeneration) GetMethods() []string {
+ if x != nil {
+ return x.Methods
+ }
+ return nil
+}
+
// Experimental features to be included during client library generation.
// These fields will be deprecated once the feature graduates and is enabled
// by default.
@@ -1136,12 +1213,17 @@ type PythonSettings_ExperimentalFeatures struct {
// This feature will be enabled by default 1 month after launching the
// feature in preview packages.
RestAsyncIoEnabled bool `protobuf:"varint,1,opt,name=rest_async_io_enabled,json=restAsyncIoEnabled,proto3" json:"rest_async_io_enabled,omitempty"`
+ // Enables generation of protobuf code using new types that are more
+ // Pythonic which are included in `protobuf>=5.29.x`. This feature will be
+ // enabled by default 1 month after launching the feature in preview
+ // packages.
+ ProtobufPythonicTypesEnabled bool `protobuf:"varint,2,opt,name=protobuf_pythonic_types_enabled,json=protobufPythonicTypesEnabled,proto3" json:"protobuf_pythonic_types_enabled,omitempty"`
}
func (x *PythonSettings_ExperimentalFeatures) Reset() {
*x = PythonSettings_ExperimentalFeatures{}
if protoimpl.UnsafeEnabled {
- mi := &file_google_api_client_proto_msgTypes[13]
+ mi := &file_google_api_client_proto_msgTypes[14]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1154,7 +1236,7 @@ func (x *PythonSettings_ExperimentalFeatures) String() string {
func (*PythonSettings_ExperimentalFeatures) ProtoMessage() {}
func (x *PythonSettings_ExperimentalFeatures) ProtoReflect() protoreflect.Message {
- mi := &file_google_api_client_proto_msgTypes[13]
+ mi := &file_google_api_client_proto_msgTypes[14]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1177,6 +1259,13 @@ func (x *PythonSettings_ExperimentalFeatures) GetRestAsyncIoEnabled() bool {
return false
}
+func (x *PythonSettings_ExperimentalFeatures) GetProtobufPythonicTypesEnabled() bool {
+ if x != nil {
+ return x.ProtobufPythonicTypesEnabled
+ }
+ return false
+}
+
// Describes settings to use when generating API methods that use the
// long-running operation pattern.
// All default values below are from those used in the client library
@@ -1205,7 +1294,7 @@ type MethodSettings_LongRunning struct {
func (x *MethodSettings_LongRunning) Reset() {
*x = MethodSettings_LongRunning{}
if protoimpl.UnsafeEnabled {
- mi := &file_google_api_client_proto_msgTypes[16]
+ mi := &file_google_api_client_proto_msgTypes[18]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1218,7 +1307,7 @@ func (x *MethodSettings_LongRunning) String() string {
func (*MethodSettings_LongRunning) ProtoMessage() {}
func (x *MethodSettings_LongRunning) ProtoReflect() protoreflect.Message {
- mi := &file_google_api_client_proto_msgTypes[16]
+ mi := &file_google_api_client_proto_msgTypes[18]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1406,7 +1495,7 @@ var file_google_api_client_proto_rawDesc = []byte{
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e,
- 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x94, 0x01, 0x0a, 0x16, 0x43, 0x6f, 0x6d, 0x6d, 0x6f,
+ 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xf8, 0x01, 0x0a, 0x16, 0x43, 0x6f, 0x6d, 0x6d, 0x6f,
0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67,
0x73, 0x12, 0x30, 0x0a, 0x12, 0x72, 0x65, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x63, 0x65, 0x5f, 0x64,
0x6f, 0x63, 0x73, 0x5f, 0x75, 0x72, 0x69, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x42, 0x02, 0x18,
@@ -1415,251 +1504,275 @@ var file_google_api_client_proto_rawDesc = []byte{
0x6f, 0x6e, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67,
0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62,
0x72, 0x61, 0x72, 0x79, 0x44, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52,
- 0x0c, 0x64, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x93, 0x05,
- 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x53,
- 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69,
- 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f,
- 0x6e, 0x12, 0x3a, 0x0a, 0x0c, 0x6c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x5f, 0x73, 0x74, 0x61, 0x67,
- 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x17, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
- 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x53, 0x74, 0x61, 0x67, 0x65,
- 0x52, 0x0b, 0x6c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x53, 0x74, 0x61, 0x67, 0x65, 0x12, 0x2c, 0x0a,
- 0x12, 0x72, 0x65, 0x73, 0x74, 0x5f, 0x6e, 0x75, 0x6d, 0x65, 0x72, 0x69, 0x63, 0x5f, 0x65, 0x6e,
- 0x75, 0x6d, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x10, 0x72, 0x65, 0x73, 0x74, 0x4e,
- 0x75, 0x6d, 0x65, 0x72, 0x69, 0x63, 0x45, 0x6e, 0x75, 0x6d, 0x73, 0x12, 0x3d, 0x0a, 0x0d, 0x6a,
- 0x61, 0x76, 0x61, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x15, 0x20, 0x01,
+ 0x0c, 0x64, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x62, 0x0a,
+ 0x1a, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x67, 0x61, 0x70, 0x69, 0x63,
+ 0x5f, 0x67, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28,
+ 0x0b, 0x32, 0x24, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x53,
+ 0x65, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x76, 0x65, 0x47, 0x61, 0x70, 0x69, 0x63, 0x47, 0x65, 0x6e,
+ 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x18, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x69,
+ 0x76, 0x65, 0x47, 0x61, 0x70, 0x69, 0x63, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f,
+ 0x6e, 0x22, 0x93, 0x05, 0x0a, 0x15, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72,
+ 0x61, 0x72, 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x76,
+ 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x76, 0x65,
+ 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x3a, 0x0a, 0x0c, 0x6c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x5f,
+ 0x73, 0x74, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x17, 0x2e, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x53,
+ 0x74, 0x61, 0x67, 0x65, 0x52, 0x0b, 0x6c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x53, 0x74, 0x61, 0x67,
+ 0x65, 0x12, 0x2c, 0x0a, 0x12, 0x72, 0x65, 0x73, 0x74, 0x5f, 0x6e, 0x75, 0x6d, 0x65, 0x72, 0x69,
+ 0x63, 0x5f, 0x65, 0x6e, 0x75, 0x6d, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x10, 0x72,
+ 0x65, 0x73, 0x74, 0x4e, 0x75, 0x6d, 0x65, 0x72, 0x69, 0x63, 0x45, 0x6e, 0x75, 0x6d, 0x73, 0x12,
+ 0x3d, 0x0a, 0x0d, 0x6a, 0x61, 0x76, 0x61, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73,
+ 0x18, 0x15, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
+ 0x61, 0x70, 0x69, 0x2e, 0x4a, 0x61, 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73,
+ 0x52, 0x0c, 0x6a, 0x61, 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a,
+ 0x0a, 0x0c, 0x63, 0x70, 0x70, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x16,
+ 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70,
+ 0x69, 0x2e, 0x43, 0x70, 0x70, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0b, 0x63,
+ 0x70, 0x70, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x0c, 0x70, 0x68,
+ 0x70, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x17, 0x20, 0x01, 0x28, 0x0b,
+ 0x32, 0x17, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x50, 0x68,
+ 0x70, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0b, 0x70, 0x68, 0x70, 0x53, 0x65,
+ 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x43, 0x0a, 0x0f, 0x70, 0x79, 0x74, 0x68, 0x6f, 0x6e,
+ 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x18, 0x20, 0x01, 0x28, 0x0b, 0x32,
+ 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x50, 0x79, 0x74,
+ 0x68, 0x6f, 0x6e, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0e, 0x70, 0x79, 0x74,
+ 0x68, 0x6f, 0x6e, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3d, 0x0a, 0x0d, 0x6e,
+ 0x6f, 0x64, 0x65, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x19, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x18, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e,
- 0x4a, 0x61, 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0c, 0x6a, 0x61,
- 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x0c, 0x63, 0x70,
- 0x70, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x16, 0x20, 0x01, 0x28, 0x0b,
- 0x32, 0x17, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x70,
- 0x70, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0b, 0x63, 0x70, 0x70, 0x53, 0x65,
- 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x0c, 0x70, 0x68, 0x70, 0x5f, 0x73, 0x65,
- 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x17, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x67,
- 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x50, 0x68, 0x70, 0x53, 0x65, 0x74,
- 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0b, 0x70, 0x68, 0x70, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e,
- 0x67, 0x73, 0x12, 0x43, 0x0a, 0x0f, 0x70, 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x5f, 0x73, 0x65, 0x74,
- 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x18, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f,
- 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x50, 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x53,
- 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0e, 0x70, 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x53,
- 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3d, 0x0a, 0x0d, 0x6e, 0x6f, 0x64, 0x65, 0x5f,
- 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x19, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x18,
- 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4e, 0x6f, 0x64, 0x65,
- 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0c, 0x6e, 0x6f, 0x64, 0x65, 0x53, 0x65,
- 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x43, 0x0a, 0x0f, 0x64, 0x6f, 0x74, 0x6e, 0x65, 0x74,
- 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x1a, 0x20, 0x01, 0x28, 0x0b, 0x32,
- 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x44, 0x6f, 0x74,
- 0x6e, 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0e, 0x64, 0x6f, 0x74,
- 0x6e, 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3d, 0x0a, 0x0d, 0x72,
- 0x75, 0x62, 0x79, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x1b, 0x20, 0x01,
- 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e,
- 0x52, 0x75, 0x62, 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0c, 0x72, 0x75,
- 0x62, 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x37, 0x0a, 0x0b, 0x67, 0x6f,
- 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x1c, 0x20, 0x01, 0x28, 0x0b, 0x32,
- 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x47, 0x6f, 0x53,
- 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0a, 0x67, 0x6f, 0x53, 0x65, 0x74, 0x74, 0x69,
- 0x6e, 0x67, 0x73, 0x22, 0xf4, 0x04, 0x0a, 0x0a, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x73, 0x68, 0x69,
- 0x6e, 0x67, 0x12, 0x43, 0x0a, 0x0f, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x5f, 0x73, 0x65, 0x74,
- 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f,
- 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53,
- 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0e, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53,
- 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x22, 0x0a, 0x0d, 0x6e, 0x65, 0x77, 0x5f, 0x69,
- 0x73, 0x73, 0x75, 0x65, 0x5f, 0x75, 0x72, 0x69, 0x18, 0x65, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b,
- 0x6e, 0x65, 0x77, 0x49, 0x73, 0x73, 0x75, 0x65, 0x55, 0x72, 0x69, 0x12, 0x2b, 0x0a, 0x11, 0x64,
- 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x75, 0x72, 0x69,
- 0x18, 0x66, 0x20, 0x01, 0x28, 0x09, 0x52, 0x10, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74,
- 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x55, 0x72, 0x69, 0x12, 0x24, 0x0a, 0x0e, 0x61, 0x70, 0x69, 0x5f,
- 0x73, 0x68, 0x6f, 0x72, 0x74, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x67, 0x20, 0x01, 0x28, 0x09,
- 0x52, 0x0c, 0x61, 0x70, 0x69, 0x53, 0x68, 0x6f, 0x72, 0x74, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x21,
- 0x0a, 0x0c, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x5f, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x18, 0x68,
- 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x4c, 0x61, 0x62, 0x65,
- 0x6c, 0x12, 0x34, 0x0a, 0x16, 0x63, 0x6f, 0x64, 0x65, 0x6f, 0x77, 0x6e, 0x65, 0x72, 0x5f, 0x67,
- 0x69, 0x74, 0x68, 0x75, 0x62, 0x5f, 0x74, 0x65, 0x61, 0x6d, 0x73, 0x18, 0x69, 0x20, 0x03, 0x28,
- 0x09, 0x52, 0x14, 0x63, 0x6f, 0x64, 0x65, 0x6f, 0x77, 0x6e, 0x65, 0x72, 0x47, 0x69, 0x74, 0x68,
- 0x75, 0x62, 0x54, 0x65, 0x61, 0x6d, 0x73, 0x12, 0x24, 0x0a, 0x0e, 0x64, 0x6f, 0x63, 0x5f, 0x74,
- 0x61, 0x67, 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18, 0x6a, 0x20, 0x01, 0x28, 0x09, 0x52,
- 0x0c, 0x64, 0x6f, 0x63, 0x54, 0x61, 0x67, 0x50, 0x72, 0x65, 0x66, 0x69, 0x78, 0x12, 0x49, 0x0a,
- 0x0c, 0x6f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x6b, 0x20,
- 0x01, 0x28, 0x0e, 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69,
- 0x2e, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x4f, 0x72,
- 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0c, 0x6f, 0x72, 0x67, 0x61,
- 0x6e, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x4c, 0x0a, 0x10, 0x6c, 0x69, 0x62, 0x72,
- 0x61, 0x72, 0x79, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x6d, 0x20, 0x03,
- 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e,
- 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x53, 0x65, 0x74,
- 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0f, 0x6c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x53, 0x65,
- 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x49, 0x0a, 0x21, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x5f,
- 0x72, 0x65, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x63, 0x65, 0x5f, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65,
- 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x75, 0x72, 0x69, 0x18, 0x6e, 0x20, 0x01, 0x28,
- 0x09, 0x52, 0x1e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x65, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x63,
- 0x65, 0x44, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x55, 0x72,
- 0x69, 0x12, 0x47, 0x0a, 0x20, 0x72, 0x65, 0x73, 0x74, 0x5f, 0x72, 0x65, 0x66, 0x65, 0x72, 0x65,
- 0x6e, 0x63, 0x65, 0x5f, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f,
- 0x6e, 0x5f, 0x75, 0x72, 0x69, 0x18, 0x6f, 0x20, 0x01, 0x28, 0x09, 0x52, 0x1d, 0x72, 0x65, 0x73,
- 0x74, 0x52, 0x65, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x63, 0x65, 0x44, 0x6f, 0x63, 0x75, 0x6d, 0x65,
- 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x55, 0x72, 0x69, 0x22, 0x9a, 0x02, 0x0a, 0x0c, 0x4a,
- 0x61, 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x27, 0x0a, 0x0f, 0x6c,
- 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x18, 0x01,
- 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x6c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x50, 0x61, 0x63,
- 0x6b, 0x61, 0x67, 0x65, 0x12, 0x5f, 0x0a, 0x13, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x5f,
- 0x63, 0x6c, 0x61, 0x73, 0x73, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28,
- 0x0b, 0x32, 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4a,
- 0x61, 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x53, 0x65, 0x72, 0x76,
- 0x69, 0x63, 0x65, 0x43, 0x6c, 0x61, 0x73, 0x73, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x45, 0x6e, 0x74,
- 0x72, 0x79, 0x52, 0x11, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x43, 0x6c, 0x61, 0x73, 0x73,
- 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18,
- 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
+ 0x4e, 0x6f, 0x64, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0c, 0x6e, 0x6f,
+ 0x64, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x43, 0x0a, 0x0f, 0x64, 0x6f,
+ 0x74, 0x6e, 0x65, 0x74, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x1a, 0x20,
+ 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69,
+ 0x2e, 0x44, 0x6f, 0x74, 0x6e, 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52,
+ 0x0e, 0x64, 0x6f, 0x74, 0x6e, 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12,
+ 0x3d, 0x0a, 0x0d, 0x72, 0x75, 0x62, 0x79, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73,
+ 0x18, 0x1b, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
+ 0x61, 0x70, 0x69, 0x2e, 0x52, 0x75, 0x62, 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73,
+ 0x52, 0x0c, 0x72, 0x75, 0x62, 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x37,
+ 0x0a, 0x0b, 0x67, 0x6f, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x1c, 0x20,
+ 0x01, 0x28, 0x0b, 0x32, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69,
+ 0x2e, 0x47, 0x6f, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0a, 0x67, 0x6f, 0x53,
+ 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x22, 0xf4, 0x04, 0x0a, 0x0a, 0x50, 0x75, 0x62, 0x6c,
+ 0x69, 0x73, 0x68, 0x69, 0x6e, 0x67, 0x12, 0x43, 0x0a, 0x0f, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64,
+ 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32,
+ 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4d, 0x65, 0x74,
+ 0x68, 0x6f, 0x64, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0e, 0x6d, 0x65, 0x74,
+ 0x68, 0x6f, 0x64, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x22, 0x0a, 0x0d, 0x6e,
+ 0x65, 0x77, 0x5f, 0x69, 0x73, 0x73, 0x75, 0x65, 0x5f, 0x75, 0x72, 0x69, 0x18, 0x65, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x0b, 0x6e, 0x65, 0x77, 0x49, 0x73, 0x73, 0x75, 0x65, 0x55, 0x72, 0x69, 0x12,
+ 0x2b, 0x0a, 0x11, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e,
+ 0x5f, 0x75, 0x72, 0x69, 0x18, 0x66, 0x20, 0x01, 0x28, 0x09, 0x52, 0x10, 0x64, 0x6f, 0x63, 0x75,
+ 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x55, 0x72, 0x69, 0x12, 0x24, 0x0a, 0x0e,
+ 0x61, 0x70, 0x69, 0x5f, 0x73, 0x68, 0x6f, 0x72, 0x74, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x67,
+ 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x61, 0x70, 0x69, 0x53, 0x68, 0x6f, 0x72, 0x74, 0x4e, 0x61,
+ 0x6d, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x5f, 0x6c, 0x61, 0x62,
+ 0x65, 0x6c, 0x18, 0x68, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
+ 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x12, 0x34, 0x0a, 0x16, 0x63, 0x6f, 0x64, 0x65, 0x6f, 0x77, 0x6e,
+ 0x65, 0x72, 0x5f, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x5f, 0x74, 0x65, 0x61, 0x6d, 0x73, 0x18,
+ 0x69, 0x20, 0x03, 0x28, 0x09, 0x52, 0x14, 0x63, 0x6f, 0x64, 0x65, 0x6f, 0x77, 0x6e, 0x65, 0x72,
+ 0x47, 0x69, 0x74, 0x68, 0x75, 0x62, 0x54, 0x65, 0x61, 0x6d, 0x73, 0x12, 0x24, 0x0a, 0x0e, 0x64,
+ 0x6f, 0x63, 0x5f, 0x74, 0x61, 0x67, 0x5f, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18, 0x6a, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x0c, 0x64, 0x6f, 0x63, 0x54, 0x61, 0x67, 0x50, 0x72, 0x65, 0x66, 0x69,
+ 0x78, 0x12, 0x49, 0x0a, 0x0c, 0x6f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f,
+ 0x6e, 0x18, 0x6b, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
+ 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61,
+ 0x72, 0x79, 0x4f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0c,
+ 0x6f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x4c, 0x0a, 0x10,
+ 0x6c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x5f, 0x73, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73,
+ 0x18, 0x6d, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x21, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
+ 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72,
+ 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x0f, 0x6c, 0x69, 0x62, 0x72, 0x61,
+ 0x72, 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x49, 0x0a, 0x21, 0x70, 0x72,
+ 0x6f, 0x74, 0x6f, 0x5f, 0x72, 0x65, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x63, 0x65, 0x5f, 0x64, 0x6f,
+ 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x75, 0x72, 0x69, 0x18,
+ 0x6e, 0x20, 0x01, 0x28, 0x09, 0x52, 0x1e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x52, 0x65, 0x66, 0x65,
+ 0x72, 0x65, 0x6e, 0x63, 0x65, 0x44, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69,
+ 0x6f, 0x6e, 0x55, 0x72, 0x69, 0x12, 0x47, 0x0a, 0x20, 0x72, 0x65, 0x73, 0x74, 0x5f, 0x72, 0x65,
+ 0x66, 0x65, 0x72, 0x65, 0x6e, 0x63, 0x65, 0x5f, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74,
+ 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x75, 0x72, 0x69, 0x18, 0x6f, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x1d, 0x72, 0x65, 0x73, 0x74, 0x52, 0x65, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x63, 0x65, 0x44, 0x6f,
+ 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x55, 0x72, 0x69, 0x22, 0x9a,
+ 0x02, 0x0a, 0x0c, 0x4a, 0x61, 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12,
+ 0x27, 0x0a, 0x0f, 0x6c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x5f, 0x70, 0x61, 0x63, 0x6b, 0x61,
+ 0x67, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x6c, 0x69, 0x62, 0x72, 0x61, 0x72,
+ 0x79, 0x50, 0x61, 0x63, 0x6b, 0x61, 0x67, 0x65, 0x12, 0x5f, 0x0a, 0x13, 0x73, 0x65, 0x72, 0x76,
+ 0x69, 0x63, 0x65, 0x5f, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x18,
+ 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
+ 0x70, 0x69, 0x2e, 0x4a, 0x61, 0x76, 0x61, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e,
+ 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x43, 0x6c, 0x61, 0x73, 0x73, 0x4e, 0x61, 0x6d, 0x65,
+ 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x11, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x43,
+ 0x6c, 0x61, 0x73, 0x73, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d,
+ 0x6d, 0x6f, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67,
+ 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e,
+ 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63,
+ 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x1a, 0x44, 0x0a, 0x16, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65,
+ 0x43, 0x6c, 0x61, 0x73, 0x73, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12,
+ 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65,
+ 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09,
+ 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x49, 0x0a, 0x0b, 0x43,
+ 0x70, 0x70, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f,
+ 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61,
+ 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06,
+ 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x22, 0x49, 0x0a, 0x0b, 0x50, 0x68, 0x70, 0x53, 0x65, 0x74,
+ 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18,
+ 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67,
0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f,
- 0x6e, 0x1a, 0x44, 0x0a, 0x16, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x43, 0x6c, 0x61, 0x73,
- 0x73, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b,
- 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a,
- 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61,
- 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x49, 0x0a, 0x0b, 0x43, 0x70, 0x70, 0x53, 0x65,
+ 0x6e, 0x22, 0xc5, 0x02, 0x0a, 0x0e, 0x50, 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x53, 0x65, 0x74, 0x74,
+ 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01,
+ 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70,
+ 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65,
+ 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e,
+ 0x12, 0x64, 0x0a, 0x15, 0x65, 0x78, 0x70, 0x65, 0x72, 0x69, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x6c,
+ 0x5f, 0x66, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32,
+ 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x50, 0x79, 0x74,
+ 0x68, 0x6f, 0x6e, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x45, 0x78, 0x70, 0x65,
+ 0x72, 0x69, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x6c, 0x46, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73,
+ 0x52, 0x14, 0x65, 0x78, 0x70, 0x65, 0x72, 0x69, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x6c, 0x46, 0x65,
+ 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x1a, 0x90, 0x01, 0x0a, 0x14, 0x45, 0x78, 0x70, 0x65, 0x72,
+ 0x69, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x6c, 0x46, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x12,
+ 0x31, 0x0a, 0x15, 0x72, 0x65, 0x73, 0x74, 0x5f, 0x61, 0x73, 0x79, 0x6e, 0x63, 0x5f, 0x69, 0x6f,
+ 0x5f, 0x65, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x12,
+ 0x72, 0x65, 0x73, 0x74, 0x41, 0x73, 0x79, 0x6e, 0x63, 0x49, 0x6f, 0x45, 0x6e, 0x61, 0x62, 0x6c,
+ 0x65, 0x64, 0x12, 0x45, 0x0a, 0x1f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x5f, 0x70,
+ 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x69, 0x63, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x73, 0x5f, 0x65, 0x6e,
+ 0x61, 0x62, 0x6c, 0x65, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08, 0x52, 0x1c, 0x70, 0x72, 0x6f,
+ 0x74, 0x6f, 0x62, 0x75, 0x66, 0x50, 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x69, 0x63, 0x54, 0x79, 0x70,
+ 0x65, 0x73, 0x45, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x22, 0x4a, 0x0a, 0x0c, 0x4e, 0x6f, 0x64,
+ 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d,
+ 0x6d, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67,
+ 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e,
+ 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63,
+ 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x22, 0xae, 0x04, 0x0a, 0x0e, 0x44, 0x6f, 0x74, 0x6e, 0x65, 0x74,
+ 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d,
+ 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e, 0x67,
+ 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f,
+ 0x6d, 0x6d, 0x6f, 0x6e, 0x12, 0x5a, 0x0a, 0x10, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x5f,
+ 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2f,
+ 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x44, 0x6f, 0x74, 0x6e,
+ 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x52, 0x65, 0x6e, 0x61, 0x6d,
+ 0x65, 0x64, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52,
+ 0x0f, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73,
+ 0x12, 0x5d, 0x0a, 0x11, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x73, 0x6f,
+ 0x75, 0x72, 0x63, 0x65, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x30, 0x2e, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x44, 0x6f, 0x74, 0x6e, 0x65, 0x74, 0x53,
+ 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x52,
+ 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x10, 0x72,
+ 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x12,
+ 0x2b, 0x0a, 0x11, 0x69, 0x67, 0x6e, 0x6f, 0x72, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x73, 0x6f, 0x75,
+ 0x72, 0x63, 0x65, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x10, 0x69, 0x67, 0x6e, 0x6f,
+ 0x72, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x12, 0x38, 0x0a, 0x18,
+ 0x66, 0x6f, 0x72, 0x63, 0x65, 0x64, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65,
+ 0x5f, 0x61, 0x6c, 0x69, 0x61, 0x73, 0x65, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x16,
+ 0x66, 0x6f, 0x72, 0x63, 0x65, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x41,
+ 0x6c, 0x69, 0x61, 0x73, 0x65, 0x73, 0x12, 0x35, 0x0a, 0x16, 0x68, 0x61, 0x6e, 0x64, 0x77, 0x72,
+ 0x69, 0x74, 0x74, 0x65, 0x6e, 0x5f, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73,
+ 0x18, 0x06, 0x20, 0x03, 0x28, 0x09, 0x52, 0x15, 0x68, 0x61, 0x6e, 0x64, 0x77, 0x72, 0x69, 0x74,
+ 0x74, 0x65, 0x6e, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x1a, 0x42, 0x0a,
+ 0x14, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
+ 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38,
+ 0x01, 0x1a, 0x43, 0x0a, 0x15, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f,
+ 0x75, 0x72, 0x63, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65,
+ 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05,
+ 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c,
+ 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x4a, 0x0a, 0x0c, 0x52, 0x75, 0x62, 0x79, 0x53, 0x65,
0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61,
0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d,
- 0x6f, 0x6e, 0x22, 0x49, 0x0a, 0x0b, 0x50, 0x68, 0x70, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67,
+ 0x6f, 0x6e, 0x22, 0xe4, 0x01, 0x0a, 0x0a, 0x47, 0x6f, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67,
0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43,
0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74,
- 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x22, 0xfd, 0x01,
- 0x0a, 0x0e, 0x50, 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73,
- 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b,
- 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f,
- 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74,
- 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x12, 0x64, 0x0a, 0x15,
- 0x65, 0x78, 0x70, 0x65, 0x72, 0x69, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x6c, 0x5f, 0x66, 0x65, 0x61,
- 0x74, 0x75, 0x72, 0x65, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2f, 0x2e, 0x67, 0x6f,
- 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x50, 0x79, 0x74, 0x68, 0x6f, 0x6e, 0x53,
- 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x45, 0x78, 0x70, 0x65, 0x72, 0x69, 0x6d, 0x65,
- 0x6e, 0x74, 0x61, 0x6c, 0x46, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x52, 0x14, 0x65, 0x78,
- 0x70, 0x65, 0x72, 0x69, 0x6d, 0x65, 0x6e, 0x74, 0x61, 0x6c, 0x46, 0x65, 0x61, 0x74, 0x75, 0x72,
- 0x65, 0x73, 0x1a, 0x49, 0x0a, 0x14, 0x45, 0x78, 0x70, 0x65, 0x72, 0x69, 0x6d, 0x65, 0x6e, 0x74,
- 0x61, 0x6c, 0x46, 0x65, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x12, 0x31, 0x0a, 0x15, 0x72, 0x65,
- 0x73, 0x74, 0x5f, 0x61, 0x73, 0x79, 0x6e, 0x63, 0x5f, 0x69, 0x6f, 0x5f, 0x65, 0x6e, 0x61, 0x62,
- 0x6c, 0x65, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x12, 0x72, 0x65, 0x73, 0x74, 0x41,
- 0x73, 0x79, 0x6e, 0x63, 0x49, 0x6f, 0x45, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x22, 0x4a, 0x0a,
- 0x0c, 0x4e, 0x6f, 0x64, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a,
- 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e,
- 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f,
- 0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67,
- 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x22, 0xae, 0x04, 0x0a, 0x0e, 0x44, 0x6f,
- 0x74, 0x6e, 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06,
- 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67,
- 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e,
- 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73,
- 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x12, 0x5a, 0x0a, 0x10, 0x72, 0x65, 0x6e, 0x61,
- 0x6d, 0x65, 0x64, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03,
- 0x28, 0x0b, 0x32, 0x2f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e,
- 0x44, 0x6f, 0x74, 0x6e, 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x52,
- 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x45, 0x6e,
- 0x74, 0x72, 0x79, 0x52, 0x0f, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x53, 0x65, 0x72, 0x76,
- 0x69, 0x63, 0x65, 0x73, 0x12, 0x5d, 0x0a, 0x11, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x5f,
- 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32,
- 0x30, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x44, 0x6f, 0x74,
- 0x6e, 0x65, 0x74, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x52, 0x65, 0x6e, 0x61,
- 0x6d, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72,
- 0x79, 0x52, 0x10, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72,
- 0x63, 0x65, 0x73, 0x12, 0x2b, 0x0a, 0x11, 0x69, 0x67, 0x6e, 0x6f, 0x72, 0x65, 0x64, 0x5f, 0x72,
- 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x09, 0x52, 0x10,
- 0x69, 0x67, 0x6e, 0x6f, 0x72, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73,
- 0x12, 0x38, 0x0a, 0x18, 0x66, 0x6f, 0x72, 0x63, 0x65, 0x64, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x73,
- 0x70, 0x61, 0x63, 0x65, 0x5f, 0x61, 0x6c, 0x69, 0x61, 0x73, 0x65, 0x73, 0x18, 0x05, 0x20, 0x03,
- 0x28, 0x09, 0x52, 0x16, 0x66, 0x6f, 0x72, 0x63, 0x65, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70,
- 0x61, 0x63, 0x65, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x65, 0x73, 0x12, 0x35, 0x0a, 0x16, 0x68, 0x61,
- 0x6e, 0x64, 0x77, 0x72, 0x69, 0x74, 0x74, 0x65, 0x6e, 0x5f, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74,
- 0x75, 0x72, 0x65, 0x73, 0x18, 0x06, 0x20, 0x03, 0x28, 0x09, 0x52, 0x15, 0x68, 0x61, 0x6e, 0x64,
- 0x77, 0x72, 0x69, 0x74, 0x74, 0x65, 0x6e, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65,
- 0x73, 0x1a, 0x42, 0x0a, 0x14, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x53, 0x65, 0x72, 0x76,
- 0x69, 0x63, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79,
- 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76,
- 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75,
- 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x43, 0x0a, 0x15, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64,
- 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10,
- 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79,
- 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52,
- 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x4a, 0x0a, 0x0c, 0x52, 0x75,
- 0x62, 0x79, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f,
- 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f,
- 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61,
- 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06,
- 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x22, 0x48, 0x0a, 0x0a, 0x47, 0x6f, 0x53, 0x65, 0x74, 0x74,
- 0x69, 0x6e, 0x67, 0x73, 0x12, 0x3a, 0x0a, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x18, 0x01,
- 0x20, 0x01, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70,
- 0x69, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65,
- 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e,
- 0x22, 0xc2, 0x03, 0x0a, 0x0e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53, 0x65, 0x74, 0x74, 0x69,
- 0x6e, 0x67, 0x73, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x18,
- 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x12,
- 0x49, 0x0a, 0x0c, 0x6c, 0x6f, 0x6e, 0x67, 0x5f, 0x72, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x18,
- 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x26, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
- 0x70, 0x69, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67,
- 0x73, 0x2e, 0x4c, 0x6f, 0x6e, 0x67, 0x52, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x52, 0x0b, 0x6c,
- 0x6f, 0x6e, 0x67, 0x52, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x12, 0x32, 0x0a, 0x15, 0x61, 0x75,
- 0x74, 0x6f, 0x5f, 0x70, 0x6f, 0x70, 0x75, 0x6c, 0x61, 0x74, 0x65, 0x64, 0x5f, 0x66, 0x69, 0x65,
- 0x6c, 0x64, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x09, 0x52, 0x13, 0x61, 0x75, 0x74, 0x6f, 0x50,
- 0x6f, 0x70, 0x75, 0x6c, 0x61, 0x74, 0x65, 0x64, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x1a, 0x94,
- 0x02, 0x0a, 0x0b, 0x4c, 0x6f, 0x6e, 0x67, 0x52, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x12, 0x47,
- 0x0a, 0x12, 0x69, 0x6e, 0x69, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x70, 0x6f, 0x6c, 0x6c, 0x5f, 0x64,
- 0x65, 0x6c, 0x61, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f,
- 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72,
- 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x10, 0x69, 0x6e, 0x69, 0x74, 0x69, 0x61, 0x6c, 0x50, 0x6f,
- 0x6c, 0x6c, 0x44, 0x65, 0x6c, 0x61, 0x79, 0x12, 0x32, 0x0a, 0x15, 0x70, 0x6f, 0x6c, 0x6c, 0x5f,
- 0x64, 0x65, 0x6c, 0x61, 0x79, 0x5f, 0x6d, 0x75, 0x6c, 0x74, 0x69, 0x70, 0x6c, 0x69, 0x65, 0x72,
- 0x18, 0x02, 0x20, 0x01, 0x28, 0x02, 0x52, 0x13, 0x70, 0x6f, 0x6c, 0x6c, 0x44, 0x65, 0x6c, 0x61,
- 0x79, 0x4d, 0x75, 0x6c, 0x74, 0x69, 0x70, 0x6c, 0x69, 0x65, 0x72, 0x12, 0x3f, 0x0a, 0x0e, 0x6d,
- 0x61, 0x78, 0x5f, 0x70, 0x6f, 0x6c, 0x6c, 0x5f, 0x64, 0x65, 0x6c, 0x61, 0x79, 0x18, 0x03, 0x20,
+ 0x74, 0x69, 0x6e, 0x67, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x6d, 0x6d, 0x6f, 0x6e, 0x12, 0x56, 0x0a,
+ 0x10, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x5f, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65,
+ 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2b, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
+ 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x47, 0x6f, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e,
+ 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x45,
+ 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0f, 0x72, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64, 0x53, 0x65, 0x72,
+ 0x76, 0x69, 0x63, 0x65, 0x73, 0x1a, 0x42, 0x0a, 0x14, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x64,
+ 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
+ 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
+ 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05,
+ 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xc2, 0x03, 0x0a, 0x0e, 0x4d, 0x65,
+ 0x74, 0x68, 0x6f, 0x64, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x12, 0x1a, 0x0a, 0x08,
+ 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08,
+ 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x12, 0x49, 0x0a, 0x0c, 0x6c, 0x6f, 0x6e, 0x67,
+ 0x5f, 0x72, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x26,
+ 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4d, 0x65, 0x74, 0x68,
+ 0x6f, 0x64, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x2e, 0x4c, 0x6f, 0x6e, 0x67, 0x52,
+ 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x52, 0x0b, 0x6c, 0x6f, 0x6e, 0x67, 0x52, 0x75, 0x6e, 0x6e,
+ 0x69, 0x6e, 0x67, 0x12, 0x32, 0x0a, 0x15, 0x61, 0x75, 0x74, 0x6f, 0x5f, 0x70, 0x6f, 0x70, 0x75,
+ 0x6c, 0x61, 0x74, 0x65, 0x64, 0x5f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x18, 0x03, 0x20, 0x03,
+ 0x28, 0x09, 0x52, 0x13, 0x61, 0x75, 0x74, 0x6f, 0x50, 0x6f, 0x70, 0x75, 0x6c, 0x61, 0x74, 0x65,
+ 0x64, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x73, 0x1a, 0x94, 0x02, 0x0a, 0x0b, 0x4c, 0x6f, 0x6e, 0x67,
+ 0x52, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x12, 0x47, 0x0a, 0x12, 0x69, 0x6e, 0x69, 0x74, 0x69,
+ 0x61, 0x6c, 0x5f, 0x70, 0x6f, 0x6c, 0x6c, 0x5f, 0x64, 0x65, 0x6c, 0x61, 0x79, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f,
- 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0c,
- 0x6d, 0x61, 0x78, 0x50, 0x6f, 0x6c, 0x6c, 0x44, 0x65, 0x6c, 0x61, 0x79, 0x12, 0x47, 0x0a, 0x12,
- 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x70, 0x6f, 0x6c, 0x6c, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x6f,
- 0x75, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
- 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74,
- 0x69, 0x6f, 0x6e, 0x52, 0x10, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x50, 0x6f, 0x6c, 0x6c, 0x54, 0x69,
- 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x2a, 0xa3, 0x01, 0x0a, 0x19, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74,
- 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x4f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74,
- 0x69, 0x6f, 0x6e, 0x12, 0x2b, 0x0a, 0x27, 0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54, 0x5f, 0x4c, 0x49,
- 0x42, 0x52, 0x41, 0x52, 0x59, 0x5f, 0x4f, 0x52, 0x47, 0x41, 0x4e, 0x49, 0x5a, 0x41, 0x54, 0x49,
- 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00,
- 0x12, 0x09, 0x0a, 0x05, 0x43, 0x4c, 0x4f, 0x55, 0x44, 0x10, 0x01, 0x12, 0x07, 0x0a, 0x03, 0x41,
- 0x44, 0x53, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x50, 0x48, 0x4f, 0x54, 0x4f, 0x53, 0x10, 0x03,
- 0x12, 0x0f, 0x0a, 0x0b, 0x53, 0x54, 0x52, 0x45, 0x45, 0x54, 0x5f, 0x56, 0x49, 0x45, 0x57, 0x10,
- 0x04, 0x12, 0x0c, 0x0a, 0x08, 0x53, 0x48, 0x4f, 0x50, 0x50, 0x49, 0x4e, 0x47, 0x10, 0x05, 0x12,
- 0x07, 0x0a, 0x03, 0x47, 0x45, 0x4f, 0x10, 0x06, 0x12, 0x11, 0x0a, 0x0d, 0x47, 0x45, 0x4e, 0x45,
- 0x52, 0x41, 0x54, 0x49, 0x56, 0x45, 0x5f, 0x41, 0x49, 0x10, 0x07, 0x2a, 0x67, 0x0a, 0x18, 0x43,
- 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x44, 0x65, 0x73, 0x74,
- 0x69, 0x6e, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2a, 0x0a, 0x26, 0x43, 0x4c, 0x49, 0x45, 0x4e,
- 0x54, 0x5f, 0x4c, 0x49, 0x42, 0x52, 0x41, 0x52, 0x59, 0x5f, 0x44, 0x45, 0x53, 0x54, 0x49, 0x4e,
- 0x41, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45,
- 0x44, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x47, 0x49, 0x54, 0x48, 0x55, 0x42, 0x10, 0x0a, 0x12,
- 0x13, 0x0a, 0x0f, 0x50, 0x41, 0x43, 0x4b, 0x41, 0x47, 0x45, 0x5f, 0x4d, 0x41, 0x4e, 0x41, 0x47,
- 0x45, 0x52, 0x10, 0x14, 0x3a, 0x4a, 0x0a, 0x10, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x5f, 0x73,
- 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x12, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
- 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f,
- 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9b, 0x08, 0x20, 0x03, 0x28, 0x09, 0x52,
- 0x0f, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65,
- 0x3a, 0x43, 0x0a, 0x0c, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x5f, 0x68, 0x6f, 0x73, 0x74,
- 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
- 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e,
- 0x73, 0x18, 0x99, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c,
- 0x74, 0x48, 0x6f, 0x73, 0x74, 0x3a, 0x43, 0x0a, 0x0c, 0x6f, 0x61, 0x75, 0x74, 0x68, 0x5f, 0x73,
- 0x63, 0x6f, 0x70, 0x65, 0x73, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70,
- 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x4f,
- 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9a, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6f,
- 0x61, 0x75, 0x74, 0x68, 0x53, 0x63, 0x6f, 0x70, 0x65, 0x73, 0x3a, 0x44, 0x0a, 0x0b, 0x61, 0x70,
- 0x69, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67,
- 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76,
- 0x69, 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0xc1, 0xba, 0xab, 0xfa, 0x01,
- 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x61, 0x70, 0x69, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e,
- 0x42, 0x69, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61,
- 0x70, 0x69, 0x42, 0x0b, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50,
- 0x01, 0x5a, 0x41, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67,
- 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f,
- 0x6f, 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e,
- 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x3b, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74,
- 0x69, 0x6f, 0x6e, 0x73, 0xa2, 0x02, 0x04, 0x47, 0x41, 0x50, 0x49, 0x62, 0x06, 0x70, 0x72, 0x6f,
- 0x74, 0x6f, 0x33,
+ 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x10,
+ 0x69, 0x6e, 0x69, 0x74, 0x69, 0x61, 0x6c, 0x50, 0x6f, 0x6c, 0x6c, 0x44, 0x65, 0x6c, 0x61, 0x79,
+ 0x12, 0x32, 0x0a, 0x15, 0x70, 0x6f, 0x6c, 0x6c, 0x5f, 0x64, 0x65, 0x6c, 0x61, 0x79, 0x5f, 0x6d,
+ 0x75, 0x6c, 0x74, 0x69, 0x70, 0x6c, 0x69, 0x65, 0x72, 0x18, 0x02, 0x20, 0x01, 0x28, 0x02, 0x52,
+ 0x13, 0x70, 0x6f, 0x6c, 0x6c, 0x44, 0x65, 0x6c, 0x61, 0x79, 0x4d, 0x75, 0x6c, 0x74, 0x69, 0x70,
+ 0x6c, 0x69, 0x65, 0x72, 0x12, 0x3f, 0x0a, 0x0e, 0x6d, 0x61, 0x78, 0x5f, 0x70, 0x6f, 0x6c, 0x6c,
+ 0x5f, 0x64, 0x65, 0x6c, 0x61, 0x79, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67,
+ 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44,
+ 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0c, 0x6d, 0x61, 0x78, 0x50, 0x6f, 0x6c, 0x6c,
+ 0x44, 0x65, 0x6c, 0x61, 0x79, 0x12, 0x47, 0x0a, 0x12, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x5f, 0x70,
+ 0x6f, 0x6c, 0x6c, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28,
+ 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
+ 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x10, 0x74, 0x6f,
+ 0x74, 0x61, 0x6c, 0x50, 0x6f, 0x6c, 0x6c, 0x54, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x22, 0x34,
+ 0x0a, 0x18, 0x53, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x76, 0x65, 0x47, 0x61, 0x70, 0x69, 0x63,
+ 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x65,
+ 0x74, 0x68, 0x6f, 0x64, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x07, 0x6d, 0x65, 0x74,
+ 0x68, 0x6f, 0x64, 0x73, 0x2a, 0xa3, 0x01, 0x0a, 0x19, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c,
+ 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x4f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74, 0x69,
+ 0x6f, 0x6e, 0x12, 0x2b, 0x0a, 0x27, 0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54, 0x5f, 0x4c, 0x49, 0x42,
+ 0x52, 0x41, 0x52, 0x59, 0x5f, 0x4f, 0x52, 0x47, 0x41, 0x4e, 0x49, 0x5a, 0x41, 0x54, 0x49, 0x4f,
+ 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12,
+ 0x09, 0x0a, 0x05, 0x43, 0x4c, 0x4f, 0x55, 0x44, 0x10, 0x01, 0x12, 0x07, 0x0a, 0x03, 0x41, 0x44,
+ 0x53, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x50, 0x48, 0x4f, 0x54, 0x4f, 0x53, 0x10, 0x03, 0x12,
+ 0x0f, 0x0a, 0x0b, 0x53, 0x54, 0x52, 0x45, 0x45, 0x54, 0x5f, 0x56, 0x49, 0x45, 0x57, 0x10, 0x04,
+ 0x12, 0x0c, 0x0a, 0x08, 0x53, 0x48, 0x4f, 0x50, 0x50, 0x49, 0x4e, 0x47, 0x10, 0x05, 0x12, 0x07,
+ 0x0a, 0x03, 0x47, 0x45, 0x4f, 0x10, 0x06, 0x12, 0x11, 0x0a, 0x0d, 0x47, 0x45, 0x4e, 0x45, 0x52,
+ 0x41, 0x54, 0x49, 0x56, 0x45, 0x5f, 0x41, 0x49, 0x10, 0x07, 0x2a, 0x67, 0x0a, 0x18, 0x43, 0x6c,
+ 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x44, 0x65, 0x73, 0x74, 0x69,
+ 0x6e, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2a, 0x0a, 0x26, 0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54,
+ 0x5f, 0x4c, 0x49, 0x42, 0x52, 0x41, 0x52, 0x59, 0x5f, 0x44, 0x45, 0x53, 0x54, 0x49, 0x4e, 0x41,
+ 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44,
+ 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x47, 0x49, 0x54, 0x48, 0x55, 0x42, 0x10, 0x0a, 0x12, 0x13,
+ 0x0a, 0x0f, 0x50, 0x41, 0x43, 0x4b, 0x41, 0x47, 0x45, 0x5f, 0x4d, 0x41, 0x4e, 0x41, 0x47, 0x45,
+ 0x52, 0x10, 0x14, 0x3a, 0x4a, 0x0a, 0x10, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x5f, 0x73, 0x69,
+ 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x12, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
+ 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64,
+ 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9b, 0x08, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0f,
+ 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x3a,
+ 0x43, 0x0a, 0x0c, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x5f, 0x68, 0x6f, 0x73, 0x74, 0x12,
+ 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
+ 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73,
+ 0x18, 0x99, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74,
+ 0x48, 0x6f, 0x73, 0x74, 0x3a, 0x43, 0x0a, 0x0c, 0x6f, 0x61, 0x75, 0x74, 0x68, 0x5f, 0x73, 0x63,
+ 0x6f, 0x70, 0x65, 0x73, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72,
+ 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x4f, 0x70,
+ 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9a, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6f, 0x61,
+ 0x75, 0x74, 0x68, 0x53, 0x63, 0x6f, 0x70, 0x65, 0x73, 0x3a, 0x44, 0x0a, 0x0b, 0x61, 0x70, 0x69,
+ 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69,
+ 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0xc1, 0xba, 0xab, 0xfa, 0x01, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x0a, 0x61, 0x70, 0x69, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x42,
+ 0x69, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70,
+ 0x69, 0x42, 0x0b, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01,
+ 0x5a, 0x41, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e,
+ 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e, 0x6f,
+ 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x3b, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69,
+ 0x6f, 0x6e, 0x73, 0xa2, 0x02, 0x04, 0x47, 0x41, 0x50, 0x49, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
+ 0x6f, 0x33,
}
var (
@@ -1675,7 +1788,7 @@ func file_google_api_client_proto_rawDescGZIP() []byte {
}
var file_google_api_client_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
-var file_google_api_client_proto_msgTypes = make([]protoimpl.MessageInfo, 17)
+var file_google_api_client_proto_msgTypes = make([]protoimpl.MessageInfo, 19)
var file_google_api_client_proto_goTypes = []interface{}{
(ClientLibraryOrganization)(0), // 0: google.api.ClientLibraryOrganization
(ClientLibraryDestination)(0), // 1: google.api.ClientLibraryDestination
@@ -1691,55 +1804,59 @@ var file_google_api_client_proto_goTypes = []interface{}{
(*RubySettings)(nil), // 11: google.api.RubySettings
(*GoSettings)(nil), // 12: google.api.GoSettings
(*MethodSettings)(nil), // 13: google.api.MethodSettings
- nil, // 14: google.api.JavaSettings.ServiceClassNamesEntry
- (*PythonSettings_ExperimentalFeatures)(nil), // 15: google.api.PythonSettings.ExperimentalFeatures
- nil, // 16: google.api.DotnetSettings.RenamedServicesEntry
- nil, // 17: google.api.DotnetSettings.RenamedResourcesEntry
- (*MethodSettings_LongRunning)(nil), // 18: google.api.MethodSettings.LongRunning
- (api.LaunchStage)(0), // 19: google.api.LaunchStage
- (*durationpb.Duration)(nil), // 20: google.protobuf.Duration
- (*descriptorpb.MethodOptions)(nil), // 21: google.protobuf.MethodOptions
- (*descriptorpb.ServiceOptions)(nil), // 22: google.protobuf.ServiceOptions
+ (*SelectiveGapicGeneration)(nil), // 14: google.api.SelectiveGapicGeneration
+ nil, // 15: google.api.JavaSettings.ServiceClassNamesEntry
+ (*PythonSettings_ExperimentalFeatures)(nil), // 16: google.api.PythonSettings.ExperimentalFeatures
+ nil, // 17: google.api.DotnetSettings.RenamedServicesEntry
+ nil, // 18: google.api.DotnetSettings.RenamedResourcesEntry
+ nil, // 19: google.api.GoSettings.RenamedServicesEntry
+ (*MethodSettings_LongRunning)(nil), // 20: google.api.MethodSettings.LongRunning
+ (api.LaunchStage)(0), // 21: google.api.LaunchStage
+ (*durationpb.Duration)(nil), // 22: google.protobuf.Duration
+ (*descriptorpb.MethodOptions)(nil), // 23: google.protobuf.MethodOptions
+ (*descriptorpb.ServiceOptions)(nil), // 24: google.protobuf.ServiceOptions
}
var file_google_api_client_proto_depIdxs = []int32{
1, // 0: google.api.CommonLanguageSettings.destinations:type_name -> google.api.ClientLibraryDestination
- 19, // 1: google.api.ClientLibrarySettings.launch_stage:type_name -> google.api.LaunchStage
- 5, // 2: google.api.ClientLibrarySettings.java_settings:type_name -> google.api.JavaSettings
- 6, // 3: google.api.ClientLibrarySettings.cpp_settings:type_name -> google.api.CppSettings
- 7, // 4: google.api.ClientLibrarySettings.php_settings:type_name -> google.api.PhpSettings
- 8, // 5: google.api.ClientLibrarySettings.python_settings:type_name -> google.api.PythonSettings
- 9, // 6: google.api.ClientLibrarySettings.node_settings:type_name -> google.api.NodeSettings
- 10, // 7: google.api.ClientLibrarySettings.dotnet_settings:type_name -> google.api.DotnetSettings
- 11, // 8: google.api.ClientLibrarySettings.ruby_settings:type_name -> google.api.RubySettings
- 12, // 9: google.api.ClientLibrarySettings.go_settings:type_name -> google.api.GoSettings
- 13, // 10: google.api.Publishing.method_settings:type_name -> google.api.MethodSettings
- 0, // 11: google.api.Publishing.organization:type_name -> google.api.ClientLibraryOrganization
- 3, // 12: google.api.Publishing.library_settings:type_name -> google.api.ClientLibrarySettings
- 14, // 13: google.api.JavaSettings.service_class_names:type_name -> google.api.JavaSettings.ServiceClassNamesEntry
- 2, // 14: google.api.JavaSettings.common:type_name -> google.api.CommonLanguageSettings
- 2, // 15: google.api.CppSettings.common:type_name -> google.api.CommonLanguageSettings
- 2, // 16: google.api.PhpSettings.common:type_name -> google.api.CommonLanguageSettings
- 2, // 17: google.api.PythonSettings.common:type_name -> google.api.CommonLanguageSettings
- 15, // 18: google.api.PythonSettings.experimental_features:type_name -> google.api.PythonSettings.ExperimentalFeatures
- 2, // 19: google.api.NodeSettings.common:type_name -> google.api.CommonLanguageSettings
- 2, // 20: google.api.DotnetSettings.common:type_name -> google.api.CommonLanguageSettings
- 16, // 21: google.api.DotnetSettings.renamed_services:type_name -> google.api.DotnetSettings.RenamedServicesEntry
- 17, // 22: google.api.DotnetSettings.renamed_resources:type_name -> google.api.DotnetSettings.RenamedResourcesEntry
- 2, // 23: google.api.RubySettings.common:type_name -> google.api.CommonLanguageSettings
- 2, // 24: google.api.GoSettings.common:type_name -> google.api.CommonLanguageSettings
- 18, // 25: google.api.MethodSettings.long_running:type_name -> google.api.MethodSettings.LongRunning
- 20, // 26: google.api.MethodSettings.LongRunning.initial_poll_delay:type_name -> google.protobuf.Duration
- 20, // 27: google.api.MethodSettings.LongRunning.max_poll_delay:type_name -> google.protobuf.Duration
- 20, // 28: google.api.MethodSettings.LongRunning.total_poll_timeout:type_name -> google.protobuf.Duration
- 21, // 29: google.api.method_signature:extendee -> google.protobuf.MethodOptions
- 22, // 30: google.api.default_host:extendee -> google.protobuf.ServiceOptions
- 22, // 31: google.api.oauth_scopes:extendee -> google.protobuf.ServiceOptions
- 22, // 32: google.api.api_version:extendee -> google.protobuf.ServiceOptions
- 33, // [33:33] is the sub-list for method output_type
- 33, // [33:33] is the sub-list for method input_type
- 33, // [33:33] is the sub-list for extension type_name
- 29, // [29:33] is the sub-list for extension extendee
- 0, // [0:29] is the sub-list for field type_name
+ 14, // 1: google.api.CommonLanguageSettings.selective_gapic_generation:type_name -> google.api.SelectiveGapicGeneration
+ 21, // 2: google.api.ClientLibrarySettings.launch_stage:type_name -> google.api.LaunchStage
+ 5, // 3: google.api.ClientLibrarySettings.java_settings:type_name -> google.api.JavaSettings
+ 6, // 4: google.api.ClientLibrarySettings.cpp_settings:type_name -> google.api.CppSettings
+ 7, // 5: google.api.ClientLibrarySettings.php_settings:type_name -> google.api.PhpSettings
+ 8, // 6: google.api.ClientLibrarySettings.python_settings:type_name -> google.api.PythonSettings
+ 9, // 7: google.api.ClientLibrarySettings.node_settings:type_name -> google.api.NodeSettings
+ 10, // 8: google.api.ClientLibrarySettings.dotnet_settings:type_name -> google.api.DotnetSettings
+ 11, // 9: google.api.ClientLibrarySettings.ruby_settings:type_name -> google.api.RubySettings
+ 12, // 10: google.api.ClientLibrarySettings.go_settings:type_name -> google.api.GoSettings
+ 13, // 11: google.api.Publishing.method_settings:type_name -> google.api.MethodSettings
+ 0, // 12: google.api.Publishing.organization:type_name -> google.api.ClientLibraryOrganization
+ 3, // 13: google.api.Publishing.library_settings:type_name -> google.api.ClientLibrarySettings
+ 15, // 14: google.api.JavaSettings.service_class_names:type_name -> google.api.JavaSettings.ServiceClassNamesEntry
+ 2, // 15: google.api.JavaSettings.common:type_name -> google.api.CommonLanguageSettings
+ 2, // 16: google.api.CppSettings.common:type_name -> google.api.CommonLanguageSettings
+ 2, // 17: google.api.PhpSettings.common:type_name -> google.api.CommonLanguageSettings
+ 2, // 18: google.api.PythonSettings.common:type_name -> google.api.CommonLanguageSettings
+ 16, // 19: google.api.PythonSettings.experimental_features:type_name -> google.api.PythonSettings.ExperimentalFeatures
+ 2, // 20: google.api.NodeSettings.common:type_name -> google.api.CommonLanguageSettings
+ 2, // 21: google.api.DotnetSettings.common:type_name -> google.api.CommonLanguageSettings
+ 17, // 22: google.api.DotnetSettings.renamed_services:type_name -> google.api.DotnetSettings.RenamedServicesEntry
+ 18, // 23: google.api.DotnetSettings.renamed_resources:type_name -> google.api.DotnetSettings.RenamedResourcesEntry
+ 2, // 24: google.api.RubySettings.common:type_name -> google.api.CommonLanguageSettings
+ 2, // 25: google.api.GoSettings.common:type_name -> google.api.CommonLanguageSettings
+ 19, // 26: google.api.GoSettings.renamed_services:type_name -> google.api.GoSettings.RenamedServicesEntry
+ 20, // 27: google.api.MethodSettings.long_running:type_name -> google.api.MethodSettings.LongRunning
+ 22, // 28: google.api.MethodSettings.LongRunning.initial_poll_delay:type_name -> google.protobuf.Duration
+ 22, // 29: google.api.MethodSettings.LongRunning.max_poll_delay:type_name -> google.protobuf.Duration
+ 22, // 30: google.api.MethodSettings.LongRunning.total_poll_timeout:type_name -> google.protobuf.Duration
+ 23, // 31: google.api.method_signature:extendee -> google.protobuf.MethodOptions
+ 24, // 32: google.api.default_host:extendee -> google.protobuf.ServiceOptions
+ 24, // 33: google.api.oauth_scopes:extendee -> google.protobuf.ServiceOptions
+ 24, // 34: google.api.api_version:extendee -> google.protobuf.ServiceOptions
+ 35, // [35:35] is the sub-list for method output_type
+ 35, // [35:35] is the sub-list for method input_type
+ 35, // [35:35] is the sub-list for extension type_name
+ 31, // [31:35] is the sub-list for extension extendee
+ 0, // [0:31] is the sub-list for field type_name
}
func init() { file_google_api_client_proto_init() }
@@ -1892,7 +2009,19 @@ func file_google_api_client_proto_init() {
return nil
}
}
- file_google_api_client_proto_msgTypes[13].Exporter = func(v interface{}, i int) interface{} {
+ file_google_api_client_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
+ switch v := v.(*SelectiveGapicGeneration); i {
+ case 0:
+ return &v.state
+ case 1:
+ return &v.sizeCache
+ case 2:
+ return &v.unknownFields
+ default:
+ return nil
+ }
+ }
+ file_google_api_client_proto_msgTypes[14].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PythonSettings_ExperimentalFeatures); i {
case 0:
return &v.state
@@ -1904,7 +2033,7 @@ func file_google_api_client_proto_init() {
return nil
}
}
- file_google_api_client_proto_msgTypes[16].Exporter = func(v interface{}, i int) interface{} {
+ file_google_api_client_proto_msgTypes[18].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*MethodSettings_LongRunning); i {
case 0:
return &v.state
@@ -1923,7 +2052,7 @@ func file_google_api_client_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_google_api_client_proto_rawDesc,
NumEnums: 2,
- NumMessages: 17,
+ NumMessages: 19,
NumExtensions: 4,
NumServices: 0,
},
diff --git a/vendor/google.golang.org/genproto/googleapis/api/metric/metric.pb.go b/vendor/google.golang.org/genproto/googleapis/api/metric/metric.pb.go
index d4b89c98d19b7..7f6e006cde312 100644
--- a/vendor/google.golang.org/genproto/googleapis/api/metric/metric.pb.go
+++ b/vendor/google.golang.org/genproto/googleapis/api/metric/metric.pb.go
@@ -172,6 +172,63 @@ func (MetricDescriptor_ValueType) EnumDescriptor() ([]byte, []int) {
return file_google_api_metric_proto_rawDescGZIP(), []int{0, 1}
}
+// The resource hierarchy level of the timeseries data of a metric.
+type MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel int32
+
+const (
+ // Do not use this default value.
+ MetricDescriptor_MetricDescriptorMetadata_TIME_SERIES_RESOURCE_HIERARCHY_LEVEL_UNSPECIFIED MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel = 0
+ // Scopes a metric to a project.
+ MetricDescriptor_MetricDescriptorMetadata_PROJECT MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel = 1
+ // Scopes a metric to an organization.
+ MetricDescriptor_MetricDescriptorMetadata_ORGANIZATION MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel = 2
+ // Scopes a metric to a folder.
+ MetricDescriptor_MetricDescriptorMetadata_FOLDER MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel = 3
+)
+
+// Enum value maps for MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel.
+var (
+ MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel_name = map[int32]string{
+ 0: "TIME_SERIES_RESOURCE_HIERARCHY_LEVEL_UNSPECIFIED",
+ 1: "PROJECT",
+ 2: "ORGANIZATION",
+ 3: "FOLDER",
+ }
+ MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel_value = map[string]int32{
+ "TIME_SERIES_RESOURCE_HIERARCHY_LEVEL_UNSPECIFIED": 0,
+ "PROJECT": 1,
+ "ORGANIZATION": 2,
+ "FOLDER": 3,
+ }
+)
+
+func (x MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel) Enum() *MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel {
+ p := new(MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel)
+ *p = x
+ return p
+}
+
+func (x MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel) String() string {
+ return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
+}
+
+func (MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel) Descriptor() protoreflect.EnumDescriptor {
+ return file_google_api_metric_proto_enumTypes[2].Descriptor()
+}
+
+func (MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel) Type() protoreflect.EnumType {
+ return &file_google_api_metric_proto_enumTypes[2]
+}
+
+func (x MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel) Number() protoreflect.EnumNumber {
+ return protoreflect.EnumNumber(x)
+}
+
+// Deprecated: Use MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel.Descriptor instead.
+func (MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel) EnumDescriptor() ([]byte, []int) {
+ return file_google_api_metric_proto_rawDescGZIP(), []int{0, 0, 0}
+}
+
// Defines a metric type and its schema. Once a metric descriptor is created,
// deleting or altering it stops data collection and makes the metric type's
// existing data unusable.
@@ -519,6 +576,8 @@ type MetricDescriptor_MetricDescriptorMetadata struct {
// age are guaranteed to be ingested and available to be read, excluding
// data loss due to errors.
IngestDelay *durationpb.Duration `protobuf:"bytes,3,opt,name=ingest_delay,json=ingestDelay,proto3" json:"ingest_delay,omitempty"`
+ // The scope of the timeseries data of the metric.
+ TimeSeriesResourceHierarchyLevel []MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel `protobuf:"varint,4,rep,packed,name=time_series_resource_hierarchy_level,json=timeSeriesResourceHierarchyLevel,proto3,enum=google.api.MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel" json:"time_series_resource_hierarchy_level,omitempty"`
}
func (x *MetricDescriptor_MetricDescriptorMetadata) Reset() {
@@ -575,6 +634,13 @@ func (x *MetricDescriptor_MetricDescriptorMetadata) GetIngestDelay() *durationpb
return nil
}
+func (x *MetricDescriptor_MetricDescriptorMetadata) GetTimeSeriesResourceHierarchyLevel() []MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel {
+ if x != nil {
+ return x.TimeSeriesResourceHierarchyLevel
+ }
+ return nil
+}
+
var File_google_api_metric_proto protoreflect.FileDescriptor
var file_google_api_metric_proto_rawDesc = []byte{
@@ -585,7 +651,7 @@ var file_google_api_metric_proto_rawDesc = []byte{
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x6c, 0x61, 0x75, 0x6e, 0x63, 0x68,
0x5f, 0x73, 0x74, 0x61, 0x67, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f,
0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75,
- 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xc1, 0x07, 0x0a,
+ 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xf0, 0x09, 0x0a,
0x10, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f,
0x72, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x08, 0x20,
@@ -620,7 +686,7 @@ var file_google_api_metric_proto_rawDesc = []byte{
0x6f, 0x72, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x74, 0x79,
0x70, 0x65, 0x73, 0x18, 0x0d, 0x20, 0x03, 0x28, 0x09, 0x52, 0x16, 0x6d, 0x6f, 0x6e, 0x69, 0x74,
0x6f, 0x72, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65,
- 0x73, 0x1a, 0xd8, 0x01, 0x0a, 0x18, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x44, 0x65, 0x73, 0x63,
+ 0x73, 0x1a, 0x87, 0x04, 0x0a, 0x18, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x44, 0x65, 0x73, 0x63,
0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x12, 0x3e,
0x0a, 0x0c, 0x6c, 0x61, 0x75, 0x6e, 0x63, 0x68, 0x5f, 0x73, 0x74, 0x61, 0x67, 0x65, 0x18, 0x01,
0x20, 0x01, 0x28, 0x0e, 0x32, 0x17, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70,
@@ -633,35 +699,54 @@ var file_google_api_metric_proto_rawDesc = []byte{
0x0a, 0x0c, 0x69, 0x6e, 0x67, 0x65, 0x73, 0x74, 0x5f, 0x64, 0x65, 0x6c, 0x61, 0x79, 0x18, 0x03,
0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52,
- 0x0b, 0x69, 0x6e, 0x67, 0x65, 0x73, 0x74, 0x44, 0x65, 0x6c, 0x61, 0x79, 0x22, 0x4f, 0x0a, 0x0a,
- 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x4b, 0x69, 0x6e, 0x64, 0x12, 0x1b, 0x0a, 0x17, 0x4d, 0x45,
- 0x54, 0x52, 0x49, 0x43, 0x5f, 0x4b, 0x49, 0x4e, 0x44, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43,
- 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x09, 0x0a, 0x05, 0x47, 0x41, 0x55, 0x47, 0x45,
- 0x10, 0x01, 0x12, 0x09, 0x0a, 0x05, 0x44, 0x45, 0x4c, 0x54, 0x41, 0x10, 0x02, 0x12, 0x0e, 0x0a,
- 0x0a, 0x43, 0x55, 0x4d, 0x55, 0x4c, 0x41, 0x54, 0x49, 0x56, 0x45, 0x10, 0x03, 0x22, 0x71, 0x0a,
- 0x09, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x1a, 0x0a, 0x16, 0x56, 0x41,
- 0x4c, 0x55, 0x45, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49,
- 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x08, 0x0a, 0x04, 0x42, 0x4f, 0x4f, 0x4c, 0x10, 0x01,
- 0x12, 0x09, 0x0a, 0x05, 0x49, 0x4e, 0x54, 0x36, 0x34, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x44,
- 0x4f, 0x55, 0x42, 0x4c, 0x45, 0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06, 0x53, 0x54, 0x52, 0x49, 0x4e,
- 0x47, 0x10, 0x04, 0x12, 0x10, 0x0a, 0x0c, 0x44, 0x49, 0x53, 0x54, 0x52, 0x49, 0x42, 0x55, 0x54,
- 0x49, 0x4f, 0x4e, 0x10, 0x05, 0x12, 0x09, 0x0a, 0x05, 0x4d, 0x4f, 0x4e, 0x45, 0x59, 0x10, 0x06,
- 0x22, 0x8f, 0x01, 0x0a, 0x06, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x12, 0x12, 0x0a, 0x04, 0x74,
- 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12,
- 0x36, 0x0a, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32,
- 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4d, 0x65, 0x74,
- 0x72, 0x69, 0x63, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52,
- 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x1a, 0x39, 0x0a, 0x0b, 0x4c, 0x61, 0x62, 0x65, 0x6c,
- 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20,
- 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75,
- 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02,
- 0x38, 0x01, 0x42, 0x5f, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
- 0x2e, 0x61, 0x70, 0x69, 0x42, 0x0b, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x50, 0x72, 0x6f, 0x74,
- 0x6f, 0x50, 0x01, 0x5a, 0x37, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61,
- 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f,
- 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x6d,
- 0x65, 0x74, 0x72, 0x69, 0x63, 0x3b, 0x6d, 0x65, 0x74, 0x72, 0x69, 0x63, 0xa2, 0x02, 0x04, 0x47,
- 0x41, 0x50, 0x49, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+ 0x0b, 0x69, 0x6e, 0x67, 0x65, 0x73, 0x74, 0x44, 0x65, 0x6c, 0x61, 0x79, 0x12, 0xa6, 0x01, 0x0a,
+ 0x24, 0x74, 0x69, 0x6d, 0x65, 0x5f, 0x73, 0x65, 0x72, 0x69, 0x65, 0x73, 0x5f, 0x72, 0x65, 0x73,
+ 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x68, 0x69, 0x65, 0x72, 0x61, 0x72, 0x63, 0x68, 0x79, 0x5f,
+ 0x6c, 0x65, 0x76, 0x65, 0x6c, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0e, 0x32, 0x56, 0x2e, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x44,
+ 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x2e, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63,
+ 0x44, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61,
+ 0x74, 0x61, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x53, 0x65, 0x72, 0x69, 0x65, 0x73, 0x52, 0x65, 0x73,
+ 0x6f, 0x75, 0x72, 0x63, 0x65, 0x48, 0x69, 0x65, 0x72, 0x61, 0x72, 0x63, 0x68, 0x79, 0x4c, 0x65,
+ 0x76, 0x65, 0x6c, 0x52, 0x20, 0x74, 0x69, 0x6d, 0x65, 0x53, 0x65, 0x72, 0x69, 0x65, 0x73, 0x52,
+ 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x48, 0x69, 0x65, 0x72, 0x61, 0x72, 0x63, 0x68, 0x79,
+ 0x4c, 0x65, 0x76, 0x65, 0x6c, 0x22, 0x83, 0x01, 0x0a, 0x20, 0x54, 0x69, 0x6d, 0x65, 0x53, 0x65,
+ 0x72, 0x69, 0x65, 0x73, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x48, 0x69, 0x65, 0x72,
+ 0x61, 0x72, 0x63, 0x68, 0x79, 0x4c, 0x65, 0x76, 0x65, 0x6c, 0x12, 0x34, 0x0a, 0x30, 0x54, 0x49,
+ 0x4d, 0x45, 0x5f, 0x53, 0x45, 0x52, 0x49, 0x45, 0x53, 0x5f, 0x52, 0x45, 0x53, 0x4f, 0x55, 0x52,
+ 0x43, 0x45, 0x5f, 0x48, 0x49, 0x45, 0x52, 0x41, 0x52, 0x43, 0x48, 0x59, 0x5f, 0x4c, 0x45, 0x56,
+ 0x45, 0x4c, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00,
+ 0x12, 0x0b, 0x0a, 0x07, 0x50, 0x52, 0x4f, 0x4a, 0x45, 0x43, 0x54, 0x10, 0x01, 0x12, 0x10, 0x0a,
+ 0x0c, 0x4f, 0x52, 0x47, 0x41, 0x4e, 0x49, 0x5a, 0x41, 0x54, 0x49, 0x4f, 0x4e, 0x10, 0x02, 0x12,
+ 0x0a, 0x0a, 0x06, 0x46, 0x4f, 0x4c, 0x44, 0x45, 0x52, 0x10, 0x03, 0x22, 0x4f, 0x0a, 0x0a, 0x4d,
+ 0x65, 0x74, 0x72, 0x69, 0x63, 0x4b, 0x69, 0x6e, 0x64, 0x12, 0x1b, 0x0a, 0x17, 0x4d, 0x45, 0x54,
+ 0x52, 0x49, 0x43, 0x5f, 0x4b, 0x49, 0x4e, 0x44, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49,
+ 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x09, 0x0a, 0x05, 0x47, 0x41, 0x55, 0x47, 0x45, 0x10,
+ 0x01, 0x12, 0x09, 0x0a, 0x05, 0x44, 0x45, 0x4c, 0x54, 0x41, 0x10, 0x02, 0x12, 0x0e, 0x0a, 0x0a,
+ 0x43, 0x55, 0x4d, 0x55, 0x4c, 0x41, 0x54, 0x49, 0x56, 0x45, 0x10, 0x03, 0x22, 0x71, 0x0a, 0x09,
+ 0x56, 0x61, 0x6c, 0x75, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x1a, 0x0a, 0x16, 0x56, 0x41, 0x4c,
+ 0x55, 0x45, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46,
+ 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x08, 0x0a, 0x04, 0x42, 0x4f, 0x4f, 0x4c, 0x10, 0x01, 0x12,
+ 0x09, 0x0a, 0x05, 0x49, 0x4e, 0x54, 0x36, 0x34, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x44, 0x4f,
+ 0x55, 0x42, 0x4c, 0x45, 0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06, 0x53, 0x54, 0x52, 0x49, 0x4e, 0x47,
+ 0x10, 0x04, 0x12, 0x10, 0x0a, 0x0c, 0x44, 0x49, 0x53, 0x54, 0x52, 0x49, 0x42, 0x55, 0x54, 0x49,
+ 0x4f, 0x4e, 0x10, 0x05, 0x12, 0x09, 0x0a, 0x05, 0x4d, 0x4f, 0x4e, 0x45, 0x59, 0x10, 0x06, 0x22,
+ 0x8f, 0x01, 0x0a, 0x06, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79,
+ 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x36,
+ 0x0a, 0x06, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1e,
+ 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x4d, 0x65, 0x74, 0x72,
+ 0x69, 0x63, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06,
+ 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x73, 0x1a, 0x39, 0x0a, 0x0b, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x73,
+ 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01,
+ 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
+ 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38,
+ 0x01, 0x42, 0x5f, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
+ 0x61, 0x70, 0x69, 0x42, 0x0b, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x50, 0x72, 0x6f, 0x74, 0x6f,
+ 0x50, 0x01, 0x5a, 0x37, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e,
+ 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67,
+ 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x6d, 0x65,
+ 0x74, 0x72, 0x69, 0x63, 0x3b, 0x6d, 0x65, 0x74, 0x72, 0x69, 0x63, 0xa2, 0x02, 0x04, 0x47, 0x41,
+ 0x50, 0x49, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -676,34 +761,36 @@ func file_google_api_metric_proto_rawDescGZIP() []byte {
return file_google_api_metric_proto_rawDescData
}
-var file_google_api_metric_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
+var file_google_api_metric_proto_enumTypes = make([]protoimpl.EnumInfo, 3)
var file_google_api_metric_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
var file_google_api_metric_proto_goTypes = []interface{}{
- (MetricDescriptor_MetricKind)(0), // 0: google.api.MetricDescriptor.MetricKind
- (MetricDescriptor_ValueType)(0), // 1: google.api.MetricDescriptor.ValueType
- (*MetricDescriptor)(nil), // 2: google.api.MetricDescriptor
- (*Metric)(nil), // 3: google.api.Metric
- (*MetricDescriptor_MetricDescriptorMetadata)(nil), // 4: google.api.MetricDescriptor.MetricDescriptorMetadata
- nil, // 5: google.api.Metric.LabelsEntry
- (*label.LabelDescriptor)(nil), // 6: google.api.LabelDescriptor
- (api.LaunchStage)(0), // 7: google.api.LaunchStage
- (*durationpb.Duration)(nil), // 8: google.protobuf.Duration
+ (MetricDescriptor_MetricKind)(0), // 0: google.api.MetricDescriptor.MetricKind
+ (MetricDescriptor_ValueType)(0), // 1: google.api.MetricDescriptor.ValueType
+ (MetricDescriptor_MetricDescriptorMetadata_TimeSeriesResourceHierarchyLevel)(0), // 2: google.api.MetricDescriptor.MetricDescriptorMetadata.TimeSeriesResourceHierarchyLevel
+ (*MetricDescriptor)(nil), // 3: google.api.MetricDescriptor
+ (*Metric)(nil), // 4: google.api.Metric
+ (*MetricDescriptor_MetricDescriptorMetadata)(nil), // 5: google.api.MetricDescriptor.MetricDescriptorMetadata
+ nil, // 6: google.api.Metric.LabelsEntry
+ (*label.LabelDescriptor)(nil), // 7: google.api.LabelDescriptor
+ (api.LaunchStage)(0), // 8: google.api.LaunchStage
+ (*durationpb.Duration)(nil), // 9: google.protobuf.Duration
}
var file_google_api_metric_proto_depIdxs = []int32{
- 6, // 0: google.api.MetricDescriptor.labels:type_name -> google.api.LabelDescriptor
- 0, // 1: google.api.MetricDescriptor.metric_kind:type_name -> google.api.MetricDescriptor.MetricKind
- 1, // 2: google.api.MetricDescriptor.value_type:type_name -> google.api.MetricDescriptor.ValueType
- 4, // 3: google.api.MetricDescriptor.metadata:type_name -> google.api.MetricDescriptor.MetricDescriptorMetadata
- 7, // 4: google.api.MetricDescriptor.launch_stage:type_name -> google.api.LaunchStage
- 5, // 5: google.api.Metric.labels:type_name -> google.api.Metric.LabelsEntry
- 7, // 6: google.api.MetricDescriptor.MetricDescriptorMetadata.launch_stage:type_name -> google.api.LaunchStage
- 8, // 7: google.api.MetricDescriptor.MetricDescriptorMetadata.sample_period:type_name -> google.protobuf.Duration
- 8, // 8: google.api.MetricDescriptor.MetricDescriptorMetadata.ingest_delay:type_name -> google.protobuf.Duration
- 9, // [9:9] is the sub-list for method output_type
- 9, // [9:9] is the sub-list for method input_type
- 9, // [9:9] is the sub-list for extension type_name
- 9, // [9:9] is the sub-list for extension extendee
- 0, // [0:9] is the sub-list for field type_name
+ 7, // 0: google.api.MetricDescriptor.labels:type_name -> google.api.LabelDescriptor
+ 0, // 1: google.api.MetricDescriptor.metric_kind:type_name -> google.api.MetricDescriptor.MetricKind
+ 1, // 2: google.api.MetricDescriptor.value_type:type_name -> google.api.MetricDescriptor.ValueType
+ 5, // 3: google.api.MetricDescriptor.metadata:type_name -> google.api.MetricDescriptor.MetricDescriptorMetadata
+ 8, // 4: google.api.MetricDescriptor.launch_stage:type_name -> google.api.LaunchStage
+ 6, // 5: google.api.Metric.labels:type_name -> google.api.Metric.LabelsEntry
+ 8, // 6: google.api.MetricDescriptor.MetricDescriptorMetadata.launch_stage:type_name -> google.api.LaunchStage
+ 9, // 7: google.api.MetricDescriptor.MetricDescriptorMetadata.sample_period:type_name -> google.protobuf.Duration
+ 9, // 8: google.api.MetricDescriptor.MetricDescriptorMetadata.ingest_delay:type_name -> google.protobuf.Duration
+ 2, // 9: google.api.MetricDescriptor.MetricDescriptorMetadata.time_series_resource_hierarchy_level:type_name -> google.api.MetricDescriptor.MetricDescriptorMetadata.TimeSeriesResourceHierarchyLevel
+ 10, // [10:10] is the sub-list for method output_type
+ 10, // [10:10] is the sub-list for method input_type
+ 10, // [10:10] is the sub-list for extension type_name
+ 10, // [10:10] is the sub-list for extension extendee
+ 0, // [0:10] is the sub-list for field type_name
}
func init() { file_google_api_metric_proto_init() }
@@ -754,7 +841,7 @@ func file_google_api_metric_proto_init() {
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_google_api_metric_proto_rawDesc,
- NumEnums: 2,
+ NumEnums: 3,
NumMessages: 4,
NumExtensions: 0,
NumServices: 0,
diff --git a/vendor/google.golang.org/genproto/googleapis/rpc/errdetails/error_details.pb.go b/vendor/google.golang.org/genproto/googleapis/rpc/errdetails/error_details.pb.go
index 3e5621827921e..3cd9a5bb8e62b 100644
--- a/vendor/google.golang.org/genproto/googleapis/rpc/errdetails/error_details.pb.go
+++ b/vendor/google.golang.org/genproto/googleapis/rpc/errdetails/error_details.pb.go
@@ -80,11 +80,12 @@ type ErrorInfo struct {
Domain string `protobuf:"bytes,2,opt,name=domain,proto3" json:"domain,omitempty"`
// Additional structured details about this error.
//
- // Keys should match /[a-zA-Z0-9-_]/ and be limited to 64 characters in
+ // Keys must match a regular expression of `[a-z][a-zA-Z0-9-_]+` but should
+ // ideally be lowerCamelCase. Also, they must be limited to 64 characters in
// length. When identifying the current value of an exceeded limit, the units
// should be contained in the key, not the value. For example, rather than
- // {"instanceLimit": "100/request"}, should be returned as,
- // {"instanceLimitPerRequest": "100"}, if the client exceeds the number of
+ // `{"instanceLimit": "100/request"}`, should be returned as,
+ // `{"instanceLimitPerRequest": "100"}`, if the client exceeds the number of
// instances that can be created in a single (batch) request.
Metadata map[string]string `protobuf:"bytes,3,rep,name=metadata,proto3" json:"metadata,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
}
@@ -870,6 +871,16 @@ type BadRequest_FieldViolation struct {
Field string `protobuf:"bytes,1,opt,name=field,proto3" json:"field,omitempty"`
// A description of why the request element is bad.
Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
+ // The reason of the field-level error. This is a constant value that
+ // identifies the proximate cause of the field-level error. It should
+ // uniquely identify the type of the FieldViolation within the scope of the
+ // google.rpc.ErrorInfo.domain. This should be at most 63
+ // characters and match a regular expression of `[A-Z][A-Z0-9_]+[A-Z0-9]`,
+ // which represents UPPER_SNAKE_CASE.
+ Reason string `protobuf:"bytes,3,opt,name=reason,proto3" json:"reason,omitempty"`
+ // Provides a localized error message for field-level errors that is safe to
+ // return to the API consumer.
+ LocalizedMessage *LocalizedMessage `protobuf:"bytes,4,opt,name=localized_message,json=localizedMessage,proto3" json:"localized_message,omitempty"`
}
func (x *BadRequest_FieldViolation) Reset() {
@@ -918,6 +929,20 @@ func (x *BadRequest_FieldViolation) GetDescription() string {
return ""
}
+func (x *BadRequest_FieldViolation) GetReason() string {
+ if x != nil {
+ return x.Reason
+ }
+ return ""
+}
+
+func (x *BadRequest_FieldViolation) GetLocalizedMessage() *LocalizedMessage {
+ if x != nil {
+ return x.LocalizedMessage
+ }
+ return nil
+}
+
// Describes a URL link.
type Help_Link struct {
state protoimpl.MessageState
@@ -1026,51 +1051,57 @@ var file_google_rpc_error_details_proto_rawDesc = []byte{
0x07, 0x73, 0x75, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07,
0x73, 0x75, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72,
0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65,
- 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0xa8, 0x01, 0x0a, 0x0a, 0x42, 0x61,
+ 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x8c, 0x02, 0x0a, 0x0a, 0x42, 0x61,
0x64, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x50, 0x0a, 0x10, 0x66, 0x69, 0x65, 0x6c,
0x64, 0x5f, 0x76, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x03,
0x28, 0x0b, 0x32, 0x25, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70, 0x63, 0x2e,
0x42, 0x61, 0x64, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64,
0x56, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x0f, 0x66, 0x69, 0x65, 0x6c, 0x64,
- 0x56, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x48, 0x0a, 0x0e, 0x46, 0x69,
- 0x65, 0x6c, 0x64, 0x56, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x14, 0x0a, 0x05,
- 0x66, 0x69, 0x65, 0x6c, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x66, 0x69, 0x65,
- 0x6c, 0x64, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f,
- 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70,
- 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x4f, 0x0a, 0x0b, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x49,
- 0x6e, 0x66, 0x6f, 0x12, 0x1d, 0x0a, 0x0a, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x5f, 0x69,
- 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
- 0x49, 0x64, 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x65, 0x72, 0x76, 0x69, 0x6e, 0x67, 0x5f, 0x64, 0x61,
- 0x74, 0x61, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x73, 0x65, 0x72, 0x76, 0x69, 0x6e,
- 0x67, 0x44, 0x61, 0x74, 0x61, 0x22, 0x90, 0x01, 0x0a, 0x0c, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72,
- 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x23, 0x0a, 0x0d, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72,
- 0x63, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x72,
- 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12, 0x23, 0x0a, 0x0d, 0x72,
- 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01,
- 0x28, 0x09, 0x52, 0x0c, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x4e, 0x61, 0x6d, 0x65,
- 0x12, 0x14, 0x0a, 0x05, 0x6f, 0x77, 0x6e, 0x65, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
- 0x05, 0x6f, 0x77, 0x6e, 0x65, 0x72, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69,
- 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65, 0x73,
- 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x6f, 0x0a, 0x04, 0x48, 0x65, 0x6c, 0x70,
- 0x12, 0x2b, 0x0a, 0x05, 0x6c, 0x69, 0x6e, 0x6b, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32,
- 0x15, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70, 0x63, 0x2e, 0x48, 0x65, 0x6c,
- 0x70, 0x2e, 0x4c, 0x69, 0x6e, 0x6b, 0x52, 0x05, 0x6c, 0x69, 0x6e, 0x6b, 0x73, 0x1a, 0x3a, 0x0a,
- 0x04, 0x4c, 0x69, 0x6e, 0x6b, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70,
- 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65, 0x73, 0x63,
- 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x10, 0x0a, 0x03, 0x75, 0x72, 0x6c, 0x18, 0x02,
- 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x75, 0x72, 0x6c, 0x22, 0x44, 0x0a, 0x10, 0x4c, 0x6f, 0x63,
- 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x16, 0x0a,
- 0x06, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x6c,
- 0x6f, 0x63, 0x61, 0x6c, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
- 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x42,
- 0x6c, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70,
- 0x63, 0x42, 0x11, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x44, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x50,
- 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67,
- 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f,
- 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x72, 0x70,
- 0x63, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x3b, 0x65, 0x72, 0x72,
- 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0xa2, 0x02, 0x03, 0x52, 0x50, 0x43, 0x62, 0x06, 0x70,
- 0x72, 0x6f, 0x74, 0x6f, 0x33,
+ 0x56, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0xab, 0x01, 0x0a, 0x0e, 0x46,
+ 0x69, 0x65, 0x6c, 0x64, 0x56, 0x69, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x14, 0x0a,
+ 0x05, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x66, 0x69,
+ 0x65, 0x6c, 0x64, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69,
+ 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69,
+ 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18,
+ 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x49, 0x0a,
+ 0x11, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x5f, 0x6d, 0x65, 0x73, 0x73, 0x61,
+ 0x67, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x65, 0x2e, 0x72, 0x70, 0x63, 0x2e, 0x4c, 0x6f, 0x63, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x4d,
+ 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x52, 0x10, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x69, 0x7a, 0x65,
+ 0x64, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x22, 0x4f, 0x0a, 0x0b, 0x52, 0x65, 0x71, 0x75,
+ 0x65, 0x73, 0x74, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x1d, 0x0a, 0x0a, 0x72, 0x65, 0x71, 0x75, 0x65,
+ 0x73, 0x74, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x72, 0x65, 0x71,
+ 0x75, 0x65, 0x73, 0x74, 0x49, 0x64, 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x65, 0x72, 0x76, 0x69, 0x6e,
+ 0x67, 0x5f, 0x64, 0x61, 0x74, 0x61, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x73, 0x65,
+ 0x72, 0x76, 0x69, 0x6e, 0x67, 0x44, 0x61, 0x74, 0x61, 0x22, 0x90, 0x01, 0x0a, 0x0c, 0x52, 0x65,
+ 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x23, 0x0a, 0x0d, 0x72, 0x65,
+ 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x0c, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x54, 0x79, 0x70, 0x65, 0x12,
+ 0x23, 0x0a, 0x0d, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65,
+ 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
+ 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x6f, 0x77, 0x6e, 0x65, 0x72, 0x18, 0x03, 0x20,
+ 0x01, 0x28, 0x09, 0x52, 0x05, 0x6f, 0x77, 0x6e, 0x65, 0x72, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65,
+ 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52,
+ 0x0b, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x22, 0x6f, 0x0a, 0x04,
+ 0x48, 0x65, 0x6c, 0x70, 0x12, 0x2b, 0x0a, 0x05, 0x6c, 0x69, 0x6e, 0x6b, 0x73, 0x18, 0x01, 0x20,
+ 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x72, 0x70, 0x63,
+ 0x2e, 0x48, 0x65, 0x6c, 0x70, 0x2e, 0x4c, 0x69, 0x6e, 0x6b, 0x52, 0x05, 0x6c, 0x69, 0x6e, 0x6b,
+ 0x73, 0x1a, 0x3a, 0x0a, 0x04, 0x4c, 0x69, 0x6e, 0x6b, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x65, 0x73,
+ 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b,
+ 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x10, 0x0a, 0x03, 0x75,
+ 0x72, 0x6c, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x75, 0x72, 0x6c, 0x22, 0x44, 0x0a,
+ 0x10, 0x4c, 0x6f, 0x63, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67,
+ 0x65, 0x12, 0x16, 0x0a, 0x06, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28,
+ 0x09, 0x52, 0x06, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x65, 0x73,
+ 0x73, 0x61, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73,
+ 0x61, 0x67, 0x65, 0x42, 0x6c, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x65, 0x2e, 0x72, 0x70, 0x63, 0x42, 0x11, 0x45, 0x72, 0x72, 0x6f, 0x72, 0x44, 0x65, 0x74, 0x61,
+ 0x69, 0x6c, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3f, 0x67, 0x6f, 0x6f, 0x67,
+ 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65,
+ 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69,
+ 0x73, 0x2f, 0x72, 0x70, 0x63, 0x2f, 0x65, 0x72, 0x72, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73,
+ 0x3b, 0x65, 0x72, 0x72, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0xa2, 0x02, 0x03, 0x52, 0x50,
+ 0x43, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -1111,11 +1142,12 @@ var file_google_rpc_error_details_proto_depIdxs = []int32{
12, // 3: google.rpc.PreconditionFailure.violations:type_name -> google.rpc.PreconditionFailure.Violation
13, // 4: google.rpc.BadRequest.field_violations:type_name -> google.rpc.BadRequest.FieldViolation
14, // 5: google.rpc.Help.links:type_name -> google.rpc.Help.Link
- 6, // [6:6] is the sub-list for method output_type
- 6, // [6:6] is the sub-list for method input_type
- 6, // [6:6] is the sub-list for extension type_name
- 6, // [6:6] is the sub-list for extension extendee
- 0, // [0:6] is the sub-list for field type_name
+ 9, // 6: google.rpc.BadRequest.FieldViolation.localized_message:type_name -> google.rpc.LocalizedMessage
+ 7, // [7:7] is the sub-list for method output_type
+ 7, // [7:7] is the sub-list for method input_type
+ 7, // [7:7] is the sub-list for extension type_name
+ 7, // [7:7] is the sub-list for extension extendee
+ 0, // [0:7] is the sub-list for field type_name
}
func init() { file_google_rpc_error_details_proto_init() }
diff --git a/vendor/modules.txt b/vendor/modules.txt
index fe83d65ad6796..71484d0b996bc 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -951,7 +951,7 @@ github.com/google/uuid
## explicit; go 1.19
github.com/googleapis/enterprise-certificate-proxy/client
github.com/googleapis/enterprise-certificate-proxy/client/util
-# github.com/googleapis/gax-go/v2 v2.14.0
+# github.com/googleapis/gax-go/v2 v2.14.1
## explicit; go 1.21
github.com/googleapis/gax-go/v2
github.com/googleapis/gax-go/v2/apierror
@@ -1609,8 +1609,8 @@ github.com/sony/gobreaker/v2
# github.com/spaolacci/murmur3 v1.1.0
## explicit
github.com/spaolacci/murmur3
-# github.com/spf13/afero v1.11.0
-## explicit; go 1.19
+# github.com/spf13/afero v1.12.0
+## explicit; go 1.21
github.com/spf13/afero
github.com/spf13/afero/internal/common
github.com/spf13/afero/mem
@@ -2005,7 +2005,7 @@ golang.org/x/tools/internal/stdlib
golang.org/x/tools/internal/typeparams
golang.org/x/tools/internal/typesinternal
golang.org/x/tools/internal/versions
-# google.golang.org/api v0.214.0
+# google.golang.org/api v0.215.0
## explicit; go 1.21
google.golang.org/api/cloudresourcemanager/v1
google.golang.org/api/compute/v1
@@ -2031,7 +2031,7 @@ google.golang.org/genproto/googleapis/type/calendarperiod
google.golang.org/genproto/googleapis/type/date
google.golang.org/genproto/googleapis/type/expr
google.golang.org/genproto/protobuf/api
-# google.golang.org/genproto/googleapis/api v0.0.0-20241118233622-e639e219e697
+# google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576
## explicit; go 1.21
google.golang.org/genproto/googleapis/api
google.golang.org/genproto/googleapis/api/annotations
@@ -2040,7 +2040,7 @@ google.golang.org/genproto/googleapis/api/expr/v1alpha1
google.golang.org/genproto/googleapis/api/label
google.golang.org/genproto/googleapis/api/metric
google.golang.org/genproto/googleapis/api/monitoredres
-# google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576
+# google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8
## explicit; go 1.21
google.golang.org/genproto/googleapis/rpc/code
google.golang.org/genproto/googleapis/rpc/errdetails
update module github.com/spf13/afero to v1.12.0 (#15696)