exceptionless/Exceptionless

Job integration

promising1 opened this issue · 11 comments

Hi,
I'm testing the Job service and was looking at your docker-compose file example for version 7.2.1 to run it locally. My understanding, since there's no information anywhere about this, is that the storage should be shared between the Web and the Job services. But in the docker-compose file, the Web app stores in Redis and the Job service stores locally (see excerpt below). Can you explain how to set these up properly?

Here are my settings:

Web Service:

ConnectionStrings:
  #Redis: server=""
  Redis: server="localhost,abortConnect=false"
  Elasticsearch: server=http://elastic:elastic@localhost:9200
  Cache: provider=redis
  MessageBus: provider=redis
  Queue: provider=redis
  Storage: provider=redis
  OAuth: "DcorBaseUri=https://localhost:7100"
  #Storage: provider=redis;path=..\..\DATA
  #LDAP: ''
  #Email: ""
...
# Base url for the ui used to build links in emails and other places.
BaseURL: 'http://localhost:7101/#!'

# Whether or not to run the jobs in process. Requires Redis to be configured when running jobs out of process.
RunJobsInProcess: false
AppScope: dev

Job Service:

ConnectionStrings:
  Redis: server="localhost,abortConnect=false"
  Elasticsearch: server=http://elastic:elastic@localhost:9200
  Cache: provider=redis
  MessageBus: provider=redis
  Queue: provider=redis
  #Storage: provider=redis #provider=folder;path=.\storage
  Storage: provider=folder;path=.\storage
  #Email: smtp://localhost:1025

# Base url for the ui used to build links in emails and other places.
BaseURL: "http://localhost:9001/#!"

#RunJobsInProcess: true
AppScope: dev

Hello,

You need to mount volumes so that both services share the same storage; please see our sample docker-compose file for more information: https://github.com/exceptionless/Exceptionless/blob/main/samples/docker-compose.yml
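The relevant part is that the api and jobs containers both mount the same named volume at the path the Storage connection string points to. A trimmed-down sketch of the idea (service names, image tags, and the /app/storage mount path below are illustrative; refer to the linked file for the exact values):

services:
  api:
    image: exceptionless/api:latest
    environment:
      ConnectionStrings__Storage: provider=folder;path=/app/storage
    volumes:
      - appdata:/app/storage

  jobs:
    image: exceptionless/job:latest
    environment:
      ConnectionStrings__Storage: provider=folder;path=/app/storage
    volumes:
      - appdata:/app/storage

volumes:
  appdata: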

Also, please be sure to reset your Azure credentials, as you included a connection string.

Ok, I was able to test with both the Web and Job services pointing to the same folder, and that worked. Now, if we want to use Redis as storage, is that possible?

No, we don't have a Redis storage provider.

Sorry if I'm reopening this ticket, but we've been testing for a couple of days now and still cannot get the logging to be stable.

Status:
We have an Elasticsearch service (managed) on Azure.

When testing locally, no problems are detected. The indexes are being allocated correctly, and the log events are being posted to Elastic by the Job service and properly read from the Web app. The issues happen when we deploy the API (Web) and Job services to Azure (with the settings below). That's where it starts acting up. At first everything seems ok, but then we start seeing error messages in the Web app (see below), and the organizations and projects disappear.

Error messages we get in the Web console after a while:

Elasticsearch.Net.ElasticsearchClientException: Request failed to execute. Call: Status code 400 from: 
POST /production-events-2023.01.18%2Cproduction-events-2023.01.19%2Cproduction-events-2023.01.20%2Cproduction-events-2023.01.21/_search?typed_keys=true&ignore_unavailable=true. 
ServerError:
    Type: search_phase_execution_exception
    Reason: "all shards failed"
    CausedBy:
        Type: illegal_argument_exception
        Reason: "Fielddata is disabled on [stack_id] in [production-events-2023.01.21]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [stack_id] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
        CausedBy:
            Type: illegal_argument_exception
            Reason: "Fielddata is disabled on [stack_id] in [production-events-2023.01.21]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [stack_id] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
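(For context: the error means the search is aggregating or sorting on stack_id, which this index has mapped as a text field; those operations have to target a keyword field instead. An illustrative query of that shape, not necessarily the one Exceptionless actually runs, would look like this:)

POST /production-events-2023.01.21/_search
{
  "size": 0,
  "aggs": {
    "stacks": {
      "terms": { "field": "stack_id.keyword" }
    }
  }
}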

Here's our setup:

Web App (API):

  • Azure Web App Service
  • Using an Azure Redis server
  • storage is linked to an Azure File Storage share (shared with Job)

`appsettings.json`:
---
ConnectionStrings:
  Elasticsearch: server="https://logging:ABCD@elastic.westus2.azure.elastic-cloud.com"
  Redis: server="redis.cache.windows.net:6380,password=ABCDR=,ssl=True,abortConnect=False"
  Cache: provider=redis
  MessageBus: provider=redis
  Queue: provider=redis
  Storage: provider=folder;path=.\storage

BaseURL: "https://logging.domain.com"
RunJobsInProcess: false
AppScope: production
EnableAccountCreation: true

Serilog:
  MinimumLevel:
    Default: Warning
  WriteTo:
    - Name: Console
      Args:
        theme: "Serilog.Sinks.SystemConsole.Themes.ConsoleTheme::None, Serilog.Sinks.Console"

Apm:
  ServiceEnvironment: prod

Job Service:

  • Azure Web App Service
  • storage linked to the same Azure File Storage share

`appsettings.yml`:

---
ConnectionStrings:
  Elasticsearch: server="https://logging:ABCD@elastic.westus2.azure.elastic-cloud.com"
  Redis: server="redis.cache.windows.net:6380,password=ABCDR=,ssl=True,abortConnect=False"
  Cache: provider=redis
  MessageBus: provider=redis
  Queue: provider=redis
  Storage: provider=folder;path=.\storage

BaseURL: "https://loggingjob.domain.com"
AppScope: production

Serilog:
  MinimumLevel:
    Default: Warning
  WriteTo:
  - Name: Console
    Args: 
      theme: "Serilog.Sinks.SystemConsole.Themes.ConsoleTheme::None, Serilog.Sinks.Console"

Apm:
  ServiceEnvironment: prod
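For reference, the file share is mounted into both App Services roughly like this with the Azure CLI (all resource names below are placeholders):

az webapp config storage-account add \
  --resource-group logging-rg \
  --name logging-api \
  --custom-id exceptionless-storage \
  --storage-type AzureFiles \
  --account-name loggingstorage \
  --share-name exceptionless \
  --access-key "<key>" \
  --mount-path /storage

# repeated with --name logging-job so both apps see the same files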

Does the following index exist? production-events-2023.01.21

Yes, the event index exists. Pardon my ignorance, but could it be a caching issue that causes a conflict in the indexes?

I doubt it would be a caching issue. I was thinking the alias had the index name. Can you check the Elasticsearch mappings of this index and see how they compare to the previous days that are working? Did you turn off the mapper-size plugin?
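A quick way to compare the mappings, assuming you have curl access to the cluster (the endpoint is a placeholder):

curl -s "https://<your-cluster>/production-events-2023.01.20/_mapping?pretty" > good.json
curl -s "https://<your-cluster>/production-events-2023.01.21/_mapping?pretty" > bad.json
diff good.json bad.json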

Hi @niemyjski,
Verified the mappings, and they are identical to the previous ones and to the newly created one from today (see below). Any suggestions on starting fresh, so that we can avoid queries against the previous indexes? Again, it's pretty weird that it only happens with the "production" settings; dev and stage seem to be working fine.

Mappings of production-events-2023.01.21

{
  "mappings": {
    "properties": {
      "created_utc": {
        "type": "date"
      },
      "data": {
        "properties": {
          "@submission_client": {
            "properties": {
              "ip_address": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "user_agent": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "version": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              }
            }
          }
        }
      },
      "date": {
        "type": "date"
      },
      "id": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "is_first_occurrence": {
        "type": "boolean"
      },
      "message": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "organization_id": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "project_id": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "stack_id": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "type": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}

@niemyjski, in trying to circumvent the issue and test further, we created new indexes via the AppScope setting, monitored the console logs, and found the following error:

Elasticsearch.Net.ElasticsearchClientException: Request failed to execute. Call: Status code 400 from: PUT /prod-events-v1-2023.01.23. 
    ServerError: 
        Type: mapper_parsing_exception 
        Reason: "Failed to parse mapping: Root mapping definition has unsupported parameters:  [_size : {enabled=true}]" 
        CausedBy: "Type: mapper_parsing_exception Reason: "Root mapping definition has unsupported parameters:  [_size : {enabled=true}]""

Hello,

You'll need to disable the mapper-size plugin and then delete these indices, as the mappings are invalid. #1215 (comment)
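On a self-hosted cluster that's roughly the following (the index name is taken from the error above; the endpoint is a placeholder; on Elastic Cloud the plugin is toggled in the deployment's settings rather than via the CLI):

# run on each Elasticsearch node, then restart it
bin/elasticsearch-plugin remove mapper-size

# delete the indices that were created with the invalid mapping
curl -X DELETE "https://<your-cluster>/prod-events-v1-2023.01.23"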

Ok, this seems to have fixed the issue. Thanks for the help, much appreciated!