
/AWS1/CL_QQCWEBCRAWLERCONF

The configuration details for the web data source.

CONSTRUCTOR

IMPORTING

Required arguments:

io_urlconfiguration TYPE REF TO /AWS1/CL_QQCURLCONFIGURATION

The configuration of the URL/URLs for the web content that you want to crawl. You should be authorized to crawl the URLs.

Optional arguments:

io_crawlerlimits TYPE REF TO /AWS1/CL_QQCWEBCRAWLERLIMITS

The configuration of crawl limits for the web URLs.

it_inclusionfilters TYPE /AWS1/CL_QQCURLFILTERLIST_W=>TT_URLFILTERLIST

A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

it_exclusionfilters TYPE /AWS1/CL_QQCURLFILTERLIST_W=>TT_URLFILTERLIST

A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

iv_scope TYPE /AWS1/QQCWEBSCOPETYPE

The scope of what is crawled for your URLs. You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL https://docs.aws.amazon.com/bedrock/latest/userguide/ and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain aws.amazon.com can also include the subdomain docs.aws.amazon.com.
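Putting the constructor signature together, a minimal sketch of assembling this configuration follows. The parameter names come from the signature above; the wrapper constructor parameter `iv_value`, the regex patterns, and the scope value `'HOST_ONLY'` are illustrative assumptions, and `lo_url_conf` / `lo_limits` stand for already-built instances of the referenced classes.

```abap
" Filter tables use the TT_URLFILTERLIST type from the signature above;
" the wrapper constructor parameter iv_value is an assumption.
DATA(lt_inclusion) = VALUE /aws1/cl_qqcurlfilterlist_w=>tt_urlfilterlist(
  ( NEW /aws1/cl_qqcurlfilterlist_w( iv_value = '.*/docs/.*' ) ) ).
DATA(lt_exclusion) = VALUE /aws1/cl_qqcurlfilterlist_w=>tt_urlfilterlist(
  ( NEW /aws1/cl_qqcurlfilterlist_w( iv_value = '.*/archive/.*' ) ) ).

" lo_url_conf and lo_limits are assumed, previously constructed instances of
" /AWS1/CL_QQCURLCONFIGURATION and /AWS1/CL_QQCWEBCRAWLERLIMITS.
DATA(lo_crawler_conf) = NEW /aws1/cl_qqcwebcrawlerconf(
  io_urlconfiguration = lo_url_conf        " required
  io_crawlerlimits    = lo_limits          " optional crawl limits
  it_inclusionfilters = lt_inclusion       " optional regex allow-list
  it_exclusionfilters = lt_exclusion       " optional regex deny-list
  iv_scope            = 'HOST_ONLY' ).     " assumed scope value
```

Because the exclusion filter takes precedence, a URL matching both tables above would not be crawled.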


Queryable Attributes

urlConfiguration

The configuration of the URL/URLs for the web content that you want to crawl. You should be authorized to crawl the URLs.

Accessible with the following methods

Method Description
GET_URLCONFIGURATION() Getter for URLCONFIGURATION

crawlerLimits

The configuration of crawl limits for the web URLs.

Accessible with the following methods

Method Description
GET_CRAWLERLIMITS() Getter for CRAWLERLIMITS

inclusionFilters

A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

Accessible with the following methods

Method Description
GET_INCLUSIONFILTERS() Getter for INCLUSIONFILTERS, with configurable default
ASK_INCLUSIONFILTERS() Getter for INCLUSIONFILTERS w/ exceptions if field has no value
HAS_INCLUSIONFILTERS() Determine if INCLUSIONFILTERS has a value

exclusionFilters

A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

Accessible with the following methods

Method Description
GET_EXCLUSIONFILTERS() Getter for EXCLUSIONFILTERS, with configurable default
ASK_EXCLUSIONFILTERS() Getter for EXCLUSIONFILTERS w/ exceptions if field has no value
HAS_EXCLUSIONFILTERS() Determine if EXCLUSIONFILTERS has a value

scope

The scope of what is crawled for your URLs. You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL https://docs.aws.amazon.com/bedrock/latest/userguide/ and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain aws.amazon.com can also include the subdomain docs.aws.amazon.com.

Accessible with the following methods

Method Description
GET_SCOPE() Getter for SCOPE, with configurable default
ASK_SCOPE() Getter for SCOPE w/ exceptions if field has no value
HAS_SCOPE() Determine if SCOPE has a value
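A hedged sketch of reading the optional attributes back through the HAS_/GET_/ASK_ pattern listed above; `lo_crawler_conf` is an assumed instance of this class, and the concrete exception class raised by the ASK_* getters is SDK-specific and not named in this reference, so it is caught generically here.

```abap
" Guard optional fields with HAS_* before reading them.
IF lo_crawler_conf->has_scope( ) = abap_true.
  DATA(lv_scope) = lo_crawler_conf->get_scope( ).
ENDIF.

" GET_* on a table-typed field returns the filter table; loop over entries.
LOOP AT lo_crawler_conf->get_inclusionfilters( ) INTO DATA(lo_filter).
  " lo_filter wraps one inclusion regular-expression pattern
ENDLOOP.

" ASK_* raises an exception when the field has no value.
TRY.
    DATA(lv_scope_strict) = lo_crawler_conf->ask_scope( ).
  CATCH cx_root INTO DATA(lx_missing).
    " field was unset; concrete exception class is SDK-specific
ENDTRY.
```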