
Refresh a Source.

This starts a Job that populates the Corpus with chunks derived from the source's documents. The request fails if a job is already running for the source.

Path Parameters
    corpusId string required

    The ID of the corpus owning the source to refresh.

    sourceId string required

    The ID of the source to refresh.

Request Body required
    corpusId string required

    The ID of the corpus owning the source to refresh.

    sourceId string required

    The ID of the source to refresh.

    force boolean

    Typically refresh requests will return a 409 (conflict) if another job is currently running for a source. If "force" is set to true, any conflicting jobs will instead be cancelled and a new job will be started.
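The force behavior can be sketched as a retry-on-conflict helper. This is an illustrative sketch only: the endpoint path and the `post` callable are assumptions, not the service's actual client API.

```python
# Hypothetical sketch of the force-on-409 retry pattern described above.
# The URL path and the post() helper are assumptions for illustration;
# use the actual generated client and endpoint path in practice.

def refresh_source(post, corpus_id, source_id):
    """Refresh a source; on a 409 conflict, retry with force=true.

    `post` is any callable (url, json_body) -> (status_code, response_json).
    """
    url = f"/corpora/{corpus_id}/sources/{source_id}:refresh"  # hypothetical path
    body = {"corpusId": corpus_id, "sourceId": source_id}
    status, resp = post(url, body)
    if status == 409:
        # Another job is running; force cancels it and starts a new job.
        status, resp = post(url, {**body, "force": True})
    return status, resp
```

Note that the first attempt deliberately omits `force`, so a concurrently running job is only cancelled when the caller has opted into that behavior.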

Responses

OK


Schema
    job object

    A complete loading, processing, and embedding pipeline for a particular Source. A job can be started in one of several ways:

    1. When a new Source is created.
    2. Automatically based on the Source's JobSchedule.
    3. Manually by forcing a refresh.
    4. When a Document is updated or deleted.

    In all cases, the job's loading, processing, and embedding parameters are copied from the Source when the Job is created to ensure unambiguous behavior. In the case of a document being updated or deleted, no loading will occur. Instead, processing and embedding will be performed with a scope limited to the updated document, possibly spawning other jobs to update the document's children. (If an aggregation processing step requires other documents, they'll be read from storage but won't be altered.)

    corpusId string

    The corpus that this job belongs to.

    sourceId string

    The source that this job belongs to.

    jobId string

    The unique ID of this job.

    parentJobId string

    For document updates and deletions, each job may spawn children to update documents derived from the updated document. If this job is such a child, then this is the parent job ID. (The parent is owned by the same corpus and source.)

    state enum

    Possible values: [JOB_STATE_UNSPECIFIED, JOB_STATE_PENDING, JOB_STATE_RUNNING, JOB_STATE_COMPLETED, JOB_STATE_FAILED, JOB_STATE_CANCELLED]

    The current state of the job.
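Since `JOB_STATE_COMPLETED`, `JOB_STATE_FAILED`, and `JOB_STATE_CANCELLED` are the terminal values of this enum, a caller can poll until one of them is reached. The `get_job` callable below is a stand-in assumption for whatever client call fetches the job; a real poller would also sleep between attempts.

```python
# Sketch: poll a job until its state is terminal, using the enum values above.
# `get_job` is any callable returning the job object; the poll count is an
# illustrative bound, not a documented limit.

TERMINAL_STATES = {"JOB_STATE_COMPLETED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

def wait_for_job(get_job, max_polls=100):
    """Return the job once its state is terminal, or the last job seen."""
    job = None
    for _ in range(max_polls):
        job = get_job()  # a real implementation would sleep between polls
        if job["state"] in TERMINAL_STATES:
            return job
    return job
```

On `JOB_STATE_FAILED` or `JOB_STATE_CANCELLED`, the `errorMessage` field below describes what went wrong.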

    created date-time

    The timestamp that the job was requested.

    started date-time

    The timestamp that the job began.

    completed date-time

    The timestamp that the job completed.

    errorMessage string

    If the job failed (or was cancelled), the error message describing the failure.

    loadSpec object

    The LoadSpec used during this job. This is the LoadSpec defined for the Source at the time this job was created.

    Only one of the load_spec, updated_document_id, or deleted_document_id fields may be populated.

    maxDocuments int32

    The maximum number of documents to ingest. This cannot exceed 200 in general. If you need more documents in a single corpus, please contact the Fixie team.

    maxDocumentBytes int32

    The maximum size of an individual document in bytes. If unset, a reasonable default will be chosen by Fixie.

    relevantDocumentTypes object

    The types of documents to keep. Any documents surfaced during loading that don't match this filter will be discarded. If unset, all documents will be kept.

    include object

    Mime types must be in this set to be kept. Empty implies the universal set. That is, all mime types will be kept save those in the exclude set.

    mimeTypes string[]
    exclude object

    Mime types must not be in this set to be kept. Empty implies the empty set.

    mimeTypes string[]
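The include/exclude semantics above can be expressed as a single predicate: an empty include set keeps everything, and the exclude set always wins. This is a sketch of the stated rule, not the service's implementation.

```python
# Sketch of the relevantDocumentTypes filter semantics described above.

def mime_type_kept(mime_type, include=(), exclude=()):
    """Keep a mime type iff it passes the include set (empty = universal)
    and is absent from the exclude set (empty = nothing excluded)."""
    if include and mime_type not in include:
        return False
    return mime_type not in exclude
```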
    web object

    Allows loading documents by crawling the web.

    Only one of the web or static fields may be populated when creating a new source.

    startUrls string[] required

    The list of start URLs to crawl.

    maxDepth int32

    The maximum depth of links to traverse. If 0 (or unset), there will be no depth limit.

    includeGlobPatterns string[]

    A set of glob patterns matched against any additional discovered URLs. URLs matching these patterns will be included in the crawl, unless the URL matches any of the exclude_glob_patterns.

    excludeGlobPatterns string[]

    A set of glob patterns matched against any additional discovered URLs. URLs matching these patterns will be excluded from the crawl.
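The combined effect of the two pattern lists can be sketched as a predicate over discovered URLs. This sketch assumes shell-style glob matching (Python's `fnmatch`) and assumes an empty include list imposes no extra restriction; the service's exact glob dialect and defaults may differ.

```python
# Sketch of crawl URL filtering per includeGlobPatterns / excludeGlobPatterns.
from fnmatch import fnmatch

def url_included(url, include_globs=(), exclude_globs=()):
    """A discovered URL is crawled if it matches an include pattern
    and no exclude pattern. Exclusions take precedence."""
    if any(fnmatch(url, p) for p in exclude_globs):
        return False
    # Assumption: an empty include list means no additional restriction.
    return not include_globs or any(fnmatch(url, p) for p in include_globs)
```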

    static object

    Allows loading documents from a static source (e.g. a file upload).

    Only one of the web or static fields may be populated when creating a new source.

    documents object[] required

    The documents to load.

  • Array [
  • filename string required

    The filename of the document.

    mimeType string required

    The MIME type of the document.

    contents bytes required

    The contents of the document.

    metadata object

    The metadata to attach to this document.

    publicUrl string

    The public URL of the document, if any.

    language string

    The BCP47 language code of the document, if known.

    title string

    The title of the document, if known.

    description string

    The description of the document, if known.

    published date-time

    The timestamp that the document was published, if known.

  • ]
  • updatedDocumentId string

    The ID of the document that was updated whose direct children should be reprocessed and whose chunks should be recomputed as part of this job. (The document is owned by the same corpus and source as this job.)

    Only one of the load_spec, updated_document_id, or deleted_document_id fields may be populated.

    deletedDocumentId string

    The ID of the document to be deleted whose direct children and chunks should be deleted (or updated in the case of a child created by an aggregation processing step) as part of this job. (The document is owned by the same corpus and source as this job.)

    Only one of the load_spec, updated_document_id, or deleted_document_id fields may be populated.
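The mutual-exclusivity rule repeated above (at most one of `loadSpec`, `updatedDocumentId`, or `deletedDocumentId` populated) can be checked client-side before submitting a job. This validator is an illustrative sketch, not part of the API.

```python
# Sketch: enforce that at most one of the three mutually exclusive
# job-input fields is populated, per the schema notes above.

MUTUALLY_EXCLUSIVE = ("loadSpec", "updatedDocumentId", "deletedDocumentId")

def job_input_kind(job):
    """Return the single populated field name, or None if none is set.
    Raises ValueError if more than one is populated."""
    populated = [k for k in MUTUALLY_EXCLUSIVE if job.get(k)]
    if len(populated) > 1:
        raise ValueError(f"only one of {MUTUALLY_EXCLUSIVE} may be populated, got {populated}")
    return populated[0] if populated else None
```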

    processSteps object[]

    The ProcessSteps used during this job. These are the ProcessSteps defined for the Source at the time this job was created. In the case of an updated/deleted document child job, this may be a sublist of the Source's ProcessSteps.

  • Array [
  • stepName string required

    The human-readable name of the step.

    relevantDocumentTypes object

    The types of documents to which this step applies. Leave empty to apply to all documents.

    include object

    Mime types must be in this set to be kept. Empty implies the universal set. That is, all mime types will be kept save those in the exclude set.

    mimeTypes string[]
    exclude object

    Mime types must not be in this set to be kept. Empty implies the empty set.

    mimeTypes string[]
    htmlToMarkdown object

    Converts HTML documents to Markdown. Use with relevant_document_types set to include only text/html.

    unstructuredProcessor object

    Converts binary documents to plain text.

  • ]
  • chunkSpec object

    The ChunkSpec used during this job. This is the ChunkSpec defined for the Source at the time this job was created.

    inputSelector object

    The input documents that should be chunked. Only documents that correspond to UTF-8 encoded text can be chunked. Any other kind of document will fail.

    mimeTypeFilter object

    Filters documents based on their mime type.

    include object

    Mime types must be in this set to be kept. Empty implies the universal set. That is, all mime types will be kept save those in the exclude set.

    mimeTypes string[]
    exclude object

    Mime types must not be in this set to be kept. Empty implies the empty set.

    mimeTypes string[]
    originFilter object

    Filters documents based on their origin.

    origins object[]

    Document origins must match one of these to be kept.

  • Array [
  • load boolean
    processStep string
  • ]
  • chunkSize int32

    The desired chunk size for each chunk, in tokens. This is a strict maximum, as well as a target. Adjacent chunks will be combined if their total size is under this limit.
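The combining rule above amounts to a greedy merge of adjacent chunks while the total stays within `chunkSize`. This sketch works on token counts alone; real chunking also respects document structure, so treat it only as an illustration of the size rule.

```python
# Sketch of the adjacent-chunk combining rule described for chunkSize:
# merge neighbors greedily while the combined token count fits the limit.

def combine_chunks(token_counts, chunk_size):
    """Greedily merge adjacent chunk token counts; returns merged counts.
    chunk_size is both the target and a strict maximum per chunk."""
    merged = []
    for n in token_counts:
        if merged and merged[-1] + n <= chunk_size:
            merged[-1] += n  # combine with the previous chunk
        else:
            merged.append(n)
    return merged
```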

    maxChunksPerDocument int32

    The maximum number of chunks to produce for an individual document.

    maxChunksTotal int32

    The maximum number of chunks to produce in total. This cannot exceed 5000 in general. If you need more chunks in a single source, please contact the Fixie team.

    embedSteps object[]

    The EmbedSteps used during this job. These are the EmbedSteps defined for the Source at the time this job was created.

  • Array [
  • stepName string

    The human-readable name for this step.

    direct object

    Embeds chunks directly.

    parentChild object

    Embeds chunks using a parent-child strategy.

  • ]
  • loadResult object

    The results of loading.

    started date-time

    The timestamp at which loading began.

    completed date-time

    The timestamp at which loading completed.

    createdDocsCount int32

    The number of documents created.

    updatedDocsCount int32

    The number of documents that existed previously and were updated.

    unchangedDocsCount int32

    The number of documents that existed previously and were not modified.

    deletedDocsCount int32

    The number of documents deleted because they're no longer present in the source.

    sizeFilteredDocsCount int32

    The number of documents omitted due to content size.

    typeFilteredDocsCount int32

    The number of documents omitted due to mime type.

    processStepResults object[]

    The results of each processing step.

  • Array [
  • stepName string

    The step that produced these results.

    expectedOutputDocsCount int32

    The number of documents expected from this step prior to execution.

    started date-time

    The timestamp that this processing step began.

    completed date-time

    The timestamp that this processing step completed.

    producedDocsCount int32

    The total number of documents produced by this step.

    failedDocsCount int32

    The number of documents that failed to be processed.

    createdDocsCount int32

    The number of documents created.

    updatedDocsCount int32

    The number of documents that existed previously and were updated.

    unchangedDocsCount int32

    The number of documents that existed previously for which processing produced the same result as before.

    deletedDocsCount int32

    The number of documents deleted because they were produced by a previous run of this step but were not produced from the latest input.

  • ]
  • chunkResult object

    The results of chunking.

    started date-time

    The timestamp that chunking began.

    completed date-time

    The timestamp that chunking completed.

    expectedDocsCount int32

    The number of documents expected to be chunked prior to execution.

    successfulDocsCount int32

    The number of documents successfully chunked.

    failedDocsCount int32

    The number of documents that failed to be chunked.

    createdChunksCount int32

    The number of chunks created.

    unchangedChunksCount int32

    The number of chunks that were not modified.

    deletedChunksCount int32

    The number of chunks deleted.

    embedStepResults object[]

    The results of each embedding step.

  • Array [
  • stepName string

    The step that produced these results.

    started date-time

    The timestamp that this embedding step began.

    completed date-time

    The timestamp that this embedding step completed.

    expectedChunksCount int32

    The number of chunks expected to be embedded by this step prior to execution.

    successfulChunksCount int32

    The number of chunks successfully embedded.

    failedChunksCount int32

    The number of chunks that failed to be embedded.

    createdVectorsCount int32

    The number of vectors created.

    unchangedVectorsCount int32

    The number of vectors that were not modified.

    deletedVectorsCount int32

    The number of vectors deleted.

  • ]