Properties

evaluatorType: keyof EvaluatorType

The name of the evaluator to use, e.g. labeled_criteria or criteria.
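
For example, a config entry selecting the built-in criteria evaluator could look like the sketch below. The RunEvalConfig type and its evaluators field are assumptions based on the langchain/smith package.

import type { RunEvalConfig } from "langchain/smith";

// Sketch: select an evaluator by name. "criteria" is one of the evaluator
// types listed above; "conciseness" is a built-in criterion.
const evalConfig: RunEvalConfig = {
  evaluators: [
    {
      evaluatorType: "criteria",
      criteria: "conciseness",
    },
  ],
};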

agentTools?: StructuredToolInterface<ZodObject<any, any, any, any, {}>>[]

A list of tools available to the agent, used by TrajectoryEvalChain.
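
The sketch below passes the agent's tools to a trajectory evaluator. DynamicStructuredTool comes from @langchain/core/tools; the search tool is made up for illustration, and the "trajectory" evaluator type name is assumed from the TrajectoryEvalChain reference above.

import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// A made-up tool that the agent under evaluation can call.
const searchTool = new DynamicStructuredTool({
  name: "search",
  description: "Look up a term in a knowledge base.",
  schema: z.object({ query: z.string() }),
  func: async ({ query }) => `Results for ${query}`,
});

// Give the evaluator the same tools the agent had, so it can judge
// whether the agent used them sensibly.
const trajectoryEvaluator = {
  evaluatorType: "trajectory",
  agentTools: [searchTool],
};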

chainOptions?: Partial<Omit<LLMEvalChainInput<EvalOutputType, BaseLanguageModelInterface<any, BaseLanguageModelCallOptions>>, "llm">>

Options to pass through to the underlying evaluation chain; the llm is supplied separately.

criteria?: CriteriaLike

The criteria to use for the evaluator.
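
CriteriaLike accepts either a built-in criterion name (such as "conciseness" above) or a record mapping a custom criterion name to a plain-language description, as in this sketch:

// Sketch: a custom criterion. The evaluator's LLM grades the prediction
// against this description.
const customCriteriaEvaluator = {
  evaluatorType: "labeled_criteria",
  criteria: {
    citesSources: "Does the answer cite the reference material it relies on?",
  },
};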

distanceMetric?: EmbeddingDistanceType

The distance metric to use for comparing the embeddings.

The embedding objects to vectorize the outputs.
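
A sketch of an embedding-distance evaluator follows. The "embedding_distance" type name, the OpenAIEmbeddings stand-in, and the embedding property name are assumptions; the signature line for the embedding object is not shown on this page.

import { OpenAIEmbeddings } from "@langchain/openai";

const embeddingDistanceEvaluator = {
  evaluatorType: "embedding_distance",
  // Property name assumed: the embedding object used to vectorize the outputs.
  embedding: new OpenAIEmbeddings(),
  // Compare the resulting vectors with cosine distance.
  distanceMetric: "cosine",
};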

feedbackKey?: string

The feedback (or metric) name to use for the logged evaluation results. If none is provided, this defaults to the evaluationName.
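
For instance, two instances of the same evaluator can be given distinct feedback keys so their logged results do not collide; the metric names here are arbitrary.

// Sketch: distinguish two criteria evaluators by feedback key.
const evaluators = [
  {
    evaluatorType: "criteria",
    criteria: "conciseness",
    feedbackKey: "conciseness_score",
  },
  {
    evaluatorType: "criteria",
    criteria: "relevance",
    feedbackKey: "relevance_score",
  },
];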

prepareData?: ((data) => {
    prediction: unknown;
    input?: unknown;
    reference?: unknown;
})

Type declaration

    • (data): {
          prediction: unknown;
          input?: unknown;
          reference?: unknown;
      }
    • Convert the evaluation data into a format that can be used by the evaluator. By default, we pass the first value of run.inputs, run.outputs (the prediction), and example.outputs (the reference). If this is specified, it will override the prepareData function in the RunEvalConfig for this particular evaluator (see the sketch after this declaration).

      Parameters

      • data: {
            run: Run;
            reference_outputs?: Record<string, unknown>;
        }

        The data to prepare.

        • run: Run
        • Optional reference_outputs?: Record<string, unknown>

      Returns {
          prediction: unknown;
          input?: unknown;
          reference?: unknown;
      }

      The prepared data.

      • prediction: unknown
      • Optional input?: unknown
      • Optional reference?: unknown
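
A sketch of a prepareData override that selects named fields instead of the defaults. The "question" and "answer" keys are hypothetical, and the Run type is assumed to come from langsmith/schemas.

import type { Run } from "langsmith/schemas";

const qaCorrectnessEvaluator = {
  evaluatorType: "labeled_criteria",
  criteria: "correctness",
  // Pull specific keys out of the run and the reference outputs rather than
  // relying on the default "first value" behaviour.
  prepareData: (data: {
    run: Run;
    reference_outputs?: Record<string, unknown>;
  }) => ({
    input: data.run.inputs?.question,
    prediction: data.run.outputs?.answer,
    reference: data.reference_outputs?.answer,
  }),
};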
