POST https://api.usemeru.com/refine/v2/predict

Text/Code Completion

A prediction is how you run an inference request on a model that is supported by Meru. Information about parameters passed into OpenAI's models is taken directly from OpenAI's website. Use this POST command to complete a chunk of natural language or code.
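
A minimal Python sketch of the same call using the requests library; the endpoint, headers, and payload fields mirror the curl example further down, and text-davinci-003 is just one example model_id:

import requests

API_KEY = "YOUR_MERU_API_KEY"  # your Meru API key

# Submit a completion prediction (mirrors the curl request example below)
response = requests.post(
    "https://api.usemeru.com/refine/v2/predict",
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    json={
        "model_id": "text-davinci-003",
        "inputs": {"prompt": "how are you doing?"},
    },
)
result = response.json()

# The documented response nests the generated text under outputs.choices
print(result["outputs"]["choices"][0]["text"])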

Authentication

  • Name
    x-api-key
    Type
    string
    Description

    Your Meru API Key

Required Attributes

  • Name
    model_id
    Type
    string
    Description

Unique identifier for the model you want to run. You can find all the model_ids we support in the Models section on the right side. Copy and paste the identifier exactly as it is listed in the table.

  • Name
    prompt
    Type
    string
    Description

The prompt that will be used to generate additional text or an image, depending on the model you have chosen.

Optional Attributes

  • Name
    suffix
    Type
    string
    Description

    The suffix that comes after a completion of inserted text.

  • Name
    max_tokens
    Type
    integer
    Description

    The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).

  • Name
    temperature
    Type
    number
    Description

What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both (see the payload sketch after this list for an example).

  • Name
    top_p
    Type
    number
    Description

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

  • Name
    n
    Type
    integer
    Description

    How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

  • Name
    logprobs
    Type
    integer
    Description

Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.

  • Name
    stop
    Type
    string
    Description

    Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

  • Name
    echo
    Type
    boolean
    Description

Echo back the prompt in addition to the completion.

  • Name
    presence_penalty
    Type
    number
    Description

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

  • Name
    frequency_penalty
    Type
    number
    Description

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

  • Name
    best_of
    Type
    integer
    Description

    Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n.

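The request example below passes only the required prompt. A sketch of a payload that also sets some of the optional attributes is shown here; placing them inside the inputs object next to the prompt is an assumption, so adjust if your results differ:

# Hypothetical payload combining required and optional attributes.
# Placing the optional attributes inside "inputs" is an assumption.
payload = {
    "model_id": "text-davinci-003",
    "inputs": {
        "prompt": "Write a haiku about the ocean.",
        "max_tokens": 64,      # cap the length of the completion
        "temperature": 0.9,    # more creative sampling
        "n": 2,                # generate two candidate completions
        "stop": "\n\n",        # stop at the first blank line
    },
}
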
Request

POST
refine/v2/predict
curl https://api.usemeru.com/refine/v2/predict \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: ${API_KEY}' \
  -d '{
    "model_id": "text-davinci-003",
    "inputs": {
            "prompt": "how are you doing?"
    }
  }'

Response

    {
      "err_code": 0, 
      "id": JOB_ID, 
      "outputs": {
        "object": "text_completion", 
        "choices": [
          {
            "text": "\n\nI'm doing well, thank you for asking. How are you?", 
            "index": 0, 
            "logprobs": null, 
            "finish_reason": 
            "length"
          }
        ], 
        "usage": {
          "prompt_tokens": 5, 
          "completion_tokens": 16, 
          "total_tokens": 21
          }
        }, 
        "cost": 0.000378
    }

POST https://api.usemeru.com/refine/v2/predict

Text/Code Editing

Make a POST request to edit a chunk of text or code using one of the edit models. Information about parameters passed into OpenAI's edit models is taken directly from OpenAI's website.

Authentication

  • Name
    x-api-key
    Type
    string
    Description

    Your Meru API Key

Required Attributes

  • Name
    model_id
    Type
    string
    Description

Unique identifier for the model you want to run. You can find all the model_ids we support in the Models section on the right side. Copy and paste the identifier exactly as it is listed in the table.

  • Name
    instruction
    Type
    string
    Description

    The instruction that tells the model how to edit the prompt.

Optional Attributes

  • Name
    n
    Type
    integer
    Description

    (Defaults to 1) How many edits to generate for the input and instruction.

  • Name
    input
    Type
    string
    Description

    (Defaults to ") The input text to use as a starting point for the edit.

  • Name
    temperature
    Type
    number
    Description

(Defaults to 1) What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both (see the sketch after this list for an example).

  • Name
    top_p
    Type
    number
    Description

    (Defaults to 1) An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

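A Python sketch of the same edit call as the curl example below; as with completions, placing the optional attributes inside the inputs object is an assumption:

import requests

API_KEY = "YOUR_MERU_API_KEY"  # your Meru API key

# Submit an edit job (mirrors the curl example below; optional attribute
# placement inside "inputs" is an assumption)
response = requests.post(
    "https://api.usemeru.com/refine/v2/predict",
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    json={
        "model_id": "text-davinci-edit-001",
        "inputs": {
            "input": "What day of the wek is it?",
            "instruction": "Fix the spelling mistakes",
            "n": 1,
            "temperature": 0,
        },
    },
)
result = response.json()

# The documented response nests the edited text under outputs.choices
print(result["outputs"]["choices"][0]["text"])
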
Request

POST
refine/v2/predict/
curl https://api.usemeru.com/refine/v2/predict \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: ${API_KEY}' \
  -d '{
    "model_id": "text-davinci-edit-001",
    "inputs": {
        "input":"What day of the wek is it?",
        "instruction":"Fix the spelling mistakes"
    }
  }'

Response

{
  "err_code": 0, 
  "id": JOB_ID, 
  "outputs": {
    "object": "edit", 
    "choices": [
      {
        "text": "What day of the week is it?\n",
        "index": 0
      }
    ], 
    "usage": {
      "prompt_tokens": 25,
      "completion_tokens": 28,
      "total_tokens": 53
    }
  },
"cost": 0.000954
}

GET https://api.usemeru.com/refine/v2/predict/{job_id}

Retrieve Completion or Edit

Make a GET request to retrieve a previously completed prediction.
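
A minimal Python sketch of retrieving a finished job; the job_id is the id value returned by the original POST request:

import requests

API_KEY = "YOUR_MERU_API_KEY"  # your Meru API key
job_id = "JOB_ID"              # the "id" value returned by the POST request

# Fetch the stored result of a previously submitted completion or edit
response = requests.get(
    f"https://api.usemeru.com/refine/v2/predict/{job_id}",
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
)
result = response.json()
print(result["outputs"]["choices"][0]["text"])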

Authentication

  • Name
    x-api-key
    Type
    string
    Description

    Your Meru API Key

Request

GET
refine/v2/predict/{job_id}
curl https://api.usemeru.com/refine/v2/predict/{job_id} \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: ${API_KEY}'

Response

{
  "err_code": 0, 
  "id": JOB_ID, 
  "outputs": {
    "object": "edit", 
    "choices": [
      {
        "text": "What day of the week is it?\n",
        "index": 0
      }
    ], 
    "usage": {
      "prompt_tokens": 25,
      "completion_tokens": 28,
      "total_tokens": 53
    }
  },
"cost": 0.000954
}

POST https://api.usemeru.com/refine/v2/predict

Generate Image

Submit a POST request to generate an image with Stable Diffusion. The response does not contain the image itself; retrieve it with the GET request described in the next section. A Python sketch follows the attribute lists below.

Authentication

  • Name
    x-api-key
    Type
    string
    Description

    Your Meru API Key

Required Attributes

  • Name
    prompt
    Type
    string
    Description

    Prompt for image generation.

Optional Attributes

  • Name
    num_samples
    Type
    integer
    Description

(Default: 1). Number of samples (separate images) to generate. Maximum is 8.

  • Name
    guidance_scale
    Type
    number
    Description

    (Default: 7.5). Higher values place more weight on the prompt.

  • Name
    seed
    Type
    integer
    Description

    (Default: 42). Seed for generation. Stick to a static seed to see similar results across inference runs. To randomize, set seed to -1.

  • Name
    num_inference_steps
    Type
    integer
    Description

    (Default: 50) Number of steps to run inference for. We don't recommend going beyond 100 steps.

  • Name
    height
    Type
    integer
    Description

    (Default: 512) Image height in pixels. Note: Total image size (width multiplied by height) must be less than 786432 pixels.

  • Name
    width
    Type
    integer
    Description

    (Default: 512) Image width in pixels. Note: Total image size (width multiplied by height) must be less than 786432 pixels.

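A Python sketch of submitting a generation job with some of the optional attributes; placing them inside the inputs object next to the prompt is an assumption. The POST only returns a job id, which you then pass to the retrieval endpoint in the next section:

import requests

API_KEY = "YOUR_MERU_API_KEY"  # your Meru API key

# Submit an image generation job (optional attribute placement inside
# "inputs" is an assumption)
response = requests.post(
    "https://api.usemeru.com/refine/v2/predict",
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    json={
        "model_id": "stability-diffusion-v1-5",
        "inputs": {
            "prompt": "A picture of a cheesecake with happy birthday written on it",
            "num_samples": 2,   # generate two separate images
            "seed": -1,         # randomize the seed
            "height": 512,      # width * height must stay under 786432 pixels
            "width": 512,
        },
    },
)
job = response.json()
job_id = job["id"]  # keep this id to retrieve the images later
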
Request

POST
refine/v2/predict/
curl https://api.usemeru.com/refine/v2/predict \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: ${API_KEY}' \
  -d '{
    "model_id": "stability-diffusion-v1-5",
    "inputs": {
        "prompt" : "A picture of a cheescake with happy birthday written on it"
    }
  }'

Response

  {
    "err_code": 0,
    "id": JOB_ID, 
    "status_code": 1
  }

GET https://api.usemeru.com/refine/v2/predict/{job_id}

Retrieve Image

Make a GET request to retrieve a set of images from a completed generation job.
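
A Python sketch of fetching the generated images once the job has finished; the key names follow the response example below, and treating image_urls as a plain list of URLs is an assumption:

import requests

API_KEY = "YOUR_MERU_API_KEY"  # your Meru API key
job_id = "JOB_ID"              # the "id" value returned when the job was submitted

# Retrieve the finished generation job and save its images to disk
response = requests.get(
    f"https://api.usemeru.com/refine/v2/predict/{job_id}",
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
)
outputs = response.json()["outputs"]

# "image_urls" is documented as a list of image URLs (assumption: plain list)
for i, url in enumerate(outputs["image_urls"]):
    image = requests.get(url)
    with open(f"image_{i}.png", "wb") as f:
        f.write(image.content)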

Authentication

  • Name
    x-api-key
    Type
    string
    Description

    Your Meru API Key

Request

GET
refine/v2/predict/{job_id}
curl https://api.usemeru.com/refine/v2/predict/{job_id} \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: ${API_KEY}'

Response

{
  "err_code": 0, 
  "id": JOB_ID, 
  "outputs": {
    "infer_urls_upscaled": [LIST_OF_IMAGE_URLS], 
    "image_urls": [LIST_OF_IMAGE_URLS]
  },
  "cost": "0.005152512450730734"
}