Feature description
Using open-source models with the Data Interpreter often fails because the model cannot reliably generate correctly formatted JSON. I tried adding `repair_llm_output: true` to `config2.yaml`, but it didn't work.
By checking the source code, I noticed that the repair logic lives in `ActionNode`, while my issue occurs in `WriteAnalysisCode`:
1. The model generated an incorrectly formatted JSON response, which was passed to `CodeParser`.
2. The parser's regex did not match the response.
3. The logger reported an error, and the parser returned the malformed text unchanged.
4. The malformed text was passed back to the action, where decoding failed at `reflection = json.loads(CodeParser.parse_code(block=None, text=rsp))`.
5. The repair mechanism was never triggered.
Therefore, it would be ideal to extend the repair functionality to the Data Interpreter as well.
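As an illustration of what such a fallback might look like, here is a minimal, self-contained sketch. It does not use MetaGPT's actual repair implementation; `parse_json_with_repair` is a hypothetical helper that first strips an optional markdown fence (standing in for `CodeParser.parse_code`), tries a strict `json.loads`, and only then applies a couple of repairs for mistakes open-source models commonly make (single quotes, trailing commas):

```python
import json
import re


def parse_json_with_repair(rsp: str) -> dict:
    """Hypothetical sketch: parse model output as JSON, repairing on failure."""
    # Strip a markdown code fence if the model wrapped the JSON in one.
    m = re.search(r"```(?:json)?\s*(.*?)```", rsp, re.DOTALL)
    candidate = (m.group(1) if m else rsp).strip()
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Common open-model mistakes: single quotes instead of double quotes,
        # and trailing commas before a closing brace or bracket.
        repaired = candidate.replace("'", '"')
        repaired = re.sub(r",\s*([}\]])", r"\1", repaired)
        return json.loads(repaired)


# Example: malformed output that strict json.loads would reject.
rsp = "```json\n{'reflection': 'fix the import', }\n```"
print(parse_json_with_repair(rsp))  # → {'reflection': 'fix the import'}
```

In the actual fix, this kind of fallback would need to hook into `WriteAnalysisCode`'s decoding path (and respect the `repair_llm_output` config flag) rather than live as a standalone function.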
Your Feature
Enable the Data Interpreter to post-process LLM output.