Fine-tuning LLMs for financial applications
By Svetlana Borovkova, Head of Quantitative Modeling
In my previous column, I discussed how Large Language Models (LLMs) are being, and can be, applied in financial services. I also outlined several challenges associated with such applications, such as the lack of domain-specific financial knowledge, hallucinations, timeliness, and explainability. Fortunately, techniques for dealing with these challenges are rapidly emerging, and in this column I would like to address some of them.
Issues such as hallucinations (producing plausible-sounding but incorrect answers) and timeliness (LLMs being trained on data up to a specific point in time and therefore not ‘knowing’ anything that happened after that) can be dealt with using a technique called retrieval-augmented generation (RAG). This technique is rapidly becoming popular, both in commercial LLMs (such as ChatGPT) and in the development of in-house LLM applications.
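To make the idea concrete, the sketch below shows the core of a RAG pipeline in simplified Python. It is an illustration under strong assumptions, not a production implementation: the embed function is a toy bag-of-words embedding standing in for a real embedding model, the example documents are invented, and the final prompt would in practice be sent to an LLM rather than printed. The essential point is that the most relevant documents are retrieved by vector similarity and prepended to the question, so the model answers from supplied, up-to-date context instead of relying only on what it memorised during training.

import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-letters embedding, for illustration only.
    A real RAG system would use a learned embedding model."""
    vocab = "abcdefghijklmnopqrstuvwxyz"
    counts = np.array([text.lower().count(ch) for ch in vocab], dtype=float)
    norm = np.linalg.norm(counts)
    return counts / norm if norm > 0 else counts

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query embedding
    and return the top_k most relevant ones."""
    q = embed(query)
    scores = [float(np.dot(q, embed(doc))) for doc in documents]
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

def build_rag_prompt(query: str, documents: list[str], top_k: int = 3) -> str:
    """Prepend the retrieved context to the question; the resulting
    prompt is what would then be sent to the LLM for generation."""
    context = "\n\n".join(retrieve(query, documents, top_k))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Hypothetical example: the documents could be recent filings, news or internal reports.
docs = [
    "Q3 report: net interest income rose 12% year on year.",
    "The central bank left its policy rate unchanged in October.",
    "Legacy memo from 2015 on branch opening hours.",
]
print(build_rag_prompt("What happened to net interest income?", docs, top_k=2))

Because the retrieved documents can be refreshed at any time, this set-up addresses timeliness, and because the model is instructed to answer only from the supplied context, it also reduces the scope for hallucinations.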