Changelog

Here, you'll find a comprehensive list of updates, enhancements, and fixes that we've introduced to deliver the best possible experience to you.

Are you experiencing issues, or do you have a feature suggestion?

-> Report an issue or feature request

-> Here is our public roadmap

2023 Updates

October 2023

Our approach involves an iterative process through an easy-to-use interface: we start with a large, general model and minimal text samples, then progressively adapt a smaller, faster, more cost-effective, and highly capable NLP model tailored to the specific task at hand, automatically, as your models are used to make predictions.

-> New machine learning training pipeline and deployment: We've implemented our vision in this improved ML training pipeline and deployment to bring you lightning-fast performance and superior accuracy. We harness the power of LLMs to teach smaller models in an iterative process, beginning with larger models and gradually fine-tuning task-specific, smaller, and more cost-effective models for real-world applications. The entire process is automated, eliminating manual data labeling effort and technical concerns. Through distillation, quantization, and auto-labeling techniques, your NLP models will exhibit unparalleled robustness and speed.
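
To give a rough sense of the distillation idea, here's a simplified sketch of a teacher-student training loss in PyTorch. It is a conceptual illustration only: the framework, temperature, and loss weighting are illustrative choices, not a description of our production pipeline.

    # Conceptual sketch of knowledge distillation: a small "student" model learns
    # to match the softened output distribution of a larger "teacher" model.
    # Illustrative only; values and framework are assumptions, not our pipeline.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        """Blend cross-entropy on hard labels with a KL term that pushes
        the student toward the teacher's softened predictions."""
        # Soft targets from the teacher, softened by the temperature.
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        kd_term = F.kl_div(soft_student, soft_targets, reduction="batchmean")
        kd_term = kd_term * (temperature ** 2)
        # Standard supervised loss on the (auto-)labeled data.
        ce_term = F.cross_entropy(student_logits, labels)
        return alpha * kd_term + (1 - alpha) * ce_term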

-> Credit-based pricing model: We're introducing a credit-based pricing model. Our free plan stays, our Starter plan kicks off at just $18/month, and the PRO plan starts at $55/month. Now it's even easier to bring the power of language models to your team. Find out more about our pricing here.

Pricing page details: https://metatext.ai/pricing

-> Enhanced interface flow: A structured flow for model training, integration, and monitoring through the Train, Connect, and Monitor features. It's now even easier to annotate your data and get an NLP model trained and deployed automatically. Integrate it via the API or third-party apps, and monitor your predictions and model training versions.

-> More predictions with Bulk Inference: You can now run bulk inference for up to 50,000 predictions at a time. Note: predictions run in your browser, so performance depends on your device's capacity.

-> Train and predict larger text documents: Added support for up to 5,000 characters per record or prediction (approximately 1,000 to 1,450 words, or ~2,000 tokens), which can be used to train your language models or make predictions on new texts.
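
If you prepare data programmatically, a quick length check before upload helps you stay within this limit. The 5,000-character cap comes from the note above; the truncation fallback is just one possible approach.

    MAX_CHARS = 5000  # per-record limit noted above

    def fits_limit(text: str) -> bool:
        """Return True if the record can be used as-is for training or prediction."""
        return len(text) <= MAX_CHARS

    def truncate(text: str) -> str:
        """Naive fallback: keep only the first 5,000 characters of an oversized record."""
        return text[:MAX_CHARS]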

-> Projects API: Added the ability to retrieve project details via the API.
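
As a rough illustration, fetching project details over HTTP could look something like the sketch below. The endpoint path, auth header, and response fields shown are placeholders for illustration; refer to the API Reference for the actual request format.

    # Hypothetical sketch of fetching project details via the Projects API.
    # Endpoint path, auth header, and fields are illustrative assumptions;
    # consult the API Reference for the real contract.
    import requests

    API_KEY = "YOUR_API_KEY"          # placeholder credential
    PROJECT_ID = "your-project-id"    # placeholder identifier

    response = requests.get(
        f"https://api.metatext.ai/v1/projects/{PROJECT_ID}",  # assumed URL
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())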

-> Records API: Addressed a critical performance bottleneck that caused errors when retrieving records.

August 2023

-> Fixes to Model Training Status Updates: We've addressed an issue related to model training status updates. Now, you can expect a more accurate and timely reflection of your model's training progress. Our team has worked hard to enhance this aspect of the platform, ensuring that you stay well-informed about your model's training journey.

July 2023

-> Training parameters: This feature allows you to fine-tune your models like never before. With Training Parameters, you can adjust crucial model settings, simplified into three main parameters: language, model size, and epochs. This level of customization empowers you to optimize model performance for your specific use cases. Take control of your model training and elevate your AI solutions to new heights.

Check out the Training Parameters docs
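
To make the three parameters concrete, here's a hypothetical configuration. The field names and values are illustrative placeholders; the Training Parameters docs above describe the actual options.

    # Hypothetical example of the three training parameters exposed in the UI.
    # Field names and accepted values are assumptions for illustration only.
    training_parameters = {
        "language": "en",       # language of your training texts
        "model_size": "small",  # trade-off between speed/cost and accuracy
        "epochs": 5,            # number of passes over the training data
    }
    print(training_parameters)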

May 2023

-> Records API: We've added a powerful way to add data to your projects. The Records API is a programmatic way to integrate Metatext.AI into your workflows and train models automatically as you add, update, or delete text data.

Check out the API Reference
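
As a sketch, adding a record programmatically could look something like the example below. The endpoint path, auth header, and payload fields are illustrative placeholders; the API Reference above is the authoritative source.

    # Hypothetical sketch of adding a text record via the Records API.
    # Endpoint path, auth header, and payload fields are illustrative assumptions;
    # see the API Reference for the real request format.
    import requests

    API_KEY = "YOUR_API_KEY"
    PROJECT_ID = "your-project-id"

    record = {
        "text": "I love the new update, great job!",
        "label": "positive",  # optional: pre-annotated label, if you have one
    }

    response = requests.post(
        f"https://api.metatext.ai/v1/projects/{PROJECT_ID}/records",  # assumed URL
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=record,
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())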

March 2023

-> Model Deployment Artifact Update to Endpoint [Fix] Resolved an inconsistency between training and inference by automatically updating the model at the inference endpoint. Now, whether you're using the Playground, Batch, or API interface, the model's behavior remains aligned, ensuring consistent predictions for the same record.

-> Batch Inference [Feature] Introduced batch inference capability, allowing you to run up to 1,000 predictions at once and export the results to CSV. This feature streamlines large-scale inference tasks.

February 2023

-> Add Answering Model Type [Feature] Introducing the ability to build a custom ChatGPT with a knowledge base. You can now upload knowledge bases from various formats, including CSV, PDF, Docx, TXT, and HTML, enhancing your chatbot's capabilities.

-> Automatic Model Training [Performance] Streamlined the model training process with an asynchronous flow. Training is now triggered whenever you upload or annotate records, ensuring that your models are always up-to-date with the latest data.

-> Model Deployment Automatically to Endpoint [Performance] We've improved the deployment process by making the production-ready endpoint available as soon as model training is completed. This enhancement ensures a seamless transition from training to deployment.

January 2023

-> Annotation Mode: Data Labeling with Active Learning [Feature] Enhanced the annotation mode with AI assistance using active learning. This feature optimizes data labeling by selecting the best sample of records to annotate, ensuring faster and more accurate results.

-> New Interface [Feature] Unveiled a new user interface designed to enhance usability and flow. The updated interface offers a more user-friendly experience, making navigation and interaction smoother than ever before.

We're committed to continuously enhancing your experience with our product. Your feedback and suggestions are invaluable to us as we strive to provide a seamless and effective platform for your text analysis and transformation needs. Stay tuned for more updates, and please don't hesitate to reach out with any thoughts or questions!
