|
| [Fine-tuning and function calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning-functions?WT.mc_id=academic-105485-koreyst) | Fine-tuning your model with **function calling examples** can improve its output, yielding more accurate and consistently formatted responses along with cost savings. |
| [Fine-tuning Models: Azure OpenAI Guidance](https://learn.microsoft.com/azure/ai-services/openai/concepts/models#fine-tuning-models?WT.mc_id=academic-105485-koreyst) | Check this table to learn **which models can be fine-tuned in Azure OpenAI** and the regions where they are available. If needed, look up their token limits and training data expiration dates. |
| [To Fine Tune or Not To Fine Tune? That is the Question](https://learn.microsoft.com/shows/ai-show/to-fine-tune-or-not-fine-tune-that-is-the-question?WT.mc_id=academic-105485-koreyst) | This 30-minute AI Show episode from **October 2023** discusses the pros, cons, and practical insights to help you make that decision. |
| [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter tuning, and the challenges/limitations you should be aware of. |
| **Tutorial**: [Azure OpenAI GPT3.5 Turbo Fine-Tuning](https://learn.microsoft.com/azure/ai-services/openai/tutorials/fine-tune?tabs=python%2Ccommand-line?WT.mc_id=academic-105485-koreyst) | Learn to create a sample fine-tuning dataset, prepare for fine-tuning, create a fine-tuning job, and deploy the fine-tuned model on Azure. A minimal job-creation sketch follows this table. |
| **Tutorial**: [Fine-tune a Llama 2 model in Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/how-to/fine-tune-model-llama?WT.mc_id=academic-105485-koreyst) | Azure AI Studio lets you tailor large language models to your personal datasets through a UI-based workflow _suitable for low-code developers_. See this example. |
| **Tutorial**: [Fine-tune Hugging Face models for a single GPU on Azure](https://learn.microsoft.com/azure/databricks/machine-learning/train-model/huggingface/fine-tune-model?WT.mc_id=academic-105485-koreyst) | This article describes how to fine-tune a Hugging Face model on a single GPU with the Hugging Face transformers library and the Hugging Face Trainer on Azure Databricks. |
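
To make the GPT-3.5 Turbo tutorial above concrete, here is a minimal sketch of uploading a training file and creating a fine-tuning job with the `openai` Python SDK (v1) against Azure OpenAI. The environment variable names, `api_version`, file name, and model name are illustrative assumptions; follow the linked tutorial for the authoritative values for your deployment.

```python
# Minimal sketch: create an Azure OpenAI fine-tuning job (openai SDK v1).
# Endpoint/key env vars, api_version, file name, and model name are
# assumptions -- see the linked tutorial for the exact values.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed; use a version the tutorial lists
)

# Upload the JSONL training set (one chat example per line).
training_file = client.files.create(
    file=open("training_set.jsonl", "rb"), purpose="fine-tune"
)

# Kick off the fine-tuning job against a tunable base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-35-turbo-0613"
)
print(job.id, job.status)  # poll with client.fine_tuning.jobs.retrieve(job.id)
```

Once the job succeeds, the resulting fine-tuned model still has to be deployed as an Azure resource before it can serve requests; the tutorial covers that step.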
|
| :--- | :--- |
| **OpenAI Cookbook**: [Data preparation and analysis for chat model fine-tuning](https://cookbook.openai.com/examples/chat_finetuning_data_prep?WT.mc_id=academic-105485-koreyst) | This notebook preprocesses and analyzes the chat dataset used to fine-tune a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs; a miniature version of these checks is sketched after this table. See: [Fine-tuning method for gpt-3.5-turbo](https://platform.openai.com/docs/guides/fine-tuning?WT.mc_id=academic-105485-koreyst). |
| **OpenAI Cookbook**: [Fine-Tuning for Retrieval Augmented Generation (RAG) with Qdrant](https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant?WT.mc_id=academic-105485-koreyst) | This notebook walks through a comprehensive example of how to fine-tune an OpenAI model for Retrieval Augmented Generation (RAG). It also integrates Qdrant and few-shot learning to boost model performance and reduce fabrications. |
| **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is an AI developer platform with tools for training, fine-tuning, and building on foundation models. Read their [OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/openai-fine-tuning/?WT.mc_id=academic-105485-koreyst) guide first, then try the Cookbook exercise. |
| **Community Tutorial**: [Phinetuning 2.0](https://huggingface.co/blog/g-ronimo/phinetuning?WT.mc_id=academic-105485-koreyst) - fine-tuning for Small Language Models | Meet [Phi-2](https://www.microsoft.com/research/blog/phi-2-the-surprising-power-of-small-language-models/?WT.mc_id=academic-105485-koreyst), Microsoft's new small language model that is powerful yet compact. This guide walks you through fine-tuning Phi-2, showing how to build a unique dataset and fine-tune the model with QLoRA (a configuration sketch follows this table). |
| **Hugging Face Tutorial**: [How to Fine-Tune LLMs in 2024 with Hugging Face](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst) | This blog post walks you through fine-tuning open LLMs in 2024 with Hugging Face TRL, Transformers, and Datasets. You will define a use case, set up the development environment, prepare the dataset, fine-tune the model, test and evaluate it, and then deploy to production. |
| **Hugging Face**: [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced?WT.mc_id=academic-105485-koreyst) | Brings faster and easier training and deployment of [state-of-the-art machine learning models](https://twitter.com/abhi1thakur/status/1755167674894557291?WT.mc_id=academic-105485-koreyst). The repo has Colab-friendly guides and YouTube video walkthroughs for fine-tuning. **Reflects the recent [local-first](https://twitter.com/abhi1thakur/status/1750828141805777057?WT.mc_id=academic-105485-koreyst) update.** Read the [AutoTrain documentation](https://huggingface.co/autotrain?WT.mc_id=academic-105485-koreyst). |
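
The Cookbook data-preparation notebook above validates chat-format JSONL and estimates token counts; the sketch below reproduces that idea in miniature. The file name is a placeholder, the assumption that each line is a `{"messages": [...]}` record follows the chat fine-tuning format, and `cl100k_base` is the tokenizer family used by gpt-3.5-turbo-class models.

```python
# Miniature version of the Cookbook's checks: validate chat-format JSONL
# and estimate token counts with tiktoken. File name is a placeholder.
import json
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # gpt-3.5-turbo-class tokenizer

n_errors, token_counts = 0, []
with open("training_set.jsonl", encoding="utf-8") as f:
    for line in f:
        try:
            messages = json.loads(line)["messages"]
            # Each message must carry a known role and string content.
            assert all(
                m.get("role") in ("system", "user", "assistant")
                and isinstance(m.get("content"), str)
                for m in messages
            )
        except (json.JSONDecodeError, KeyError, AssertionError):
            n_errors += 1
            continue
        # Rough estimate: content tokens only (ignores per-message overhead).
        token_counts.append(sum(len(enc.encode(m["content"])) for m in messages))

print(f"{n_errors} malformed examples, {len(token_counts)} valid")
if token_counts:
    print(f"tokens/example: min={min(token_counts)}, max={max(token_counts)}, "
          f"mean={sum(token_counts) / len(token_counts):.0f}")
```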
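
The Phinetuning 2.0 guide above fine-tunes Phi-2 with QLoRA. As a rough sketch of what that setup looks like with `transformers` + `peft` + `bitsandbytes`: the hyperparameters and the `target_modules` list are assumptions (attention module names vary across transformers versions), so defer to the guide itself for working values.

```python
# Rough QLoRA setup sketch for Phi-2 (transformers + peft + bitsandbytes).
# r/alpha/dropout and target_modules are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights: the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed hyperparameters
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed module names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trained
```

The point of the technique is the last line: the 4-bit base model stays frozen while a few million adapter parameters train, which is what makes a 2.7B-parameter model tunable on a single consumer GPU.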
|