
Commit 5b49380

Update fine-tuning resources in RESOURCES.md and translations, correcting URLs and enhancing descriptions for clarity and consistency.
1 parent: b7730f1

2 files changed: 3 additions & 3 deletions

18-fine-tuning/RESOURCES.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ The lesson was built using a number of core resources from OpenAI and Azure Open
 | [Fine-tuning and function calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning-functions?WT.mc_id=academic-105485-koreyst) | Fine-tuning your model **with function calling examples** can improve model output by getting more accurate and consistent outputs - with similarly-formatted responses & cost-savings |
 | [Fine-tuning Models: Azure OpenAI Guidance](https://learn.microsoft.com/azure/ai-services/openai/concepts/models#fine-tuning-models?WT.mc_id=academic-105485-koreyst) | Look up this table to understand **what models can be fine-tuned** in Azure OpenAI, and which regions these are available in. Look up their token limits and training data expiry dates if needed. |
 | [To Fine Tune or Not To Fine Tune? That is the Question](https://learn.microsoft.com/shows/ai-show/to-fine-tune-or-not-fine-tune-that-is-the-question?WT.mc_id=academic-105485-koreyst) | This 30-min **Oct 2023** episode of the AI Show discusses benefits, drawbacks and practical insights that help you make this decision. |
-| [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning and challenges/limitations you should know. |
+| [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning and challenges/limitations you should know. |
 | **Tutorial**: [Azure OpenAI GPT3.5 Turbo Fine-Tuning](https://learn.microsoft.com/azure/ai-services/openai/tutorials/fine-tune?tabs=python%2Ccommand-line?WT.mc_id=academic-105485-koreyst) | Learn to create a sample fine-tuning dataset, prepare for fine-tuning, create a fine-tuning job, and deploy the fine-tuned model on Azure. |
 | **Tutorial**: [Fine-tune a Llama 2 model in Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/how-to/fine-tune-model-llama?WT.mc_id=academic-105485-koreyst) | Azure AI Studio lets you tailor large language models to your personal datasets _using a UI-based workflow suitable for low-code developers_. See this example. |
 | **Tutorial**: [Fine-tune Hugging Face models for a single GPU on Azure](https://learn.microsoft.com/azure/databricks/machine-learning/train-model/huggingface/fine-tune-model?WT.mc_id=academic-105485-koreyst) | This article describes how to fine-tune a Hugging Face model with the Hugging Face transformers library on a single GPU, using Azure Databricks + the Hugging Face Trainer libraries. |
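Note: the GPT3.5 Turbo tutorial row above covers uploading a dataset, creating a fine-tuning job, and deploying the result. Below is a minimal sketch of the upload and job-creation steps, assuming the `openai` Python SDK (v1.x) against Azure OpenAI; the endpoint variables, API version, model name, and file name are illustrative placeholders, not values from this commit, and the linked tutorial remains the authoritative walkthrough.

```python
# Minimal sketch (not from this commit): upload a JSONL training file and
# create a fine-tuning job with the openai v1.x SDK against Azure OpenAI.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # assumed env var
    api_version="2024-02-01",                            # assumed API version
)

# Upload training data in the JSONL chat format expected by fine-tuning.
training_file = client.files.create(
    file=open("training_set.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# Create the job; poll until it finishes, then deploy the fine-tuned model
# (deployment is a separate Azure step covered by the tutorial).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0613",  # check the models table linked above for availability
)
print(job.id, job.status)
```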

18-fine-tuning/translations/tw/RESOURCES.md

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@
 | [Fine-tuning and function calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning-functions?WT.mc_id=academic-105485-koreyst) | Fine-tuning your model **with function calling examples** can improve model output by producing more accurate and consistent outputs - with similarly-formatted responses and cost savings |
 | [Fine-tuning Models: Azure OpenAI Guidance](https://learn.microsoft.com/azure/ai-services/openai/concepts/models#fine-tuning-models?WT.mc_id=academic-105485-koreyst) | Look up this table to understand **which models can be fine-tuned in Azure OpenAI** and which regions they are available in. Check their token limits and training data expiry dates if needed. |
 | [To Fine Tune or Not To Fine Tune? That is the Question](https://learn.microsoft.com/shows/ai-show/to-fine-tune-or-not-fine-tune-that-is-the-question?WT.mc_id=academic-105485-koreyst) | This 30-minute **Oct 2023** episode of the AI Show discusses the benefits, drawbacks, and practical insights that help you make this decision. |
-| [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning, and the challenges/limitations you should know about. |
+| [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks you through data requirements, formatting, hyperparameter fine-tuning, and the challenges/limitations you should know about. |
 | **Tutorial**: [Azure OpenAI GPT3.5 Turbo Fine-Tuning](https://learn.microsoft.com/azure/ai-services/openai/tutorials/fine-tune?tabs=python%2Ccommand-line?WT.mc_id=academic-105485-koreyst) | Learn to create a sample fine-tuning dataset, prepare for fine-tuning, create a fine-tuning job, and deploy the fine-tuned model on Azure. |
 | **Tutorial**: [Fine-tune a Llama 2 model in Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/how-to/fine-tune-model-llama?WT.mc_id=academic-105485-koreyst) | Azure AI Studio lets you tailor large language models to your own datasets _using a UI-based workflow suitable for low-code developers_. See this example. |
 | **Tutorial**: [Fine-tune Hugging Face models for a single GPU on Azure](https://learn.microsoft.com/azure/databricks/machine-learning/train-model/huggingface/fine-tune-model?WT.mc_id=academic-105485-koreyst) | This article describes how to fine-tune a Hugging Face model with the Hugging Face transformers library on a single GPU, using Azure Databricks + the Hugging Face Trainer libraries. |
@@ -30,7 +30,7 @@
 | :--- | :--- |
 | **OpenAI Cookbook**: [Data preparation and analysis for chat model fine-tuning](https://cookbook.openai.com/examples/chat_finetuning_data_prep?WT.mc_id=academic-105485-koreyst) | This notebook preprocesses and analyzes the chat dataset used to fine-tune a chat model. It checks for format errors, provides basic statistics, and estimates token counts for fine-tuning costs. See: [Fine-tuning method for gpt-3.5-turbo](https://platform.openai.com/docs/guides/fine-tuning?WT.mc_id=academic-105485-koreyst) |
 | **OpenAI Cookbook**: [Fine-Tuning for Retrieval Augmented Generation (RAG) with Qdrant](https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant?WT.mc_id=academic-105485-koreyst) | The aim of this notebook is to walk through a comprehensive example of fine-tuning OpenAI models for Retrieval Augmented Generation (RAG). It also integrates Qdrant and few-shot learning to boost model performance and reduce fabrications. |
-| **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is an AI developer platform providing tools for training, fine-tuning, and leveraging foundation models. Read their [OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/openai?WT.mc_id=academic-105485-koreyst) guide first, then try the Cookbook exercise. |
+| **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is an AI developer platform providing tools for training, fine-tuning, and leveraging foundation models. Read their [OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/openai-fine-tuning/?WT.mc_id=academic-105485-koreyst) guide first, then try the Cookbook exercise. |
 | **Community Tutorial** [Phinetuning 2.0](https://huggingface.co/blog/g-ronimo/phinetuning?WT.mc_id=academic-105485-koreyst) - fine-tuning for Small Language Models | Meet [Phi-2](https://www.microsoft.com/research/blog/phi-2-the-surprising-power-of-small-language-models/?WT.mc_id=academic-105485-koreyst), Microsoft's new small model that is both powerful and compact. This guide walks you through fine-tuning Phi-2, showing how to build a unique dataset and fine-tune the model with QLoRA. |
 | **Hugging Face Tutorial** [How to Fine-Tune LLMs in 2024 with Hugging Face](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst) | This blog post walks you through fine-tuning open LLMs in 2024 with Hugging Face TRL, Transformers, and datasets. You will define a use case, set up a development environment, prepare a dataset, fine-tune the model, test and evaluate it, and then deploy it to production. |
 | **Hugging Face: [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced?WT.mc_id=academic-105485-koreyst)** | Brings faster and easier training and deployment of [state-of-the-art machine learning models](https://twitter.com/abhi1thakur/status/1755167674894557291?WT.mc_id=academic-105485-koreyst). The repo has Colab-ready guides and YouTube video walkthroughs for fine-tuning. **Reflects the recent [local-first](https://twitter.com/abhi1thakur/status/1750828141805777057?WT.mc_id=academic-105485-koreyst) updates**. Read the [AutoTrain documentation](https://huggingface.co/autotrain?WT.mc_id=academic-105485-koreyst) |
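Note: the Cookbook data-preparation notebook in the hunk above validates chat-format fine-tuning data. As a companion sketch, this is the one-JSON-object-per-line chat format it checks; the messages and file name below are illustrative assumptions, not values from this commit.

```python
# Minimal sketch of the JSONL chat format validated by the Cookbook
# data-preparation notebook; contents are illustrative only.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Account > Reset password."},
        ]
    },
]

# Fine-tuning expects one JSON object per line, each with a "messages" list.
with open("chat_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```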
