Bring OpenAI’s ChatGPT model in Azure to your own enterprise-grade app experiences with precise control over the knowledge …
When someone has a long session, how does the Azure OpenAI service deal with token limits, given that it has to send the whole context, especially when previous responses are long?
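A common answer to this question, sketched below: the calling app trims older turns so the prompt stays under the model's context window. This is an illustrative sketch, not the demo's actual code; the chars/4 token estimate and the budget number are assumptions (a real app would count tokens with a tokenizer such as tiktoken).

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the most recent turns that fit the budget."""
    system, turns = messages[0], messages[1:]
    kept: list[dict] = []
    used = approx_tokens(system["content"])
    for msg in reversed(turns):          # walk from newest to oldest
        cost = approx_tokens(msg["content"])
        if used + cost > budget:
            break                         # older turns beyond this are dropped
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

history = [
    {"role": "system", "content": "You answer questions about company docs."},
    {"role": "user", "content": "What is our vacation policy?"},
    {"role": "assistant", "content": "Employees get 25 days per year. " * 40},
    {"role": "user", "content": "Does that include public holidays?"},
]
trimmed = trim_history(history, budget=120)
```

With this tight budget the long assistant reply is dropped, but the system prompt and the latest question always survive.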
Can I get it to point to a SQL table?
Thanks for sharing these invaluable tips and source code. Any experiment results on which approach (e.g. read-decompose-ask) to use where?
Can we integrate the Azure Bot Framework? What if I want to add an action to promote results? And is there any way we can do content moderation?
It does not work on Mac.
Is this app using the OpenAI ChatGPT app, or is it using the Azure OpenAI GPT model API?
Why call every LLM "ChatGPT"? Do you call every car a Model T?
The question I would have is this one: is the "private data" actually "protected"? In a chat, ChatGPT said that I should not share private information with it, because it cannot guarantee that the data "will not be used / made public or something".
Super!
OMG been looking for this information for 3 weeks. Thank you. Saw it before on another channel but it was very confusing compared to how this video explained things.
Chapters
00:00:00 – Introduction
00:00:18 – Overview of the Azure OpenAI service
00:01:23 – Applying ChatGPT to enterprise-grade applications on the Azure service
00:02:29 – Retrieval Augmented Generation
00:03:06 – Private Knowledge
00:03:32 – Using ChatGPT in an App
00:04:25 – Asking Questions in the App
00:05:49 – Exposing Details of Conversation Turns
00:06:31 – Injecting fragments of documents
00:06:46 – Different approaches for generating responses
00:08:14 – Adapting style of response
00:09:41 – How Information Protection Works
00:10:02 – Demonstration of Document-Level Granular Access Control
00:11:00 – Adding New Information into Search
00:11:20 – Running Scripts to Add New Information
00:12:04 – Code Behind Sample App
00:12:50 – Overview of ChatGPT
00:13:31 – Using Azure OpenAI Studio Playground
00:14:30 – Building Your Own Enterprise-grade ChatGPT-enabled App
Please give this service to Office 365 customers for free!
Can we train Azure GPT on our Confluence wiki?
How do we make sure that Microsoft is not taking our proprietary code/data in Cognitive Search?
Could you share the script to update the data and explain how it works?
Is there an automatic way to update the application by running azd deploy, or some other method?
It looks really interesting. What about using it to gather insights about structured data? Say, for a set of headlines: what is the top-performing headline (based upon summary data), what the CTA is, and how far above or below a benchmark it is? Basically gathering insights, through a guided process, from structured data?
Sorry, I didn't get it. Do I understand correctly that if we want to keep our data private, we need to keep it separate from the model, and only add pieces of information during the response generation process? If we start to teach the model our data, will it become public?
Thank you very much for this kind of content and the other stuff you do for the community (like GitHub repos with prompt templates and such).
I want to integrate this with Microsoft Teams across my enterprise. Can this be done, and if so, what is recommended? PVAT?
Honestly, the more I hear about this tech, the more I want to use it for worldbuilding and lore for games and storytelling because wooooow that seems like a good way to prevent ever making another characterization flub or timeline mistake ever again.
Well, it turns out that the chief technology baked into this solution is LangChain, despite it being mentioned nowhere in the video.
Question: if we have all our data on remote servers or in AWS, would it be difficult to use those data sources?
I'm so lost how to tailor this code to my needs. Is anyone here a software developer that can help me use this code? Thank you.
Doesn't the word "train" have a specific meaning in ML? The real question is not whether ChatGPT is "training" on corporate data, but whether it is saving prompts, and whether it can repeat that information to a different corporation. It isn't enough to say it doesn't train; we know the current training data was cut off in September 2021. So could you clarify: do you mean that ChatGPT doesn't use prompts to fine-tune the model? The bot itself certainly states it can save parameters and repeat them to other users, based on fine-tuning. And there was a leak a while ago where even the conversation history of other users was exposed by OpenAI. It's because the bot will also state it doesn't "train" on prompts that I want to clarify. After all, if data is leaked via the prompts users submit, then it doesn't matter if the word is "fine-tuning" instead of "training"; what is paramount is that private information was saved and shared.
ChatGPT makes some math mistakes; not all the answers are correct!
It's very good for OpenAI and Cognitive Search.
I got here from LangChain and custom embeddings for OpenAI.
We are trying to build a similar solution to enable conversational Q&A, but using Elasticsearch for indexing with embeddings.
1. How do you decide on the chunk size before indexing?
2. How different would the chunks retrieved via cosine similarity be compared with those from Cognitive Search?
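On question 1, a minimal sketch of the usual pre-indexing split: fixed-size windows with overlap, so a sentence cut at one boundary still appears whole in the neighboring chunk. The 500-character size and 100-character overlap here are illustrative assumptions, not the demo's actual settings; in practice these are tuned against retrieval quality on your own documents.

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping fixed-size windows for indexing."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap   # step forward, keeping `overlap` chars shared
    return chunks

doc = "Azure Cognitive Search indexes these chunks. " * 30  # ~1350 chars
parts = chunk_text(doc, size=500, overlap=100)
```

Each chunk then gets embedded and indexed separately; smaller chunks retrieve more precisely but carry less context into the prompt.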
Can anyone help me with a robust strategy for handling dependent and independent questions during a conversation, including generating a standalone question to provide additional context for dependent questions?
Is the strategy used here, augmenting the user's latest question with prior conversation history, robust for all kinds of scenarios?
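A hedged sketch of the "standalone question" step being asked about: before retrieval, the chat history plus the follow-up question are folded into one prompt that asks the model to produce a self-contained search query. The template wording and the build_rewrite_prompt helper below are illustrative, not taken from the demo's source.

```python
REWRITE_TEMPLATE = (
    "Given the conversation below, rewrite the final user question as a "
    "single standalone search query that needs no prior context.\n\n"
    "Conversation:\n{history}\n\n"
    "Follow-up question: {question}\n"
    "Standalone query:"
)

def build_rewrite_prompt(history: list[tuple[str, str]], question: str) -> str:
    """Flatten (role, text) turns into the rewrite prompt."""
    lines = [f"{role}: {text}" for role, text in history]
    return REWRITE_TEMPLATE.format(history="\n".join(lines), question=question)

prompt = build_rewrite_prompt(
    [("user", "What does the health plan cover?"),
     ("assistant", "It covers dental and vision.")],
    "What about for my spouse?",
)
```

The resulting prompt would then be sent to the model, and the model's rewritten query (not the raw follow-up) is what gets sent to the search index.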
What a shame individuals cannot use the Azure OpenAI services.
What is the chat UI? Something custom they made, or is it available for enterprise customers?
Hi, do you have details on the type of RBAC role required to deploy the demo? I am getting: "the client does not have the necessary permissions to perform the specified action", even though I have Cognitive Services Contributor access.
Thanks for this great video, really exciting! I have one question: Are prompts (and thus company information) processed exclusively in Azure OpenAI Service, and NOT through OpenAI's API?
It looks great!! Thanks.
Is there a video explaining the coding step by step from start to end? It would be great for those of us who are starting out in AI and Azure.
Really impressive! So much potential to build vertical SaaS.
So this is LangChain + the OpenAI API? Nice.
How do you ensure the Cognitive Search results don't exceed the 4096-token limit for ChatGPT? And if they do exceed it (entirely possible with a large amount of corporate data), how do you chunk them for ChatGPT?
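One common answer, sketched minimally: take the search hits in rank order and stop adding them to the prompt once an (approximate) token budget is spent, so lower-ranked sources are simply dropped. The chars/4 token estimate and the budget value are assumptions for illustration, not the demo's actual logic.

```python
def fit_sources(results: list[str], budget_tokens: int = 1000) -> str:
    """Concatenate top-ranked results until the budget would be exceeded."""
    kept, used = [], 0
    for text in results:
        cost = len(text) // 4 + 1          # crude ~4-chars-per-token estimate
        if used + cost > budget_tokens:
            break                           # lower-ranked hits are dropped
        kept.append(text)
        used += cost
    return "\n".join(kept)

hits = ["short source A", "short source B", "x" * 8000]  # third is oversized
context = fit_sources(hits, budget_tokens=100)
```

The budget would be set to the model's context limit minus the system prompt, chat history, and the tokens reserved for the answer.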
This is pure gold.
Would it not be reasonable to expect that in the next few years every major MSFT cloud storage and development tool, like Azure SQL, SharePoint, Dataverse, Power Apps and Power BI, will offer this feature automatically?
Stunning 🤩
Very interesting! Would it also be possible to make an integration between GPT and SAP or MS Dynamics? I am an SAP FI consultant handling incidents and changes submitted by the finance departments. Would it be possible to make a private model in which GPT can read through the SAP system and give instructions on how to solve certain incidents? For example, if a user gets a certain error when performing a payment run, would GPT be able to analyse where in the system this error is coming from and how to solve it? Not just giving recommendations as it does now when anonymizing the data and submitting it in the public GPT environment. And of course all without sharing any information with the outside world.
Can this use data from SharePoint or does it need to be stored in Azure storage?
Hi! I've been trying to recreate this project on my machine and I'm getting an error I don't quite understand. I've found a workaround, but I feel like it is reducing the performance of the assistant. I'm using an Azure OpenAI service based on gpt-35-turbo, and when I try to ask a question using RRR or RDA I get an exception saying that gpt-35-turbo does not support the parameters "logprobs", "best_of" and "echo". I've deactivated them to make the project work, but as I've said, it feels like the quality of the responses has diminished.
Did anybody else encounter this problem?
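A sketch of the workaround described above: strip the Completion-API-only parameters before calling a chat-tuned model that rejects them. The parameter names come from the error message in the comment; the actual API call is stubbed out, and the model-name check is an illustrative assumption.

```python
# Parameters the chat-tuned gpt-35-turbo deployment rejected per the error.
UNSUPPORTED_BY_CHAT_MODELS = {"logprobs", "best_of", "echo"}

def sanitize_params(params: dict, model: str) -> dict:
    """Drop parameters the chat model rejects; leave everything else intact."""
    if model.startswith("gpt-35-turbo"):
        return {k: v for k, v in params.items()
                if k not in UNSUPPORTED_BY_CHAT_MODELS}
    return dict(params)

request = {"temperature": 0.3, "max_tokens": 256,
           "logprobs": 10, "best_of": 2, "echo": True}
cleaned = sanitize_params(request, model="gpt-35-turbo")
```

Note that best_of previously reranked several candidate completions, so dropping it can plausibly change output quality, which may explain the perceived difference in responses.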
I don’t trust anything with the word “Microsoft” in it, but I still end up using their software.
How is the solution implemented to protect information at the user level?
How can you use the same AI responses in MS Teams?