
Can ChatGPT work with your enterprise data?



Bring OpenAI’s ChatGPT model in Azure to your own enterprise-grade app experiences with precise control over the knowledge …

44 thoughts on “Can ChatGPT work with your enterprise data?”

  1. Where someone has a long session, how does AzureOpenAI service deal with token limits where it has to give the whole context especially where previous responses are long?
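A minimal sketch of one common way an app handles this (an illustration, not the service's actual behavior: the service rejects requests that exceed the model's context window, so the app must trim history itself). Token counts below use a crude words-based estimate; a real app would use a proper tokenizer such as tiktoken. The 4,096 budget, the reserve for the answer, and the message format are all assumptions.

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 1.3 tokens per whitespace-separated word.
    return int(len(text.split()) * 1.3) + 1

def trim_history(turns, budget=4096, reserved_for_answer=500):
    """Drop the oldest turns until the remaining ones fit the token budget."""
    available = budget - reserved_for_answer
    kept, total = [], 0
    for turn in reversed(turns):               # walk newest-first
        cost = estimate_tokens(turn["content"])
        if total + cost > available:
            break                              # everything older is dropped
        kept.append(turn)
        total += cost
    return list(reversed(kept))                # restore chronological order

history = [
    {"role": "user", "content": "word " * 3000},      # a very long old turn
    {"role": "assistant", "content": "short answer"},
    {"role": "user", "content": "latest question"},
]
trimmed = trim_history(history)
```

Dropping whole turns from the oldest end keeps the most recent context intact; an alternative is summarizing older turns instead of discarding them.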

  2. The question I would have is this one: is the "private data" actually protected? In a chat, ChatGPT said that I should not share private information with it, because it cannot guarantee that the data "will not be used / made public or something".

  3. OMG been looking for this information for 3 weeks. Thank you. Saw it before on another channel but it was very confusing compared to how this video explained things.

  4. Chapters

    00:00:00 – Introduction
    00:00:18 – Overview of the Azure OpenAI service
    00:01:23 – Applying ChatGPT to enterprise-grade applications on the Azure service
    00:02:29 – Retrieval Augmented Generation
    00:03:06 – Private Knowledge
    00:03:32 – Using ChatGPT in an App
    00:04:25 – Asking Questions in the App
    00:05:49 – Exposing Details of Conversation Turns
    00:06:31 – Injecting Fragments of Documents
    00:06:46 – Different Approaches for Generating Responses
    00:08:14 – Adapting Style of Response
    00:09:41 – How Information Protection Works
    00:10:02 – Demonstration of Document-Level Granular Access Control
    00:11:00 – Adding New Information into Search
    00:11:20 – Running Scripts to Add New Information
    00:12:04 – Code Behind Sample App
    00:12:50 – Overview of ChatGPT
    00:13:31 – Using Azure OpenAI Studio Playground
    00:14:30 – Building Your Own Enterprise-grade ChatGPT-enabled App

  5. It looks really interesting. What about using it to gather insights about structured data? Say for a set of headlines, what is the top-performing headline (based upon summary data) and what the CTA is and how far above or below a benchmark? Basically gathering insights, thru a guided process, from structured data?

  6. Sorry, I didn't get it. Do I understand correctly that if we want to keep our data private, we need to keep it separate from the model, and only add pieces of information during the response generation process? If we start to train the model on our data, will it become public?

  7. Honestly, the more I hear about this tech, the more I want to use it for worldbuilding and lore for games and storytelling because wooooow that seems like a good way to prevent ever making another characterization flub or timeline mistake ever again.

  8. Doesn't the word "train" have a specific meaning in ML? The real question is not whether ChatGPT is 'training' on corporate data, but whether it is saving prompts and able to repeat that information to a different corporation. It isn't enough to say it doesn't train; we know training was cut off in September 2021. So could you clarify: do you mean that ChatGPT doesn't use prompts to fine-tune the model? The bot itself states it can save parameters and repeat them to other users, based on fine-tuning. And there was a leak a while ago where even the conversation history of other users was exposed by OpenAI. It's because the bot will also state that it doesn't 'train' on prompts that I want to clarify. After all, if data is leaked via the prompts users submit, then it doesn't matter whether the word is 'fine-tuning' instead of 'training'; what is paramount is that private information was saved and shared.

  9. We are trying to build a similar solution to enable conversational Q&A, but using Elasticsearch for indexing with embeddings.

    1. How do you decide on the chunk size before indexing?
    2. How different would the retrieved chunks based on cosine similarity be when compared with cognitive search?
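On the chunk-size question, a common starting point (an illustrative default, not guidance from the video) is fixed-size chunks with a small overlap, so a sentence split at a boundary still appears whole in one of its neighbors. The 512-word size and ~10% overlap below are assumptions to tune against your own retrieval quality:

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 51):
    """Split text into word-based chunks of chunk_size words,
    with each chunk overlapping the previous one by `overlap` words."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), step)]

chunks = chunk_text("tok " * 1000)   # 1,000-word toy document
```

In practice, chunk size is tuned empirically: smaller chunks give more precise retrieval hits, larger ones give the model more surrounding context per hit.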

  10. Can anyone help me with a robust strategy for handling dependent and independent questions during the conversation, including generating a standalone question to provide additional context for dependent questions?

    Is the strategy used here to augment the user's latest question with prior conversation history robust for all kinds of scenarios?
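On the standalone-question point, one widely used pattern is a separate "condense" call: before retrieval, ask the model to rewrite the follow-up as a self-contained question using the chat history. The prompt wording and message format below are illustrative, not taken from the sample:

```python
# Sketch of a "condense question" step for a RAG chat app.
# The prompt text and turn structure are assumptions for illustration.

CONDENSE_PROMPT = (
    "Given the chat history and a follow-up question, rewrite the follow-up "
    "as a self-contained question that can be understood without the history.\n"
    "Chat history:\n{history}\n"
    "Follow-up question: {question}\n"
    "Standalone question:"
)

def build_condense_prompt(history, question):
    # Flatten prior turns into "role: content" lines for the prompt.
    lines = [f'{t["role"]}: {t["content"]}' for t in history]
    return CONDENSE_PROMPT.format(history="\n".join(lines), question=question)

prompt = build_condense_prompt(
    [{"role": "user", "content": "What is our PTO policy?"},
     {"role": "assistant", "content": "Employees get 20 days."}],
    "Does it roll over?",
)
# The prompt is then sent to the model; its rewritten question
# (e.g. "Does unused PTO roll over?") drives the search query.
```

This handles dependent questions ("does *it* roll over?") while leaving independent questions essentially unchanged, since rewriting a question that is already standalone is a no-op.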

  11. Hi, do you have details on the type of RBAC role required to deploy the demo? I am getting a 'the client does not have the necessary permissions to perform the specified action' error, even though I have Cognitive Services Contributor access.

  12. Thanks for this great video, really exciting! I have one question: Are prompts (and thus company information) processed exclusively in Azure OpenAI Service, and NOT through OpenAI's API?

  13. It looks great, thanks!
    Is there a video explaining the coding step by step from start to end? It would be great for those of us who are starting out with AI and Azure.

  14. How do you ensure the Cognitive Search results don't exceed the 4,096-token limit for ChatGPT? And if they do exceed it (entirely possible with a large amount of corporate data), how do you chunk it for ChatGPT?

  15. This is pure gold.

    Would it not be reasonable to expect that in the next few years every major MSFT cloud storage and development tool, like Azure SQL, SharePoint, Dataverse, Power Apps and Power BI, will offer this feature automatically?

  16. Very interesting! Would it also be possible to make an integration between GPT and SAP or MS Dynamics? I am an SAP FI consultant handling incidents and changes submitted by the finance departments. Would it be possible to make a private model in which GPT can read through the SAP system and give instructions on how to solve certain incidents? For example, if a user gets a certain error when performing a payment run, would GPT be able to analyse where in the system this error is coming from and how to solve it? Not just giving recommendations, as it does now when anonymizing the data and submitting it in the public GPT environment. And of course all without sharing any information with the outside world.

  17. Hi! I've been trying to recreate this project on my machine and I'm getting an error I don't quite understand. I've found a workaround, but I feel like it is reducing the performance of the assistant. I'm using an Azure OpenAI service based on gpt-35-turbo, and when I try to ask a question using RRR or RDA I get an exception saying that gpt-35-turbo does not support the parameters "logprobs, best_of and echo". I've deactivated them to make the project work, but as I've said, it feels like the quality of the responses has diminished.

    Did anybody else encounter this problem?

Comments are closed.