As generative AI tools like ChatGPT continue to transform how professionals write, strategize, and solve problems, a new set of structured prompt-writing frameworks is helping users tap into these tools' full potential. Created by Andrew Bolis and recently circulated on Facebook, the visual guide titled “ChatGPT Prompt Writing Frameworks” introduces five distinct models: R-T-F, T-A-G, B-A-B, C-A-R-E, and R-I-S-E. Each is designed to improve precision, creativity, and clarity in prompting, and the formulas are especially useful for marketers, copywriters, product teams, and designers looking to generate more effective and targeted AI outputs.
The R-T-F framework—Role, Task, Format—guides users to first define the persona or perspective the AI should adopt, then specify the exact task, followed by the format of the output. For instance, asking ChatGPT to act as a B2B SaaS marketer (Role), create a LinkedIn post (Task), and deliver it in a call-to-action format (Format) encourages the model to generate content that’s more specific and relevant. By combining roleplay with clear structure, R-T-F helps users avoid vague outputs and achieve more professional-grade results.
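For readers who prefer to template these formulas programmatically, the sketch below shows one way an R-T-F prompt could be assembled in Python; the role, task, and format strings are hypothetical placeholders rather than wording from Bolis's guide.

```python
# Minimal sketch: assembling an R-T-F (Role, Task, Format) prompt.
# The role, task, and format values are illustrative placeholders only.
role = "a B2B SaaS marketer"
task = "create a LinkedIn post announcing our new analytics feature"
output_format = "a short post that ends with a clear call to action"

rtf_prompt = (
    f"Act as {role}. "
    f"Your task is to {task}. "
    f"Deliver the result as {output_format}."
)
print(rtf_prompt)
```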
Next is T-A-G, which stands for Task, Action, Goal. This framework is particularly effective for process improvement and performance-driven writing. It asks users to clearly define the task, identify the action to be taken, and state the desired outcome. In the example of email onboarding, a prompt might request a copywriter to revise a sequence with the goal of increasing activation rates. T-A-G creates a direct line from intention to impact, making AI suggestions more strategic and results-oriented.
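A similar sketch for T-A-G, again with placeholder wording that is not taken from the original guide, simply stores the three elements and joins them into a labeled prompt.

```python
# Minimal sketch: assembling a T-A-G (Task, Action, Goal) prompt.
# All wording below is an illustrative placeholder, not the guide's exact text.
tag_parts = {
    "Task": "review our five-email onboarding sequence for new users",
    "Action": "act as an email copywriter and rewrite the weakest messages",
    "Goal": "increase the activation rate of new signups",
}

tag_prompt = "\n".join(f"{label}: {text}" for label, text in tag_parts.items())
print(tag_prompt)
```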
The B-A-B framework—Before, After, Bridge—is rooted in persuasive communication. It prompts the AI to explain the current problem (Before), envision the ideal outcome (After), and offer a strategy to achieve that transition (Bridge). This structure is useful in marketing, sales, and even UX work. For instance, when addressing low mobile app engagement, a B-A-B prompt might seek suggestions for UX changes and push notifications to encourage return visits. It’s a storytelling device that naturally leads into solution-building.
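The same idea can be wrapped in a small helper; the function name and the engagement scenario below are assumptions made for illustration, not part of the original framework.

```python
# Minimal sketch: a reusable helper for B-A-B (Before, After, Bridge) prompts.
# The engagement scenario passed in below is a hypothetical stand-in.
def build_bab_prompt(before: str, after: str, bridge: str) -> str:
    """Combine the three B-A-B elements into a single prompt string."""
    return (
        f"Before: {before}\n"
        f"After: {after}\n"
        f"Bridge: {bridge}"
    )

prompt = build_bab_prompt(
    before="our mobile app has low engagement, and most users never return after the first session",
    after="users open the app several times a week and complete key in-app actions",
    bridge="suggest UX changes and a push-notification strategy to move us from the current state to that outcome",
)
print(prompt)
```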
With C-A-R-E—Context, Action, Result, Example—the focus shifts to case study-style narration. This model provides the AI with background context, then describes the action taken, the result achieved, and an example that reinforces the scenario. It works well for summarizing campaigns, explaining success stories, or structuring marketing copy. A prompt using C-A-R-E might involve organizing a virtual summit and showcasing strategies that led to high user signups. This framework is ideal for those who want outputs grounded in real-world outcomes.
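As before, a hedged Python sketch with invented summit details shows how the four C-A-R-E elements might be laid out as a labeled prompt.

```python
# Minimal sketch: assembling a C-A-R-E (Context, Action, Result, Example) prompt.
# The virtual-summit details are illustrative placeholders, not the guide's own copy.
care_parts = [
    ("Context", "we are planning a virtual summit for product managers"),
    ("Action", "draft a promotional campaign outline for the event"),
    ("Result", "a high number of registrations and user signups"),
    ("Example", "a comparable launch where early-bird pricing and speaker spotlights drove signups"),
]

care_prompt = "\n".join(f"{label}: {text}" for label, text in care_parts)
print(care_prompt)
```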
Finally, R-I-S-E—Role, Input, Steps, Outcome—is designed for analytical tasks and UX workflows. It invites users to define the AI’s role, provide relevant data or observations, request step-by-step analysis, and specify the intended outcome. A designer, for example, could input heatmap data and ask for improvements that increase checkout completion rates. In one highlighted case, this structured approach helped boost conversions from 45% to 60%. R-I-S-E encourages detailed, measurable, and evidence-based prompting.
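Finally, a template-based sketch for R-I-S-E; the heatmap scenario and metric wording are hypothetical and chosen only to mirror the example above.

```python
import textwrap

# Minimal sketch: assembling an R-I-S-E (Role, Input, Steps, Outcome) prompt.
# The heatmap observations and target metric are hypothetical placeholders.
rise_template = textwrap.dedent("""\
    Role: {role}
    Input: {input_data}
    Steps: {steps}
    Outcome: {outcome}""")

rise_prompt = rise_template.format(
    role="a UX analyst specializing in e-commerce checkout flows",
    input_data="heatmap data showing users dropping off at the shipping-details step",
    steps="walk through the likely causes step by step before recommending changes",
    outcome="a prioritized list of improvements aimed at raising checkout completion rates",
)
print(rise_prompt)
```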
These frameworks underscore a broader trend: as AI becomes more embedded in daily work, prompt literacy is becoming a core professional skill. The clarity and logic these models provide don’t just improve responses—they elevate the conversation between human and machine. In this rapidly evolving space, knowing how to ask the right question is quickly becoming just as valuable as knowing the answer.