nemozone

a zone for no one and everyone :) Btw this blog is only for adults!

Apple Considers Radically Stripped-Down Smart Glasses

Apple may soon develop new, lighter, and cheaper smart glasses that differ markedly from its current mixed-reality headset, the Vision Pro. According to reports, the company plans to bring glasses to market that focus on augmented reality (AR) and resemble the Ray-Ban Meta Smart Glasses in design. These glasses are expected to look like ordinary horn-rimmed glasses and could therefore appeal to a much broader audience.

Background on the Vision Pro

The Apple Vision Pro is an innovative but also heavy and expensive device designed as a VR headset. It lets users see the real world through cameras, but at a high price and considerable weight. So far the Vision Pro has not been a major sales success, which may have prompted Apple to look for alternatives. According to Bloomberg, Apple has already emailed selected employees as part of a project called "Atlas" to recruit participants for an upcoming user study.

Focus on Augmented Reality

The new glasses are expected to offer an essential augmented-reality feature: projecting digital information directly in front of the wearer's eyes. This could work much like the MYVU glasses presented at IFA, which support navigation, teleprompter functions, and real-time translation. Integrating speakers into the temples could further improve the user experience.

Competition and Market Development

Apple is not alone in this market: competing with Meta and Ray-Ban, Mark Zuckerberg recently presented similar glasses, though they are currently only an early prototype called "Orion". Developing market-ready products is expected to take years. Manufacturing such devices currently costs around 9,000 euros per unit, while the target consumer price is said to be under 900 euros.

Outlook

Apple is already planning a new Vision Pro model with an entirely new chip for next year. A cheaper variant of the mixed-reality headset, however, has been postponed to 2027. While Apple continues to work on the Vision Pro, it remains to be seen when, and in what form, the new smart glasses will reach the market.

Citations: [1] https://www.n-tv.de/technik/Apple-erwaegt-radikal-abgespeckte-smarte-Brille-article25337503.html

DARPA and the Nuclear Disaster Response: A Historical Perspective

The Defense Advanced Research Projects Agency (DARPA) has long been at the forefront of technological innovation, particularly in defense and national security. One of its critical roles has been in addressing nuclear threats and disasters, a responsibility that has evolved significantly since its inception in 1958. This blog post explores DARPA's historical involvement in nuclear disaster response, highlighting key projects and their implications for modern defense strategies.

The Origins of DARPA's Nuclear Focus

DARPA was established in response to the technological surprises posed by the Soviet Union during the Cold War. Among its early initiatives was Project Argus, which aimed to create a radiation belt in Earth's magnetosphere to disrupt incoming missiles. Although this project ultimately failed, it set the stage for DARPA's ongoing commitment to innovative defense solutions against nuclear threats[4].

Key Projects and Innovations

Over the decades, DARPA has undertaken various projects aimed at enhancing the United States' ability to respond to nuclear disasters:

  1. Nuclear Detection Technologies: DARPA has invested in advanced detection systems capable of identifying nuclear materials and monitoring potential threats. These technologies are crucial for preventing nuclear proliferation and ensuring rapid response capabilities.

  2. Counter-Hypersonic Systems: In recent years, DARPA has focused on developing systems to counter hypersonic weapons, which pose a significant challenge due to their speed and maneuverability. The Glide Breaker project exemplifies this effort, aiming to create interceptors that can neutralize these fast-moving threats before they reach their targets[2].

  3. Radiation Mitigation Strategies: DARPA has also explored methods for mitigating radiation exposure in the event of a nuclear disaster. This includes research into protective gear and decontamination processes that could save lives during incidents involving radiological materials[5].

Lessons from Past Nuclear Events

The agency's historical projects provide valuable lessons for contemporary nuclear disaster preparedness:

  • Interagency Coordination: Effective response to nuclear incidents requires seamless collaboration among various government agencies. Historical analysis shows that miscommunication can lead to delays in response efforts, highlighting the need for robust interagency protocols.

  • Public Awareness and Training: Ensuring that both emergency responders and the public are educated about nuclear threats is essential for effective disaster management. Past experiences indicate that preparedness training can significantly improve outcomes during actual events.

  • Technological Innovation: Continuous investment in research and development is vital for keeping pace with evolving threats. DARPA's focus on cutting-edge technologies ensures that the U.S. maintains a strategic advantage in nuclear defense capabilities.

Conclusion

DARPA's historical involvement in addressing nuclear disasters underscores its critical role in national security. By leveraging innovative technologies and fostering interagency cooperation, DARPA continues to enhance the United States' ability to respond effectively to nuclear threats. As we look toward the future, it is imperative that we learn from past experiences while remaining vigilant against emerging challenges in this ever-evolving landscape of global security.

Citations: [1] https://www.wilsoncenter.org/blog-post/revisiting-1979-vela-mystery-report-critical-oral-history-conference [2] https://nationalinterest.org/blog/buzz/mad-scientists-darpa-have-plan-kill-russia-or-chinas-hypersonic-missiles-44427 [3] https://www.newscientist.com/article/dn13907-fifty-years-of-darpa-hits-misses-and-ones-to-watch/ [4] https://www.newscientist.com/article/2125337-war-by-any-means-the-story-of-darpa/ [5] https://www.darpa.mil/attachments/darapa60_publication-no-ads.pdf [6] https://killerinnovations.com/untold-stories-of-darpa/ [7] https://www.defense.gov/News/News-Stories/Article/Article/602879/darpa-robots-to-face-final-challenge-in-california/igphoto/darpa-robots-to-face-final-challenge-in-california/ [8] https://www.darpa.mil/news-events/2013-12-21a

As a long-time Flameshot user, I recently encountered an annoying issue after upgrading to Gnome 41 on Wayland. Every time I tried to take a screenshot, I was prompted to “Share” my screen. This constant interruption quickly became frustrating, especially when I needed to capture multiple screenshots in quick succession.

The Root of the Problem

After some investigation, I discovered that this wasn't actually a Flameshot bug. The issue stems from a unilateral decision made by Gnome developers that affects all third-party screenshot tools[1]. They implemented a new security measure that requires users to explicitly share their screen for each capture attempt.

The Impact on Users

This change has significant implications for productivity:

  • Workflow disruption: The constant prompts break concentration and slow down tasks.
  • Inconsistent user experience: Gnome's own screenshot tool is whitelisted, creating an unfair advantage.
  • Limited options: Users who prefer alternative tools like Flameshot are penalized.

Community Reaction

I'm not alone in my frustration. Many users have voiced their concerns about this change:

  • On GitHub: Multiple issues have been opened discussing the problem.
  • On Gnome's GitLab: Developers and users alike have debated the merits of this decision.

What Can Be Done?

Unfortunately, there's little that Flameshot's developers can do to address this issue directly. The ball is in Gnome's court. If you're as frustrated as I am, consider:

  1. Voicing your concerns on Gnome's issue trackers.
  2. Exploring alternative desktop environments that don't implement this restriction.
  3. Using Flameshot on X11 instead of Wayland (if possible in your setup).
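For the third option, a workaround many users report (assuming XWayland is available in your session) is forcing Qt's X11 backend when launching Flameshot, so capture goes through XWayland rather than the Wayland portal:

```shell
# Force Flameshot's Qt backend to X11/XWayland.
# Requires XWayland; capture then follows X11 semantics and
# skips the per-screenshot portal prompt.
QT_QPA_PLATFORM=xcb flameshot gui
```

To make this permanent, the environment variable can be set in the desktop entry or shell profile rather than on each invocation.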

Looking Forward

While I understand the security considerations behind this change, I hope Gnome developers will reconsider their approach. A more balanced solution that respects both security and usability would be ideal for all users, regardless of their preferred screenshot tool.

In the meantime, I'll be keeping a close eye on developments and exploring workarounds to maintain my productivity while using Flameshot on Gnome Wayland.

Citations: [1] https://flameshot.org/docs/guide/wayland-help/#i-am-asked-to-share-my-screen-every-time

Wayfarer: A Versatile Screen Recorder for GNOME and Wayland

If you're a Linux user looking for a powerful and flexible screen recording solution, Wayfarer might just be the tool you've been searching for. This open-source project, available on GitHub, offers a modern screen recorder designed specifically for GNOME and other desktop environments running on Wayland or Xorg[1].

Key Features:

  1. Broad Compatibility: Wayfarer supports GNOME desktops on popular distributions like Arch, Fedora, Debian Testing, and Ubuntu 22.04. It also works with wlroots-based desktops[1].

  2. Multiple Output Formats: The application supports MKV, MP4, and WebM video containers, with options for VP8, VP9, and H.264 (MP4) video codecs. Audio can be recorded in Opus or MP3 format[1].

  3. Flexible Recording Options: Users can define custom recording areas, set delays before recording starts, and even specify a timer for automatic recording stops[1].

  4. Hardware Acceleration: Where available, Wayfarer offers VAAPI-enabled video codecs for improved performance[1].

  5. User-Friendly Interface: The application provides an intuitive GUI for setting up and controlling recordings[1].

Technical Details:

Wayfarer is built using modern technologies, including:

  • Gtk 4 (with an obsolete Gtk 3 branch available)
  • Vala programming language
  • GStreamer 1.0
  • Pipewire / PulseAudio
  • XDG Desktop Portal[1]

For developers interested in contributing or building from source, the project uses a meson/ninja build system and provides detailed instructions for setup on various Linux distributions[1].
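As a rough sketch, building a meson/ninja project like this usually follows the standard workflow below; the `_build` directory name and install prefix are arbitrary choices here, and the development packages needed (Vala, Gtk 4, GStreamer) vary by distribution, so consult the repository's README first:

```shell
# Generic meson/ninja build sketch; see the project README for
# distribution-specific dependency packages before building.
git clone https://github.com/stronnag/wayfarer.git
cd wayfarer
meson setup _build --prefix="$HOME/.local"
ninja -C _build install
```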

While Wayfarer offers powerful functionality, it's worth noting that it adheres to Wayland's security model. This means users will need to authorize screen capture through the XDG Portal, though the application includes features to minimize this inconvenience[1].

If you're looking for a capable, open-source screen recording solution that embraces modern Linux desktop technologies, Wayfarer is definitely worth checking out. Visit the GitHub repository to learn more, contribute, or simply give it a try on your system.

Citations: [1] https://github.com/stronnag/wayfarer

Genmo: Empowering Video Creation for Everyone

In an age where video content dominates our digital landscape, with over 1 billion hours of video consumed daily on platforms like YouTube, the challenge remains: how can the average person create compelling videos? This is where Genmo steps in, revolutionizing the way we approach video creation.

The Vision Behind Genmo

Founded by two former Google employees and academics, including a co-author of the influential DDPM paper, Genmo aims to democratize video production. The founders recognized that while ideas are abundant, the tools for transforming those ideas into cinematic experiences are often inaccessible. Their mission is clear: to enable anyone to bring their stories to life effortlessly.

Why Genmo Matters

  • Accessibility: Traditional video creation often requires technical skills and expensive equipment. Genmo seeks to break down these barriers, making it easier for anyone to produce high-quality videos.
  • Innovation: With a focus on leveraging advanced technology, Genmo is at the forefront of redefining how we create and share video content.
  • Community: The company is actively looking for passionate individuals to join their team, emphasizing a collaborative approach to achieving their vision.

Join the Movement

As Genmo continues to grow and evolve, they invite talented individuals who share their passion for storytelling and innovation. The journey has just begun, but with a strong foundation and a clear mission, Genmo is poised to make a significant impact in the world of video creation.

Dream, create, redefine. What will you make?

Citations: [1] https://www.genmo.ai/about


In the ever-evolving world of digital imagery, AI-based tools are making waves by simplifying complex editing tasks. One such innovative solution is the object removal tool offered by AIEase. This powerful application allows users to effortlessly remove unwanted elements from their images, creating cleaner and more visually appealing results.

How It Works

The AIEase object removal tool employs advanced artificial intelligence algorithms to detect and erase objects from images with remarkable precision. Users can simply upload their image and use intuitive brush or rectangle tools to mark the areas they wish to remove. The AI then analyzes the surrounding pixels and seamlessly fills in the space, maintaining the image's overall integrity and natural appearance.
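AIEase does not publish its model, but the fill-in idea can be illustrated with a classical diffusion-style inpainting sketch: masked pixels are repeatedly replaced by the average of their neighbors until the hole blends into its surroundings. (AI tools replace this simple averaging with learned generative models that can synthesize texture, not just smooth gradients.)

```python
import numpy as np

def naive_inpaint(image, mask, iterations=50):
    """Toy inpainting: iteratively set each masked pixel to the mean of
    its four neighbors, diffusing the surrounding values into the hole."""
    img = image.astype(float)
    m = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(img, 1, mode="edge")
        # mean of up/down/left/right neighbors, computed simultaneously
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[m] = neigh[m]          # only masked pixels are overwritten
    return img

# flat gray image with a bright "object" in the middle
img = np.full((9, 9), 100.0)
img[3:6, 3:6] = 255.0
mask = np.zeros((9, 9), dtype=bool)
mask[3:6, 3:6] = True

result = naive_inpaint(img, mask)
# the masked region converges toward the surrounding gray value
```

Real tools go further by hallucinating plausible detail into the hole, but the principle of borrowing information from surrounding pixels is the same.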

Key Features

User-Friendly Interface: The tool boasts a clean, straightforward design that makes it accessible to both novice and experienced users.

Multiple Selection Tools: Users can choose between brush and rectangle tools for precise object selection.

Adjustable Brush Sizes: The ability to modify brush and eraser sizes allows for fine-tuned control over the editing process.

Automatic Watermark Removal: A dedicated “Remove” button helps detect and eliminate text watermarks automatically.

Wide Format Support: The tool accepts various image formats, including JPG, JPEG, PNG, BMP, and WEBP.

Applications

This versatile tool can be used in numerous scenarios:

  • Removing photobombers from vacation pictures
  • Erasing unwanted objects from product photos
  • Cleaning up cluttered backgrounds in portraits
  • Removing watermarks from stock images (where permitted)

Conclusion

The AIEase object removal tool represents a significant leap forward in accessible image editing technology. By harnessing the power of AI, it empowers users to achieve professional-grade results without the need for extensive editing skills or expensive software. Whether you're a casual photographer or a digital marketing professional, this tool offers a quick and effective solution for enhancing your images.

Citations: [1] https://www.aiease.ai/app/remove-object-image

Keyviz: Visualize Your Keystrokes in Real-Time on Windows

If you've ever wanted to showcase your keyboard shortcuts during screencasts, presentations, or tutorials, Keyviz is the perfect tool for you. This free and open-source software allows you to visualize your keystrokes and mouse actions in real-time on Windows, bringing a new level of interactivity to your content.

Key Features

Real-Time Visualization

Keyviz displays your keystrokes and mouse actions as they happen, allowing your audience to follow along effortlessly. Whether you're demonstrating complex keyboard shortcuts or showcasing your typing speed, Keyviz makes it easy for viewers to see exactly what you're doing[1].

Customizable Appearance

Don't settle for a plain black and white display. Keyviz offers extensive customization options, allowing you to:

  • Change the visualization style (isometric or solid)
  • Adjust the size to fit your screen
  • Customize colors for normal and modifier keys
  • Add or remove borders
  • Display icons and symbols on keys[3]

Powerful Configuration Options

Keyviz gives you control over how your keystrokes are displayed:

  • Filter out regular keys to show only shortcuts (e.g., Ctrl + S)
  • Adjust the position of the visualization on your screen
  • Set how long keystrokes linger before fading out
  • Choose from various animation presets for a dynamic look[3]

Getting Started with Keyviz

  1. Installation: Download the latest version from the GitHub Releases page or the Microsoft Store. Simply unzip the file and run the installer[1][3].

  2. Launch: Once installed, run Keyviz to start visualizing your keystrokes immediately[3].

  3. Customization: Access the settings by right-clicking the Keyviz icon in your taskbar. Here you can adjust the appearance and behavior to suit your needs[3].

  4. Usage: Keyviz runs in the background, capturing and displaying your keystrokes. You can easily toggle it on and off using the taskbar icon[3].

Pro Tips

  • For screen recordings or streaming, add Keyviz as a game capture source in OBS to place the key display anywhere in your scene[2].
  • Experiment with different styles and animations to find the perfect look for your content.
  • Use the hotkey filter to focus on important shortcuts during tutorials or demonstrations[3].

Conclusion

Keyviz is an invaluable tool for content creators, educators, and anyone who wants to add a professional touch to their keyboard-centric presentations. Its blend of functionality and customization makes it stand out among key capturing tools for Windows. Give Keyviz a try and watch your tutorials and demonstrations come to life with dynamic keystroke visualization.

Citations: [1] https://github.com/mulaRahul/keyviz [2] https://www.youtube.com/watch?v=_J1tjKMuL74 [3] https://www.youtube.com/watch?v=FwuTqWzlRSc [4] https://filecr.com/windows/rahul-mula-keyviz/ [5] https://www.youtube.com/watch?v=uJNIRLYXEDw [6] https://www.reddit.com/r/software/comments/9pp90h/keystroke_visualizer/ [7] https://mularahul.github.io/keyviz/ [8] https://alternativeto.net/feature/keystroke-visualization/

Apple Unveils Groundbreaking Private Cloud Compute Security Research Initiative

Apple's Security Engineering and Architecture (SEAR) team has launched an unprecedented initiative to open up its Private Cloud Compute (PCC) system for public scrutiny. This bold move aims to build trust and transparency in Apple's cloud-based AI processing capabilities while maintaining industry-leading privacy and security standards.

Key Components of the Initiative

Security Guide

Apple has published a comprehensive Private Cloud Compute Security Guide, offering in-depth technical details about PCC's architecture and security measures. This guide covers crucial topics such as:

  • PCC attestations built on hardware-implemented features
  • Authentication and routing of PCC requests for non-targetability
  • Transparency in software running in Apple's data centers
  • PCC's resilience against various attack scenarios

Virtual Research Environment (VRE)

For the first time, Apple has created a Virtual Research Environment for one of its platforms. The VRE allows researchers to:

  • Analyze PCC security directly from their Mac
  • Run PCC node software in a virtual machine
  • Access a virtual Secure Enclave Processor (SEP)
  • Perform inference against demonstration models
  • Modify and debug PCC software for deeper investigation

The VRE is available in the latest macOS Sequoia 15.1 Developer Preview and requires a Mac with Apple silicon and at least 16GB of unified memory.

Source Code Release

Apple is making the source code for key PCC components available under a limited-use license. This includes projects such as:

  • CloudAttestation
  • Thimble
  • splunkloggingd
  • srd_tools

Researchers can access this code through the apple/security-pcc project on GitHub.

Apple Security Bounty Program Expansion

To further encourage research, Apple has expanded its Security Bounty program to include PCC-specific vulnerabilities. The new bounty categories align with critical threats outlined in the Security Guide:

  • Remote attack on request data: $1,000,000
  • Access to user's request data outside trust boundary: $250,000
  • Attack from privileged network position: $150,000
  • Execution of unattested code: $100,000
  • Accidental data disclosure: $50,000

A Commitment to Transparency and Security

By opening up PCC for public scrutiny, Apple demonstrates its commitment to verifiable transparency in AI processing. This initiative sets a new standard for security and privacy in cloud-based AI systems, inviting researchers and curious minds alike to explore, verify, and contribute to the ongoing improvement of PCC's security measures.

As Apple continues to push the boundaries of AI technology, this open approach to security research promises to foster trust and collaboration within the tech community, ultimately benefiting users through enhanced privacy and security in cloud-based AI services.

Citations: [1] https://www.lawfaremedia.org/contributors/ikrstic [2] https://security.apple.com [3] https://security.apple.com/blog/pcc-security-research/ [4] https://jobs.apple.com/en-us/search?team=security-and-privacy-SFTWR-SEC [5] https://jobs.apple.com/nl-nl/details/200563691/swe-security-research-engineer-kernel-systems-sear-remote-considered [6] https://www.security.nl/posting/864159/Apple+vindt+kritiek+Chrome-lek+dat+remote+code+execution+mogelijk+maakt [7] https://twitter.com/radian?lang=en [8] https://jobs.apple.com/nl-nl/details/200549367/swe-lead-program-manager-security-engineering

Geofence warrants have become a contentious topic in the realm of law enforcement and digital privacy. These powerful investigative tools allow police to obtain location data for all devices within a specific area and time frame, raising significant constitutional and ethical concerns.

What are Geofence Warrants?

Geofence warrants, also known as reverse location warrants, are a relatively new type of search warrant that enables law enforcement to compel technology companies, primarily Google, to search their entire database of user location data to identify devices present in a particular area during a specified time period[1]. Unlike traditional warrants that target a specific suspect or device, geofence warrants cast a wide net, potentially capturing data from hundreds or thousands of innocent individuals.

How They Work

The process typically involves three steps:

  1. Law enforcement defines a geographic area and time frame of interest.
  2. The tech company searches its database and provides anonymized data for devices in that area.
  3. Police may request additional information or “unmask” specific users based on the initial data.
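In data terms, step 2 is essentially a filter over location records by bounding box and time window. A minimal sketch (all field names, coordinates, and the anonymization scheme here are hypothetical illustrations, not how any provider actually implements it):

```python
from datetime import datetime

# Hypothetical location records; real providers hold billions of these.
records = [
    {"device": "dev-001", "lat": 38.8977, "lon": -77.0365,
     "time": datetime(2024, 3, 1, 14, 5)},
    {"device": "dev-002", "lat": 40.7128, "lon": -74.0060,
     "time": datetime(2024, 3, 1, 14, 30)},
]

def geofence_search(records, lat_range, lon_range, start, end):
    """Return anonymized tokens for devices seen inside the requested
    bounding box during the requested time window (step 2 of the process)."""
    hits = sorted({
        r["device"]
        for r in records
        if lat_range[0] <= r["lat"] <= lat_range[1]
        and lon_range[0] <= r["lon"] <= lon_range[1]
        and start <= r["time"] <= end
    })
    # anonymize: placeholder tokens instead of raw device identifiers
    return [f"anon-{i}" for i, _ in enumerate(hits)]

ids = geofence_search(
    records,
    lat_range=(38.88, 38.91), lon_range=(-77.05, -77.02),
    start=datetime(2024, 3, 1, 14, 0), end=datetime(2024, 3, 1, 15, 0),
)
# only dev-001 falls inside the box and window
```

The breadth concern is visible even in this toy: the filter matches every device in the area, with no notion of suspicion attached to any of them.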

Constitutional Concerns

The use of geofence warrants has sparked intense debate over their constitutionality:

  1. Fourth Amendment Issues: Critics argue that geofence warrants violate the Fourth Amendment's protection against unreasonable searches and seizures, as they lack the specificity required for traditional warrants[1].

  2. Overbreadth: These warrants often capture data from numerous innocent individuals, raising concerns about privacy and potential misuse of information[2].

  3. Lack of Probable Cause: Unlike traditional warrants, geofence warrants do not require probable cause for each individual whose data is collected[2].

In August 2024, the U.S. Court of Appeals for the Fifth Circuit made a landmark ruling in United States v. Smith, declaring geofence warrants unconstitutional under the Fourth Amendment[1]. This decision diverges from previous court rulings and could have far-reaching implications for law enforcement practices.

Impact on Law Enforcement and Privacy

The use of geofence warrants has significant implications:

  • Investigative Tool: Law enforcement argues that geofence warrants are crucial for solving crimes, especially in cases with limited leads[3].
  • Privacy Concerns: Critics worry about the potential for abuse and the erosion of privacy rights in the digital age[3].
  • Chilling Effect: The use of these warrants during political protests has raised concerns about their impact on free speech and assembly rights[3].

The Future of Geofence Warrants

The legal landscape surrounding geofence warrants is rapidly evolving. With conflicting court decisions and ongoing debates, it's likely that this issue will eventually reach the Supreme Court for a definitive ruling on their constitutionality[1].

As technology continues to advance, the balance between effective law enforcement and individual privacy rights remains a critical issue for society to address. The outcome of this debate will have lasting implications for digital privacy and the future of criminal investigations in the United States.

Citations: [1] https://www.wilmerhale.com/insights/client-alerts/20240827-the-impact-and-future-of-the-fifth-circuits-new-hard-line-stance-on-geofence-warrants [2] https://www.bjcl.org/blog/the-constitutionality-of-geofence-warrants [3] https://www.eff.org/deeplinks/2023/12/end-geofence-warrants [4] https://reason.com/volokh/2024/08/13/fifth-circuit-shuts-down-geofence-warrants-and-maybe-a-lot-more/ [5] https://www.lexipol.com/resources/blog/another-view-of-geofence-warrants/ [6] https://www.forbes.com/sites/cyrusfarivar/2023/12/14/google-just-killed-geofence-warrants-police-location-data/ [7] https://www.nacdl.org/Content/Geofence-Warrants [8] https://www.lexipol.com/resources/blog/emerging-tech-and-law-enforcement-what-are-geofences-and-how-do-they-work/

GPT4All offers an exciting way to integrate AI capabilities with your Obsidian vault while maintaining privacy and local control. Here's an overview of how to use GPT4All with Obsidian:

Installation and Setup

  1. Download and install the GPT4All desktop application[1][4].
  2. Launch GPT4All and enable the API Server in the settings[2].
  3. Download at least one GPT4All language model within the application[2].
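Once the API server is enabled, GPT4All exposes an OpenAI-compatible HTTP endpoint (by default on port 4891) that plugins and scripts can talk to. A minimal sketch of building such a request, assuming the default port and using a placeholder model name (use whichever model you actually downloaded):

```python
import json
import urllib.request

API_URL = "http://localhost:4891/v1/chat/completions"  # GPT4All's default port

def build_request(prompt, model="Llama 3 8B Instruct"):
    """Build an OpenAI-style chat-completion request for the local server.
    The model name is a placeholder and must match a downloaded model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarize my note on geofence warrants.")

# Uncomment once the GPT4All API server is running locally:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because the endpoint mimics the OpenAI API shape, many existing Obsidian AI plugins can be pointed at it by changing only the base URL.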

Connecting to Obsidian

There are two main approaches to using GPT4All with Obsidian:

1. Using the LocalDocs Feature

  1. Open the LocalDocs feature in GPT4All[1].
  2. Click “Add Collection” and name it (e.g., “Obsidian Vault”)[1].
  3. Link the collection to your Obsidian vault folder[1].
  4. Create the collection to start the embedding process[1].

2. Using the Obsidian LLM Plugin (Beta)

  1. Install the Obsidian LLM Plugin using BRAT (Beta Reviewers Auto-update Tester)[2].
  2. Enable the plugin in Obsidian's Community Plugins settings[2].
  3. Configure the plugin to use GPT4All models[2].

Interacting with Your Vault

Once set up, you can:

  • Chat with your Obsidian notes using GPT4All models[1][4].
  • Use the AI to generate text, auto-tag notes, or perform other tasks within Obsidian[3].
  • Access your vault's content privately and securely without sending data to external servers[4].

Considerations

  • Performance may vary depending on your hardware and the chosen model[5].
  • Response times can be slower compared to cloud-based AI services[5].
  • The technology is still in development, so expect improvements over time[3][5].

By using GPT4All with Obsidian, you can leverage AI capabilities while maintaining control over your data and ensuring privacy. This combination allows for powerful, context-aware interactions with your personal knowledge base.

Citations: [1] https://docs.gpt4all.io/gpt4all_desktop/cookbook/use-local-ai-models-to-privately-chat-with-Obsidian.html [2] https://github.com/r-mahoney/Obsidian-LLM-Plugin [3] https://forum.obsidian.md/t/gpt-for-the-privacy-conscious-gpt4all/69371 [4] https://www.toolify.ai/ai-news/unlock-the-power-of-ai-run-obsidian-ai-locally-with-gpt4all-1528678 [5] https://www.youtube.com/watch?v=MndgTphJdRc [6] https://www.ssp.sh/brain/second-brain-assistant-with-obsidian-notegpt/ [7] https://github.com/brianpetro/obsidian-smart-connections/discussions/141 [8] https://obsidian.md/plugins?search=gpt