Amir

sahsaeedi · https://sahsaeedi.github.io/
  • sahsaeedi
  • shasaeedi

AI & ML interests

NLP, RLHF, Alignment

Recent Activity

updated a Space 25 days ago
tpo-alignment/README
updated a Space 25 days ago
DualCPO/README

Organizations

Cogint ASU · DCPO-T2I · TPO

authored 4 papers 3 months ago

UnSeenTimeQA: Time-Sensitive Question-Answering Beyond LLMs' Memorization

Paper • 2407.03525 • Published Jul 3, 2024

Triple Preference Optimization: Achieving Better Alignment with Less Data in a Single Step Optimization

Paper • 2405.16681 • Published May 26, 2024

When "Competency" in Reasoning Opens the Door to Vulnerability: Jailbreaking LLMs via Novel Complex Ciphers

Paper • 2402.10601 • Published Feb 16, 2024

How Can Input Reformulation Improve Tool Usage Accuracy in a Complex Dynamic Environment? A Study on τ-bench

Paper • 2508.20931 • Published Aug 28, 2025
authored a paper over 1 year ago

Insights into Alignment: Evaluating DPO and its Variants Across Multiple Tasks

Paper • 2404.14723 • Published Apr 23, 2024