
The primary consensus among reviewers is that ZIP significantly reduces the "query cost"—the number of times you have to ask the model for a result—while maintaining or improving accuracy.
The community recognized the extensive evaluations, which showcase superior accuracy and query efficiency across 13+ tasks.
It addresses the high query requirements of existing methods by reducing the problem's dimensionality and applying "intrinsic-dimensional gradient clipping."
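The two ideas named above can be illustrated together. The following is a minimal sketch, not the paper's implementation: it assumes a generic two-point zeroth-order gradient estimator, a random linear map from a small "intrinsic" space into the full soft-prompt space, and norm clipping applied in that intrinsic space. All names (`d_intrinsic`, `query_loss`, the toy quadratic objective) are illustrative stand-ins, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

D_prompt = 512      # full soft-prompt dimensionality (illustrative)
d_intrinsic = 16    # reduced "intrinsic" dimensionality we actually optimize
A = rng.standard_normal((D_prompt, d_intrinsic)) / np.sqrt(d_intrinsic)

def query_loss(prompt: np.ndarray) -> float:
    # Stand-in for a black-box model query; here just a toy quadratic.
    target = np.ones(D_prompt)
    return float(np.sum((prompt - target) ** 2))

def zo_gradient(z: np.ndarray, mu: float = 1e-2) -> np.ndarray:
    # Two-point zeroth-order estimate along one random direction:
    # each estimate costs only two black-box queries, and it lives in
    # the d_intrinsic-dimensional space, not the D_prompt-dimensional one.
    u = rng.standard_normal(d_intrinsic)
    f_plus = query_loss(A @ (z + mu * u))
    f_minus = query_loss(A @ (z - mu * u))
    return (f_plus - f_minus) / (2 * mu) * u

def clip(g: np.ndarray, max_norm: float = 5.0) -> np.ndarray:
    # Gradient clipping performed in the intrinsic space, which tames the
    # high variance of zeroth-order estimates.
    norm = np.linalg.norm(g)
    return g * (max_norm / norm) if norm > max_norm else g

z = np.zeros(d_intrinsic)
for step in range(200):
    z -= 0.01 * clip(zo_gradient(z))
```

Because the optimizer only ever perturbs and updates the 16-dimensional `z`, the number of queries needed per step is independent of the full prompt dimensionality, which is the intuition behind the reported query savings.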
The paper under review is titled "ZIP: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models."
One reviewer pointed out that the methods ZIP was compared against (like BLACKVIP and BPTVLM) were from 2023, and suggested that more recent 2024 benchmarks should have been included for a fairer comparison.
Reviewers also acknowledged that the soft-prompt reparameterization design choices were thoroughly validated through detailed ablation studies.