CLEVER: A Curated Benchmark for Formally Verified Code Generation
We introduce CLEVER, the first curated benchmark for evaluating the generation of specifications and formally verified code in Lean. The benchmark comprises 161 programming problems; it evaluates both formal specification generation and implementation synthesis from natural language, requiring formal correctness proofs for both.
CLEVER: A Curated Benchmark for Formally Verified Code Generation
TL;DR: We introduce CLEVER, a hand-curated benchmark for verified code generation in Lean. It requires full formal specs and proofs. No few-shot method solves all stages, making it a strong testbed for synthesis and formal reasoning.
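To make concrete what the two CLEVER snippets above mean by a formal specification, an implementation, and a correctness proof, here is a minimal, hypothetical Lean 4 sketch of the shape of such a task; it is an illustration under our own assumptions (names maxSpec, maxImpl, and the problem itself are invented), not a problem drawn from the benchmark, and the proof script assumes a recent Lean 4 toolchain with the omega tactic.

-- Natural-language problem (hypothetical): return the maximum of two natural numbers.

-- Formal specification: the result is one of the inputs and bounds both.
def maxSpec (a b r : Nat) : Prop :=
  (r = a ∨ r = b) ∧ a ≤ r ∧ b ≤ r

-- Candidate implementation.
def maxImpl (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- Correctness obligation: the implementation satisfies the specification.
theorem maxImpl_correct (a b : Nat) : maxSpec a b (maxImpl a b) := by
  unfold maxSpec maxImpl
  split <;> omega

Per the snippets, the benchmark requires proofs for both stages: the generated specification is itself checked, not only the implementation against it.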
The Clever Hans Mirage: A Comprehensive Survey on Spurious...
This survey on spurious correlations uses the Clever Hans metaphor to motivate the problem, formalizes a group-based setup g = (y, a) with core metrics (worst-group, average-group, bias-conflicting), and explains why models latch onto shortcuts (simplicity bias, training dynamics).
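For context, the group-based setup mentioned in this snippet is usually formalized as follows (a standard sketch of these metrics, not text from the survey itself): each example carries a label y and a spurious attribute a, groups are g = (y, a), and the worst-group accuracy of a classifier f is

\mathrm{WGA}(f) \;=\; \min_{g \in \mathcal{G}} \; \mathbb{E}_{(x, y) \sim P_g}\!\left[ \mathbf{1}\{ f(x) = y \} \right],

average-group accuracy replaces the min with an unweighted mean over groups, and bias-conflicting accuracy restricts the expectation to groups in which the attribute a does not align with the label's usual shortcut.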
CLEVER: A Curated Benchmark for Formally Verified Code Generation
This paper introduces CLEVER, a benchmark dataset designed to evaluate LLMs on formally verified code generation. It consists of 161 carefully crafted Lean specifications derived from programming problems in the existing HumanEval dataset.
Evaluating the Robustness of Neural Networks: An Extreme Value...
Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and computationally feasible for large neural networks.
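As a sketch of the bound underlying this score (stated from the standard cross-Lipschitz formulation, with notation introduced here for illustration): let c be the predicted class at input x_0 and, for a competing class j, let L_q^j be a local Lipschitz constant of the margin f_c - f_j over a ball around x_0. Then any perturbation delta with

\|\delta\|_p \;\le\; \min_{j \neq c} \frac{f_c(x_0) - f_j(x_0)}{L_q^{\,j}}, \qquad \frac{1}{p} + \frac{1}{q} = 1,

cannot change the predicted class. The CLEVER score estimates this lower bound by fitting an extreme value (reverse Weibull) distribution to sampled gradient norms of the margin, which is why it needs no specific attack.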
Contrastive Learning Via Equivariant Representation - OpenReview
In this paper, we revisit the roles of augmentation strategies and equivariance in improving CL's efficacy. We propose CLeVER (Contrastive Learning Via Equivariant Representation), a novel equivariant contrastive learning framework compatible with augmentation strategies of arbitrary complexity for various mainstream CL backbone models.
STAIR: Improving Safety Alignment with Introspective Reasoning
One common approach is training models to refuse unsafe queries, but this strategy can be vulnerable to clever prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding.
Counterfactual Debiasing for Fact Verification
In this paper, we have proposed a novel counterfactual framework CLEVER for debiasing fact-checking models. Unlike existing works, CLEVER is augmentation-free and mitigates biases at the inference stage. In CLEVER, the claim-evidence fusion model and the claim-only model are independently trained to capture the corresponding information.
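For illustration, a generic sketch of inference-stage counterfactual debiasing (the notation is ours and the exact combination rule in this paper may differ): if z^{fe}(c, e) are the logits of the claim-evidence fusion model and z^{c}(c) the logits of the claim-only model, the debiased prediction subtracts the claim-only contribution at inference,

\hat{y} \;=\; \arg\max_{k}\ \Big[ z^{fe}_{k}(c, e) \;-\; \lambda \, z^{c}_{k}(c) \Big],

so biases captured by the claim-only model are removed without any data augmentation, consistent with the snippet's "augmentation-free" description.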
Leaving the barn door open for Clever Hans: Simple features predict LLM benchmark answers
Lorenzo Pacchiardi, Marko Tesic, Lucy G. Cheke, Jose Hernandez-Orallo. 27 Sept 2024 (modified: 05 Feb 2025). Submitted to ICLR 2025.