This qualitative study examines the adoption of generative artificial intelligence (AI) in academic writing in higher education and its influence on writing quality, creativity, and ethical engagement. Using an interpretive design based on semi-structured interviews and thematic coding, the study focuses on how university students navigate the human-AI collaborative process. Results indicate that generative AI functions chiefly as a mediating resource that reduces cognitive load and enhances linguistic ability, especially among non-native speakers of English and at the first-draft stage. Findings also show that students become more engaged when generative AI lowers the technical obstacles to academic expression. However, there is substantial tension around authorship authenticity: participants face a voice dilemma in which improved mechanics come at the cost of individual expression. The study also reports risks of creative over-reliance and the homogenization of new material, with ideas confined to the patterns that generative AI models tend to reproduce. Respondents are also critically mindful of technical constraints, such as the frequency of hallucinations, with 47 per cent of AI-generated references in medical texts found to be fabricated (Bhattacharyya et al., 2023), and the unreliability of detection software (Gotoman et al., 2025; Wu and Wu, 2024). These results suggest that colleges and universities should shift from detection-based enforcement toward developing critical AI literacy, creating explicit rules that give students opportunities to use AI as a partner without compromising authorship or scholarly integrity.