As undergraduate computer science classes grow in size, institutions increasingly rely on asynchronous computer-based assessments. However, concerns about academic integrity remain. To investigate whether exam timing reveals evidence of cheating, we analyze 21,403 submissions from 51 asynchronous exams across two undergraduate courses. We extend prior research on proctored, multi-day exams by comparing student performance trends between two distinct assessment modes: on-site proctored and off-site unproctored. We find that performance declines throughout the exam window in both modes, suggesting the absence of widespread collaborative cheating, which would instead be expected to raise scores for later starters as answers leak. We observe a weak negative correlation between start time and performance, with standardized scores decreasing by 0.14 points per hour (on-site proctored) and 0.61 points per hour (off-site unproctored). In addition, start-time distributions and student surveys reveal behavioral differences between the two modes. On-site proctored exams follow a centered start-time distribution, likely influenced by a reserved lecture hour. In contrast, off-site unproctored exams show a left-skewed distribution, with most students starting later than they intended. This pattern suggests that greater scheduling flexibility leads to later exam starts, potentially exacerbating performance declines due to academic procrastination.
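To make the per-hour trend estimate concrete, the following is a minimal sketch of how such a slope might be computed; the data values, variable names, and units are hypothetical illustrations, not the authors' actual analysis pipeline:

```python
import numpy as np
from scipy import stats

# Hypothetical data: one row per submission, with the exam start time
# expressed in hours since the exam window opened, plus the raw score.
start_hours = np.array([1.5, 3.0, 6.2, 10.8, 22.4, 30.1, 41.7])
raw_scores = np.array([92.0, 88.0, 85.0, 81.0, 74.0, 70.0, 66.0])

# Standardize scores within the exam so slopes are comparable across exams.
z_scores = (raw_scores - raw_scores.mean()) / raw_scores.std(ddof=1)

# Ordinary least-squares fit: the slope is the change in standardized
# score per hour of delay in starting the exam; a negative slope means
# later starters perform worse, as the abstract reports.
result = stats.linregress(start_hours, z_scores)
print(f"slope: {result.slope:+.3f} SD/hour, "
      f"r = {result.rvalue:.2f}, p = {result.pvalue:.3f}")
```

A slope computed this way per exam, then aggregated across the 51 exams within each assessment mode, would yield per-mode estimates analogous to the 0.14 and 0.61 points-per-hour figures above, under the assumption that the paper's "standardized scores" are within-exam z-scores.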