About Early Access, Grand Prize & Qualification
Our intention is to provide Interview Navigator access to campus students in a controlled fashion. While we work on feature development, we also encourage students to participate actively in using the solution.
Eligibility for the Grand Prize Winner Event: Your First 5 Steps
#1 Attend a workshop and join the Wait List
#2 Test the AI Interview Navigator between 26th October and 10th November 2025
#3 REFER your college mates' email addresses. The minimum criterion for the Grand Prize Event is 100+ active students completing all the KPIs below.
#4 Only those who have joined the Wait List above will be eligible for the Grand Prize Event
#5 Interviews, Improvement Feedback & Video Testimonials
Note - Please expect more detailed communication in the coming days.
Achieve 3 Results KPIs
#1 Participate actively and complete a total of 30 minutes of interview time using our Interview application. A single interview can range from 5 to 15 minutes. Try different interview settings.
#2 Provide Improvement Feedback
#3 Share Video Testimonials inside our application, which may be used for future public promotion. The process will be shared with you.
Note - Cut-off: 15th November 2025, end of day (midnight), India time
1 Grand Prize Winner
Based on all the results mentioned above, the Just Placed team will choose and announce a winner.
The winner announcement is expected on or before 20th November 2025.
The Grand Prize is fully dependent on the Early Access campaign's success: to announce a Grand Prize Winner, at least 100 individual participants must complete all the KPIs listed above.
Note - Upon the winner announcement, the award redemption process and its terms and conditions will be shared with the winner.
Early Access Campaign Issues
An Early Access campaign can fail for various reasons, and we may have to cancel the Early Access event, including the Grand Prize winner announcement, midway.
Risks associated with Early Access Campaign & Grand Prize
System Performance and Stability Failures
#1 Server overload & crash:
A sudden surge in concurrent requests (or simultaneous, identical actions, like everyone submitting a form at once) can quickly overwhelm the server. This results in system failures (crashes) or timeouts and outages (HTTP 500 or 503 errors), stopping testing for everyone.
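On the tester's side, one common way to cope with these errors is to retry with exponential backoff instead of hammering an already overloaded server. Below is a minimal Python sketch of that pattern; the URL, timeout, and retry counts are illustrative assumptions, not part of the Interview Navigator API:

```python
import time
import urllib.error
import urllib.request

def backoff_delays(max_retries=4, base_delay=1.0):
    """Exponential backoff schedule: 1s, 2s, 4s, 8s with the defaults."""
    return [base_delay * (2 ** attempt) for attempt in range(max_retries)]

def fetch_with_backoff(url, max_retries=4, base_delay=1.0):
    """GET `url`, retrying on HTTP 500/503 (typical overload responses)
    with exponentially growing pauses between attempts."""
    last_err = None
    for delay in backoff_delays(max_retries, base_delay):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code not in (500, 503):
                raise            # a real client/server error, not overload
            last_err = err
        except urllib.error.URLError as err:
            last_err = err       # timeout or connection failure: retry
        time.sleep(delay)
    raise RuntimeError(f"server still overloaded after {max_retries} tries") from last_err
```

Spacing the retries out (1s, 2s, 4s, 8s) gives an overloaded server room to recover instead of amplifying the surge.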
#2 Severe performance degradation:
While the system may not crash, the load can cause extremely slow load times or high latency (delayed response times). This makes the application unusable for testing and invalidates any performance data collected.
#3 Database Bottlenecks:
The database is often the first component to fail under heavy load. Many concurrent read/write operations can cause unoptimized queries to take too long, leading to a backlog of requests and a system slowdown or crash.
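To illustrate the "unoptimized queries" point, here is a small self-contained Python sketch using the standard-library sqlite3 module (the table and row counts are invented for demonstration): without an index, every lookup scans all 200,000 rows; with an index, it touches only the matching handful, which is the difference between a query that queues up under load and one that returns immediately.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interviews (student_id INTEGER, score REAL)")
conn.executemany("INSERT INTO interviews VALUES (?, ?)",
                 [(i % 5000, i * 0.1) for i in range(200_000)])

def lookup():
    # Count one student's interviews; a full-table scan without an index.
    return conn.execute(
        "SELECT COUNT(*) FROM interviews WHERE student_id = ?", (42,)
    ).fetchone()[0]

t0 = time.perf_counter(); n_scan = lookup(); scan_s = time.perf_counter() - t0

# Adding an index lets SQLite jump straight to the matching rows.
conn.execute("CREATE INDEX idx_student ON interviews(student_id)")
t0 = time.perf_counter(); n_idx = lookup(); idx_s = time.perf_counter() - t0
# Same result either way; the indexed lookup is typically far faster.
```

Under many concurrent users, each slow scan holds resources longer, so unindexed queries compound into exactly the backlog described above.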
#4 Resource exhaustion (e.g., RAM): The server may run out of key resources such as available memory (RAM), or hit the limit on the number of entry processes (simultaneous scripts) it can handle.
User/Client-side Technical Issues
#1 Unstable internet connectivity:
An unstable or slow internet connection for individual users can lead to disconnections, loss of progress, or failure to load test pages.
#2 Browser/Device compatibility:
The testing platform may not be compatible with all the operating systems, browsers, or device types (e.g., older laptops or different mobile phones) the users are using, creating a barrier to entry.
#3 Client-side technical flaws:
A user's device might have hardware or software issues, like low RAM, an incompatible operating system, or a slow CPU, which can prevent the application from running smoothly for them.
#4 Cache/Session Conflicts: With many users logging in and performing actions, issues like improper session handling or cache conflicts on the user's end can lead to incorrect data or log-in errors.
Data and Test Environment Issues
#1 Data Conflicts/Contentions:
If multiple testers are accessing and trying to modify the same test data record simultaneously, it can lead to locks, errors, or data corruption that stops the test. This is especially true for simultaneous users initiating identical, critical actions (e.g., reserving the last item).
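The "last item" race can be avoided by making the check and the update a single atomic statement, so that only one of the simultaneous callers succeeds. A hedged sketch using Python's sqlite3 (the schema is invented for illustration, not the Interview Navigator's actual data model):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE slots (id INTEGER PRIMARY KEY, remaining INTEGER)")
conn.execute("INSERT INTO slots VALUES (1, 1)")   # one slot left
conn.commit()

def reserve(conn, slot_id):
    """Atomically decrement `remaining` only if a slot is still free.
    Because the check (remaining > 0) and the update happen in one
    statement, two simultaneous callers cannot both take the last slot."""
    cur = conn.execute(
        "UPDATE slots SET remaining = remaining - 1 "
        "WHERE id = ? AND remaining > 0", (slot_id,))
    conn.commit()
    return cur.rowcount == 1   # True only for the caller that won

first = reserve(conn, 1)    # takes the last slot
second = reserve(conn, 1)   # finds nothing left
```

The same conditional-update idea (or explicit row locks, or an optimistic version column) applies to any SQL database and prevents the corruption and lock errors described above.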
#2 Unstable Test Environment: If the dedicated test environment is unstable, constantly updated, or being shared with other development activities, it can be unpredictable and lead to flaky tests (intermittent pass/fail results) that can't be reliably completed or analysed.
#3 Setup and Login Failures: The sheer scale of users logging in at once can expose weaknesses in the login process or user provisioning system, causing many testers to be unable to start.