author    Jason Wessel <jason.wessel@windriver.com>  2009-07-28 13:31:46 -0500
committer Jason Wessel <jason.wessel@windriver.com>  2009-07-28 13:31:46 -0500
commit    35da5c2084d5fb3d7b754ecd71dc41264f80780d (patch)
tree      ea4da59ce0dc35a16a11fbaf7464f920ad04dc96 /kernel
parent    4733fd328f14280900435d9dbae1487d110a4d56 (diff)
softlockup: add sched_clock_tick() to avoid kernel warning on kgdb resume
When CONFIG_HAVE_UNSTABLE_SCHED_CLOCK is set, sched_clock() gets the
time from hardware, such as the TSC. In this configuration kgdb will
report softlockup warning messages on resuming or detaching from a
debug session.
Sequence of events in the problem case:
1) "cpu sched clock" and "hardware time" are at 100 sec prior
to a call to kgdb_handle_exception()
2) Debugger waits in kgdb_handle_exception() for 80 sec and on exit
the following is called ... touch_softlockup_watchdog() -->
__raw_get_cpu_var(touch_timestamp) = 0;
3) "cpu sched clock" = 100 sec (it was not updated, because interrupts
   were disabled in kgdb), but the "hardware time" = 180 sec
4) The first timer interrupt after resuming from kgdb_handle_exception
updates the watchdog from the "cpu sched clock"
   update_process_times() {
       ...
       run_local_timers() --> softlockup_tick()
           --> check (touch_timestamp == 0)
               (it is "YES" here; we set "touch_timestamp = 0" in kgdb)
           --> __touch_softlockup_watchdog()
   ***(A)     --> reset "touch_timestamp" to "get_timestamp()"
               (here "touch_timestamp" will still be set to 100 sec)
       ...
       scheduler_tick()
   ***(B) --> sched_clock_tick()
               (updates "cpu sched clock" to "hardware time" = 180 sec)
       ...
   }
5) To the second timer interrupt handler the clock appears to have made
   a large jump, which trips the softlockup warning.
   update_process_times() {
       ...
       run_local_timers() --> softlockup_tick()
           --> "cpu sched clock" - "touch_timestamp" = 180 sec - 100 sec > 60 sec
           --> printk "soft lockup error messages"
       ...
   }
Note on ***(A), where "touch_timestamp" is reset to
"get_timestamp(this_cpu)": why is "touch_timestamp" 100 sec instead of
180 sec?
With CONFIG_HAVE_UNSTABLE_SCHED_CLOCK set, the call trace of
get_timestamp() is:
get_timestamp(this_cpu)
    --> cpu_clock(this_cpu)
        --> sched_clock_cpu(this_cpu)
            --> __update_sched_clock(sched_clock_data, now)
The __update_sched_clock() function uses the GTOD tick value to create
a window to normalize the "now" value. So if the "now" value is too big
for sched_clock_data, it will be ignored.
The fix is to invoke sched_clock_tick() to update the "cpu sched clock"
in order to recover from this state. This is done by introducing the
function touch_softlockup_watchdog_sync(), which allows kgdb to
request that the sched clock be updated when the watchdog runs for
the first time after a resume from kgdb.
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Signed-off-by: Dongdong Deng <Dongdong.Deng@windriver.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: peterz@infradead.org
Diffstat (limited to 'kernel')
 -rw-r--r--  kernel/kgdb.c       |  6
 -rw-r--r--  kernel/softlockup.c | 16
 2 files changed, 19 insertions, 3 deletions
diff --git a/kernel/kgdb.c b/kernel/kgdb.c
index 9147a3190c9d..f44a5a57a635 100644
--- a/kernel/kgdb.c
+++ b/kernel/kgdb.c
@@ -590,7 +590,7 @@ static void kgdb_wait(struct pt_regs *regs)
 	/* Signal the primary CPU that we are done: */
 	atomic_set(&cpu_in_kgdb[cpu], 0);
-	touch_softlockup_watchdog();
+	touch_softlockup_watchdog_sync();
 	clocksource_touch_watchdog();
 	local_irq_restore(flags);
 }
@@ -1433,7 +1433,7 @@ acquirelock:
 		    atomic_read(&kgdb_cpu_doing_single_step) != cpu) {
 			atomic_set(&kgdb_active, -1);
-			touch_softlockup_watchdog();
+			touch_softlockup_watchdog_sync();
 			clocksource_touch_watchdog();
 			local_irq_restore(flags);
@@ -1526,7 +1526,7 @@ acquirelock:
 kgdb_restore:
 	/* Free kgdb_active */
 	atomic_set(&kgdb_active, -1);
-	touch_softlockup_watchdog();
+	touch_softlockup_watchdog_sync();
 	clocksource_touch_watchdog();
 	local_irq_restore(flags);
diff --git a/kernel/softlockup.c b/kernel/softlockup.c
index 88796c330838..0bad4f900e45 100644
--- a/kernel/softlockup.c
+++ b/kernel/softlockup.c
@@ -79,6 +79,14 @@ void touch_softlockup_watchdog(void)
 }
 EXPORT_SYMBOL(touch_softlockup_watchdog);
+static int softlock_touch_sync[NR_CPUS];
+
+void touch_softlockup_watchdog_sync(void)
+{
+	softlock_touch_sync[raw_smp_processor_id()] = 1;
+	__raw_get_cpu_var(touch_timestamp) = 0;
+}
+
 void touch_all_softlockup_watchdogs(void)
 {
 	int cpu;
@@ -118,6 +126,14 @@ void softlockup_tick(void)
 	}
 	if (touch_timestamp == 0) {
+		if (unlikely(softlock_touch_sync[this_cpu])) {
+			/*
+			 * If the time stamp was touched atomically
+			 * make sure the scheduler tick is up to date.
+			 */
+			softlock_touch_sync[this_cpu] = 0;
+			sched_clock_tick();
+		}
 		__touch_softlockup_watchdog();
 		return;
 	}