You will need the cookie plugin, which provides several additional signatures for the cookie function.

$.cookie('cookie_name', 'cookie_value')

stores a transient cookie (it exists only for the duration of the session), while

$.cookie('cookie_name', 'cookie_value', 'cookie_expiration')

creates a cookie that persists across sessions; see http://www.stilbuero.de/2006/09/17/cookie-plugin-for-jquery/ for more information on the jQuery cookie plugin.

If you want to set cookies that are used across the whole site, you will need to use JavaScript like this:

document.cookie = "name=value; expires=date; domain=domain; path=path; secure"
Single opcodes are thread-safe because of the GIL but nothing else:
import threading
import time

class something(object):
    def __init__(self, c):
        self.c = c

    def inc(self):
        new = self.c + 1
        # if the thread is interrupted by another inc() call, its result is wrong
        time.sleep(0.001)  # the sleep makes the OS continue another thread
        self.c = new

x = something(0)
threads = [threading.Thread(target=x.inc) for _ in range(10000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(x.c)  # far less than 10000, because most increments are lost to the race
Every resource shared by multiple threads must have a lock.
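A minimal sketch of applying that rule to the class above: give the object its own lock and hold it across the whole read-modify-write, so the interleaving that loses increments can no longer happen (the thread count is reduced here only to keep the serialized sleeps short):

```python
import threading
import time

class something(object):
    def __init__(self, c):
        self.c = c
        self.lock = threading.Lock()

    def inc(self):
        with self.lock:
            # the whole read-modify-write is now one critical section
            new = self.c + 1
            time.sleep(0.001)  # the sleep no longer lets another inc() interleave
            self.c = new

x = something(0)
threads = [threading.Thread(target=x.inc) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(x.c)  # 100: no increments lost
```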
No, this code is absolutely and demonstrably not thread-safe.
import threading

i = 0

def test():
    global i
    for x in range(100000):
        i += 1

threads = [threading.Thread(target=test) for t in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert i == 1000000, i

It fails consistently.
i += 1 resolves to four opcodes: load i, load 1, add the two, and store the result back to i. The Python interpreter switches active threads (releasing the GIL from one thread so another can acquire it) every 100 opcodes. (Both of these are implementation details.) The race condition occurs when the 100-opcode preemption happens between the load and the store, letting another thread start incrementing the counter. When execution returns to the suspended thread, it continues with the old value of "i" and undoes the increments performed by the other threads in the meantime.
Making it thread-safe is simple; add a lock:
#!/usr/bin/python
import threading

i = 0
i_lock = threading.Lock()

def test():
    global i
    i_lock.acquire()
    try:
        for x in range(100000):
            i += 1
    finally:
        i_lock.release()

threads = [threading.Thread(target=test) for t in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert i == 1000000, i
(Note: you would need global c in each function to make your code work.)
Is this code thread safe?
No. Only a single bytecode instruction is ‘atomic’ in CPython, and a +=
may not result in a single opcode, even when the values involved are simple integers:
>>> c = 0
>>> def inc():
...     global c
...     c += 1
...
>>> import dis
>>> dis.dis(inc)
  3           0 LOAD_GLOBAL              0 (c)
              3 LOAD_CONST               1 (1)
              6 INPLACE_ADD
              7 STORE_GLOBAL             0 (c)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
So one thread could get to index 6 with c and 1 loaded, give up the GIL and let another thread in, which executes an inc
and sleeps, returning the GIL to the first thread, which now has the wrong value.
In any case, what's atomic is an implementation detail which you shouldn't rely on. Bytecodes may change in future versions of CPython, and the results will be totally different in other implementations of Python that do not rely on a GIL. If you need thread safety, you need a locking mechanism.
It's easy to prove that your code is not thread-safe. You can increase the likelihood of seeing the race condition by adding a sleep in the critical parts (this simply simulates a slow CPU). However, if you run the code for long enough, you should eventually see the race condition regardless.
from time import sleep

c = 0

def increment():
    global c
    c_ = c
    sleep(0.1)
    c = c_ + 1

def decrement():
    global c
    c_ = c
    sleep(0.1)
    c = c_ - 1
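A minimal driver for the functions above makes the race visible on the very first run: both threads read c before either writes, so the last writer wins and the net effect is not zero as it would be with correct synchronisation:

```python
import threading
from time import sleep

c = 0

def increment():
    global c
    c_ = c          # read
    sleep(0.1)      # widen the read-modify-write window
    c = c_ + 1      # write back a (by now stale) value

def decrement():
    global c
    c_ = c
    sleep(0.1)
    c = c_ - 1

# One increment and one decrement run concurrently; with a lock the net
# effect on c would be 0.  Because both threads read c = 0 before either
# writes, the last write wins and c ends up at +1 or -1.
t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=decrement)
t1.start()
t2.start()
t1.join()
t2.join()
print(c)  # +1 or -1, never 0
```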
Short answer: no.
Long answer: generally not.
While CPython's GIL makes single opcodes thread-safe, this is not general behaviour. You may not assume that even a simple operation like addition is an atomic instruction. The addition may be only half done when another thread runs.

And as soon as your functions access a variable in more than one opcode, your thread safety is gone. You can achieve thread safety if you wrap your function bodies in locks. But be aware that locks are computationally costly and may lead to deadlocks.
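A minimal sketch of what "wrap your function bodies in locks" means in practice, using a module-level counter and a with-statement (which acquires the lock on entry and releases it on exit, even on exceptions):

```python
import threading

c = 0
c_lock = threading.Lock()

def increment():
    global c
    with c_lock:   # only one thread at a time executes this body
        c += 1

threads = [threading.Thread(target=increment) for _ in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c)  # 1000: the lock serializes every read-modify-write
```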
If you actually want to make your code not thread-safe, and have a good chance of "bad" stuff actually happening without trying ten thousand times (or the one time when you really don't want "bad" stuff to happen), you can 'jitter' your code with explicit sleeps:
from time import sleep

def increment():
    global c
    x = c
    sleep(0.1)
    c = x + 1
Are you sure that the functions increment and decrement execute without any error?
I think it should raise an UnboundLocalError because you have to explicitly tell Python that you want to use the global variable named 'c'.
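A quick way to check this claim: because increment assigns to c, CPython compiles c as a local variable of the function, so the read in c += 1 happens before any local assignment and fails (a minimal sketch):

```python
c = 0

def increment():
    c += 1  # c is assigned here, so Python treats it as a *local* variable

raised = False
try:
    increment()
except UnboundLocalError:
    # the read of c in "c += 1" ran before any local assignment to c
    raised = True

print(raised)  # True
```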
So change increment ( also decrement ) to the following:
def increment():
    global c
    c += 1
I think that, as is, your code is not thread-safe. This article about thread synchronisation mechanisms in Python may be helpful.