
python - Screenshots during a PsychoPy experiment


I am trying to capture timed screenshots during my psychology task. I have a fixation cross, then 2 faces on the left and right sides of the screen, then a dot. I only want screenshots of the 1-second period when the two faces are on screen. There are 10 different pairs of faces in the routine, and the routine loops 3 times. Ideally, I would like this code to save 30 images to my computer. My code so far is below:

from __future__ import division  # so that 1/3=0.333 instead of 1/3=0
from psychopy import visual, core, data, event, logging, sound, gui
from psychopy.constants import * # things like STARTED, FINISHED
import numpy as np # whole numpy lib is available, prepend 'np.'
from numpy import sin, cos, tan, log, log10, pi, average, sqrt, std, deg2rad, rad2deg, linspace, asarray
from numpy.random import random, randint, normal, shuffle
import os # handy system and path functions

import socket
import time

# Store info about the experiment session
expName = 'DotProbe_EyeTracker_BSchool'
expInfo = {u'session': u'001', u'participant': u''}
dlg = gui.DlgFromDict(dictionary=expInfo, title=expName)
if dlg.OK == False: core.quit() # user pressed cancel
expInfo['date'] = data.getDateStr() # add a simple timestamp
expInfo['expName'] = expName

# Setup files for saving
if not os.path.isdir('data'):
    os.makedirs('data')  # if this fails (e.g. permissions) we will get an error
filename = 'data' + os.path.sep + '%s_%s' %(expInfo['participant'], expInfo['date'])
logFile = logging.LogFile(filename+'.log', level=logging.EXP)
logging.console.setLevel(logging.WARNING) # this outputs to the screen, not a file

# An ExperimentHandler isn't essential but helps with data saving
thisExp = data.ExperimentHandler(name=expName, version='',
    extraInfo=expInfo, runtimeInfo=None,
    originPath=None,
    savePickle=True, saveWideText=True,
    dataFileName=filename)

# Start Code - component code to be run before the window creation

# Setup the Window
win = visual.Window(size=(1366, 768), fullscr=True, screen=0, allowGUI=False,
    allowStencil=False, monitor='testMonitor', color=[-1,-1,-1], colorSpace='rgb')
myClock = core.Clock()

# store frame rate of monitor if we can measure it successfully
expInfo['frameRate']=win.getActualFrameRate()
if expInfo['frameRate'] != None:
    frameDur = 1.0/round(expInfo['frameRate'])
else:
    frameDur = 1.0/60.0  # couldn't get a reliable measure so guess

# Initialize components for Routine "instructions"
instructionsClock = core.Clock()
text = visual.TextStim(win=win, ori=0, name='text',
    text='Respond to the probe once it appears. Either click "2" when probe replaces left face or click "3" when probe replaces right face.', font='Arial',
    pos=[0, 0], height=0.1, wrapWidth=None,
    color='white', colorSpace='rgb', opacity=1,
    depth=0.0)

# Initialize components for Routine "block1"
block1Clock = core.Clock()
fixation = visual.TextStim(win=win, ori=0, name='fixation',
    text='+', font='Arial',
    pos=[0, 0], height=0.1, wrapWidth=None,
    color='white', colorSpace='rgb', opacity=1,
    depth=0.0)

leftimage = visual.ImageStim(win=win, name='leftimage',
    image='sin', mask=None,
    ori=0, pos=[0,0], size=[1, 1.34],
    color=[1,1,1], colorSpace='rgb', opacity=1,
    texRes=128, interpolate=False, depth=-1.0)

rightimage = visual.ImageStim(win=win, name='rightimage',
    image='sin', mask=None,
    ori=0, pos=[0,0], size=[1, 1.34],
    color=[1,1,1], colorSpace='rgb', opacity=1,
    texRes=128, interpolate=False, depth=-2.0)

probe = visual.ImageStim(win=win, name='probe',
    image='sin', mask=None,
    ori=0, pos=[0,0], size=[0.5, 0.5],
    color=[1,1,1], colorSpace='rgb', opacity=1,
    texRes=128, interpolate=False, depth=-3.0)

# Get and save a "screenshot" of everything in stimlist:
stimlist = [leftimage, rightimage]
t0 = myClock.getTime()
rect=(-1,1,1,-1)
screenshot = visual.BufferImageStim(win, stim=stimlist, rect=rect)
# rect is the screen rectangle to grab, (-1,1,1,-1) is whole screen
# as a list of the edges: Left Top Right Bottom, in norm units.

# Create some handy timers
globalClock = core.Clock() # to track the time since experiment started
routineTimer = core.CountdownTimer() # to track time remaining of each (non-slip) routine

Best Answer

Use win.getMovieFrame and win.saveMovieFrames as others have suggested. You do not need visual.BufferImageStim. By the time the script is finished you will probably have a loop over conditions, so I would take the screenshots while the actual experiment is running rather than "simulating" it beforehand. That ensures the screenshots depict exactly what happened during the experiment, even if you made a mistake and drew something incorrectly :-) Of course, if the screenshots are purely for documentation, remove/comment out those lines when running the real experiment to improve performance.

# Loop through trials. You may organize them using ``data.TrialHandler`` or generate them yourself.
for trial in myTrialList:
    # Draw whatever you need, probably dependent on the condition. E.g.:
    if trial['condition'] == 'right':
        rightimage.draw()
    else:
        leftimage.draw()
    fixation.draw()

    # Show your stimulus
    win.flip()

    # Save a screenshot. Maybe comment out these lines during production.
    win.getMovieFrame()  # defaults to the front buffer, i.e. what is on screen now
    win.saveMovieFrames('screenshot' + trial['condition'] + '.png')  # save with a descriptive and unique filename
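
One caveat with the snippet above: the filename only encodes the condition, so captures from later repetitions of the same face pair would overwrite earlier ones. Below is a minimal, self-contained sketch (not from the original answer) of how one might number each capture so that 10 face pairs looped 3 times end up as 30 separate PNG files. The conditions file 'facePairs.xlsx', its columns 'leftFace'/'rightFace', and the stimulus positions are illustrative assumptions; adapt them to your own setup.

import os
from psychopy import visual, core, data

win = visual.Window(size=(1366, 768), fullscr=True, monitor='testMonitor',
    color=[-1, -1, -1], colorSpace='rgb')

# 10 face pairs, looped 3 times -> 30 trials in total
trials = data.TrialHandler(data.importConditions('facePairs.xlsx'),
    nReps=3, method='random')

# Folder for the screenshots
screenshotDir = os.path.join('data', 'screenshots')
if not os.path.isdir(screenshotDir):
    os.makedirs(screenshotDir)

leftimage = visual.ImageStim(win, pos=[-0.5, 0], size=[0.6, 0.8])
rightimage = visual.ImageStim(win, pos=[0.5, 0], size=[0.6, 0.8])
fixation = visual.TextStim(win, text='+', height=0.1)

for trialN, trial in enumerate(trials):
    # Assign this trial's face images (column names are assumed)
    leftimage.image = trial['leftFace']
    rightimage.image = trial['rightFace']

    leftimage.draw()
    rightimage.draw()
    fixation.draw()
    win.flip()  # the two faces are now on screen

    # Grab the front buffer (what is on screen right now) and save it
    win.getMovieFrame()
    win.saveMovieFrames(os.path.join(screenshotDir, 'trial%02d.png' % trialN))

    core.wait(1.0)  # keep the faces up for the 1 s window
    win.flip()      # clear the screen before the probe appears

win.close()
core.quit()

Because getMovieFrame() defaults to the front buffer, it is called after win.flip(); passing buffer='back' instead would grab the frame just before it is flipped to the screen.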

Regarding python - Screenshots during a PsychoPy experiment, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/25233907/
