Back in my favorite weekend uniform, i.e. a pair of distressed jeans, an oversized shirt, and fun heels. Funny, but of all the things I wear, this particular combination is the one I feel most comfortable in. And sure, high heels are not exactly synonymous with comfort, but you know me, I am not a flats kind of gal. In fact, I've gotten so used to wearing heels that anything below 3 inches is comfortable and therefore qualifies as weekend wear... Though for pumps as gorgeous as the ones from Paul Andrew I am wearing in today's post, I gladly break the 3-inch rule :)
It is generally accepted that many of Beethoven's works were shaped by his hearing loss, but a research team in the United States recently suggested that Beethoven may also have suffered from heart disease. The study found that certain passages in his timeless music may reflect the irregular beating of a heart affected by arrhythmia.
The team of musicians, cardiologists, and medical historians from the University of Michigan and the University of Washington analyzed several of Beethoven's works.
Joel Howell, a researcher at the University of Michigan Medical School, explained that when a person's heart beats too fast, too slow, or irregularly because of an arrhythmia, it follows certain predictable patterns. After studying Beethoven's scores, the team found many abrupt shifts in rhythm and key that appear to match these asymmetric patterns.
Zachary Goldberger, an assistant professor of cardiology at the University of Washington School of Medicine, pointed out that in the cavatina, the fifth movement of the String Quartet No. 13, the music shifts to C-flat major at a pivotal moment, and the unbalanced rhythm evokes sadness and disorientation that "may stem from shortness of breath associated with an abnormal heart rhythm." The opening of the Piano Sonata No. 26 in E-flat major, Op. 81a ("Les Adieux"), likewise reflects the composer's irregular heartbeat.
Goldberger said these works are like a "musical electrocardiogram," rendering the fluctuations of an ECG in music.
The team believes that the composer's hearing loss may have heightened his other senses, making him more aware of his own arrhythmia, and that this arrhythmia may have helped shape some of his greatest works.
News source: medicalnewstoday
Drawings of motorcycles by artist Carter Asmann, using the stains left by coffee cups. More below.
View the whole post: Drawings of Motorcycles Using Coffee Cup Stains by Artist Carter Asmann over on BOOOOOOOM!.
I feel like every time I sit down to actually start using Swift, I’m constantly battling with the language. My latest challenge: creating a backing buffer of pixel data that will be rendered to the screen.
This really shouldn’t be a difficult thing to do, so hopefully someone can point out the obvious thing I’m doing wrong in Swift that makes my performance so terrible.
Here’s the basic code that I wanted to profile (of course, there is a version for each of the languages):
    void RenderWeirdGradient(BackingBuffer *buffer)
    {
        for (int y = 0; y < buffer->height; ++y) {
            int row = y * buffer->width;
            for (int x = 0; x < buffer->width; ++x) {
                Pixel p = { 0, y + GreenOffset, x + BlueOffset, 255 };
                buffer->data[row + x] = p;
            }
        }
    }
    - (void)drawRect:(NSRect)dirtyRect
    {
        CFTimeInterval start = CACurrentMediaTime();
        if (buffer.data) {
            RenderWeirdGradient(&buffer);
            CGContextRef contextRef = [[NSGraphicsContext currentContext] graphicsPort];
            assert(contextRef != NULL);
            NSData *data = [NSData dataWithBytesNoCopy:buffer.data
                                                length:buffer.width * buffer.height * sizeof(Pixel)
                                          freeWhenDone:NO];
            CGDataProviderRef providerRef = CGDataProviderCreateWithCFData((CFDataRef)data);
            CGImageRef imageRef = CGImageCreate(buffer.width, buffer.height,
                                                buffer.bitsPerComponent,
                                                buffer.bitsPerPixel,
                                                sizeof(Pixel) * buffer.width,
                                                buffer.colorSpace,
                                                buffer.bitmapInfo,
                                                providerRef,
                                                NULL,
                                                true,
                                                kCGRenderingIntentDefault);
            CGContextDrawImage(contextRef, NSMakeRect(0, 0, buffer.width, buffer.height), imageRef);
            CGImageRelease(imageRef);
            CGDataProviderRelease(providerRef);
        }
        CFTimeInterval elapsed = CACurrentMediaTime() - start;
        NSLog(@"elapsed: %f", elapsed);
    }
Hopefully the code is straightforward: it just renders a window that looks like the picture below, with the pattern moving on each update call.
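As for how the pattern actually moves: the color offsets get nudged and the view is asked to redraw. A minimal sketch of one way to drive that, in Swift; the timer approach and the names blueOffset, greenOffset, and tick are my own assumptions, not necessarily what the linked projects do:

    // Hedged sketch: advance the color offsets and invalidate the view
    // roughly 60 times a second. The real update mechanism is in the
    // linked projects; this is just one plausible way to do it.
    var blueOffset = 0
    var greenOffset = 0

    func startAnimating() {
        NSTimer.scheduledTimerWithTimeInterval(1.0 / 60.0, target: self,
            selector: "tick", userInfo: nil, repeats: true)
    }

    func tick() {
        blueOffset += 2
        greenOffset += 1
        needsDisplay = true   // triggers drawRect:, which re-renders the buffer
    }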
Let’s just start with the timings…
| Language             | Timing   |
|----------------------|----------|
| ObjC (NO ARC, -O0)   | 0.01137s |
| ObjC (NO ARC, -Os)   | 0.01051s |
| ObjC (ARC, -O0)      | 0.01159s |
| ObjC (ARC, -Os)      | 0.00983s |
| Swift * (ARC, -O0)   | 0.03005s |
| Swift * (ARC, -Os)   | 0.01707s |
| Swift [] (ARC, -O0)  | 1.19796s |
| Swift [] (ARC, -Os)  | 0.02701s |
The “Swift *” rows use a version that handles the buffer with an UnsafeMutablePointer<UInt8>. The “Swift []” rows use a version backed by an array of pixel data, where Pixel is simply a struct:
    struct Pixel {
        var red: Byte = 0x00
        var green: Byte = 0x00
        var blue: Byte = 0x00
        var alpha: Byte = 0xFF
    }
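The pointer-based “Swift *” variant isn’t shown here; it looks roughly like this (a sketch only; the function name and the buffer fields data, width, and height are my assumptions, and the actual code is in the repository linked at the bottom):

    // Rough sketch of the UnsafeMutablePointer<UInt8> ("Swift *") variant.
    // Assumes buffer.data is an UnsafeMutablePointer<Byte> sized width * height * 4.
    func renderWeirdGradientPointer(blueOffset: Int, _ greenOffset: Int) {
        let bytesPerPixel = 4
        for var y = 0; y < buffer.height; y++ {
            let rowStart = y * buffer.width * bytesPerPixel
            for var x = 0; x < buffer.width; x++ {
                let offset = rowStart + x * bytesPerPixel
                buffer.data[offset]     = 0                               // red
                buffer.data[offset + 1] = Byte((y + greenOffset) & 0xFF)  // green
                buffer.data[offset + 2] = Byte((x + blueOffset) & 0xFF)   // blue
                buffer.data[offset + 3] = 255                             // alpha
            }
        }
    }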
OK… that is not a freaking typo up there. It literally takes 1.2 SECONDS to render each frame. Both ObjC versions did it in 0.01s and even the pointer version of Swift only took 0.03 seconds.
THAT IS 120 TIMES SLOWER!!!!
I can hear you now: oh, that’s a debug build, you should always do your performance profiling with optimizations on.
My answer is this: I’m not trying to profile the app, I’m simply trying to build a small game. I cannot do it because the performance is so flipping terrible in debug mode. Trying to debug your app with optimized code is just a pain. Sure, at the end of the project, that’s probably the right thing to be doing, but until you get there, builds without optimizations are just so much easier to work with.
So what are we to do?
Well… if you want to use Swift, you have to bang your head against your desk and try different things until something gets better. In this case, I can make the performance significantly better (though, let’s be honest here, it’s still 37 TIMES slower than the ObjC version):
    func renderWeirdGradient(blueOffset: Int, _ greenOffset: Int) {
        for var y = 0; y < buffer.height; y++ {
            let row = y * buffer.width
            for var x = 0; x < buffer.width; x++ {
                // Simply using the "FAST" code block changes the timing from 1.2s
                // per call to 0.37s per call.

                // ---- SLOW CODE BLOCK
                buffer.pixels[row + x].green = Byte((y + greenOffset) & 0xFF)
                buffer.pixels[row + x].blue = Byte((x + blueOffset) & 0xFF)
                // ---- END SLOW CODE BLOCK

                // ---- FASTER CODE BLOCK
                let pixel = Pixel(
                    red: 0,
                    green: Byte((y + greenOffset) & 0xFF),
                    blue: Byte((x + blueOffset) & 0xFF),
                    alpha: 255)
                buffer.pixels[row + x] = pixel
                // ---- END FASTER CODE BLOCK
            }
        }
        self.needsDisplay = true
    }
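For what it’s worth, the Swift numbers in the table presumably come from the same kind of measurement as the ObjC drawRect: above. A minimal sketch of that timing wrapper, with the CGImage construction elided and the property names assumed:

    // Hedged sketch: time each frame the same way the ObjC drawRect: does.
    override func drawRect(dirtyRect: NSRect) {
        let start = CACurrentMediaTime()

        renderWeirdGradient(blueOffset, greenOffset)
        // ... build a CGImage from buffer.pixels and draw it into the current
        //     context, mirroring the ObjC version above ...

        let elapsed = CACurrentMediaTime() - start
        NSLog("elapsed: %f", elapsed)
    }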
I ran into this same type of stuff when I was working on the JSON parser. The fundamental problem with Swift, with regards to performance, is that it is impossible to reason about what the performance of your code is going to be. It is impossible to know if your code is just horrendously slow in debug, or if the optimizer is going to get rid of all of the extra crap that is going on.
This sucks.
Seriously, what are we supposed to be doing here? Is it actually possible to build highly responsive apps that process a lot of complex information in Swift?
If most of your code is Swift code that is simply bridging into ObjC or C, you will probably read this post and not have a clue what I’m talking about. You are benefiting from the speed of ObjC.
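That bridging escape hatch is also the obvious workaround here: keep the hot loop in C and just call it from Swift. A minimal sketch, assuming the C RenderWeirdGradient and BackingBuffer declarations are exposed through the project’s bridging header:

    // Hedged sketch: with BackingBuffer and RenderWeirdGradient(_:) imported
    // via the bridging header, Swift pays for a single function call per frame
    // and the per-pixel loop runs at C speed.
    var buffer = BackingBuffer()   // width/height/data set up elsewhere

    func renderFrame() {
        RenderWeirdGradient(&buffer)
    }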
The sad state of affairs is that, today, Swift is slow. I don’t see that changing by next WWDC.
Swift Source: https://github.com/owensd/handmade-swift/tree/day005_badperf
ObjC Projects: BackingBufferObjC.zip
In case you are curious, here’s my system spec:
Model Name: MacBook Pro
Model Identifier: MacBookPro10,1
Processor Name: Intel Core i7
Processor Speed: 2.7 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 8 MB
Memory: 16 GB
Boot ROM Version: MBP101.00EE.B05
SMC Version (system): 2.3f36