Search results “Gcc optimization options”
GCC/Clang Optimizations for Embedded Linux - Khem Raj, Comcast RDK
 
54:05
GCC/Clang Optimizations for Embedded Linux - Khem Raj, Comcast RDK This talk will cover how the GCC and Clang/LLVM compilers can boost Embedded Linux development by optimizing for size and performance on constrained systems. It will also cover the specific command-line options available for tuning programs for power/performance/size and how these optimizations impact each other. It will also discuss how we can get better code by helping the compilers, i.e. by writing "friendly" code. It will focus primarily on C but will also cover C++. Since multiple architectures support Embedded Linux, we will also discuss architecture-specific tunings and optimizations that can be taken advantage of. About Khem Raj: Working on deploying Yocto Project/OpenEmbedded into Comcast's community Reference Design Kit for STB, Gateway and IoT platforms. Working on designing optimal open source software development and contribution procedures. Previously worked at Juniper, where he was responsible for creating and maintaining the Linux base operating system for the upcoming Junos (Juniper's network operating system), which was also based on the Yocto Project. He is a contributor and maintainer for pieces of OpenEmbedded and the Yocto Project. He last spoke at ELCE Berlin in 2016.
Views: 1625 The Linux Foundation
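As a rough illustration of the size/speed trade-off this talk covers, the toy C routine below can be compiled with different GCC flag sets and the resulting object sizes compared; the file name and flag choices are illustrative assumptions, not taken from the talk.

    /* lookup.c -- toy routine for comparing -Os and -O2/-O3 output sizes */
    #include <stdint.h>

    uint32_t checksum(const uint8_t *buf, uint32_t len)
    {
        uint32_t sum = 0;
        for (uint32_t i = 0; i < len; ++i)
            sum += buf[i];
        return sum;
    }

    /* Possible experiments (illustrative only):
     *   gcc -Os -c lookup.c                  # optimize for size
     *   gcc -O2 -c lookup.c                  # balanced speed optimization
     *   gcc -O3 -funroll-loops -c lookup.c   # speed, usually larger code
     *   size lookup.o                        # compare section sizes between builds
     */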
CppCon 2016: Tim Haines “Improving Performance Through Compiler Switches..."
 
01:06:22
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/cppcon/cppcon2016 — Much attention has been given to what modern optimizing compilers can do with your code, but little is ever said as to how to make the compiler invoke these optimizations. Of course, the answer is compiler switches! But which ones are needed to generate the best code? How many switches does it take to get the best performance? How do different compilers compare when using the same set of switches? I explore all of these questions and more to shed light on the interplay between C++ compilers and modern hardware drawing on my work in high performance scientific computing. Enabling modern optimizing compilers to exploit current-generation processor features is critical to success in this field. Yet, modernizing aging codebases to utilize these processor features is a daunting task that often results in non-portable code. Rather than relying on hand-tuned optimizations, I explore the ability of today's compilers to breathe new life into old code. In particular, I examine how industry-standard compilers like those from gcc, clang, and Intel perform when compiling operations common to scientific computing without any modifications to the source code. Specifically, I look at streaming data manipulations, reduction operations, compute-intensive loops, and selective array operations. By comparing the quality of the code generated and time to solution from these compilers with various optimization settings for several different C++ implementations, I am able to quantify the utility of each compiler switch in handling varying degrees of abstractions in C++ code. Finally, I measure the effects of these compiler settings on the up-and-coming industrial benchmark High Performance Conjugate Gradient that focuses more on the effects of the memory subsystem than current benchmarks like the traditional High Performance LinPACK suite. — Tim Haines University of Wisconsin-Madison PhD Candidate Madison, WI I am a third-year PhD candidate working in computational astrophysics. My undergraduate work was in computer science, physics, and mathematics, and I have an M.S. in physics. Fundamentally, my interests lie in developing software systems to try to answer difficult scientific questions combining modern parallel programming techniques in C++ with heterogeneous and massively parallel hardware. As such, I have a keen interest in the application of high performance computing to scientific problems (often called "scientific computing"). I spend most of my days attempting to design and build flexible, abstract software for parallel hardware in C++. Currently, I am part of a collaboration including the University of Washington and the University of Illinois at Urbana-Champagne working on the development of the cosmological N-body code CHArm N-body GrAvity solver (ChaNGa). Although it has excellent scaling properties (up to 512K processors with 93% efficiency), the node-level performance is sub-optimal. I am now working with a CS PhD candidate at UIUC to replace much of the C++98 codebase with C++11 and incorporate GPU computing using the CUDA runtime. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 16000 CppCon
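A simple C reduction loop like the one below is the kind of test case such comparisons use; the switch sets in the comments are plausible examples of what one might try, not the ones from Tim Haines' slides.

    /* reduce.c -- a reduction kernel for comparing optimization switches */
    #include <stddef.h>

    double dot(const double *a, const double *b, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    /* Example switch sets to compare (assumed, not from the talk):
     *   gcc   -O2                             -S reduce.c
     *   gcc   -O3 -march=native               -S reduce.c
     *   gcc   -O3 -march=native -ffast-math   -S reduce.c   # allows reordering the FP reduction
     *   clang -O3 -march=native               -S reduce.c
     * Comparing the generated .s files shows whether and how the loop was vectorized.
     */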
GCC compilation Step by Step explanation with Example
 
08:18
This video explains the GCC compilation process with the help of an example: 1. Preprocessing 2. Compilation 3. Assembly 4. Linking
Views: 23998 HowTo
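To make the four stages concrete, here is a small hedged walkthrough showing how each stage can be run separately with GCC; the file names are arbitrary.

    /* hello.c */
    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");
        return 0;
    }

    /* Running the stages one at a time:
     *   gcc -E hello.c -o hello.i    # 1. preprocessing (expand #include and macros)
     *   gcc -S hello.i -o hello.s    # 2. compilation to assembly
     *   gcc -c hello.s -o hello.o    # 3. assembling to an object file
     *   gcc hello.o -o hello         # 4. linking into an executable
     * A single "gcc hello.c -o hello" performs all four stages internally.
     */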
Parser and Lexer — How to Create a Compiler part 1/5 — Converting text into an Abstract Syntax Tree
 
51:04
In this tool-assisted education video I create a parser in C++ for a B-like programming language using GNU Bison. For the lexicographical analysis, a lexer is generated using re2c. This is part of a multi-episode series. In the next video, we will focus on optimization. Downloads: — https://github.com/bisqwit/compiler_series/tree/master/ep1 All the material associated with this episode can be downloaded here. Acknowledgements: — Picture: Processors :: Jason Rogers — Music¹: Aryol :: The Strategy Continues :: Kyohei Sada (converted into MIDI and played through OPL3 emulation through homebrew software) — Music²: Star Ocean :: Past Days :: Motoi Sakuraba (SPC-OPL3 conversion) — Music³: Rockman & Forte :: Museum :: Kirikiri-Chan and others (SPC-OPL3 conversion) — Music⁴: Famicom Tantei Club Part II: Ushiro ni Tatsu Shōjo :: Dean’s Room :: Kenji Yamamoto (SPC-OPL3 conversion), original composition: Bach's Invention № 15 — Music⁵: Aryol :: Arrest :: Kyohei Sada (SPC-OPL3 conversion) — Music⁶: Ren & Stimpy Show : Fire Dogs :: Main Theme :: Martin Gwynn Jones and others (SPC-OPL3 conversion) — Music⁷: Aryol :: Warmup :: Kyohei Sada (SPC-OPL3 conversion) — Music⁸: Energy Breaker :: Golden-Colored Wind :: Yukio Nakajima (SPC-OPL3 conversion) — Music⁹: Wonder Project J :: House :: Akihiko Mori (SPC-OPL3 conversion) — SFX: Mostly from YouTube Audio Library. Some are recorded from video games like The Guardian Legend, Lunar Ball, and Super Mario All-Stars. ¹ 00:37, ² 02:46 & 39:26, ³ 10:10, ⁴ 16:06, ⁵ 27:18, ⁶ 37:20, ⁷ 38:58 & 45:58, ⁸ 49:00, ⁹ 50:40 My links: Twitter: https://twitter.com/RealBisqwit Liberapay: https://liberapay.com/Bisqwit Steady: https://steadyhq.com/en/bisqwit Patreon: https://patreon.com/Bisqwit (Other options at https://bisqwit.iki.fi/donate.html) Twitch: https://twitch.tv/RealBisqwit Homepage: https://iki.fi/bisqwit/ You can contribute subtitles: https://www.youtube.com/timedtext_video?ref=share&v=eF9qWbuQLuw or to any of my videos: https://www.youtube.com/timedtext_cs_panel?tab=2&c=UCKTehwyGCKF-b2wo0RKwrcg ---Rant--- [9:35 PM] Bisqwit: Now uploading to YouTube. Within about 24 hours I will know if the rogue AI at YouTube slams the “limited or no advertising" stamp into it, or not. Actually, I only know if it does so *when* it does it. Then, I need to wait an additional 25 hours for YouTube staff to manually review it and clear the flag. If the flag does not appear, then it is possible that the bot just has not scanned it yet and I need to wait longer. Premature publication could mean that the bot will mark it after it has already been published, and then I will not receive any revenue for the first spike of views. It used to be 18 hours (since uploading that the bot does its evil deeds), but nowadays YT recommends waiting just 3 hours. We will see, we will see. #Bisqwit #Compiler #Tutorial
Views: 95475 Bisqwit
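The episode generates its lexer with re2c and its parser with GNU Bison; purely as a much smaller hand-written sketch of what a lexer does (not the approach used in the video), a tokenizer can be as simple as the C program below.

    /* tinylex.c -- a minimal hand-written lexer sketch (illustrative only) */
    #include <ctype.h>
    #include <stdio.h>

    typedef enum { TOK_NUMBER, TOK_IDENT, TOK_SYMBOL, TOK_EOF } TokenKind;

    static TokenKind next_token(const char **p, char *text, int cap)
    {
        while (isspace((unsigned char)**p))          /* skip whitespace */
            (*p)++;
        if (**p == '\0')
            return TOK_EOF;

        int n = 0;
        if (isdigit((unsigned char)**p)) {           /* number: [0-9]+ */
            while (isdigit((unsigned char)**p) && n < cap - 1)
                text[n++] = *(*p)++;
            text[n] = '\0';
            return TOK_NUMBER;
        }
        if (isalpha((unsigned char)**p) || **p == '_') {   /* identifier */
            while ((isalnum((unsigned char)**p) || **p == '_') && n < cap - 1)
                text[n++] = *(*p)++;
            text[n] = '\0';
            return TOK_IDENT;
        }
        text[0] = *(*p)++;                           /* single-character symbol */
        text[1] = '\0';
        return TOK_SYMBOL;
    }

    int main(void)
    {
        const char *src = "x1 = 42 + y;";
        char text[64];
        TokenKind kind;
        while ((kind = next_token(&src, text, sizeof text)) != TOK_EOF)
            printf("kind=%d text=%s\n", (int)kind, text);
        return 0;
    }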
C Programming Optimized Code, RAMifications
 
03:50
Let's take a look at what happens to our function when we compile it with compiler optimizations enabled. What is optimized code? What is function in-lining? See how the gcc compiler strips off the CDECL calling convention wrappers (stack push, call and ret instruction overhead) and embeds our function's code directly into main() when we enable the gcc optimize-for-size flag, -Os. This is just one of the many, many optimizing steps a compiler might do. We can also tell it to unroll loops, or treat the stack differently. See "man gcc" for options. But let's not enable too many of these optimizations, or we risk ricing up our code. Optimized code can run a little bit faster or take up a little bit less memory. However, the compiler flags we choose can make significant changes to the final assembly language output. This may not be a problem for some, but in the future we might want to know about this in case we decide to do some non-standard self-modifying code experiments. Resources: http://www.iso-9899.info/wiki/Main_Page http://www.cs.princeton.edu/~benjasik/gdb/gdbtut.html http://www.phiral.net/linuxasmone.htm http://en.wikibooks.org/wiki/Category:X86_Disassembly http://en.wikipedia.org/wiki/X86_calling_conventions#cdecl
Views: 7604 Executive Quest
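A minimal sketch of the effect described above (the call overhead disappearing once the compiler inlines), assuming a plain x86 Linux gcc; the function names are arbitrary.

    /* inline_demo.c -- watch a call disappear under optimization */
    static int square(int x)
    {
        return x * x;
    }

    int compute(int n)
    {
        return square(n) + 1;
    }

    /* Compare the generated assembly:
     *   gcc -O0 -S inline_demo.c -o inline_demo_O0.s   # call/ret overhead present
     *   gcc -Os -S inline_demo.c -o inline_demo_Os.s   # square() typically inlined into compute()
     * See "man gcc" for the full list of -O levels and individual -f options.
     */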
CppCon 2017: Matt Godbolt “What Has My Compiler Done for Me Lately? Unbolting the Compiler's Lid”
 
01:15:46
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — In 2012, Matt and a colleague were arguing whether it was efficient to use the then-new-fangled range for. During the discussion a bash script was written to quickly compile C++ source and dump the assembly. Five years later, that script has grown into a website relied on by many to quickly see the code their compiler emits, to compare different compilers' code generation and behaviour, to quickly prototype and share code, and to investigate the effect of optimization flags. In this talk Matt will not only show you how easy (and fun!) it is to understand the assembly code generated by your compiler, but also how important it can be. He'll explain how he uses Compiler Explorer in his day job programming low-latency trading systems, and show some real-world examples. He'll demystify assembly code and give you the tools to understand and appreciate how hard your compiler works for you. He'll also talk a little about how Compiler Explorer works behind the scenes, how it is maintained and deployed, and share some stories about how it has changed over the years. By the end of this session you'll be itching to take your favourite code snippets and start exploring what your compiler does with them. — Matt Godbolt: DRW, Senior Software Engineer Matt Godbolt is a software engineer with trading firm DRW, and the creator of the Compiler Explorer website. He is passionate about writing efficient code. He has previously worked on mobile apps at Google, run his own C++ tools company and spent more than a decade making console games. When he's not hacking on Compiler Explorer, Matt enjoys writing emulators for old 8-bit computer hardware. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 58521 CppCon
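A typical snippet one might paste into Compiler Explorer to watch optimization levels at work; this example is an assumption of ours, not taken from Matt's talk.

    /* A loop the optimizer can often collapse to a closed form at -O2 */
    int sum_to_n(int n)
    {
        int total = 0;
        for (int i = 0; i <= n; ++i)
            total += i;
        return total;
    }
    /* In Compiler Explorer, compare "-O0" against "-O2": recent GCC and Clang
     * typically replace the whole loop with arithmetic close to n*(n+1)/2. */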
Automatic Tuning of Compiler Options using irace - GNU Tools Cauldron 2018
 
09:25
Presented by Manuel López-Ibáñez at GNU Tools Cauldron 2018. While modern compilers usually offer different levels of optimization as possible defaults, they also have a large number of command-line options and numerical parameters that impact properties of the generated machine code. The irace package (https://cran.r-project.org/package=irace) is a method for automatic algorithm configuration that can handle numerical and discrete options and optimizes their settings according to a given metric (such as runtime) over a large number of noisy (stochastic) benchmarks. When adapted to the tuning of compiler options in GCC, irace becomes an alternative to Acovea, OpenTuner and TACT with some desirable features. Experimental results show that, depending on the specific code to be optimized, speed-ups of up to 1.4x compared to the -O2 and -O3 optimization flags are possible. I'm presenting on behalf of the other authors, who will not attend. They did all the experiments using our tool (irace) with advice from me. I hope the irace package will be interesting to GCC developers and users (it is GPL).
Views: 171 Embecosm
CppCon 2017: Dmitry Panin “Practical Techniques for Improving C++ Build Times”
 
55:52
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — Slow builds block all C++ developers from the work being done. At Facebook we have a huge codebase, where the time spent compiling C++ sources grows significantly faster than the size of the repository. In this talk we will share our practical experience optimizing build times, in some cases from several hours to just a few minutes. The majority of the techniques are open sourced or generic and can be immediately applied to your codebase. Facebook strives to squeeze build speed out of everything: starting from a distributed build system, through the compiler toolchain and ending with code itself. We will dive into different strategies of calculating cache keys, potential caching traps and approaches to improve cache efficiency. We tune the compiler, specifically with compilation flags, profile data and link time options. We will talk about the benchmarks we use to track improvements and detect regressions and what challenges we face there. Finally, you will learn about our unsuccessful approaches with an explanation of why they didn't work out for us. — Dmitry Panin: Facebook, Software Engineer Dmitry is a software engineer at Facebook working in Ads Infrastructure Team. He has been contributing to efficiency, scalability and reliability of C++ backend services responsible for ads delivery. He is currently hacking on Facebook's build infrastructure and C++ codebase itself with the goal to improve build speed. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 4768 CppCon
Optimize HPC across platforms - Vectorization, Why, When, How...
 
17:02
Optimize HPC on any platform - Application profiling and vectorization to maximize performance on modern processors. High performance computing (HPC) applications implement complex scientific models that require many thousands of calculations to be performed during a realistic simulation. The advanced processing capability of modern CPUs is very well suited to this type of operation. In particular, the use of vector (or Single Instruction Multiple Data) instructions in an application helps to fully exploit the hardware's processing capability. It is therefore important to understand how well your application is making use of vectorization. In this webinar, Phil Ridley (Field Application Engineer at Arm) will demonstrate how, by focusing on vectorization, developers can maximize an application's use of a CPU's vector capability on HPC systems. Topics covered include: • What vectorization is and why it is important for HPC applications • How to identify which regions within my application are utilizing vectorization • How to identify any regions that are not utilizing vectorization and how this might affect overall performance • An introduction to using Arm’s HPC tools to help analyze and optimize an application.
Views: 259 Arm
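Not from the webinar itself, but a small hedged C example of the kind of loop auto-vectorization targets, together with GCC flags that report what was and was not vectorized; the flag choice is ours, not Arm's.

    /* axpy.c -- classic vectorizable loop (y = a*x + y) */
    #include <stddef.h>

    void axpy(double a, const double *restrict x, double *restrict y, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    /* Possible invocations (illustrative):
     *   gcc -O3 -march=native -fopt-info-vec        -c axpy.c   # report loops that were vectorized
     *   gcc -O3 -march=native -fopt-info-vec-missed -c axpy.c   # report loops that were not, and why
     * The "restrict" qualifiers tell the compiler x and y do not alias,
     * which is often what makes vectorization possible in the first place.
     */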
Compiler Options
 
01:19
Compiler Options Tutorial - AndeSight MCU Version
Views: 647 andescore968
Cauldron 2013 - Impact of Different Compiler Options on Energy Consumption
 
29:32
Presenter: James Pallister Abstract: This talk describes an extensive study into how compiler optimization affects the energy usage of benchmarks on different platforms. We use a fractional factorial design to explore the energy consumption of 87 optimizations GCC performs when compiling 10 benchmarks for five different embedded platforms. Hardware power measurements on each platform are taken to ensure all architectural effects on the energy are captured and that no information is lost due to inaccurate or incomplete models. We find that in the majority of cases execution time and energy consumption are highly correlated, but the effect a particular optimization may have is non-trivial due to its interactions with other optimizations. There is no one optimization that is universally positive for run-time or energy consumption, as the structure of the benchmark heavily influences the optimization's effectiveness. This talk presents the results and conclusions gained from the project we introduced last year at the previous GNU Tools Cauldron.
Views: 208 Diego Novillo
Code optimization tips and tricks
 
27:35
Presentation name: Code optimization tips and tricks Speaker: Divya Basant Kumar Description: The session will walk through some of the aspects a programmer should keep in mind with respect to performance, and the various optimization options available with current-day compilers like GCC and LLVM. It will also briefly walk through the compiler-generated intermediate representation to demonstrate how a programmer can track the changes made by the compiler's optimizer in real time, which can be very helpful for understanding the optimization paradigms. It will also include a quick brief on the tools and utilities useful for tracking performance hits. Overall, the audience will gain insight into best programming practices with respect to performance. The audience is expected to have some programming background and an understanding of system architecture and memory layout. [ https://sched.co/Jcnt ]
Views: 30 DevConf
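As a hedged illustration of inspecting the compiler-generated intermediate representation mentioned above, GCC can dump its optimized GIMPLE for a small translation unit; the file name is arbitrary.

    /* ir_demo.c -- small unit for inspecting GCC's optimized IR */
    int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    /* Dumping the IR (see the GCC manual for the full -fdump family):
     *   gcc -O2 -fdump-tree-optimized -c ir_demo.c
     * This writes a dump file such as ir_demo.c.*.optimized (the exact pass number
     * varies) containing the GIMPLE after the main optimization passes, which can
     * be diffed across different flag sets.
     */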
Compiler Optimisation Lecture 2
 
31:10
This lecture is the coursework for the 2017 Compiler Optimisation course that I teach at the University of Edinburgh. If you are watching this and it is not 2017, you are probably watching the wrong video! GCC Options are at: http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html The benchmarks are at: https://docs.google.com/file/d/0B5GasMlWJhTOaTdvaFkzUzNobDQ The course webpage is at: http://www.inf.ed.ac.uk/teaching/courses/copt/
Analysing Compiler Optimization Effectiveness on Adapteva Epiphany, ARM and XMOS platforms
 
05:56
http://jpallister.com/wiki http://ww.cs.bris.ac.uk/Research/Micro http://kck.st/PtAZ9O http://www.adapteva.com Energy efficiency is the highest priority for modern software-hardware co-design. The potential for compiler options to impact on power consumption of running programs has often been discussed. However there has never been a comprehensive analysis of the magnitude of that impact, or how it varies between processor architectures and compilers. Our presentation will describe a project we undertook during the Summer of 2012 at the University of Bristol Department of Computer Science and funded by Embecosm, to explore the effect of compiler options on energy consumption of compiled programs. We used an innovative technique to examine the energy consumption of 10 benchmarks when compiled with 87 optimizations performed by GCC and run on five different embedded platforms. Hardware power measurements on each platform were taken to ensure all architectural effects on the energy were captured. A fractional factorial design was used to separate the effects of each optimization and account for interactions between optimizations. The use of this technique, not commonly used in computer science, has made it feasible to analyse 2^87 possible combinations of optimization over the short period of this project. We found that in the majority of cases execution time and energy consumption were highly correlated, but the effect a particular optimization may have is non-trivial due to its interactions with other optimizations. We also found that the structure of the benchmark had a larger effect than the platform on whether the optimization had an impact on energy consumption. No one optimization is universally positive for energy consumption, but for each benchmark and processor architecture we were able to find the optimization with the main effect on power consumption. There is clearly scope for further work on selecting the optimizations that are most beneficial for an individual program. Our presentation will discuss techniques that can potentially achieve this goal, and are the potential subjects of future research. This research was unusual, in that it was funded as a completely open project. A wiki detailed progress from week to week, the relevant open source communities were kept regularly informed, and the results will be published in open access journals.
Views: 1434 jampallister
SPR-KKR (compiling example, gfortran + Netlib)
 
02:54
Sorry, compiling kkrscf and the others doesn't work for MPI calculation in my case. Please use single-core calculation.
###############################################################################
# Here the common makefile starts which does depend on the OS             ####
###############################################################################
#
# FC:         compiler name and common options        e.g. f77 -c
# LINK:       linker name and common options          e.g. g77 -shared
# FFLAGS:     optimization                            e.g. -O3
# OP0:        force no optimisation for some routines e.g. -O0
# VERSION:    additional string for executable        e.g. 6.3.0
# LIB:        library names                           e.g. -L/usr/lib -latlas -lblas -llapack
#             (lapack and blas libraries are needed)
# BUILD_TYPE: string "debug" switches on debugging options
#             (NOTE: you may call, e.g. "make scf BUILD_TYPE=debug"
#             to produce an executable with debugging flags from the command line)
# BIN:        directory for executables
# INCLUDE:    directory for include files
#             (NOTE: the directory with mpi include files has to be properly set
#             even for a sequential executable)
###############################################################################
BUILD_TYPE ?=
#BUILD_TYPE := debug
VERSION = 6.3
ifeq ($(BUILD_TYPE), debug)
VERSION := $(VERSION)$(BUILD_TYPE)
endif
BIN = .
#BIN=~/bin
#BIN=/tmp/$(USER)
LIB = -L/usr/local/lib -llapack -lblas
#LIB = $(LIB_MKL)
LIBMPI =
# Include mpif.h
INCLUDE = -I/usr/lib/openmpi/include
OP0 =
ifeq ($(BUILD_TYPE), debug)
# FFLAGS = -O0 -g
FFLAGS = -O0 -g -Wall -fbounds-check -fbacktrace
# FFLAGS = -O0 -check all -traceback -fpe0 -g -fp-stack-check -ftrapuv -CU   ### for ifort
else
FFLAGS = -O2 -m64
# FFLAGS = -O2 -axSSE4.2 -diag-disable remark   ### for ifort
endif
FC   = mpif90.openmpi -c $(FFLAGS) $(INCLUDE)
LINK = mpif90.openmpi $(FFLAGS) $(INCLUDE)
MPI = MPI
Compiling C programs with gcc
 
05:43
In cs107, we will primarily be using Makefiles to compile our code, but you should know how to use gcc (the GNU Compiler Collection) to compile a C program independently. We will use three primary flags in cs107:
-g : embed debugging information into the program so gdb will give us good information.
-Og : compile with gdb in mind, so it leaves in variable information and doesn't optimize out too many variables.
-std=gnu99 : the flavor of C we will be using for cs107.
Views: 1182 Chris Gregg
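A minimal sketch of compiling a single file with the three flags described above; the program itself is an arbitrary example, not course material.

    /* average.c */
    #include <stdio.h>

    int main(void)
    {
        int scores[] = { 90, 82, 77 };
        int n = sizeof scores / sizeof scores[0];
        int total = 0;
        for (int i = 0; i < n; i++)
            total += scores[i];
        printf("average = %d\n", total / n);
        return 0;
    }

    /* Compile and debug:
     *   gcc -g -Og -std=gnu99 average.c -o average
     *   gdb ./average
     * -g keeps debugging information, -Og optimizes without hurting debuggability,
     * and -std=gnu99 selects the C dialect used in the course.
     */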
Turning on compiler optimization
 
01:33
See how to turn on the optimizer in the Code Composer Studio compiler and set the optimization level.
Views: 2814 Code Composer
Set compiler options on a file or set of files
 
02:28
It is possible to set file specific compiler options. For example you can change the optimization level used on a file to be different than what is used for the rest of the files in your Code Composer Studio project.
Views: 2528 Code Composer
The Best Controller Settings in Smash Ultimate
 
12:36
► x2 ULTIMATE SWITCH BUNDLE GIVEAWAY: https://gleam.io/5ZvRq/zeros-christmas-ultimate-giveaway-x2-ultimate-switch-bundles ► TWITCH PRIME: https://www.twitch.tv/prime ► ZERO'S TWITCH: https://www.twitch.tv/zero ►Watch My Live Stream on Twitch: http://www.Twitch.tv/ZeRo ►Twitter: http://www.twitter.com/zerowondering ►Instagram: https://instagram.com/zerowondering/ ►Facebook: http://www.Facebook.com/zerowondering ►Business/Interviews: [email protected] ►Click here to subscribe! http://bit.ly/JoinScarfArmy #SuperSmashBros #SmashBros #SmashBrosUltimate
Views: 959598 ZeRo
CppCon 2014: Andrei Alexandrescu "Optimization Tips - Mo' Hustle Mo' Problems"
 
58:19
http://www.cppcon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2014 -- Reasonably-written C++ code will be naturally fast. This is due to C++'s excellent low-penalty abstractions and a memory model close to the machine. However, a large category of applications has no boundaries on desired speed, meaning there's no point of diminishing returns in making code faster. Better speed means less power consumed for the same work, more workload with the same data center expense, better features for the end user, more features for machine learning, better analytics, and more. Optimizing has always been an art, and in particular optimizing C++ on contemporary hardware has become a task of formidable complexity. This is because modern hardware has a few peculiarities about it that are not sufficiently understood and explored. This talk discusses a few such effects, and guides the attendee on how to navigate design and implementation options in search for better performance. -- Andrei Alexandrescu is a researcher, software engineer, and author. He wrote three best-selling books on programming (Modern C++ Design, C++ Coding Standards, and The D Programming Language) and numerous articles and papers on wide-ranging topics from programming to language design to Machine Learning to Natural Language Processing. Andrei holds a PhD in Computer Science from the University of Washington and a BSc in Electrical Engineering from University "Politehnica" Bucharest. He works as a Research Scientist for Facebook. Website: http://erdani.com Twitter handle: @incomputable -- Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 40244 CppCon
C++ Weekly - Ep 158 - Getting The Most Out Of Your CPU
 
06:51
Come to my Object Lifetime class at Core C++ 2019 https://corecpp.org/schedule/#session-11 My Training Classes: http://emptycrate.com/training.html Support these videos: https://www.patreon.com/lefticus Follow me on twitter: https://twitter.com/lefticus ChaiScript: http://chaiscript.com
Views: 5249 Jason Turner
GNU Cauldron 2012, Prague, talk5
 
33:41
Identifying compiler options to minimize energy consumption by embedded programs Presenter: Jeremy Bennett During this summer, Embecosm will be running a joint project with Bristol University Department of Computer Science to look at the impact of compiler options on energy consumption by programs on embedded processors. Many people have opinions on this, but it transpires there is very little hard data. Bristol University's equipment can measure the power consumed by a processor in great detail and to fine time resolution. We will test a representative range of programs (suggestions will be solicited from the audience) with a wide range of compiler options. We will use a number of different processors (XMOS, ARM) as well as different processors in the same family (ARM). We will also compare GCC to LLVM. The results will be published in an open access journal to provide a baseline data set for future research. One channel we wish to pursue subsequently is use of MILEPOST technology to automatically select the best low energy options when compiling programs. The project, starting on 9 July, will be led by Jeremy Bennett (Embecosm) and Simon Hollis (Bristol University), with the work carried out by James Pallister of Embecosm, who will then return to Bristol University for a 3-year PhD in this field. The purpose of this talk is to solicit views from the wider GCC community at the start of this project, particularly with regard to the features of GCC that are most likely to yield benefits and should thus be explored. We look forward to presenting the results at next year's meeting.
Views: 95 ITIaKAM
Resurrection Remix ROM Kitkat Galaxy Note 3
 
11:24
[ROM][4.4.2][ hlte-Unify] [SaberMod 4.10 ] Resurrection Remix® 4.4.2 KitKat Included Main Features: OPTIMIZATIONS * OPTIMIZATIONS Over 30+ patch to get this rom rolling on strict-Aliasing SaberMod 4.8.3 toolchain loaded on Rom & Linaro Kernel -O3 (highest GCC optimization level) Strict-aliasing enabled Memory Optimized Enabled Halo Active display Lockscreen Notifications Application side bar Omniswitch Notification panel tweaks Tiles style Music Toggle Pitch Black UI Mode Camera mods ListView Animations Custom system animations Custom progress bar Screen recorder Hardware buttons and navbar options Status bar traffic monitor Battery bar options CRT animations SB Brightness slider Expanded desktop Profiles Performance controller Flip tiles Theme chooser Keyboard features Lockscreen see blur and reduce slider Show Wi-Fi name Headset action Low battery warning Lock clock widget Privacy guard Permission management Power sounds Share ROM Battery mods Changelog Lockscreen options and targets Center Clock/No clock/Right Clock AM/PM,date and colors New Wallpaper app Nova/Stock launcher included AND MORE... XDA (more info & download): http://forum.xda-developers.com/showthread.php?t=2709775
Views: 7885 DinamicaMedia
CppCon 2017: Michael Spencer “My Little Object File: How Linkers Implement C++”
 
47:51
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — Ever wonder how the linker turns your compiled C++ code into an executable file? Why the One Definition Rule exists? Or why your debug builds are so large? In this talk we'll take a deep dive and follow the story of our three adventurers, ELF, MachO, and COFF, as they make their way out of Objectville carrying C++ translation units on their backs as they venture to become executables. We'll see them make their way through the tangled forests of name mangling, climb the cliffs of thread local storage, and wade through the bogs of debug info. We'll see how they mostly follow the same path, but each approach the journey in their own way. We'll also see that becoming an executable is not quite the end of their journey, as the dynamic linker awaits to bring them to yet a higher plane of existence as complete C++ programs running on a machine. — Michael Spencer: Sony Interactive Entertainment, Compiler Engineer Michael Spencer is a Compiler Engineer at Sony Interactive Entertainment where he has spent 6 years working on PlayStation's C++ toolchain. He is an active member of the LLVM community focusing on object files and linkers. He also serves as Sony's representative to the ISO C++ standard committee. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 11551 CppCon
The Hydras: Improving the C/C++ Development Experience via GCC Static Analysis Plugins
 
49:15
Presenter(s): Taras Glek URL: http://2010.linux.conf.au/programme/schedule/view_talk/50151 Historically, it has been hard to analyze C++ source. C++ is hard to parse, and there are no complete open source parsers other than G++. As a result, most C++ analysis tools are about as sophisticated as grep. Unfortunately, even if one can parse the language, it is usually inconvenient to plug an analysis tool with a custom parser into a project's build system. This may be the reason that even C analysis tools such as sparsify are not widely used. Frustrated with the inability to analyze our Mozilla code, and after running into a dead end with a non-GCC C/C++ parser (Elsa), we built our static analysis tools on a custom plugin framework on top of GCC. Using GCC enables one to easily integrate static analysis into any build system that uses GCC: it is a matter of adding a few compiler flags. Recently, the FSF made a license change allowing third-party plugins in GCC. This will make it possible for anyone using GCC 4.5 to analyze their code by installing analysis passes via compiler flags. This talk is about the new dimensions of development opened up by being able to analyze the semantic structure of one's code using the Dehydra/Treehydra plugins developed at Mozilla. I will describe how open source static analysis can make it easy to query/visualize your source code, enforce APIs and prevent certain patterns of bugs. Getting a firm grip on C/C++ codebases has never been this easy. http://lca2010.linux.org.au - http://www.linux.org.au CC BY-SA - http://creativecommons.org/licenses/by-sa/4.0/legalcode.txt
WIEN2k (gfortran + gcc + Netlib{download center} + OpenMP)
 
12:29
WIEN2k (gfortran + gcc + Netlib + OpenMP) (Optimize: SSE4.2)
sudo apt-get install csh
sudo apt-get install tk
sudo apt-get install gfortran
sudo apt-get install build-essential
sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev
1) mkdir WIEN2k_14
2) cd WIEN2k_14
3) cp WIEN2k_14.tar .
4) tar -xvf WIEN2k_14.tar
5) gunzip *.gz
6) chmod +x ./expand_lapw
7) ./expand_lapw
8) ./siteconfig_lapw
9) V  gfortran + gotolib
10) gfortran, gcc
11) O  Compiler options: -ffree-form -O2 -ffree-line-length-none -msse4.2 -m64
    L  Linker Flags: $(FOPT) -L../SRC_lib
    P  Preprocessor flags: '-DParallel'
    R  R_LIB (LAPACK+BLAS): -llapack_lapw -lblas_lapw -lblas -llapack -fopenmp
12) ./userconfig_lapw
13) bash
14) w2web
QEMU 3.1 - Custom build on PowerMac G5 Quad and Ubuntu Linux 16.04 - Part 3
 
20:01
QEMU 3.1 - Custom build on PowerMac quad core and Ubuntu Mate Linux 16.04 - gcc build G5 optimizations, options and launch parameters to emulate a Sam460ex amiga board and AmigaOS4.1 - speed tests
Views: 216 dino papararo
SPR-KKR (intel compiler, openmpi)
 
06:07
Attention ! Please, check intelmpi and openmpi. make.inc --------- ############################################################################### # Here the common makefile starts which does depend on the OS #### ############################################################################### # # FC: compiler name and common options e.g. f77 -c # LINK: linker name and common options e.g. g77 -shared # FFLAGS: optimization e.g. -O3 # OP0: force nooptimisation for some routiens e.g. -O0 # VERSION: additional string for executable e.g. 6.3.0 # LIB: library names e.g. -L/usr/lib -latlas -lblas -llapack # (lapack and blas libraries are needed) # BUILD_TYPE: string "debug" switches on debugging options # (NOTE: you may call, e.g. "make scf BUILD_TYPE=debug" # to produce executable with debugging flags from command line) # BIN: directory for executables # INCLUDE: directory for include files # (NOTE: directory with mpi include files has to be properly set # even for sequential executable) ############################################################################### BUILD_TYPE ?= #BUILD_TYPE := debug VERSION = 6.3 ifeq ($(BUILD_TYPE), debug) VERSION := $(VERSION)$(BUILD_TYPE) endif BIN = . #BIN=~/bin #BIN=/tmp/$(USER) #LIB = -lblas -llapack #LIB = $(LIB_MKL) youtu LIB = ${MKLROOT}/lib/intel64/libmkl_blas95_lp64.a ${MKLROOT}/lib/intel64/libmkl_lapack95_lp64.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_core.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a -Wl,--end-group -lpthread -lm -ldl LIBMPI = ${MKLROOT}/lib/intel64/libmkl_blas95_lp64.a ${MKLROOT}/lib/intel64/libmkl_lapack95_lp64.a ${MKLROOT}/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_core.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a ${MKLROOT}/lib/intel64/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -lpthread -lm -ldl # Include mpif.h #INCLUDE = -I/usr/lib/openmpi/include # openmpi-1.10.3 (./configure -prefix=$HOME/openmpi CXX=icpc CC=icc FC=ifort) INCLUDE = -qopenmp -I${HOME}/openmpi/include OP0 = ifeq ($(BUILD_TYPE), debug) # FFLAGS = -O0 -g # FFLAGS = -O0 -g -Wall -fbounds-check -fbacktrace FFLAGS = -O0 -check all -traceback -fpe0 -g -fp-stack-check -ftrapuv -CU ### for ifort else # FFLAGS = -O2 FFLAGS = -O2 -axSSE4.2 -diag-disable remark ### for ifort endif FC = mpif90 -c $(FFLAGS) $(INCLUDE) LINK = mpif90 $(FFLAGS) $(INCLUDE) MPI=MPI --------- ■ SPRKKR 6.3 (Intel compiler 2016) 1) unpack cp $HOME/Downloads/sprkkr6.3*.tgz ./sprkkr cd sprkkr tar -zxvf sprkkr6.3*.tgz 2) cp make.inc_example make.inc 3) gedit make.inc LIB = ${MKLROOT}/lib/intel64/libmkl_blas95_lp64.a ${MKLROOT}/lib/intel64/libmkl_lapack95_lp64.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_core.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a -Wl,--end-group -lpthread -lm -ldl #LIB = $(LIB_MKL) LIBMPI = ${MKLROOT}/lib/intel64/libmkl_blas95_lp64.a ${MKLROOT}/lib/intel64/libmkl_lapack95_lp64.a ${MKLROOT}/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_core.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a ${MKLROOT}/lib/intel64/libmkl_blacs_openmpi_lp64.a -Wl,--end-group -lpthread -lm -ldl INCLUDE = -qopenmp -I${MKLROOT}/include/intel64/lp64 -I${MKLROOT}/include FFLAGS = -O2 -axAVX,SSE4.2,SSE4.1,SSE3,SSSE3,SSE2 \ -diag-disable remark ### for ifort FC = mpiifort -c $(FFLAGS) $(INCLUDE) LINK = mpiifort $(FFLAGS) $(INCLUDE) 4) make 
scfmpi (single type: make scf)
   mpirun -n 4 kkrscf6.3MPI *.inp > OUTPUT
5) compiling
   a) make all:    gen, scf, embgen, embscf
   b) make allmpi: scfmpi, embscfmpi, specmpi
   c) Makefile:    make genmpi, make embscfmpi, make chi,
                   make opm, make opmmpi, make spec
   ※ recompile: make clean
Usage
1) cif2cell -p sprkkr -f case.f
2) xband
3) DIRECTORIES
4) SELECT/MODIFY case.sys
5) SPR-KKR
* MKL link
/opt/intel/documentation_2016/en/mkl/ps2016/get_started.htm
file:///opt/intel/documentation_2016/en/mkl/common/mkl_link_line_advisor.htm
CppCon 2017: Charles Bailey “Enough x86 Assembly to Be Dangerous”
 
30:59
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — This tutorial is an introduction to x86 assembly language aimed at C++ programmers of all levels who are interested in what the compiler does with their source code. C++ is a programming language that cares about performance. As with any technology, a deep understanding of C++ is helped by knowledge of the layer below, and this means knowledge of assembly language. Knowing what the compiler does with your source code and the limitations under which it operates can inform how you design and write your C++. We learn how to generate, inspect and interpret the assembly language for your C++ functions and programs. We take a short tour of common assembly instructions and constructs, and discover why extreme caution should be exercised if we are trying to infer performance characteristics from a simple inspection of assembly code. Starting with a simple `operator+` for a user-defined class, we take a look at how interface and implementation choices affect the generated assembly code and observe the effect of copy elisions and related optimizations that compilers commonly perform. — Charles Bailey: Bloomberg LP, Software Engineer Charles Bailey is a software developer at Bloomberg LP. He works in Developer Experience Engineering London, where he consults and advises on all aspects of software development. His previous experience in software development has included roles in many areas, including business intelligence, data warehousing, defence, radar and financial derivatives. In addition to C++, Charles has a keen interest in source control in general and Git in particular. He can be found answering questions on both subjects on Stack Overflow and in person. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 14148 CppCon
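One hedged way to get readable assembly out of GCC for a single function, useful for following along with a talk like this; the example function is ours, not Charles Bailey's.

    /* asm_peek.c -- a tiny function to inspect in assembly form */
    long scale_add(long a, long b)
    {
        return 4 * a + b;
    }

    /* Generating annotated assembly:
     *   gcc -O2 -S -fverbose-asm asm_peek.c -o asm_peek.s
     *   gcc -O2 -S -masm=intel   asm_peek.c -o asm_peek.s   # Intel syntax on x86 targets
     * At -O2 on x86-64 this typically compiles down to a single lea instruction,
     * a nice first example of reading compiler output.
     */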
Android optimizations for ARM by Linaro Engineers
 
17:17
Linaro Engineers present a bunch of optimizations they recently did in Android for ARM. These optimizations are in areas like BIONIC for Cortex C string routines, migrating to GCC 4.9, migrating the external projects to their latest versions, optimizing SQLite, optimizing battery life, also they discuss their progress building Android with CLANG, migrating Android to latest versions and how Linaro is planning to release these optimizations to the Android community through Linaro Android releases and upstream them to respective project repositories. The Android Linaro team's presentations are live and available on Linaro.org LCA14 and on youtube at LinaraOnAir channel - http://www.youtube.com/user/LinaroOnAir
Views: 1339 Charbax
CppCon 2018: Nir Friedman “Understanding Optimizers: Helping the Compiler Help You”
 
01:04:03
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2018 — Optimizing compilers can seem rather fickle: sometimes they do something very sophisticated that surprises us, other times they fail to perform an optimization we assumed they would. By understanding the limits on their knowledge, and the constraints in their output, we can much more reliably predict when certain kinds of optimizations can occur. This, in turn, allows our designs to be informed by being friendly to the optimizer. This talk will discuss concepts fundamental to understanding optimization such as the role of static types, basic blocks, and correctness of emitted code. It will also go through many examples: where inlining does and doesn't occur and why, const propagation, branch pruning, utilizing inferred information/values, the roles of const and value vs reference semantics, etc. It will also show how to help the compiler: writing code in different ways which encourages different optimization strategies. — Nir Friedman Quantitative Developer, Tower Research Capital After completing a PhD in physics, Nir started working doing C++ in low latency and high frequency trading. He's interested in the challenges of writing robust code at scale, and highly configurable code that minimizes performance trade-offs. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 7865 CppCon
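A small hedged example (ours, not from the talk) of the constant propagation and branch pruning described above.

    /* prune.c -- constant propagation lets the compiler delete a branch */
    static int threshold(int debug)
    {
        if (debug)
            return 1;          /* dead when the caller passes a constant 0 */
        return 100;
    }

    int limit(void)
    {
        return threshold(0);   /* constant argument: the branch can be pruned */
    }

    /* At -O2, GCC and Clang typically inline threshold(), propagate the constant 0,
     * and reduce limit() to "return 100":
     *   gcc -O2 -S prune.c -o prune.s
     */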
CppCon 2017: Carl Cook “When a Microsecond Is an Eternity: High Performance Trading Systems in C++”
 
01:00:07
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — Automated trading involves submitting electronic orders rapidly when opportunities arise. But it’s harder than it seems: either your system is the fastest and you make the trade, or you get nothing. This is a considerable challenge for any C++ developer - the critical path is only a fraction of the total codebase, it is invoked infrequently and unpredictably, yet must execute quickly and without delay. Unfortunately we can’t rely on the help of compilers, operating systems and standard hardware, as they typically aim for maximum throughput and fairness across all processes. This talk describes how successful low latency trading systems can be developed in C++, demonstrating common coding techniques used to reduce execution times. While automated trading is used as the motivation for this talk, the topics discussed are equally valid to other domains such as game development and soft real-time processing. — Carl Cook: Optiver, Software Engineer Carl has a Ph.D. from the University of Canterbury, New Zealand, graduating in 2006. He currently works for Optiver, a global electronic market maker, where he is tasked with adding new trading features into the execution stack while continually reducing latencies. Carl is also an active member of SG14, making sure that requirements from the automated trading industry are represented. He is currently assisting with several proposals, including non-allocating standard functions, fast containers, and CPU affinity/cache control. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 57141 CppCon
Optimizing UE4 for Fortnite: Battle Royale - Part 1 | GDC 2018 | Unreal Engine
 
55:26
Fortnite has served as a development sandbox for UE4, and in this presentation from GDC 2018 we explore the effort involved with taking FNBR from 30fps to 60fps on consoles. Learn more at http://www.UnrealEngine.com
Views: 58312 UnrealEngine
QEMU 3.1 - Custom build on PowerMac G5 Quad and Ubuntu Linux 16.04 - Part 2
 
20:01
QEMU 3.1 - Custom build on PowerMac quad core and Ubuntu Mate Linux 16.04 - gcc build G5 optimizations, options and launch parameters to emulate a Sam460ex amiga board and AmigaOS4.1 - speed tests
Views: 105 dino papararo
QEMU 3.1 - Custom build on PowerMac G5 Quad  and Ubuntu Linux 16.04 - Part 1
 
18:07
QEMU 3.1 - Custom build on PowerMac quad core and Ubuntu Mate Linux 16.04 - gcc build G5 optimizations, options and launch parameters to emulate a Sam460ex amiga board and AmigaOS4.1 - speed tests
Views: 78 dino papararo
P. Goldsborough “clang-useful: Building useful tools with LLVM and clang for fun and profit"
 
01:22:41
http://cppnow.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/boostcon/cppnow_presentations_2017 — The talk will consist of two parts. The first will be a general overview of the LLVM and clang library infrastructure available for creating custom tools such as static analyzers or source-to-source transformation tools. I will explain the ecosystem of the LLVM and clang tooling environment and outline options, tradeoffs and examples of the different ways of creating a tool (e.g. the difference between creating a plugin vs. a LibTooling executable). I will then go further in depth about how clang represents C++ source code by means of an AST and ways of traversing the AST to look for certain points of interest, e.g. old-style for loops that could be converted to range-based loops, or braces that are indented in Allman instead of One-True-Brace-Style, which could be useful for any company with a style guide it wants to enforce at compile-time rather than on paper or in code-reviews. For the second part, I will then branch out into the two common tasks one might want to perform with a custom-built tool: emitting warnings or errors (for static analysis), and transforming and emitting new code (source-to-source transformations, such as clang-tidy). For each use-case, I will walk through real code that shows how one might approach a simple task in each category. At the end of the talk, I expect listeners to have a basic understanding of the LLVM/clang tooling environment and AST representation. However, most importantly, I expect people to take away knowledge they can take home or to their office and immediately build tools in no time at all that *genuinely* improve their workflow and productivity. This is not a "give a man a fish" talk. This is a "teach a man to fish" talk. — I'm Peter and technically a second year CS student at TU Munich. Practically, I'm a first year student who decided to do a gap year and join the workforce. Since last August I've been doing engineering internships: first Google, then Bloomberg, now Facebook. I currently physically reside in London but really live on GitHub, where I enjoy giving back to and working with the community on a variety of projects. My comfort zone is the intersection of blue skies machine learning research and low-level infrastructure engineering in modern C++. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 14560 BoostCon
CppCon 2018: "Compiling Multi-Million Line C++ Code Bases Effortlessly with the Meson Build System"
 
33:47
http://CppCon.org Jussi Pakkanen "Compiling Multi-Million Line C++ Code Bases Effortlessly with the Meson Build System" — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2018 — The Meson build system is a fresh build system designed from the ground up to solve the build problems of today. It is currently seeing growing adoption across many domains and is already being used to build large chunks of most modern Linux distributions. One of the main reasons for this is Meson's heavy focus on usability, meaning that build definitions are both powerful and easy to understand. In this talk we shall look into the design and use of Meson from the ground up going up all the way to projects with millions of lines of code and complex build setups such as code generators and cross compilation. We shall especially examine the problem of dependencies and how Meson solves this age old problem with a composable design that supports both system provided and self built dependencies from the same build definition. Finally we will examine the multi-language support of Meson for easily combining C++ with other languages such as Java, C#, D and Python. — Jussi Pakkanen, Consultant Jussi Pakkanen is the creator and project lead of the Meson build system. He is currently working as a consultant. He has experience in many different fields of computing ranging from slot machines to mail sorting, computer security, Linux desktop development and gaming. His free time has been equally colorful, including things such as comics and illustration, directing movies, music and electronics. When not working on projects he might be found watching bad movies, especially sci-fi and the finest of trash from the 80s. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 7790 CppCon
How To Fix Code Blocks Environment Error Can't find compiler executable in your search path
 
03:17
Fix Code Blocks Environment Error Can't find compiler executable in your search path.How to fix CodeBlocks compiler error 100% working,codeblocks,compiler,environment error,Fix the Environment Error in Code Blocks | Tutorial Lesson 2,If You can't find compiler executable in your search path (gnu gcc compiler.then watch this Technical Hoque Full Tutorial video.As i already mention in My Previous video:How to Install Code Blocks IDE On Windows 10 With C/C++ Compiler,Link:https://youtu.be/pB3k1TxV2To In This Video,How To Fix Code Blocks Environment Error Can't find compiler executable in your search path.You Will learn How to Solve CodeBlocks environment error. (100% Solved)Visit: http://www.technicalhoque.com/ For more YouTube tips and tricks. DOSTO IS VIDEO MEIN HUNMNE DIKHYA HAI KI KAYASE APP CODE::BLOCKS ENVIRONMENT ERROR KO FIX KAR SAKTE HAI,DOSTO SATH MEIN HUNMNEIN DIKHAYA HAI APP CODEBLOCKS MEINKAYSE EK CHOTASA PROGRAM RUN KAR SAKTE HAI. DOSTO AGR APP HUNMSE KOI SOLUTION CHATE HAI ,YA FIR APP KE MAN MEIN IS VIDEO SEW JURE KOI BHI SAWAL HAI2 APP HAMIN COMMENTS PE PUCH SAKTE HAI HUNM 100% JABAB DENGE Click here to Subscribe My Channel: ✔https://goo.gl/VPcw1x Follow me on Soocial Media: ▌►Facebook : https://www.facebook.com/TechnicalHoque/ ▌►Twitter : https://twitter.com/TechnicalHoque ▌►Instagram : https://www.instagram.com/technicalhoque/ ▌►Google+ : https://plus.google.com/+TechnicalHoqueCEH ▌►LinkedIn : https://www.linkedin.com/in/technicalhoque/ ▌►Pinterest : https://in.pinterest.com/technicalhoque/ ▌►Tumbler : https://technicalhoque.tumblr.com/ ▌►Reddit : https://www.reddit.com/user/TechnicalHoque/ ▌►Stumbler : https://www.stumbleupon.com/stumbler/TechnicalHoque Like👍Share🔀 Comment and Subscribe for more cool videos 🎦📱📟💻🔌💽🖲📡 Click Here for Previous Video: How to speed up my computer windows 10,windows 8 and Windows 7 Performance In 2017 HINDI/URDU 🔀https://youtu.be/HZOsEYQ2lXI Uninstall Any Programs/Apps That Won't Uninstall From Windows[ 10,8.1,8 &7] |How to remove program 🔀https://youtu.be/vph_iaZKKc4 Windows 10# Restoring Your Computer with 'Reset This Pc Remove everything' option 🔀https://youtu.be/CTg0DxOZL3o *How to Solve CodeBlocks environment error.,How to Download and install CodeBlocks,codeblocks,compiler,environment error,solve codeblocks environment error,Codeblocks cannot find compiler,code blocks compiler not working,code blocks not building and running,MinGW,Codeblock error,environment error in code block,Toolchain executables,GNU GCC Compiler,compiler,codeblocks Code Blocks Environment Error,Code Blocks Environment Error: Can't find compiler executable in your search path,Can't find compiler executable,Can't find compiler executable in your search path,Fix Code Blocks Environment Error Click here to Check Out all Playlist : Can't Find Compiler Executable In Your Search Path || Solutions For Code blocks compile Can't Find CompilerExecutable In Your Search Path(GNU GCC Compiler) 🔀 https://goo.gl/4wzUiA C++ Code::Blocks error; uses an invalid compiler. Probably the toolchain path,Fix Code Blocks Environment Error Can't find compiler executable in your search path,How to solve code block compiler problem 🔀 https://goo.gl/rc9LbG CODE:BLOCKS ERROR - INVALID COMPLIER ISSUE - HOW TO REPAIR? can't find compiler executable in your search path (gnu gcc compiler 🔀 https://goo.gl/ubLqFd CODE BLOCKS : Compiler Error , BUILD ERROR , ENVIRONMENT Error FIXED | 100% WORKING METHOD How To Fix Codeblocks GNU GCC Compiler! 
fatal error no such file or directory code blocks,tinyxml error error document empty codeblocks fix 🔀 https://goo.gl/fao3VS How to solve environment error in codeblock,How to solve compiler problem in codeblocks,Code::Blocks Compiling Error 🔀https://goo.gl/3yq74Y Codeblocks NO COMPILA EN WINDOWS - MinGW + Code Blocks Solución en Español 🔀 https://goo.gl/tN2Z2B How to set complier on codeblock 🔀 https://goo.gl/nGxRVt YouTube/Google Adsense Tips,Tricks and Tutorials 🔀 https://goo.gl/yDAwUA YouTube Helpful Tutorials 🔀 https://goo.gl/g8Ubg8 If you have any confusion please let me know through comment! Please Subscribe Our YouTube Channel and You will get Video notification Next time.Here Is The Subscription Link - https://www.youtube.com/c/TechnicalHoqueCEH About: Technical Hoque is an Educational YouTube Channel,Where You Will Find Mobile/Computer Tips and Tricks,search engine optimization,Social Media/Online Marketing,Technology,google,seo,Tutorials,best tech,New Technological Videos in Hindi. Again Thanks For Watching and See You Soon. Technical Hoque YouTube Channel Provides best tech Hindi Technical Videos,Technical Analysis,Technical Difficulties,Technical Interview,Technical Support,Technical Skills,Tech Tips.!Ask me A Question by using hashtag on YouTube or Twitter #Technical Hoque 👍 LIKE ➡ SHARE & SUBSCRIBE I hope you enjoy:)How To Fix Code Blocks Environment Error Can't find compiler executable in your search path Please Subscribe Our YouTube Channel. Thanks!
Views: 10700 HowToHack
CppCon 2017: Nicolai Josuttis “The Nightmare of Move Semantics for Trivial Classes”
 
57:16
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — Assume we implement a very simple class having just multiple string members. Even ordinary application programmers prefer to make it simple and fast. You think you know how to do it? Well, beware! It can become a lot harder than you initially might assume. So, let’s look at a trivial class with multiple string members and use live coding to see the effect of different implementation approaches (using constructors passing by value, by reference, by perfect forwarding, or doing more sophisticated tricks). Sooner or later we will fall into the deep darkness of universal/forwarding references, enable_if, type traits, and concepts. — Nicolai Josuttis: IT Communication Nicolai Josuttis (http://www.josuttis.com) is an independent system architect, technical manager, author, and consultant. He designs mid-sized and large software systems for the telecommunications, traffic, finance, and manufacturing industries. He is well known in the programming community because he not only speaks and writes with authority (being the (co-)author of the world-wide best sellers The C++ Standard Library (www.cppstdlib.com), C++ Templates, and SOA in Practice), but is also an innovative presenter, having talked at various conferences and events. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 34852 CppCon
CppCon 2017: John Regehr “Undefined Behavior in 2017 (part 1 of 2)”
 
49:23
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — Undefined behavior is a clear and present danger for all application code written in C++. The most pressing relevance is to security, but really the issue is one of general software correctness. The fundamental problem lies in the refusal of C++ implementations (in general) to trap or otherwise detect undefined behaviors. Since undefined behaviors are silent errors, many developers have historically misunderstood the issues in play. Since the late 1990s undefined behavior has emerged as a major source of exploitable vulnerabilities in C++ code. This talk will focus on trends in the last few years including (1) increased willingness of compilers to exploit undefined behaviors to break programs in hard-to-understand ways and (2) vastly more sophisticated tooling that we have developed to detect and mitigate undefined behaviors. The current situation is still tenuous: only through rigorous testing and hardening and patching can C++ code be exposed to untrusted inputs, even when this code is created by strong development teams. This talk will focus on what developers can and should do to prevent and mitigate undefined behaviors in code they create or maintain. — John Regehr: University of Utah, Professor John Regehr is a professor of computer science at the University of Utah, USA. His research group creates tools for making software more efficient and correct. For example, one of his projects, Csmith, generates random C programs that have been used to find more than 500 previously unknown bugs in production-quality C compilers. Outside of work John likes to explore the mountains and deserts of Utah with his family. — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 6307 CppCon
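For a concrete taste of the subject, here is a tiny sketch (my own example, not material from the talk) of one classic undefined behavior, signed integer overflow, together with the -fsanitize=undefined option that both GCC and Clang offer for detecting it at run time.

#include <climits>
#include <cstdio>

// Signed integer overflow is undefined behavior; an optimizing compiler is
// allowed to assume it never happens and may fold this check to "false".
bool will_overflow_bad(int x) {
    return x + 1 < x;          // UB when x == INT_MAX
}

// A well-defined rewrite: compare against the limit before doing arithmetic.
bool will_overflow_ok(int x) {
    return x == INT_MAX;
}

int main() {
    std::printf("%d %d\n", will_overflow_bad(INT_MAX), will_overflow_ok(INT_MAX));
    // Compiling with -fsanitize=undefined (GCC and Clang) makes the runtime
    // report the overflow in will_overflow_bad instead of silently relying
    // on whatever the optimizer decided to do with it.
    return 0;
}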
XLA: TensorFlow, Compiled! (TensorFlow Dev Summit 2017)
 
48:32
Speed is everything for effective machine learning, and XLA was developed to reduce training and inference time. In this talk, Chris Leary and Todd Wang describe how TensorFlow can make use of XLA, JIT, AOT, and other compilation techniques to minimize execution time and maximize computing resources. Visit the TensorFlow website for all session recordings: https://goo.gl/bsYmza Subscribe to the Google Developers channel at http://goo.gl/mQyv5L
Views: 31067 Google Developers
MPLAB® XC Compiler Optimizations Webinar
 
05:31
This webinar provides an overview of the optimizations provided by the MPLAB XC C compilers. It will help you decide whether to select a minimal set of optimizations, which keeps projects easy to debug, or to enable them all and have the compiler spend more time improving the performance of your code. http://www.microchip.com/compilers
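The webinar covers Microchip's own option set, but since the XC16 and XC32 compilers are GCC-based, the familiar GCC optimization levels illustrate the same debuggability-versus-performance trade-off. The sketch below uses generic g++ invocations as an assumption; check Microchip's documentation for the exact options (and license tiers) each XC compiler exposes.

// sum.cpp: a small hot loop used to compare optimization levels.
#include <cstddef>

double sum(const double* a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Typical GCC-style invocations:
//   g++ -c -O0 -g sum.cpp   // no optimization: easiest to step through in a debugger
//   g++ -c -Og -g sum.cpp   // optimize, but keep the code debuggable
//   g++ -c -O2    sum.cpp   // general performance optimization
//   g++ -c -O3    sum.cpp   // adds more aggressive loop optimization and vectorization
//   g++ -c -Os    sum.cpp   // optimize for size, often what embedded targets want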
GCC Steering Committee Q&A - GNU Tools Cauldron 2018
 
53:12
Led by David Edelsohn at GNU Tools Cauldron 2018
Views: 128 Embecosm
GCC Vlog #22
 
02:17
PK explains the inner workings of the prayer chain.
Abinit (intel compiler + openmpi + MKL)
 
07:12
./configure FC=mpif90 CC=mpicc CXX=mpicxx \
  --with-linalg-flavor="mkl" \
  --with-linalg-incs="-I/opt/intel/compilers_and_libraries_2016/linux/mkl/include/intel64/lp64 -I/opt/intel/compilers_and_libraries_2016/linux/mkl/include" \
  --with-linalg-libs="-L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -lmkl_blas95_lp64 -lmkl_lapack95_lp64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_core -lmkl_intel_thread -lmkl_blacs_openmpi_lp64 -lpthread -lm -ldl" \
  --with-fft-flavor=fftw3-mkl \
  --with-fft-libs="-L/opt/intel/compilers_and_libraries_2016/linux/mkl/interfaces/fftw3xf -lfftw3xf_intel" \
  --with-fft-incs="-I/opt/intel/mkl/include" \
  --enable-openmp --enable-64bit-flags \
  FCFLAGS_EXTRA="-O2 -axAVX,SSE4.2" CFLAGS_EXTRA="-O2 -axAVX,SSE4.2" CXXFLAGS_EXTRA="-O2 -axAVX,SSE4.2"
make clean
make mj4
cd tests
export OMP_NUM_THREADS=1
./runtests.py -j4 fast
------
Suite  failed  passed  succeeded  skipped  disabled  run_etime  tot_etime
fast   0       2       9          0        0         23.31      24.56
Completed in 8.07 [s]. Average time for test=2.12 [s], stdev=1.89 [s]
Summary: failed=0, succeeded=9, passed=2, skipped=0, disabled=0
------
cd ..
sudo make install
openSUSE Conference 2018 - Why openSUSE
 
24:41
About the ideal use cases and promoting the gold triangle of openSUSE. This talk is split into three topics: 1. openSUSE is not SUSE, it is its sister. There are still too many people outside the open/SUSE world who confuse the two and the options they have. This usually leads them to use the wrong distribution for their use case and to conclude that "SUSE" is not working for them. I will describe the current distribution palette with the key values of each, and the difference between the SUSE company and the openSUSE community. 2. Why should I use openSUSE? I will describe which features of openSUSE make it ideal for which use case, including a brief description of what I personally call the gold triangle of openSUSE (OBS, openQA, YaST). 3. How to contribute to openSUSE. I will describe the workflow for contributing to openSUSE, both new packages and maintenance, and how OBS and openQA are involved. SLindoMansilla
Views: 1405 openSUSE
Day 1 Part 1: Introductory Intel x86: Architecture, Assembly, Applications
 
01:26:50
The class materials are available at http://www.OpenSecurityTraining.info/IntroX86.html Follow us on Twitter for class news @OpenSecTraining. The playlist for this class is here: http://bit.ly/IILMeN The full quality video can be downloaded at http://archive.org/details/opensecuritytraining Intel processors have been a major force in personal computing for more than 30 years. An understanding of the low-level computing mechanisms used in Intel chips, as taught in this course by Xeno Kovah, serves as a foundation upon which to better understand other hardware, as well as many technical specialties such as reverse engineering, compiler design, operating system design, code optimization, and vulnerability exploitation. 25% of the time will be spent bootstrapping knowledge of fully OS-independent aspects of Intel architecture. 50% will be spent learning Windows tools and analysis of simple programs. The final 25% of the time will be spent learning Linux tools for analysis. This class serves as a foundation for the follow-on Intermediate-level x86 class. It teaches the basic concepts and describes the hardware that assembly code deals with. It also goes over many of the most common assembly instructions. Although x86 has hundreds of special-purpose instructions, students will be shown it is possible to read most programs by knowing only around 20-30 instructions and their variations. The instructor-led lab work will include: * Stepping through a small program and watching the changes to the stack at each instruction (push, pop, call, ret (return), mov) * Stepping through a slightly more complicated program (adds lea (load effective address), add, sub) * Understanding the correspondence between C and assembly control transfer mechanisms (e.g. goto in C == jmp in asm) * Understanding conditional control flow and how loops are translated from C to asm (conditional jumps, jge (jump greater than or equal), jle (jump less than or equal), ja (jump above), cmp (compare), test, etc.) * Boolean logic (and, or, xor, not) * Logical and arithmetic bit shift instructions and the cases where each would be used (shl (logical shift left), shr (logical shift right), sal (arithmetic shift left), sar (arithmetic shift right)) * Signed and unsigned multiplication and division * Special one-instruction loops and how C functions like memset or memcpy can be implemented in one instruction plus setup (rep stos (repeat store to string), rep mov (repeat move)) * Misc instructions like leave and nop (no operation) * Running examples in the Visual Studio debugger on Windows and the Gnu Debugger (GDB) on Linux * The famous "binary bomb" lab from the Carnegie Mellon University computer architecture class, which requires the student to do basic reverse engineering to progress through the different phases of the bomb by giving the correct input to avoid it "blowing up". This will be an independent activity. Knowledge of this material is a prerequisite for future classes such as Intermediate x86 (playlist: http://bit.ly/HIaD4O), Rootkits (playlist: http://bit.ly/HLkPVG), Exploits, and Introduction to Reverse Engineering.
Views: 203448 Open SecurityTraining
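As a companion to the course description above, here is a small hand-written sketch (not course material) of how a C-style counting loop maps onto the cmp/jge/jmp style of control flow the class teaches. The assembly in the comments is an approximate, unoptimized 32-bit rendering with illustrative register assignments; real compiler output will differ, especially at higher optimization levels.

// Counts how many array elements exceed a threshold.
int count_above(const int* a, int n, int limit) {
    int count = 0;                 //   xor  eax, eax       ; count = 0
    for (int i = 0; i < n; ++i) {  //   xor  ecx, ecx       ; i = 0
                                   // loop_top:
                                   //   cmp  ecx, edx       ; i < n ?
                                   //   jge  loop_end       ; conditional jump out of the loop
        if (a[i] > limit)          //   cmp  [esi+ecx*4], edi
                                   //   jle  skip           ; not greater -> skip the increment
            ++count;               //   add  eax, 1
                                   // skip:
    }                              //   add  ecx, 1
                                   //   jmp  loop_top       ; unconditional jump back
    return count;                  // loop_end:
}                                  //   ret                 ; result returned in eax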
CppCon 2017: Anastasia Kazakova “Tools from the C++ eco-system to save a leg”
 
52:04
http://CppCon.org — Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/CppCon/CppCon2017 — C++ gives you enough rope to shoot your leg off. Readable (and thus easy to maintain, easy to support) and error-free code in C++ is often hard to achieve. And while modern C++ standards bring lots of fantastic opportunities and improvements to the language, sometimes they make the task of writing high quality code even harder. Or can’t we just cook them right? Can the tools help? In this talk I’ll highlight the main trickiness of C++, including readability problems, some real-world issues, problems that grow out of C++ context-dependent parsing. I’ll then try to guide you in how to eliminate them using tools from the C++ eco-system. This will cover code styles and supportive tools, code generation snippets, code analysis (including CLion’s inspections and Data Flow Analysis, C++ Code Guidelines and clang-tidy checks), refactorings. I will also pay some attention to unit testing frameworks and dependency managers as tools that are essential for the high quality code development. — Anastasia Kazakova: JetBrains, Product Marketing Manager As a C and C++ software developer, Anastasia Kazakova created real-time *nix-based systems and pushed them to production for 8 years. She has a passion for networking algorithms and embedded programming and believes in good tooling. With all her love for C++, she is now the Product Marketing Manager on the JetBrains CLion team. Besides, Anastasia runs a C++ user group in Saint-Petersburg, Russia (https://www.meetup.com/St-Petersburg-CPP-User-Group/). — Videos Filmed & Edited by Bash Films: http://www.BashFilms.com
Views: 6432 CppCon
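As a small illustration of the analysis tooling mentioned above, the snippet below (my own example, not from the talk) contains code that two standard clang-tidy checks from the "modernize" group flag; an invocation along the lines of clang-tidy -checks='modernize-*' example.cpp -- will report them.

#include <vector>

int sum_positive(const std::vector<int>& v) {
    int total = 0;
    // modernize-loop-convert: suggests rewriting this as a range-based for loop.
    for (std::vector<int>::size_type i = 0; i < v.size(); ++i) {
        if (v[i] > 0)
            total += v[i];
    }
    return total;
}

int* find_nothing() {
    int* p = 0;   // modernize-use-nullptr: suggests nullptr instead of the literal 0
    return p;
}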
Getting Started with AVR: Finding Documentation and Turning on an LED (#2)
 
04:49
Hands-on: http://microchipdeveloper.com/8avr:led-on In this video, we will: - Find the device datasheet, Xplained Mini user guide and schematics. - Start a new GCC C Executable project in Atmel Studio 6. - Demonstrate how to efficiently use the datasheet to understand how to configure a pin and turn on an LED. - Set up and use the debugWIRE interface to program the ATmega328P. Follow along with the entire ‘Getting Started with AVR’ series: http://bit.ly/GettingStartedwithAVR Want to explore AVR microcontrollers some more? http://www.atmel.com/products/microcontrollers/avr/ Xplained Mini: http://www.atmel.com/products/microcontrollers/avr/xplained.aspx ATmega328P: http://www.atmel.com/devices/atmega328p.aspx Atmel Studio: http://www.atmel.com/tools/atmelstudio.aspx Stay connected! Embedded Design Blog: http://blog.atmel.com Twitter: http://www.atmel.com/twitter Facebook: http://www.atmel.com/facebook LinkedIn: http://www.atmel.com/linkedin
Views: 58015 Microchip Makes
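To complement the video, here is a minimal sketch of the kind of GCC C program it builds. It assumes the on-board LED sits on port pin PB5 (which is where the ATmega328P Xplained Mini routes its user LED, but confirm this against the schematic the video locates) and uses the avr-libc register macros.

// led_on.c: configure one pin as an output and drive it high to light the LED.
#include <avr/io.h>

int main(void)
{
    DDRB  |= (1 << DDB5);    // data direction register: make PB5 an output
    PORTB |= (1 << PORTB5);  // drive PB5 high -> LED on

    for (;;) {
        // nothing else to do; keep the pin driven
    }
}

// Build outside Atmel Studio with, for example:
//   avr-gcc -mmcu=atmega328p -Os -o led_on.elf led_on.c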