
2.9.3 Stable update!

What is up Depthians!
We are back with another monstrous update as this one incorporates five beta test builds, so we have a lot to cover.
If you want to dive straight into the massive changelog/dissertation, click here.
We should probably start with the biggest change to From The Depths in this update and that is the change of fuel and ammo storage.
Quoting Nick, our lead developer:
The change is quite simple: "remove ammo and fuel as separate resources. Weapons will consume materials directly, fuel engines and CJEs will burn materials directly".
Before I dig into why I think this is the right thing for FtD, I'd like to explain a few details.
Energy, fuel and ammo are still needed for your constructs.
We have changed the "ammo barrels (etc)" and "fuel tanks" so they are just alternative material storage containers, but with the following properties:
--"ammo barrels" now increase the maximum possible rate of usage of materials as "ammo" for reloading guns. They still explode.
--"fuel tanks" increase the maximum possible rate of use of materials as "fuel" for fuel engines and CJEs, with the future stretch goal of fuel tanks being flammable.
--So ammo racking is going to remain a feature of the game- vehicles that need to reload a large amount of materials may need additional ammo barrels.
Ammo and oil processors are replaced ship-wide with existing material storage containers of the same size. They'll be made decorative blocks so you can still use them decoratively in future if you want to.
The oil refinery will be repurposed (described later in the patch notes)
There are two main reasons why I think this is the right move. Why it's right for the business and why it's right for the player.
Let's start with why I think it's right for the player:
Ammo and fuel containers are currently purchasable as either "empty or full". This is confusing when considered in the context of the campaign, story missions, custom battles, multiplayer matches...how do empty and full tanks behave in these modes? I'd need an hour to study the code and a small essay to explain it. That's not good game design.
Localised resources, when considering just the moving of material (and energy, if you want), become infinitely more manageable. The supply group system and the transit fleet system are not intuitive, and for a lot of situations their usage becomes fiddly and too complicated. We've replaced these systems with a new supply system that is much more intuitive for moving materials and energy around.
The UI is less cluttered now that ammo and fuel bars are not shown. This is not a minor point...it'll reduce the amount of data on screen by about 40% in a lot of the different views. It'll be so much easier to know at a glance if a particular fleet is running low on "materials" or doing fine. Is a transport ready to leave, or does it need to pick up more materials? Will a set of vehicles have enough materials for the next fight...this is so much easier with just one main resource type per vehicle.
When you or an enemy run out of ammo or fuel in a battle it's just frustrating. By combining fuel, ammo and materials for repairing you can guarantee that if someone runs out, the fight is going to be over quickly.
I imagine that deep down the majority of players would rather not have to create, stock and resupply fuel and ammo. I know that personally, the requirement to do this puts me off playing the campaign. By using a single material it still focuses the game on making efficient war machines, maintaining supply lines and growing your economy, but without the extra confusion of mat->ammo and mat->fuel conversion.
Being able to assess weapons, engines and vehicles in terms of material cost and running cost is elegant.
Most grand strategy games and RTS games don't have localised resources, and many don't have more than 2 resource types to handle. Very few combine localised materials with multiple types.
Why it's right for the business:
The ammo and oil processors were created about 8 years ago. Boring single blocks that don't add much to the game. It's been our intention to add something similar to the oil refinery but for ammo creation. That's a lot of work and adds to the complexity of the logistical part of the game, which we feel is already a burden.
Making the localised resource supply system more user friendly to make it easy/natural/pleasant to move ammo, fuel and material around the map would require a lot of effort and, quite frankly, I'm not sure we'd ever manage it.
The complexity of the UI scares off a lot of our customers. The barriers to getting a gun firing or a boat moving will be lowered if a single material container can theoretically get everything working.
Running out of ammo/fuel in combat is a problem for our players. We want to find a solution to that, but it would take a lot of effort to do so. We also want the strategic AI to always enter a battle with enough ammo and fuel for the fight- that's another massive bunch of work.
The campaign's strategic AI has to work hard to get materials where it wants them. It's a bundle of work and added complexity to get NPC fleets to restock ammo and fuel as well.
We had proposed work to make resource dumps (from dead ships) contain ammo and fuel...again, that's more work, more bugs, more testing.
Certain game modes such as story missions, tournament mode, and multiplayer maps should theoretically allow the player to choose the amount of ammo or fuel stocked into their vehicles before the match begins. That's another bundle of work and added complexity we'd like to avoid.
Currently, out-of-play units on the map can run out of fuel and will still continue to move "for free". It's exploitable and we don't have a solution to that...but if all the different out-of-play movement calculations are burning material, there will be no avoiding the cost.
The development effort can be much better spent polishing up other features that I actually believe in, rather than flogging the dead horse of logistical complexity in an attempt to make it interesting, approachable and fun for everyone (which I fundamentally don't think it would ever be).
Fundamentally I think that by winding back this feature we tie up a large number of loose ends and it results in a far more finished and enjoyable product.
And what's more, everyone on the development team agrees that we enjoy the game for fighting, looting and creating...not staring blankly at dozens of resource bars trying to figure out who needs to head back for more fuel and how long we need to wait for ammunition to process.
We've also simplified the resource transfer system. "Supply groups" and "Transit Fleets" have been replaced with a simple but comprehensive three-tier system: you can mark a vehicle as a "Creator", a "Cargo" or a "User". Creators fill up Cargos (and Users), and Cargos give to Users (up to procurement levels). Users equalise their material with their neighbours, and so do Creators; there are also a few handy transfers from Users back to Cargo and Creator, to make sure they maintain their procurement levels as well.
This system covers 95% of the way people were using the resource system, and does it all semi-automatically. This simplification is much more possible now that materials are the only resource, as they invariably just need to flow from the resource zones to the front line, with everyone (Creators and Cargo) keeping what they need and passing the rest on. This new resource system also facilitates the long-range transport of materials from refinery to refinery, which is neat.
The system also has an option, for Creator and Cargo types, to set their "supply chain index": if you want to relay materials from output to output in order to accumulate them at a central location, the supply chain index determines which way along the chain the materials will flow. It's all explained in the game.
After spending a lot of time with this new system, from adventure to campaign and designer mode, the gameplay feels a little faster to get going and a little simpler for fleet management. In case you didn't already know: with your supply construct selected, shift+right click on the target construct / flagship of the fleet you want to keep supplied; keep holding shift and right-click where you want to pick the resources up from; then, still holding shift, right-click on the target construct / flagship again to finish the loop.
This is done, of course, after assigning the Creator, Cargo and User settings.
For example, the Creator is the harvesting construct, the Cargo is the supply ship, and the User is the single target construct that uses the mats.
This keeps the supply ship's target waypoint updated, so your supply ship will always head to the target construct no matter where it has moved after you set up the loop.
You still need ammo and fuel boxes on your constructs, as these govern the transfer rate: the speed at which your turrets and fuel engines are stocked with the materials they need to run. You can run a construct without fuel or ammo boxes; however, once your APS clips are empty you will see a drop in your rate of fire, as material is not being transferred fast enough. The same applies to fuel engines and CJEs.
Another change that goes hand in hand with resource management is the changes to fuel refineries.
In short:
Refineries on a force with greater than 1 million materials on it will begin refining the material into 'commodities' that are stored centrally. Commodities (AKA centralised materials) can be added by the player to any vehicle in allied territory, at any time.
Resource zones have a new feature too: the ability to deactivate a resource zone on your owned tiles, provided you own enough territory. You can see the "Zone Deactivation" option in the UI when double-clicking on the resource zone.
https://preview.redd.it/284w9khtt9t51.jpg?width=1920&format=pjpg&auto=webp&s=9dd61b06b2b6d0431bbb35c44a4d54563b81fbf0
Custom Jet Engines have had some additional parts and new features.
We have the new ducted air intakes, which as you can see have different attachment points.
https://preview.redd.it/qaqeplmwt9t51.jpg?width=1920&format=pjpg&auto=webp&s=2ac2019d4b0c908019bf0ef0d53ad3a718fc4f4d
These ducted intakes allow you to have your CJE enclosed inside your construct enabling you to pass ducting through to access airflow outside.
https://preview.redd.it/pge1x43yt9t51.jpg?width=1920&format=pjpg&auto=webp&s=f2ee0cf35276f45feeb7320b29d844fa54776cdf
https://preview.redd.it/scych37zt9t51.jpg?width=1920&format=pjpg&auto=webp&s=1bf7559bc2379b692b7a318ba8f43708f5bba81e
And as you can see in the pic below, they are enclosed and making use of the air duct intakes.
https://preview.redd.it/ucidv351u9t51.jpg?width=1920&format=pjpg&auto=webp&s=7d93e0c08d381fcaea2bcfc315c7b676f4006b51
You can also route the exhaust of CJEs that sit below the waterline using the two new connector blocks, a 90-degree corner and an extension piece; they will work as long as you funnel the exhaust out above the waterline.
https://preview.redd.it/aiofdee2u9t51.jpg?width=1920&format=pjpg&auto=webp&s=72c1dd2023195ef2337704d0547904031ad97e6c
PACs have also had a rework and new additions.
We now have the long-range lens, which has a circular 10° field of fire; the close-range lens, with a circular 35° field of fire; the scatter lens, with a circular 30° field of fire; and the vertical lens, with a 10° horizontal / 60° vertical field of fire (good for AA). The other difference between them is the percentage of damage drop-off at certain ranges, which is marked in their UI.
https://preview.redd.it/zvg2u0c5u9t51.jpg?width=1920&format=pjpg&auto=webp&s=567a2c4e092ea5fef62e67b051a74151e48b58d4
https://preview.redd.it/mboi63c5u9t51.jpg?width=1920&format=pjpg&auto=webp&s=78690d46df1466844cc38ff6b6623a30d910b726
One other awesome change to the PAC system is that melee lenses no longer need to be hooked up to the (now called) long-range lens. Simply set up your melee head and snakey noodle PAC tubes with a terminator on the end, then link up to your other melee lens via Q in the drop-down menu. The scatter lens also deserves some attention here, as it can double the number of beams if you increase the charge time, up to the x4 maximum at 30 seconds. The PAC system has had many tweaks, which you should check up on in the changelogs.
Shields have also had some love. The projector shield's reflect and laser-scatter modes are now merged, and have also had a slight buff to ricochet chance. The ring shield's armour bonus has also been increased by 50%.
We also have some new additions to APS in terms of coolers.
From left to right we now have an L-shaped, a 4-way and a 5-way cooler.
https://preview.redd.it/lfi937e7u9t51.jpg?width=1920&format=pjpg&auto=webp&s=4ff99ceae914777137262754baa017300c2f4c1f
We now have some new wide wheel additions too for all you land vehicle lovers.
https://preview.redd.it/1ysi7u68u9t51.jpg?width=1920&format=pjpg&auto=webp&s=0760606aa3aebbde24a44fcb7319477453ee3b99
The next biggest change is steam engines, even though only part of the rework lands in this update. We are once again rehashing the whole system, and the rest will be released in the following updates.
I asked Weng a number of questions: why the change was needed, why the parts are expensive, and when and why you would use steam over fuel. This is what he had to say:
Reason why steam changes are needed:
  • Steam was previously totally unbalanced and arbitrary. For example, 9 small boilers with 1 small piston was the optimal steam setup, more efficient and denser than almost all other engines; and turbine power generation depended only on pressure, so compact turbines were always optimal.
  • The UI lacked a lot of critical info.
  • It was hard to control the usage of steam.

What's good with new steam:
  • A bit more realism and complexity
  • Larger steam engines now generally have better efficiency and density than equivalent smaller ones
  • More useful info, such as total power production and performance over time
  • The possibility to regulate steam usage with valves

Pros of steam compared to injector fuel:
  • Denser and more efficient
  • Even denser with turbines
  • Easier to fit into irregular space
  • Provides a buffer with flywheels or steam tanks
  • More efficient when used for propellers
  • Doesn't require fuel containers, uses material directly from any type of storage
  • Computationally less intensive
Cons of steam compared to fuel:
  • Still hard to regulate, so it's only useful when the power usage is constant or there's a buffer energy storage
  • Turbines waste energy when batteries are full
  • Crankshafts waste energy when reaching speed limit
  • More susceptible to damage (an injector engine can often still run fine even when half of it is gone; steam can stop working when a single pipe is destroyed)
Why the cost of parts is hilariously high: steam engines have better efficiency and density (many players seem to forget that one) than injector engines, so a higher initial cost makes them less overpowered.
(In my opinion, the potential waste of energy is a major drawback of steam and justifies its high potential power. But iirc Draba said that injector engines would be useless on designs that require a lot of power if steam didn't have a higher initial cost, which also makes sense.)
Problems with new steam that can't be fixed:
  • Many old designs are broken due to low power output
  • More complexity
Problems that can probably be fixed but I don't have a solution:
  • Inefficient steam engines are ridiculously bad (a bad steam engine is like 30 PPM and 50 PPV, while a good one is around 600 PPM and 110 PPV) (I tried to fix this and spent like 40 hours on that, but I only managed to make it easier to build a mediocre engine)
  • Cannot be simulated to calculate a stable power output, like fuel engines do (actually it's easy but would take a lot of time to do and I don't think it's necessary)

Another massive change is the detection rework. I also left a few questions for Ian, AKA Blothorn, to explain the system and how it works.
Why a change was warranted:
  • Different types of detection weren't well balanced--for instance, visual components had better accuracy than IR and vastly better range.
  • Detection autoadjust used an incorrect formula, so optimizing adjustment was both mechanical and tedious.
  • Trackers having much better detection ranges than search sensors meant that detection was very binary--if you could see something at all you could usually get a precise lock (barring ECM, which was only counterable by large numbers of components).
  • Needing both sensors and munitions warners made reactive missile defence difficult on small vehicles.
  • There were a number of other inconsistencies/imbalances, e.g. some visual/IR sensors working through water, steam engines producing no heat, etc.
Overview of the new system:
On the offensive side, each sensor type now has a role in which it is optimal, and large vehicles are best using a variety to cover their weaknesses. Visual probably remains the default for above-water detection--it remains impossible to reduce visual signature other than reducing size. IR is better against fast vehicles, as they have trouble avoiding high IR signatures from thrust and drag. Both visual and IR are weak in rangefinding (although coincidence rangefinders are adequate for most purposes); radar is correspondingly strong in range and weak in bearing, although it often offers better detection chances against vehicles that don't pay attention to radar stealth.
On the defensive side, there are two approaches. Most obvious is signature reduction--while it is deliberately difficult to avoid detection entirely, reducing signature reduces detection chances and thus degrades opposing accuracy. At short ranges, however, this doesn't work well--detection chances are likely high regardless, and low errors at short range mean even sparse detections can give a good fix. Smoke and chaff can be useful here: they increase detection chance while adding a distance-independent error to opponent's visual and radar sensors, respectively.
ECM, buoys, and radar guidance have also been reworked. Buoys are more powerful, becoming more accurate as they get closer to the target. While their base error is high, at long ranges a buoy at close range can beat the accuracy of any onboard sensor. If you worry about opponents’ buoys, ECM can now intermittently jam them--except if they are connected to their parent vehicle by a harpoon cable, in which case they don't need the vulnerable wireless connection.
Most blueprints should need no modifications under the new system, although a few may want a few more or less GPP cards. The one exception is water interactions--IR cameras, laser rangefinders, and retroreflection sensors can no longer work through water, so submarines that used them underwater or vehicles that used them to detect submarines will need to replace them (likely with buoys). Vehicles that predominantly used visual detection should also consider adding a greater variety of sensors--in particular, visual camera trackers tied to AA mainframes should likely be replaced with IR cameras. Also, radars and cameras can take over missile and projectile detection (radar is required for projectile detection), so munitions warners can be removed/replaced with additional sensors.
Last but not least, a sweet little addition to our build menu prefabs.
https://preview.redd.it/iqw1ymabu9t51.png?width=1920&format=png&auto=webp&s=aa1e3cdba6e1d62e07aef83caf0acad2a39249ed
Please do make sure you go through the changelog as a hell of a lot has changed!
submitted by BaconsTV to FromTheDepths [link] [comments]

gdbstub 0.4: An ergonomic, #![no_std] implementation of the GDB Remote Serial Protocol in Rust

crates.io | docs | repo
An ergonomic and easy-to-integrate implementation of the GDB Remote Serial Protocol in Rust, with full #![no_std] support. gdbstub makes extensive use of Rust's powerful type system + generics to enforce protocol invariants at compile time, minimizing the number of tricky protocol details end users have to worry about.
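If you haven't seen that technique before, here is a tiny sketch of the general idea. To be clear, this is not gdbstub's actual API, just the typestate pattern it builds on: encode the protocol state in a type parameter, and invalid transitions simply don't compile.
use std::marker::PhantomData;

// Hypothetical protocol states, encoded as zero-sized types.
struct Idle;
struct Running;

struct Session<State> {
    _state: PhantomData<State>,
}

impl Session<Idle> {
    fn new() -> Self {
        Session { _state: PhantomData }
    }

    // Consuming `self` makes the old state unusable after the transition.
    fn start(self) -> Session<Running> {
        Session { _state: PhantomData }
    }
}

impl Session<Running> {
    fn send_packet(&self, payload: &[u8]) {
        let _ = payload; // wire-format handling would live here
    }
}

fn main() {
    let session = Session::new().start();
    session.send_packet(b"qSupported");
    // Session::new().send_packet(b"..."); // would not compile: wrong state
}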
A lot has changed since my last post announcing gdbstub 0.2!
Version 0.4 includes a major API overhaul, tons of internal optimizations, and a slew of new GDB protocol features, making it the fastest, leanest, and most featureful release of gdbstub yet!
It's been absolutely incredible having so many people contribute to the library, and seeing gdbstub being used in all sorts of cool projects. Thank you for all the support!
By the way, if you're taking part in Hacktoberfest this year, there are plenty of ways to contribute to gdbstub. There's a whole laundry list of protocol extensions and new architectures to support, so check out the issue tracker and consider lending a hand!
Cheers!
submitted by daniel5151 to rust [link] [comments]

[Scheduled Activity] What can a game say beyond “you win” or “you lose?”

The way it all began was "you hit!" or "you miss!", and once we all put rules to the game of let's pretend, to preempt cries of "no I didn't" and "you're cheating," we had a binary resolution system: pass or fail.
These days we have many other options: PbtA and Blades in the Dark offer partial success and partial failure for a richer experience.
And yet, the 98 pound gorilla of gaming has never done anything with that. And all the heartbreaker games that are based on it, well, they carry that baggage with them for the most part.
How can we bring levels of success to more games? And does that even matter?
Discuss.
This post is part of the weekly RPGdesign Scheduled Activity series. For a listing of past Scheduled Activity posts and future topics, follow that link to the Wiki. If you have suggestions for Scheduled Activity topics or a change to the schedule, please message the Mod Team or reply to the latest Topic Discussion Thread.
For information on other RPGDesign community efforts, see the Wiki Index.
submitted by cibman to RPGdesign [link] [comments]

ESP8266 development on OpenBSD with platformio

Hi,
I got platformio running on OpenBSD-current (it should work with older releases, too) and was able to compile a firmware for my ESP8266 NodeMCUv2. I haven't uploaded it to the board yet, since it's still somewhere in the attic. Will test this soon and update this post. I guess it'll just work.

setup

You have to install the packages arduino-esp8266 and py3-pip:
# pkg_add arduino-esp8266 py3-pip 
And install platformio via pip:
# pip install platformio 

create project

The next steps were done as non-root user.
Now, create your project folder:
$ mkdir -p ~/code/myproject
$ cd ~/code/myproject
and initialize a platformio project:
$ pio init 
It should look something like this:
$ ls -la
total 64
drwxr-xr-x   6 lotherk  lotherk   512 Nov  1 09:02 .
drwxr-xr-x  28 lotherk  lotherk  1536 Nov  1 09:02 ..
-rw-r--r--   1 lotherk  lotherk     5 Nov  1 09:02 .gitignore
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 include
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 lib
-rw-r--r--   1 lotherk  lotherk   364 Nov  1 09:02 platformio.ini
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 src
drwxr-xr-x   2 lotherk  lotherk   512 Nov  1 09:02 test
Now start writing code in src/main.cpp:
#include <Arduino.h>
#include <ESP8266WiFi.h>

void setup() {
}

void loop() {
}
And edit platformio.ini:
[platformio]
default_envs = nodemcuv2

[env:nodemcuv2]
platform = espressif8266
framework = arduino
board = nodemcuv2
Please see the official documentation for which platform, framework or board you might need. Remember, this is all for esp8266 chips.
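By the way, if you want the firmware to do something observable once it's flashed, a minimal blink sketch makes a handy smoke test. LED_BUILTIN maps to the NodeMCU's on-board LED, which is active-low on most ESP8266 boards:
#include <Arduino.h>

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // on-board LED of the NodeMCUv2
}

void loop() {
  digitalWrite(LED_BUILTIN, LOW);   // active-low: LOW turns the LED on
  delay(500);
  digitalWrite(LED_BUILTIN, HIGH);  // HIGH turns it off
  delay(500);
}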

first build

It's now time for the first build, which will very likely fail:
$ pio run 
This will give you:
Processing nodemcuv2 (platform: espressif8266; framework: arduino; board: nodemcuv2)
--------------------------------------------------------------------------------
Tool Manager: Installing toolchain-xtensa @ ~2.40802.191122
Error: Could not find the package with 'toolchain-xtensa @ ~2.40802.191122' requirements for your system 'openbsd_amd64'
Researching this error led me to https://github.com/trombik/platformio-freebsd-toolchain-xtensa/. What @trombik did was to create a fake platformio package with symlinks to the right files on the system. In his case it was FreeBSD, but I tried it anyway. It mostly worked out of the box; I just had to symlink the xtensa-lx106-elf-* binaries from /usr/local/bin into the package. I created my own fake package for OpenBSD at https://github.com/lotherk/platformio-openbsd-toolchain-xtensa.
Clone the repository and place it at ~/.platformio/packages/toolchain-xtensa. It is important to name the folder toolchain-xtensa! Ensure that xtensa is installed; it should come with the arduino-esp8266 package:
$ pkg_info | grep xtensa
xtensa-lx106-elf-binutils-2.32 binutils for xtensa-lx106-elf cross-development
xtensa-lx106-elf-gcc-5.2.0     gcc for xtensa-lx106-elf cross-development
xtensa-lx106-elf-newlib-2.1.0p0 newlib for xtensa-lx106-elf cross-development
Now change to the directory and run init.sh, which will create all the symlinks you need.
$ cd ~/.platformio/packages/toolchain-xtensa/
$ ./init.sh
Back to our project and re-run pio:
$ cd ~/code/myproject
$ pio run
This time it does a lot more, but now fails complaining it can't find tool-esptool:
Processing nodemcuv2 (platform: espressif8266; framework: arduino; board: nodemcuv2)
-----------------------------------------------------------------------------------
Tool Manager: Installing framework-arduinoespressif8266 @ ~3.20704.0
Tool Manager: Warning! More than one package has been found by framework-arduinoespressif8266 @ ~3.20704.0 requirements:
 - platformio/framework-arduinoespressif8266 @ 3.20704.0
 - jason2866/framework-arduinoespressif8266 @ 2.7.4.1
 - tasmota/framework-arduinoespressif8266 @ 2.7.4.3
Tool Manager: Please specify detailed REQUIREMENTS using package owner and version (showed above) to avoid name conflicts
Unpacking [####################################] 100%
Tool Manager: framework-arduinoespressif8266 @ 3.20704.0 has been installed!
Tool Manager: Installing tool-esptool @ <2
Tool Manager: Warning! More than one package has been found by tool-esptool @ <2 requirements:
 - platformio/tool-esptool @ 1.413.0
 - volcas/tool-esptool @ 1.413.1
Tool Manager: Please specify detailed REQUIREMENTS using package owner and version (showed above) to avoid name conflicts
Error: Could not find the package with 'tool-esptool @ <2' requirements for your system 'openbsd_amd64'
Fortunately this is as easy to fix as toolchain-xtensa: I've created a fake package for esptool as well. esptool must be installed, though, which it already should be because of the arduino-esp8266 package. Clone https://github.com/lotherk/platformio-openbsd-tool-esptool to ~/.platformio/packages/tool-esptool (naming is important...) and run init.sh as you did with the toolchain-xtensa package.
Rerun pio and it should compile now:
$ cd ~/code/myproject
$ pio run
Processing nodemcuv2 (platform: espressif8266; framework: arduino; board: nodemcuv2)
--------------------------------------------------------------------------------
Verbose mode can be enabled via `-v, --verbose` option
CONFIGURATION: https://docs.platformio.org/page/boards/espressif8266/nodemcuv2.html
PLATFORM: Espressif 8266 (2.6.2) > NodeMCU 1.0 (ESP-12E Module)
HARDWARE: ESP8266 80MHz, 80KB RAM, 4MB Flash
PACKAGES:
 - framework-arduinoespressif8266 3.20704.0 (2.7.4)
 - tool-esptool 0.1.0
 - tool-esptoolpy 1.20800.0 (2.8.0)
 - toolchain-xtensa 2.40802.191122 (4.8.2)
LDF: Library Dependency Finder -> http://bit.ly/configure-pio-ldf
LDF Modes: Finder ~ chain, Compatibility ~ soft
Found 29 compatible libraries
Scanning dependencies...
Dependency Graph
|-- <ESP8266WiFi> 1.0
Building in release mode
Compiling .pio/build/nodemcuv2/src/main.cpp.o
Generating LD script .pio/build/nodemcuv2/ld/local.eagle.app.v6.common.ld
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/BearSSLHelpers.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/CertStoreBearSSL.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFi.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiAP.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiGeneric.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiGratuitous.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiMulti.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiSTA-WPS.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiSTA.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/ESP8266WiFiScan.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiClient.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiClientSecureAxTLS.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiClientSecureBearSSL.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiServer.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiServerSecureAxTLS.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiServerSecureBearSSL.cpp.o
Compiling .pio/build/nodemcuv2/lib74a/ESP8266WiFi/WiFiUdp.cpp.o
Archiving .pio/build/nodemcuv2/libFrameworkArduinoVariant.a
Indexing .pio/build/nodemcuv2/libFrameworkArduinoVariant.a
Compiling .pio/build/nodemcuv2/FrameworkArduino/Crypto.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/Esp-frag.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/Esp-version.cpp.o
Archiving .pio/build/nodemcuv2/lib74a/libESP8266WiFi.a
Indexing .pio/build/nodemcuv2/lib74a/libESP8266WiFi.a
Compiling .pio/build/nodemcuv2/FrameworkArduino/Esp.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/FS.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/FSnoop.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/FunctionalInterrupt.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/HardwareSerial.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/IPAddress.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/MD5Builder.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/Print.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/Schedule.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/StackThunk.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/Stream.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/StreamString.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/Tone.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/TypeConversion.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/Updater.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/WMath.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/WString.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/abi.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/base64.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/cbuf.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/cont.S.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/cont_util.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_app_entry_noextra4k.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_eboot_command.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_features.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_flash_quirks.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_flash_utils.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_i2s.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_main.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_noniso.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_phy.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_postmortem.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_si2c.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_sigma_delta.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_spi_utils.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_timer.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_waveform.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_analog.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_digital.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_pulse.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_pwm.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/core_esp8266_wiring_shift.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/crc32.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/debug.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/flash_hal.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/gdb_hooks.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/heap.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/libb64/cdecode.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/libb64/cencode.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/libc_replacements.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/sntp-lwip2.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_cache.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_check.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_gc.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_hydrogen.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs/spiffs_nucleus.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/spiffs_api.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/sqrt32.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/time.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/uart.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_info.c.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_integrity.c.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_local.c.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_malloc.cpp.o
Compiling .pio/build/nodemcuv2/FrameworkArduino/umm_malloc/umm_poison.c.o
Archiving .pio/build/nodemcuv2/libFrameworkArduino.a
Indexing .pio/build/nodemcuv2/libFrameworkArduino.a
Linking .pio/build/nodemcuv2/firmware.elf
Retrieving maximum program size .pio/build/nodemcuv2/firmware.elf
Checking size .pio/build/nodemcuv2/firmware.elf
Building .pio/build/nodemcuv2/firmware.bin
Advanced Memory Usage is available via "PlatformIO Home > Project Inspect"
RAM:   [===       ]  32.7% (used 26776 bytes from 81920 bytes)
Flash: [==        ]  24.6% (used 256780 bytes from 1044464 bytes)
Creating BIN file ".pio/build/nodemcuv2/firmware.bin" using "/home/lotherk/.platformio/packages/framework-arduinoespressif8266/bootloaders/eboot/eboot.elf" and
========================= [SUCCESS] Took 70.35 seconds =========================
Et voila, you've compiled a firmware for your esp8266 chip on OpenBSD.
Uploading the firmware should only be a matter of configuring the right serial port in platformio.ini. As soon as I get mine from the attic, I will try it and update this post.
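For reference, the relevant options should be upload_port and upload_speed in platformio.ini; on OpenBSD the first USB serial device typically shows up as /dev/cuaU0 (an assumption, check dmesg for yours):
[env:nodemcuv2]
platform = espressif8266
framework = arduino
board = nodemcuv2
upload_port = /dev/cuaU0   ; assumption: adjust to your serial device
upload_speed = 115200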

Edit: spelling
submitted by lotherk to openbsd [link] [comments]

Help Troubleshooting my Factorio Install

Factorio crashes on startup; I don't even get to the start menu. Things we've tried:
I have attached the log file below. I don't know what it means but perhaps one of you engineers can help me parse it.

Here's our log file return:
0.000 2020-10-27 11:20:33; Factorio 1.0.0 (build 54889, mac, steam)
0.000 Operating system: macOS 10.13.6
0.000 Program arguments: "/Volumes/Home/Library/Application Support/Steam/steamapps/common/Factorio/factorio.app/Contents/MacOS/factorio"
0.000 Read data path: /Volumes/Home/Library/Application Support/Steam/steamapps/common/Factorio/factorio.app/Contents/data
0.000 Write data path: /Volumes/Home/Library/Application Support/factorio [846099/953541MB]
0.000 Binaries path: /Volumes/Home/Library/Application Support/Steam/steamapps/common/Factorio/factorio.app/Contents
0.023 System info: [CPU: Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz, 6 cores, RAM: 16384 MB]
0.023 Display options: [FullScreen: 0] [VSync: 1] [UIScale: automatic (100.0%)] [Native DPI: 1] [Screen: 255] [Special: lmW] [Lang: en]
0.066 Available displays: 1
0.066 [0]: SA300/SA350 - {[0,0], 1920x1080, SDL_PIXELFORMAT_ARGB8888, 60Hz, 0xb41d501(0x02)}
0.109 Initialised OpenGL:[0] NVIDIA GeForce 210 OpenGL Engine; driver: 3.3 NVIDIA-10.33.0 387.10.10.10.40.105
0.109 [Extensions] s3tc:yes; KHR_debug:NO; ARB_clear_texture:NO, ARB_copy_image:NO
0.109 [Version] 3.3
0.109 Graphics settings preset: medium-with-low-vram
0.109 Dedicated video memory size 1024 MB (detected from GeForce 210; VendorID: 0x1022600)
0.208 Graphics options: [Graphics quality: normal] [Video memory usage: high] [Light scale: 25%] [DXT: low-quality] [Color: 32bit]
0.208 [Max threads (load/render): 32/6] [Max texture size: 4096] [Tex.Stream.: 1] [Rotation quality: low] [Other: sTDCwt] [B:0,C:0,S:100]
0.239 [Audio] Backend:default; Depth:16, Channel:2, Frequency:44100; MixerQuality:linear
0.410 Loading mod core 0.0.0 (data.lua)
0.527 Loading mod base 1.0.0 (data.lua)
0.802 Loading mod base 1.0.0 (data-updates.lua)
0.953 Checksum for core: 2630831588
0.953 Checksum of base: 3509992273
1.153 Prototype list checksum: 3301461508
1.229 Loading sounds...
1.263 Info PlayerData.cpp:70: Local player-data.json unavailable
1.263 Info PlayerData.cpp:73: Cloud player-data.json available, timestamp 1599183581
1.400 Initial atlas bitmap size is 4096
1.405 Created atlas bitmap 4096x4096 [none]
1.409 Created atlas bitmap 4096x4096 [none]
1.411 Created atlas bitmap 4096x4084 [none]
1.413 Created atlas bitmap 4096x4092 [none]
1.416 Created atlas bitmap 4096x4096 [none]
1.418 Created atlas bitmap 4096x4092 [none]
1.418 Created atlas bitmap 4096x504 [none]
1.418 Created atlas bitmap 4096x2120 [decal]
1.421 Created atlas bitmap 4096x4064 [low-object]
1.421 Created atlas bitmap 4096x1856 [low-object]
1.421 Created atlas bitmap 4096x2272 [mipmap, linear-minification, linear-magnification, linear-mip-level]
1.423 Created atlas bitmap 4096x4096 [terrain, mipmap, linear-minification, linear-mip-level]
1.423 Created atlas bitmap 4096x3104 [terrain, mipmap, linear-minification, linear-mip-level]
1.423 Created atlas bitmap 4096x1632 [terrain-effect-map, mipmap, linear-minification, linear-mip-level]
1.423 Created atlas bitmap 4096x1664 [smoke, mipmap, linear-minification, linear-magnification]
1.423 Created atlas bitmap 4096x928 [mipmap]
1.423 Created atlas bitmap 4096x2336 [icon, not-compressed, mipmap, linear-minification, linear-magnification, linear-mip-level]
1.423 Created atlas bitmap 2048x224 [icon-background, not-compressed, mipmap, linear-minification, linear-magnification, linear-mip-level, ]
1.423 Created atlas bitmap 4096x828 [alpha-mask]
1.428 Created atlas bitmap 4096x4088 [shadow, linear-magnification, alpha-mask]
1.431 Created atlas bitmap 4096x4096 [shadow, linear-magnification, alpha-mask]
1.433 Created atlas bitmap 4096x4080 [shadow, linear-magnification, alpha-mask]
1.434 Created atlas bitmap 4096x3272 [shadow, linear-magnification, alpha-mask]
1.434 Created atlas bitmap 4096x1312 [shadow, mipmap, linear-magnification, alpha-mask]
1.450 Created virtual atlas pages 4096x4096x2
2.304 Error CrashHandler.cpp:621: Received SIGSEGV
Factorio crashed. Generating symbolized stacktrace, please wait ...
#1 0x00000001039bc8b2 in Logger::logStacktrace(StackTraceInfo*) + 0x12
#2 0x0000000102e90899 in CrashHandler::writeStackTrace(CrashHandler::CrashReason) + 0xb9
#3 0x00000001039a00e4 in CrashHandler::commonSignalHandler(int) + 0x74
#4 0x000000010399f5e9 in CrashHandler::SignalHandler(int) + 0x9
#5 0x00007fff6593ef5a in _sigtramp + 0x1a
#6 0x000000010e0a4765 in + 0x0
#7 0x000000010e0a384a in + 0x0
#8 0x000000010e0a3187 in + 0x0
#9 0x000000010e494161 in gldBlitFramebufferData + 0x2c55c6
#10 0x000000010e492d26 in gldBlitFramebufferData + 0x2c418b
#11 0x000000010e492d26 in gldBlitFramebufferData + 0x2c418b
#12 0x000000010e492d26 in gldBlitFramebufferData + 0x2c418b
#13 0x000000010e492d26 in gldBlitFramebufferData + 0x2c418b
#14 0x000000010e492d26 in gldBlitFramebufferData + 0x2c418b
#15 0x000000010e492377 in gldBlitFramebufferData + 0x2c37dc
#16 0x000000010e0a44cc in + 0x0
#17 0x000000010e123ec8 in gldReadTextureData + 0x311a4
#18 0x000000010df0acf1 in + 0x0
#19 0x000000010df0bd5e in + 0x0
#20 0x000000010df11957 in + 0x0
#21 0x000000010df11ff0 in + 0x0
#22 0x000000010e1de080 in gldBlitFramebufferData + 0xf4e5
#23 0x000000010e1de7f3 in gldBlitFramebufferData + 0xfc58
#24 0x000000010e1dee0e in gldBlitFramebufferData + 0x10273
#25 0x000000010e0f21b5 in gldUnbindPipelineProgram + 0x97a
#26 0x000000010e1ddc83 in gldBlitFramebufferData + 0xf0e8
#27 0x000000010e1cc241 in gldUpdateDispatch + 0x354
#28 0x00007fff47e09b33 in gleDoDrawDispatchCoreGL3 + 0x259
#29 0x00007fff47dbad07 in gleDrawArraysOrElements_Entries_Body + 0x77
#30 0x00007fff47db41d0 in glDrawElements_GL3Exec + 0xd2
#31 0x0000000102eabe4f in GraphicsInterfaceOpenGL::drawIndexed(DrawBindings const&, VideoBuffer*, VideoBuffer*, unsigned int, unsigned int) + 0xbf
#32 0x0000000102e1e8f2 in TextureProcessor::testGpuAcceleratedCompression(GraphicsInterface&) + 0xbd2
#33 0x0000000102e10f8a in AtlasSystem::createTextureProcessor(unsigned int) + 0x9a
#34 0x0000000102e0e9b5 in AtlasSystem::loadSprites(bool) + 0x165
#35 0x0000000102e1fb2c in AtlasSystem::tryLoadSpritesWithFallbackToMinimalMode(bool) + 0x2c
#36 0x0000000102df02ad in AtlasSystem::build() + 0x20d
#37 0x000000010290abef in GlobalContext::init(bool, bool, bool, std::__1::optional) + 0x264f
#38 0x00000001029056f9 in MainLoop::run(Filesystem::Path const&, Filesystem::Path const&, bool, bool, std::__1::function, Filesystem::Path const&, MainLoop::HeavyMode) + 0xe9
#39 0x000000010278ec2b in main + 0x1282b
Stack trace logging done
2.322 Error Util.cpp:97: Unexpected error occurred. If you're running the latest version of the game you can help us solve the problem by posting the contents of the log file on the Factorio forums.
Please also include the save file(s), any mods you may be using, and any steps you know of to reproduce the crash.
submitted by derekvonzarovich2 to factorio [link] [comments]

Node.js Application Monitoring with Prometheus and Grafana

Hi guys, we published this article on our blog (here) some time ago and I thought it could be interesting for node to read as well, since we got some good feedback on it!

What is application monitoring and why is it necessary?

Application monitoring is a method that uses software tools to gain insights into your software deployments. This can range from simple health checks, to see if the server is available, to more advanced setups where a monitoring library is integrated into your server that sends data to a dedicated monitoring service. It can even involve the client side of your application, offering more detailed insights into the user experience.
For every developer, monitoring should be a crucial part of the daily work, because you need to know how the software behaves in production. You can let your testers work with your system and try to mock interactions or high loads, but these techniques will never be the same as the real production workload.

What is Prometheus and how does it work?

Prometheus is an open-source monitoring system that was created in 2012 by SoundCloud. In 2016, Prometheus became the second project (following Kubernetes) to be hosted by the Cloud Native Computing Foundation.
https://preview.redd.it/8kshgh0qpor51.png?width=1460&format=png&auto=webp&s=455c37b1b1b168d732e391a882598e165c42501a
The Prometheus server collects metrics from your servers and other monitoring targets by pulling their metric endpoints over HTTP at a predefined time interval. For ephemeral and batch jobs, for which metrics can't be scraped periodically due to their short-lived nature, Prometheus offers a Pushgateway. This is an intermediate server to which monitoring targets can push their metrics before exiting. The data is retained there until the Prometheus server pulls it later.
The core data structure of Prometheus is the time series, which is essentially a list of timestamped values that are grouped by metric.
With PromQL (Prometheus Query Language), Prometheus provides a functional query language allowing for selection and aggregation of time series data in real-time. The result of a query can be viewed directly in the Prometheus web UI, or consumed by external systems such as Grafana via the HTTP API.
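As a small taste of PromQL, here is a hedged example that assumes the http_request_duration_seconds histogram we define later in this article; it computes the average request duration over the last five minutes:
rate(http_request_duration_seconds_sum[5m])
  / rate(http_request_duration_seconds_count[5m])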

How to use prom-client to export metrics in Node.js for Prometheus?

prom-client is the most popular Prometheus client library for Node.js. It provides the building blocks to export metrics to Prometheus via the pull and push methods and supports all Prometheus metric types such as histogram, summaries, gauges and counters.
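To get a feel for the API before we wire it into a server, here is a minimal sketch of the two simplest metric types; the metric names are made up for illustration:
const client = require('prom-client')

// Counter: a value that only ever goes up, e.g. the number of handled orders
const ordersTotal = new client.Counter({
  name: 'orders_total',
  help: 'Total number of processed orders'
})
ordersTotal.inc() // increment by 1

// Gauge: a value that can go up and down, e.g. jobs currently in a queue
const queueSize = new client.Gauge({
  name: 'queue_size',
  help: 'Current number of queued jobs'
})
queueSize.set(42) // set to the current value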

Setup sample Node.js project

Create a new directory and set up the Node.js project:
$ mkdir example-nodejs-app
$ cd example-nodejs-app
$ npm init -y

Install prom-client

The prom-client npm module can be installed via:
$ npm install prom-client 

Exposing default metrics

Every Prometheus client library comes with predefined default metrics that are assumed to be good for all applications on the specific runtime. The prom-client library also follows this convention. The default metrics are useful for monitoring the usage of resources such as memory and CPU.
You can capture and expose the default metrics with the following code snippet:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)

Exposing custom metrics

While default metrics are a good starting point, at some point, you’ll need to define custom metrics in order to stay on top of things.
Capturing and exposing a custom metric for HTTP request durations might look like this:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Create a histogram metric
const httpRequestDurationMicroseconds = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in microseconds',
  labelNames: ['method', 'route', 'code'],
  buckets: [0.1, 0.3, 0.5, 0.7, 1, 3, 5, 7, 10]
})

// Register the histogram
register.registerMetric(httpRequestDurationMicroseconds)

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Start the timer
  const end = httpRequestDurationMicroseconds.startTimer()

  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }

  // End timer and add labels
  end({ route, code: res.statusCode, method: req.method })
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)
Copy the above code into a file called server.js and start the Node.js HTTP server with the following command:
$ node server.js 
You should now be able to access the metrics via http://localhost:8080/metrics.
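A quick sanity check from the terminal (output abridged, and the values are illustrative; the exact metric set depends on your prom-client version):
$ curl -s http://localhost:8080/metrics | head -n 3
# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total{app="example-nodejs-app"} 0.12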

How to scrape metrics from Prometheus

Prometheus is available as a Docker image and can be configured via a YAML file.
Create a configuration file called prometheus.yml with the following content:
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: "example-nodejs-app"
    static_configs:
      - targets: ["docker.for.mac.host.internal:8080"]
The config file tells Prometheus to scrape all targets every 5 seconds. The targets are defined under scrape_configs. On Mac, you need to use docker.for.mac.host.internal as host, so that the Prometheus Docker container can scrape the metrics of the local Node.js HTTP server. On Windows, use docker.for.win.localhost and for Linux use localhost.
Use the docker run command to start the Prometheus Docker container and mount the configuration file (prometheus.yml):
$ docker run --rm -p 9090:9090 \
    -v `pwd`/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus:v2.20.1
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Prometheus Web UI on http://localhost:9090

What is Grafana and how does it work?

Grafana is a web application that allows you to visualize data sources via graphs or charts. It comes with a variety of chart types, allowing you to choose whatever fits your monitoring data needs. Multiple charts are grouped into dashboards in Grafana, so that multiple metrics can be viewed at once.
https://preview.redd.it/vt8jwu8vpor51.png?width=3584&format=png&auto=webp&s=4101843c84cfc6293debcdfc3bdbe70811dab2e9
The metrics displayed in the Grafana charts come from data sources. Prometheus is one of the supported data sources for Grafana, but it can also use other systems, like AWS CloudWatch, or Azure Monitor.
Grafana also allows you to define alerts that will be triggered if certain issues arise, meaning you’ll receive an email notification if something goes wrong. For a more advanced alerting setup, check out the Grafana integration for Opsgenie.

Starting Grafana

Grafana is also available as a Docker container, and its datasources can be configured via a configuration file.
Create a configuration file called datasources.yml with the following content:
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    orgId: 1
    url: http://docker.for.mac.host.internal:9090
    basicAuth: false
    isDefault: true
    editable: true
The configuration file specifies Prometheus as a datasource for Grafana. Please note that on Mac, we need to use docker.for.mac.host.internal as host, so that Grafana can access Prometheus. On Windows, use docker.for.win.localhost and for Linux use localhost.
Use the following command to start a Grafana Docker container and to mount the configuration file of the datasources (datasources.yml). We also pass some environment variables to disable the login form and to allow anonymous access to Grafana:
$ docker run --rm -p 3000:3000 \
    -e GF_AUTH_DISABLE_LOGIN_FORM=true \
    -e GF_AUTH_ANONYMOUS_ENABLED=true \
    -e GF_AUTH_ANONYMOUS_ORG_ROLE=Admin \
    -v `pwd`/datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml \
    grafana/grafana:7.1.5
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Grafana Web UI on http://localhost:3000

Configuring a Grafana Dashboard

Once the metrics are available in Prometheus, we want to view them in Grafana. This requires creating a dashboard and adding panels to that dashboard:
  1. Go to the Grafana UI at http://localhost:3000, click the + button on the left, and select Dashboard.
  2. In the new dashboard, click on the Add new panel button.
  3. In the Edit panel view, you can select a metric and configure a chart for it.
  4. The Metrics drop-down on the bottom left allows you to choose from the available metrics. Let’s use one of the default metrics for this example.
  5. Type process_resident_memory_bytes into the Metrics input and {{app}} into the Legend input.
  6. On the right panel, enter Memory Usage for the Panel title.
  7. As the unit of the metric is in bytes, we need to select bytes (Metric) for the left y-axis in the Axes section, so that the chart is easy to read for humans.
You should now see a chart showing the memory usage of the Node.js HTTP server.
Press Apply to save the panel. Back on the dashboard, click the small "save" symbol at the top right; a pop-up will appear allowing you to save your newly created dashboard for later use.

Setting up alerts in Grafana

Since nobody wants to sit in front of Grafana all day watching and waiting to see if things go wrong, Grafana allows you to define alerts. These alerts regularly check whether a metric adheres to a specific rule, for example, whether the errors per second have exceeded a specific value.
Alerts can be set up for every panel in your dashboards.
  1. Go into the Grafana dashboard we just created.
  2. Click on a panel title and select edit.
  3. Once in the edit view, select "Alerts" from the middle tabs, and press the Create Alert button.
  4. In the Conditions section specify 42000000 after IS ABOVE. This tells Grafana to trigger an alert when the Node.js HTTP server consumes more than 42 MB of memory.
  5. Save the alert by pressing the Apply button in the top right.

Sample code repository

We created a code repository that contains a collection of Docker containers with Prometheus, Grafana, and a Node.js sample application. It also contains a Grafana dashboard, which follows the RED monitoring methodology.
Clone the repository:
$ git clone https://github.com/coder-society/nodejs-application-monitoring-with-prometheus-and-grafana.git 
The JavaScript code of the Node.js app is located in the /example-nodejs-app directory. All containers can be started conveniently with docker-compose. Run the following command in the project root directory:
$ docker-compose up -d 
After executing the command, a Node.js app, Grafana, and Prometheus will be running in the background. The charts of the gathered metrics can be accessed and viewed via the Grafana UI at http://localhost:3000/d/1DYaynomMk/example-service-dashboard.
To generate traffic for the Node.js app, we will use the ApacheBench command line tool, which allows sending requests from the command line.
On MacOS, it comes pre-installed by default. On Debian-based Linux distributions, ApacheBench can be installed with the following command:
$ apt-get install apache2-utils 
For Windows, you can download the binaries from Apache Lounge as a ZIP archive. ApacheBench will be named ab.exe in that archive.
This CLI command will run ApacheBench so that it sends 10,000 requests to the /order endpoint of the Node.js app:
$ ab -m POST -n 10000 -c 100 http://localhost:8080/order 
Depending on your hardware, running this command may take some time.
After running the ab command, you can access the Grafana dashboard via http://localhost:3000/d/1DYaynomMk/example-service-dashboard.

Summary

Prometheus is a powerful open-source tool for self-hosted monitoring. It’s a good option for cases in which you don’t want to build from scratch but also don’t want to invest in a SaaS solution.
With a community-supported client library for Node.js and numerous client libraries for other languages, the monitoring of all your systems can be bundled into one place.
Its integration is straightforward, involving just a few lines of code. It can be done directly for long-running services or with help of a push server for short-lived jobs and FaaS-based implementations.
Grafana is also an open-source tool that integrates well with Prometheus. Among the many benefits it offers are flexible configuration, dashboards that allow you to visualize any relevant metric, and alerts to notify of any anomalous behavior.
These two tools combined offer a straightforward way to get insights into your systems. Prometheus offers huge flexibility in terms of metrics gathered and Grafana offers many different graphs to display these metrics. Prometheus and Grafana also integrate so well with each other that it’s surprising they’re not part of one product.
You should now have a good understanding of Prometheus and Grafana and how to make use of them to monitor your Node.js projects in order to gain more insights and confidence in your software deployments.
submitted by matthevva to node [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because it lets you set every type of DNS record, which the mailserver specifically needs (TXT records only become available after 30 days, but that's a fair trade for not spending ~15€/year on a domain name).
Also, I highly suggest you read the documentation of the software used, since I can't cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First things first: we need to flash the OS to the SD card. The Raspberry Pi Imager utility is very simple to use and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in beta, so I am going to cover the 32-bit version (but with a 64-bit kernel; we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN via an Ethernet cable. Then find the IP address of your Raspberry Pi within your LAN: from another computer we will be able to SSH into our server with the user pi and the default password raspberry.
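If you're unsure of the address, a quick ping scan of the subnet can reveal it (a sketch assuming the 192.168.0.0/24 network used later in this guide):
$ nmap -sn 192.168.0.0/24
$ ssh pi@192.168.0.5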

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then change the hostname of your server with the network entry (for this tutorial we are going to use naspi). Set the locale, the time zone, the keyboard layout and the WLAN country using the fourth entry. Finally, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1 
$ sudo reboot

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damage caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. You can tweak this setting according to your setup. Just remember that every time you modify this parameter the swap file is recreated: swapoff first moves every page from swap back to RAM, which can call in the OOM killer if memory is tight.

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0";

Update

Before we start installing packages, we'll take a moment to update every component already installed.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity's sake, we'll give our server a static IP address (within our LAN, of course). You can set it using your router configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the mail hostname will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and set up iRedMail:
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for the database root password, set a secure and strong one.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email [email protected] -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other open ports, within the input chain block, as sketched below. This is needed because Avahi communicates on UDP port 5353.
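For reference, a minimal sketch of where the rule sits (your existing table and chain names may differ):
table inet filter {
    chain input {
        # ...existing rules and open ports...
        # avahi
        udp dport 5353 accept
    }
}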

RAID 1

At this point we can start setting up the disks. I highly recommend using two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks are named /dev/sda1 and /dev/sdb1. To find out the names, issue sudo fdisk -l.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find these out by issuing ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk, add a line like this. To verify that fstab works, issue sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m [email protected] 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore attributes 190 and 194, since those are temperature values and would trigger an alert at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m [email protected]: email address to which alerts are sent in case of problems.

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount
The udev rule triggers when the kernel announces that a USB device has been plugged in, calling a service which is kept alive as long as the device remains plugged in. The service, when started, calls a bash script which tries to mount any known disk using fstab; otherwise the disk is mounted to a default location, using its label if available (the partition name is used otherwise).

Netdata

Let's now install Netdata. Another handy script will help us here.
$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use Nginx as a reverse proxy, with certbot for SSL:
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest you read through the stock file before modifying it, and enable every service you'd like. You'll spend some time on it, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration

# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData [email protected]"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="[email protected]"

# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"

###############################################################################
# RECIPIENTS PER ROLE

# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"

# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"

# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes
# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file [email protected]
panic action = /usr/share/samba/panic-action %d
# Server role
server role = standalone server
obey pam restrictions = yes
# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user

#======================= Share Definitions =======================

[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
Finally, let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is very similar to Google Drive, but open source.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextcloud@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextcloud@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/nextcloud/
$ sudo chown -R www-data:www-data /var/www/nextcloud
$ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \;
$ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \;
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server 127.0.0.1:9999;
    keepalive 64;
}
server {
    server_name naspi.webredirect.org;
    root /var/www/nextcloud;
    listen 80;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
    location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
    location / {
        rewrite ^ /index.php;
    }
    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ { deny all; }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) { deny all; }
    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS:
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to your NextCloud page and complete the installation process, providing the details about the database and the location of the data folder, which is simply where the files you save on NextCloud will live. Because it might grow large, I suggest you specify a folder on an external disk.

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal; even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.
# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log
# Server interface
ServerHost=0.0.0.0
ServerPort=8080
# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4. [docs](https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md)
DefaultTheme=default
# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db
# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca tcp dport 8080 accept 
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}
server {
    server_name minarca.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

Last, you will need to set up your DNS records, to avoid having your mail rejected or sent to spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record, run sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1); remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:[email protected]
TTL (if present): 90

Router ports

If you want your site to be accessible from the internet, you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open others, for instance port 8080 if you want to use Minarca outside your LAN too.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest moving it to something other than port 22 (the default), to mitigate attacks from the outside.

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a Samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you ever want a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi

The internals of Android APK build process - Article


Table of Contents

  • CPU Architecture and the need for Virtual Machine
  • Understanding the Java Virtual Machine
  • Compiling the Source Code
  • Android Virtual Machine
  • Compilation Process to .dex
  • ART over Dalvik
  • Understanding each part of the build process.
  • Source Code
  • Resource Files
  • AIDL Files
  • Library Modules
  • AAR Libraries
  • JAR Libraries
  • Android Asset Packaging Tool
  • resources.arsc
  • D8 and R8
  • Dex and Multidex
  • Signing the APK
  • References
This blog post aims to be a starting point for developers to get familiar with the Android APK build process: the flow of the build, the execution environment, and code compilation.

CPU Architecture and the need for Virtual Machine

Unveiled in 2007, Android has undergone lots of changes related to its build process, the execution environment, and performance improvements.
There are many fascinating characteristics in Android, and one of them is the variety of CPU architectures it runs on, like ARM64 and x86.
It is not realistic to compile code that supports each and every architecture. This is where the Java Virtual Machine comes in.
https://preview.redd.it/91nrrk3twxk51.png?width=1280&format=png&auto=webp&s=a95b8cf916f000e94c6493a5780d9244e8d27517

Understanding the Java Virtual Machine

The JVM is a virtual machine that enables a computer to run applications compiled to Java bytecode. It basically helps us convert the compiled Java code to machine code.
By using the JVM, the issue of dealing with different types of CPU architecture is resolved.
JVM provides portability and it also allows Java code to be executed in a virtual environment rather than directly on the underlying hardware.
But the JVM is designed for systems with ample storage and power, whereas Android has comparatively little memory and battery capacity.
For this reason, Google adopted an Android virtual machine called Dalvik.

https://preview.redd.it/up2os7juwxk51.png?width=1280&format=png&auto=webp&s=2a290bdc9be86fb08d67228c730329130da3bc63

Compiling the Source Code

Our Java source code for the Android app is compiled into a .class file bytecode by the javac compiler and executed on the JVM.
For Kotlin source code, when targeting the JVM, Kotlin produces Java-compatible bytecode, thanks to the kotlinc compiler.
Bytecode is a form of instruction set designed for efficient execution by a software interpreter.
Java bytecode, in particular, is the instruction set of the Java Virtual Machine.
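To see bytecode for yourself, you can compile a class and disassemble it with the JDK's own tools (a minimal sketch; Hello.java is a hypothetical file):
$ javac Hello.java   # produces Hello.class containing JVM bytecode
$ javap -c Hello     # disassembles and prints the bytecode instructions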

https://preview.redd.it/w2uzoicwwxk51.png?width=1280&format=png&auto=webp&s=b122e0781bf9e9ba236d34a87a636c9218f7ea35

Android Virtual Machine

Each Android app runs on its own virtual machine. From version 1.0 to 4.4, it was 'Dalvik'. In Android 4.4, along with Dalvik, Google experimentally introduced a new Android Runtime called 'ART'.
Android users had the option to choose either Dalvik or ART runtime in Android 4.4.
The .class files generated contain JVM bytecode.
But from version 1.0 to 4.4, Android had its own optimized bytecode format called Dalvik. Dalvik bytecode, like JVM bytecode, is a machine-level instruction set, executed here by a virtual processor.

https://preview.redd.it/sqychk81xxk51.png?width=217&format=png&auto=webp&s=49445fa42e4aa6f4008114a822f364580649fcdf

Compilation Process to .dex

The compilation process converts the .class files and .jar libraries into a single classes.dex file containing Dalvik bytecode. This is possible with the dx command.
The dx command turns all of the .class and .jar files into a single classes.dex file written in Dalvik bytecode format.
To note, dex means Dalvik Executable.
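As a sketch, a manual dx invocation looks like this (the input paths are hypothetical; your build tooling normally runs this step for you):
$ dx --dex --output=classes.dex build/classes/ libs/some-library.jar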
https://preview.redd.it/g4z3tb95xxk51.jpg?width=831&format=pjpg&auto=webp&s=1cdbaacaf10cc529cccca2ba016583596781ee88

ART over Dalvik

Since Android 4.4, Android migrated to ART, the Android runtime from Dalvik. This execution environment executes .dex as well.
The benefit of ART over Dalvik is that the app runs and launches faster on ART: since the DEX bytecode has been translated into machine code during installation, no extra time is needed to compile it at runtime.
ART and Dalvik are compatible runtimes running Dex bytecode, so apps developed for Dalvik should work when running with ART.
The JIT-based compilation in the previously used Dalvik had the disadvantages of poor battery life, application lag, and lower performance.
This is the reason Google created the Android Runtime (ART).
ART is based on an Ahead-Of-Time (AOT) compilation process, where compilation happens before the application starts.
In ART, the compilation happens during the app installation process itself. Even though this leads to a higher app installation time, it reduces app lag and increases battery efficiency.
Even though Dalvik was replaced as the default runtime, the Dalvik bytecode format is still in use (.dex).
In Android 7.0, JIT came back: a hybrid environment combining features from both a JIT compiler and ART was introduced.
The bytecode execution environment of Android is important as it is involved in the application startup and installation process.
https://preview.redd.it/qh9bxsplzxk51.png?width=1280&format=png&auto=webp&s=bc40ba6c69cec2110b7d695fe23df094bf5aea6c

Understanding each part of the build process.


https://preview.redd.it/obelgd7axxk51.png?width=950&format=png&auto=webp&s=299abcf4798ad4d2de93f4eb18b9d9e70dd54297

Source Code

Source code is the Java and Kotlin files in the src folder.

Resource Files

The resource files are the ones in the res folder.

AIDL Files

Android Interface Definition Language (AIDL) allows you to define the programming interface for client and service to communicate using IPC.
IPC is interprocess communication.
AIDL can be used between any process in Android.

Library Modules

A library module contains Java or Kotlin classes, Android components, and resources, though assets are not supported.
The code and resources of the library project are compiled and packaged together with the application.
Therefore a library module can be considered to be a compile-time artifact.

AAR Libraries

Android library compiles into an Android Archive (AAR) file that you can use as a dependency for an Android app module.
AAR files can contain Android resources and a manifest file, which allows you to bundle in shared resources like layouts and drawables in addition to Java or Kotlin classes and methods.

JAR Libraries

A JAR is a Java library; unlike an AAR, it cannot contain Android resources or a manifest.

Android Asset Packaging Tool

Android Asset Packaging Tool (aapt2) compiles the AndroidManifest and resource files into a single APK.
At this point, it is divided into two steps, compiling and linking. It improves performance, since if only one file changes, you only need to recompile that one file and link all the intermediate files with the 'link' command.
AAPT2 supports the compilation of all Android resource types, such as drawables and XML files.
When you invoke AAPT2 for compilation, you should pass a single resource file as an input per invocation.
AAPT2 then parses the file and generates an intermediate binary file with a .flat extension.
The link phase merges all the intermediate files generated in the compile phase and outputs one .apk file. You can also generate R.java and proguard-rules at this time.
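A rough sketch of the two phases from the command line (the file names and the android.jar platform path are assumptions):
$ aapt2 compile res/values/strings.xml -o compiled/
$ aapt2 link -o app.unsigned.apk -I $ANDROID_HOME/platforms/android-30/android.jar --manifest AndroidManifest.xml --java gen/ compiled/*.flat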

resources.arsc

The .apk file output by this phase does not yet include the DEX file, and since it is not signed, it is an APK that cannot be executed yet.
This APK contains the AndroidManifest, binary XML files, and resources.arsc.
This resources.arsc contains all the meta-information about resources, such as an index of all resources in the package.
It is a binary file, and in the APK you actually build and execute it is kept uncompressed, so it can be used simply by expanding it in memory.
Each resource in the R.java output with the APK is assigned a unique ID, which allows the Java code to use resources at compile time.
resources.arsc is the index of the resources used when executing the application.
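As an illustration, a generated R.java is just a class of constant resource IDs (the names and hex values below are hypothetical):
public final class R {
    public static final class string {
        // Assumed resource: res/values/strings.xml <string name="app_name">
        public static final int app_name = 0x7f0f0000;
    }
    public static final class drawable {
        // Assumed resource: res/drawable/logo.png
        public static final int logo = 0x7f080001;
    }
}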

https://preview.redd.it/hmmlfwhdxxk51.png?width=1280&format=png&auto=webp&s=b2fe2b6ad998594a5364bb6af6b5cbd880a2452c

D8 and R8

Starting from Android Studio 3.1 onwards, D8 was made the default compiler.
D8 produces smaller DEX files with better performance compared with the old dx.
R8 is used to compile the code; it is an optimized version of D8 that adds whole-program shrinking and optimization.
D8 plays the role of dexer that converts class files into DEX files and the role of desugar that converts Java 8 functions into bytecode that can be executed by Android.
R8 further optimizes the DEX bytecode. It provides features like optimization, obfuscation, and removal of unused classes.
Obfuscation reduces the size of your app by shortening the names of classes, methods, and fields.
Obfuscation has other benefits to prevent easy reverse engineering, but the goal is to reduce size.
Optimization reduces the DEX file size by rewriting unnecessary parts and inlining.
By desugaring, we can use the convenient language features of Java 8 on older devices.
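A sketch of a manual D8 invocation (the paths and API level are assumptions; Gradle normally drives this step):
$ d8 build/classes/*.class libs/some-library.jar --release --min-api 21 --lib $ANDROID_HOME/platforms/android-30/android.jar --output build/dex/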
https://preview.redd.it/so424bxwxxk51.png?width=1280&format=png&auto=webp&s=0ad2df5bd194ec770d453f620aae9556e14ed017

Dex and Multidex

R8 outputs one DEX file called classes.dex.
If you are using Multidex, multiple DEX files will appear instead, but in the usual case a single classes.dex is created.
If the number of methods in the application, including referenced libraries, exceeds 65,536, a build error will occur.
The method ID range is 0 to 0xFFFF.
In other words, you can only reference 65,536 methods, with IDs 0 to 65,535.
This is the cause of the build error that occurs above the 64K limit.
In order to avoid this, it is useful to review the dependencies of the application and use R8 to remove unused code, or to use Multidex.
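Enabling Multidex is a small build change; a sketch for a Groovy build.gradle (the dependency is only needed when minSdkVersion is below 21, and the version shown is an assumption):
android {
    defaultConfig {
        multiDexEnabled true
    }
}
dependencies {
    implementation 'androidx.multidex:multidex:2.0.1'
}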

https://preview.redd.it/kjyychmzxxk51.png?width=1261&format=png&auto=webp&s=18bea3bf9f7920a4701c2db9714dc53ae6cc5f82

Signing the APK

All APKs require a digital signature before they can be installed or updated on your device.
For debug builds, Android Studio automatically signs the app using the debug certificate generated by the Android SDK tools when we run the app.
A debug Keystore and a debug certificate are automatically created.
For release builds, you need a Keystore and an upload key to build a signed app. You can either make an APK file with apkbuilder and optimize it with zipalign on the command line, or have Android Studio handle it for you with the 'Generate Signed APK' option.
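A sketch of the manual route with the SDK build tools (the keystore, alias, and APK names are hypothetical):
$ keytool -genkeypair -v -keystore release.jks -alias upload -keyalg RSA -keysize 2048 -validity 10000
$ zipalign -v 4 app-unsigned.apk app-aligned.apk
$ apksigner sign --ks release.jks --out app-release.apk app-aligned.apk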

https://preview.redd.it/10m8rjl0yxk51.png?width=1468&format=png&auto=webp&s=078c4ab3f41c7d08e7c2280555ef2038cc04c5b0

References

https://developer.android.com/studio/build
https://github.com/dogriffiths/HeadFirstAndroid/wiki/How-Android-Apps-are-Built-and-Run
https://logmi.jp/tech/articles/322851
https://android-developers.googleblog.com/2017/08/next-generation-dex-compiler-now-in.html
https://speakerdeck.com/devpicon/uncovering-the-magic-behind-android-builds-droidfestival-2018
by androiddevnotes on GitHub
🐣
submitted by jiayounokim to androiddev

another take on Getting into Devops as a Beginner

I really enjoyed m4nz's recent post: Getting into DevOps as a beginner is tricky - My 50 cents to help with it, and wanted to do my own version of it, in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.

Background

While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and home theater PC before smart TVs and Roku were really a thing, simply because I thought it was cool and interesting and enjoyed the novelty of it.
Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success with getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin.
Without a CS degree, any real experience, and 1 measly certification, I figured I had to level up my skills as quickly as possible and this is where I really started to get into DevOps tools and methodologies. I now have 5 years experience in the IT world, most of it doing DevOps/SRE work.

Certifications

People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world unless you know someone or have something else to prove your skills, like a github profile with lots of open source contributions, or a non-profit you built a website for or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired in my experience.
Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real world experience you can get will be worth more than any number of certs imo (both in terms of knowledge gained and in increasing your prospects of getting hired), but this is a good starting place to help you plan out what certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, some folks will place way too much emphasis on them ... it all depends on the company and the person interviewing you. In my experience I feel that they absolutely helped me advance my career. If you feel you don't need them, that's cool too ... they're a lot of work so skip them if you can of course lol.

Tools and Experimentation

While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them.
Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.

Programming Languages

Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.

Expanding your knowledge

As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important; nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, networking (it's always DNS ;) the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving? The solutions to many production problems have already been automated away for the most part: kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level.
The biggest innovations in the IT world are generally ones of abstractions: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what it happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are; and it allows you to place new tools and information into the big picture so you'll know why they'd be useful or whether or not they'd work for your company and team before you've even looked in-depth at them.
Anyway, I hope that helps. I'll be happy to answer any beginner/getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please just add it here in the comments or in your own post! A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
submitted by jamabake to devops

Things i did to improve FPS performances in No Man Sky

A little bit of story: the first time I played this game, I had a G3220 CPU, GTX 750 Ti, 8 GB RAM and was still using an HDD. Back then I played the game at 45fps MAX on a 1360x768 monitor.
Fast forward to today, I upgraded my PC a little bit: i7 4790 -- GTX 1060 3GB -- 8 GB RAM -- SSD -- 1920x1080 monitor.
The recent update made me want to play the game again, but to my surprise, I had to lower all the settings to get 60fps, and near buildings or in on-planet fighting it tanked to 25fps, even when playing at 1024x768 with resolution scale at 10%.
After 4 hours of googling, I still couldn't manage to get more than 30fps on my VERY small base. But just now, I managed to get 55 to 65fps everywhere, and it goes up to ~120fps flying around in space.
I have no idea what the exact "fix" is, so here are the things I did to finally get the fps I have now:
1. Install QuickCPU and ProcessLasso +10fps for me
On QuickCPU, set the System Power Plan to Highest Performance, and the Core Parking Index, Turbo Boost Index and Frequency Scaling Index to 100%. Click Apply and minimize the program. (In the options, you can make the program launch at start and minimize to tray; idk if the settings persist if you close the program.)
On Process Lasso, start NMS, then go back to Process Lasso, right click NMS.exe, click Induce performance mode, minimize the program to tray and close NMS.
2. Set NumLowThreads & NumHighThreads to 0 and UseTerrainTextureCache to True +?? fps; I don't know if this has an effect or not, but it's one of the first things I did.
You can do this by going to your No Man's Sky installation folder, for me it's Steam\steamapps\common\No Man's Sky\Binaries\SETTINGS, and opening TKGRAPHICSSETTINGS with a text editor.
Ctrl + F to find NumLowThreads & NumHighThreads and set them both to 0, then find UseTerrainTextureCache and set it to True.
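For orientation, the entries in that file look roughly like this (a sketch; the exact element layout may differ between game versions):
<Property name="NumLowThreads" value="0" />
<Property name="NumHighThreads" value="0" />
<Property name="UseTerrainTextureCache" value="True" />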
3. Install the Fast Shader mod +20fps, even more if you're willing to let your game look very bad.
https://nomansskymods.com/mods/fast-shade
I chose the MID settings.
4. Overclock the GPU with MSI Afterburner +5fps.
My settings : +20% Core Voltage -- 100% power Limit -- 83c Temp Limit -- +200 MHz Core Clock -- +150 MHz Memory Clock.
BE CAREFUL WITH THIS. A couple of times, when I set it a bit too high, my PC went to a black screen; I think the driver crashed. The settings above are what currently works perfectly for me.
5. Nvidia Control Panel Settings Note: any setting I don't mention is at the Nvidia default.
Go to Manage 3D Settings -> Program Settings NMS.exe (if not there, add it manually)
But first check the global settings; I restored the defaults there first.
*Power Management Mode : Prefer Maximum Performance
*Texture filtering - Quality : High Performance
*Triple Buffering : On
*Vertical Sync : On
Now Program Settings NMS.exe, everything not mentioned is the default/global settings.
*Anisotropic Filtering OFF
*Antialiasing mode OFF
*OpenGL rendering GPU GTX 1060 3gb
*Texture Filtering - Anisotropic sample optimization : OFF
*Texture Filtering - Negative LOD Bias : ALLOW
*Threaded Optimization : ON
*Virtual Reality pre-rendered frames : Use 3D App
6. In Game Settings I hate that you need to restart the game to apply most of the settings :/
Basically put everything to the lowest/leftmost possible, except for Animations: I put it on Ultra and I don't notice any fps drop even in high NPC/player areas.
Borderless -- 1920x1080 -- 120% resolution scaling -- V-Sync Triple Buffered -- 90 fps limit -- 100 FoV -- 0 Motion Blur -- Vignette & Scanlines disabled (I don't think these affect FPS, but they're annoying)
I personally don't like the Fast Shader mod, so I removed it; I'm now running the game at 50fps in demanding parts, which I'm fine with.
Hope this helps :D
submitted by FakeGodz to NoMansSkyTheGame

After effects crashing on startup

Hey folks, I get the below error log when opening After Effects. Can't get past it unless I uninstall. Any ideas here? It keeps happening.

submitted by Lolosdomore to AfterEffects
